Columns: url (string, 15 to 1.13k chars); text (string, 100 to 1.04M chars); metadata (string, 1.06k to 1.1k chars)
https://trueshelf.org/exercises/156/legendres-theorem/
## Legendre's Theorem Prove the following theorem (Legendre's Theorem): The number $n!$ contains the prime factor $p$ exactly $\sum_{k \geq 1}{\lfloor \frac{n}{p^k} \rfloor}$ times. Source: folklore
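A quick numerical check of the statement (a sketch of ours; the helper names below are illustrative, not from the source):

```python
def exponent_in_factorial_direct(n, p):
    """Exponent of the prime p in n!, by summing the exponent of p in each factor 1..n."""
    total = 0
    for m in range(2, n + 1):
        while m % p == 0:
            total += 1
            m //= p
    return total

def exponent_in_factorial_legendre(n, p):
    """Legendre's formula: sum of floor(n / p^k) over k >= 1."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

if __name__ == "__main__":
    for n in (10, 100, 1000):
        for p in (2, 3, 5, 7):
            assert exponent_in_factorial_direct(n, p) == exponent_in_factorial_legendre(n, p)
    # e.g. 100! contains the factor 5 exactly floor(100/5) + floor(100/25) = 24 times
    print(exponent_in_factorial_legendre(100, 5))  # 24
```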
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.959258496761322, "perplexity": 4047.5119354891017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887621.26/warc/CC-MAIN-20180118210638-20180118230638-00267.warc.gz"}
https://www.physicsforums.com/threads/impulse-diffy-eq.363422/
Impulse-Diffy eq. polarbears: 1. The problem statement, all variables and given/known data $$y''+y=\delta (t-2\pi )\cos(t)$$ $$y(0)=0, \quad y'(0)=1$$ 2. Relevant equations 3. The attempt at a solution The left side is (s^2+1)Y(s)-1 = RHS. My problem is the fact that cosine is being multiplied by the delta function. I put it in the form of an integral but I don't know what to do from there. HallsofIvy: Well, good! Delta functions usually make integrals trivial. What integral did you get? annoymage: Hmm, I don't know this question; may I ask what topic I should study for it? polarbears: Differential Equations - Laplace transforms. Ohhh wait, does the delta function just determine the bounds of my integral?
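For completeness, here is a sketch of how the sifting property of the delta function finishes the problem (standard Laplace-transform conventions and the unit step $u_{2\pi}(t)$ assumed): $$\mathcal{L}\{\delta(t-2\pi)\cos t\} = \int_0^\infty e^{-st}\,\delta(t-2\pi)\cos t \, dt = e^{-2\pi s}\cos(2\pi) = e^{-2\pi s},$$ so the transformed equation reads $$(s^2+1)Y(s) - 1 = e^{-2\pi s}, \qquad Y(s) = \frac{1 + e^{-2\pi s}}{s^2+1},$$ and inverting term by term gives $$y(t) = \sin t + u_{2\pi}(t)\,\sin(t-2\pi) = \begin{cases} \sin t, & 0 \le t < 2\pi, \\ 2\sin t, & t > 2\pi. \end{cases}$$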
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8822062611579895, "perplexity": 3015.211864871659}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204300.90/warc/CC-MAIN-20190325194225-20190325220225-00506.warc.gz"}
https://triangle.mth.kcl.ac.uk/?search=au:Marco%20au:Serone
Found 1 result(s) ### 29.09.2021 (Wednesday) #### In search of fixed points in non-abelian gauge theories using perturbation theory Regular Seminar Marco Serone (SISSA, INFN Trieste) at: 13:45 KCLroom online abstract: Four-dimensional gauge theories can flow in the IR to non-trivial CFTs. By employing Borel resummation techniques both to the ordinary perturbative series and to the Banks-Zaks conformal expansion, we first analyze the conformal window of QCD and find substantial evidence that QCD with n_f=12 flavours flows in the IR to a CFT. We then study UV fixed points for SU(n_c) gauge theories with fundamental fermion matter in 4+2epsilon dimensions. Using resummation techniques similar to those used in the 4d QCD case, we provide evidence for the existence of non-supersymmetric CFTs in d=5 space-time dimensions in a certain range of colors and flavours.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9595687389373779, "perplexity": 2466.6215433242587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587606.8/warc/CC-MAIN-20211024204628-20211024234628-00717.warc.gz"}
https://bahcemiziyetistirmeliyiz.wordpress.com/tag/algebraic-topology/
# Topology as squinting One can describe topology, in more descriptive language, as the study of coarse-grained or qualitative features of space. Instead of asking about distances and angles, one asks about connectivity, compactness, the presence of holes (as captured e.g. by homology or more subtly by cohomology), the kinds of paths that live in the space (as captured by the fundamental group), and so on. Sometimes this loss of precise information is desirable in applications: when we are inundated with data, a more coarse-grained or qualitative perspective can help us see the forest for the trees. Below we describe two topological tools, both built off homology, which demonstrate this guiding philosophy and how it has turned out to be useful in certain applications. ## Persistent homology Homology is the topological tool of choice in many applications. It contains information about connectedness, compactness, and holes in all dimensions, is conceptually more straightforward than cohomology, and is eminently computable using the tools of linear algebra. In applications we often start from a point cloud (i.e. a finite set of points) sampled from what we assume to be an underlying manifold. In order to reconstruct the homology of the underlying manifold, we could e.g. take the nerve of an open covering constructed by taking balls centered at every point in our point cloud, but we are immediately faced with the question: how big should we make those balls / how fine a covering should we take? Persistent homology offers a compellingly natural answer to this question: use all possible scales, at once. ### What is persistent homology? Persistent homology, as described by one of its originators (Herbert Edelsbrunner), is "an algebraic tool for measuring topological features of shapes and functions [which] casts the multi-scale organization we frequently observe in nature into a mathematical formalism." The basic idea is to measure homology "at all possible scales", and keep track of how homology varies as we pass between these various scales. Below we give more precise formulations of this for Morse functions, and then for triangulable spaces. Given a Morse function $f: \mathbb{R} \to \mathbb{R}$ (i.e. f is smooth with non-degenerate critical points), we can consider the topology of the sublevel sets, and pair up critical points which cause a connected component to be added or removed. A persistence diagram is a diagrammatic representation of this pairing (figure from Edelsbrunner-Harer). We can generalize this idea to higher dimensions—for a Morse function $f: M \to \mathbb{R}$ on an n-manifold $M = M^n$, we obtain n persistence diagrams, one in each degree $0, 1, \dots, n-1$ of (nonvanishing) homology. Alternatively, we may define persistent homology on a simplicial complex (or more generally a triangulable space) K by specifying a filtration $\varnothing = K_0 \subset K_1 \subset \dots \subset K_m = K$. The work of Zomorodian and Carlsson, and many others following them, has fleshed out the theory of persistent homology considerably; the Edelsbrunner-Harer survey linked to above is a good starting reference. Some key results: • Persistent homology based on homology with coefficients in a field $k$ can be given a $k[t]$-module structure; • Persistent homology is computable in a wide range of settings. • Persistence is stable under perturbations (the precise formulation here depends on the setting.)
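To make the "all scales at once" idea concrete, here is a minimal, self-contained sketch (ours, not from the post) of degree-0 persistent homology for a point cloud, computed with a union-find structure over the distance filtration; real applications would use a dedicated library (e.g. GUDHI or Ripser) and also track higher-degree classes.

```python
import itertools, math

def persistence_diagram_h0(points):
    """Degree-0 persistence pairs (birth, death) for the Vietoris-Rips filtration
    of a finite point cloud: every component is born at scale 0 and dies when it
    merges with another one as the scale parameter grows."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )
    diagram = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            diagram.append((0.0, d))   # one component dies at scale d
    diagram.append((0.0, math.inf))    # the final component never dies
    return diagram

# Two well-separated clusters: besides the infinite bar, one bar persists across
# a large range of scales, signalling "two components" at coarse resolution.
cloud = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0), (5.0, 5.0), (5.1, 5.2), (4.9, 5.1)]
for birth, death in persistence_diagram_h0(cloud):
    print(f"H0 bar: [{birth}, {death})")
```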
### Applications One of the first applications of persistent homology was for shape recognition, via a generalization of convex hulls known as alpha shapes. Here we start with a point cloud, and would like to deduce the underlying shape from which (we assume) it was drawn. To do this we start forming alpha shapes—but as above there is a question of which scale we should use in this process. Persistent homology resolves this question by saying that we should take all possible scales, and regard features that "persist" across a (relatively) large(r) range of scales as more likely to be "true" features of the underlying shape. The idea of persistent homology being potentially useful wherever homology might be useful, but questions of scale exist, can also be seen e.g. in its application to homological criteria for coverage in sensor networks. Given a network of sensors which can communicate with nearby ones, one can "detect" gaps in coverage as non-trivial elements of the second homology of a flag complex with 1-skeleton a sensor communication graph (vertices are sensors, and there is an edge between two vertices if the corresponding sensor ranges overlap.) The formulation of such a criterion is almost tautological ("there is a hole if the algebra says there is a hole"), but one advantage it has is that second homology can be computed, in a distributed fashion, by the sensors in such a locally-communicating network—while we have not done anything significant conceptually, we have reformulated our intuition in a precise, computable way. Computing it in a robust, noise-proof way which does not require precise knowledge of the distances between sensors, however, requires the use of persistent homology (or other tools). ### Topological data analysis Persistent homology has also been a key tool in topological data analysis, starting with (I believe) the work of Gunnar Carlsson and Afra Zomorodian. The intuition there is that large data sets, while extrinsically huge-dimensional, are often intrinsically low-dimensional in nature, and topology, via persistent homology or other means, provides an effective means of dimension reduction, because, very roughly speaking, "shape matters". The (statistical or other) theory to back this intuition is not always present, although at least on the mathematical side there has been considerable development, driven in large part by possible applications in topological data analysis, to put the theory of persistent homology and its application to data analysis on a sound mathematical footing. Stability theorems, in particular, show that persistent homology at least has a chance of being a sufficiently robust invariant to be useful in data analysis. There have been a number of applications of these techniques, and indeed a start-up which has made topological data analysis its killer app, although the use of topological techniques in data analysis is still far from systematic: successful applications often seem somewhat ad hoc; the generic attempt to apply topological data analysis seems more likely to result in a case of "I found these cycles in my data—but what does that mean?" ## Euler calculus An easier version of homology—or, really, a collapsed or "decategorified" version of it—is the Euler characteristic, and even with part of its topological power collapsed this can still serve useful functions.
For instance, one may notice that the Euler characteristic obeys something that looks like an inclusion-exclusion principle: $\chi(A \cup B) = \chi(A) + \chi(B) - \chi(A \cap B)$. One can formalize this and build a theory of integration "with respect to Euler characteristic" ($d\chi$) by defining $\int_A \chi_A \,d\chi = \chi(A)$ for characteristic functions of (sufficiently nice) sets, and then extending linearly and using limits. The sense in which this is like a "squintier" version of Lebesgue integration (say) can be seen clearly in the axiom underlying that definition: instead of associating to the characteristic function $\chi_A$ of a set A the measure of A, we associate to $\chi_A$ the Euler characteristic of A. The resulting theory is fairly nice—there is a Fubini theorem, Euler integrals can be efficiently computed via the Euler characteristics of level sets and related objects, &c. Not all functions can be integrated, but the ones that tend to be associated with finite point clouds can (a more precise formulation involves o-minimal structures; the functions which can be integrated are those with discrete range and definable preimages according to the o-minimal structure. The theory can be further extended to continuum-valued functions, but that introduces more technicalities, and we do not need it here.) Schapira and Oleg Viro first worked the theory out in the late 1980s, and by now it has found applications in tomography, sensor networks, and so on: we briefly describe a few of these below. ### An application to tomography The Schapira inversion formula is a topological version of the inversion formula for the Radon transform. Suppose we have a compact subset $T \subset \mathbb{R}^3$, and we "scan" T by slicing along hyperplanes and recording the Euler characteristic of the slices of T—which, in the case of a compact subset of the plane, is simply the number of connected components minus the number of holes (which equals the number of bounded connected components of the complement—a baby case of Alexander duality.) This yields a function $h: \mathrm{AGr}_2(\mathbb{R}^3) \to \mathbb{Z}$ (the domain here is the affine Grassmannian of all planes in $\mathbb{R}^3$, not necessarily going through the origin.) Then we can recover the set T (via its characteristic function $\chi_T$) as follows: encode the information from the "scan" in the relation $\mathcal{S} \subset \mathbb{R}^3 \times \mathrm{AGr}_2(\mathbb{R}^3)$ by $(x,\pi) \in \mathcal{S}$ if $x \in \pi \cap T$. We may verify h is related to $\chi_T$ by $h(\pi) = \int_{\mathbb{R}^3} \chi_T(x) \chi_{\mathcal{S}}(x,\pi) d\chi(x)$ (a Radon transform, but with $d\chi$ in the place of dx.) Let $\mathcal{S}_x$ denote the fiber $\{\pi \in \mathrm{AGr}_2 (\mathbb{R}^3) : x \in T \cap \pi\}$; then $\chi(\mathcal{S}_x) = 1$ for all $x \in T$ (the affine planes through a fixed point form a copy of $\mathbb{RP}^2$, which has Euler characteristic 1) and $\chi(\mathcal{S}_x \cap \mathcal{S}_y) = 0$ for all $x \neq y \in \mathbb{R}^3$ (given $x, y \in T$, the affine planes which see both of them are exactly the ones which contain the line containing both of them—and there is a circle's—or technically a $\mathbb{P}^1$'s—worth of such planes.) The Schapira inversion formula in this case then states that we have $\chi_T(x) = \int_{\mathrm{AGr}_2(\mathbb{R}^3)} h(\pi) \chi_{\mathcal{S}}(x,\pi) d\chi(\pi)$.
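Before moving on, here is a small self-contained sketch (ours, not the post's) of what integrating against $d\chi$ looks like computationally on a grid of filled pixels, using the identity $\int h \, d\chi = \sum_{s \geq 1} \chi(\{h \geq s\})$ for a non-negative integer-valued function $h$; it also previews the target-counting application described below.

```python
def euler_characteristic(pixels):
    """Euler characteristic of a union of filled unit squares, computed from the
    cubical complex they span: chi = #vertices - #edges + #faces."""
    faces = set(pixels)
    vertices, edges = set(), set()
    for (i, j) in faces:
        corners = [(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)]
        vertices.update(corners)
        edges.update({
            frozenset({(i, j), (i + 1, j)}),
            frozenset({(i, j), (i, j + 1)}),
            frozenset({(i + 1, j), (i + 1, j + 1)}),
            frozenset({(i, j + 1), (i + 1, j + 1)}),
        })
    return len(vertices) - len(edges) + len(faces)

def euler_integral(h):
    """Integral of a non-negative integer-valued pixel function h against d(chi),
    as the sum over s >= 1 of chi of the superlevel set {h >= s}."""
    total, s = 0, 1
    while True:
        level = {p for p, v in h.items() if v >= s}
        if not level:
            return total
        total += euler_characteristic(level)
        s += 1

# Two overlapping contractible "supports": the Euler integral of the summed
# indicator functions counts them correctly (here, 2), exactly as in the
# Baryshnikov-Ghrist enumeration discussed below.
disc1 = {(i, j) for i in range(0, 4) for j in range(0, 4)}
disc2 = {(i, j) for i in range(2, 6) for j in range(0, 4)}
h = {p: (p in disc1) + (p in disc2) for p in disc1 | disc2}
print(euler_integral(h))  # 2
```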
Thus in particular it is sufficient to record the connectivity data on the slices to recover the original shape, if one knows enough / has sufficient assumptions on the Euler characteristics of the slices. This seems almost too good to be true—until you realize that actually knowing the Euler characteristic for every slice is still a nontrivial problem. Still, it does give us some latitude to deform, which was less present in the original, non-topological Radon transform. ### An application to sensor coordination Baryshnikov and Ghrist have applied Euler integration to the problem of target enumeration via distributed sensor networks. Here the setup is this: we have a bunch of enumeration targets (cats, perhaps, or something less fanciful, such as cars) in a space X covered by a sensor field (in actual applications to be approximated by a sufficiently dense network of sensors.) Each sensor records the total number of targets it senses in its range; the sensors can also communicate with nearby sensors (those with overlapping ranges) to exchange and compare target counts, but that is the limit of their capabilities. Given this, and that the sensor ranges overlap in undescribed ways, how should we go about recovering an accurate global count? Let the target support $U_\alpha$ be the subset of all $x \in X$ s.t. the sensor at x senses the target $\alpha$, and let $h: X \to \mathbb{Z}$ be the counting function which simply returns the number of targets counted by the sensor at x. In a toy case where $X = \mathbb{R}^2$—so that we did not have to worry about boundary effects—and all the target supports were perfect circles of radius R, we would have $\int_X h \,dx = \int_X \left(\sum_\alpha \chi_{U_\alpha}\right) dx = \pi R^2 \,(\#\alpha)$, or $\#\alpha = \frac{1}{\mu(U_\alpha)} \int_X h\,dx$, where here both dx and $\mu$ refer to Lebesgue measure. Now, if all the target supports $U_\alpha$ have the same nonzero Euler characteristic N (e.g. if they are all contractible, so $N = 1$), then we can apply exactly the same logic, but using Euler integration instead of Lebesgue integration, to show that $\#\alpha = \frac 1N \int_X h \,d\chi$. Hence we did not need our precise geometric hypotheses—that we had no boundary effects, and that all of our target supports were geometrically the same shape; we only needed looser, topological analogues of these hypotheses. # Brouwer's invariance of domain I am recording here an elementary* result used in the proof of the Teichmueller Existence Theorem. Theorem (Brouwer). Any injective continuous map from $\mathbb{R}^n$ to itself is open. *the statement itself sounds elementary enough, but the proof (all known proofs, in fact) seems to require some amount of machinery from algebraic topology. Note that one corollary of the theorem is that Euclidean spaces of different dimensions are not homeomorphic, another statement which appears elementary (and intuitive), but is difficult to prove without additional machinery. It seems that the category of continuous maps contains enough curious things—Peano and other space-filling curves and the like—to quite thoroughly upset our intuitions, although going to the rather more restrictive category of smooth things does restore much of our intuition. Proof of Theorem: It suffices to show that any such map sends open sets to open sets, or, more narrowly, that for any injective continuous map $f: B^n \to \mathbb{R}^n$, $f(0)$ lies in the interior of $f(B^n)$ (where $B^n$ denotes the closed unit ball.)
Let f be as in the hypothesis of the Theorem. $f: B^n \to f(B^n)$ is a continuous bijection between compact Hausdorff spaces and hence a homeomorphism. $f^{-1}: f(B^n) \to B^n$ is continuous; by the Tietze extension theorem, $f^{-1}$ may be extended to a continuous map $G: \mathbb{R}^n \to \mathbb{R}^n$. G has a zero at $f(0)$, and moreover we have the following Lemma: If $\tilde{G}: f(B^n) \to \mathbb{R}^n$ is a continuous map with $\|G - \tilde{G}\|_\infty \leq 1$ (i.e. is a small perturbation of G), then $\tilde{G}$ has a zero in $f(B^n)$. Proof of Lemma: Applying Brouwer's fixed-point theorem to the function $x \mapsto x - \tilde{G}(f(x)) = (G - \tilde{G})(f(x))$ (from the closed unit ball to itself) yields an $x \in B^n$ s.t. $x = x - \tilde{G}(f(x))$, i.e. $\tilde{G}(f(x)) = 0$. If $f(0)$ were not an interior point of $f(B^n)$, we could construct a small perturbation of G that no longer has a zero on $f(B^n)$, contradicting the above lemma. At this point I refer the interested reader to Terry Tao's blogpost (from which the gist of this proof was also taken; he was interested in Hilbert's fifth problem, whose solution also makes use of invariance of domain) for the details of the construction. Corollary Any proper injective continuous map $f: \mathbb{R}^n \to \mathbb{R}^n$ is a homeomorphism. Proof of Corollary: By invariance of domain, f is open. Now it suffices to show that f is surjective (and hence bijective) for it to be a homeomorphism; but surjectivity follows from f being a proper map into a metrizable space, and hence closed—given $E \subset X$ closed and a sequence of points $f(x_n) \in f(E)$ accumulating to $y \in Y$ and $\epsilon > 0$, $f(x_n) \in B(y,\epsilon)$ (the closed ball) for all $n \geq N = N(\epsilon) \gg 0$. By properness, we have, for all $n \geq N$, $x_n \in K \subset E$ with K compact. Then, up to subsequence, $x_n \to x \in K$. By continuity, $f(x_n) \to f(x) = y$, so $y = f(x) \in f(E)$. Since $f(\mathbb{R}^n)$ is thus nonempty, open, and closed in the connected space $\mathbb{R}^n$, it is all of $\mathbb{R}^n$; that is, f is indeed surjective. # Free groups and topology Any subgroup of a free group is free. This result is quite interesting, because the statement is purely algebraic yet the simplest proof is topological. Namely, any free group G may be realized as the fundamental group of a graph X. The main theorem on covering spaces tells us that every subgroup H of G is the fundamental group of some covering space Y of X; but every such Y is again a graph. Therefore its fundamental group H is free.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 75, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8805723190307617, "perplexity": 694.2794364377402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320049.84/warc/CC-MAIN-20170623100455-20170623120455-00403.warc.gz"}
http://mathhelpforum.com/advanced-algebra/217214-linear-algebra.html
2. Re: Linear Algebra Originally Posted by raed What have you been able to do so far? -Dan 3. Re: Linear Algebra What do you mean by subspaces being "independent"? As sets of vectors, subspaces are never independent. 4. Re: Linear Algebra Saying that the subspaces R and N are independent means: if we take r in R and n in N with r + n = 0, then r = n = 0.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9713211059570312, "perplexity": 3145.626582093901}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106367.1/warc/CC-MAIN-20170820092918-20170820112918-00022.warc.gz"}
https://ccrma.stanford.edu/realsimple/vir_tube/Wave_Impedance_Reflection_Radius.html
### Wave Impedance, Reflection from a Radius Mismatch In the previous section, we identified the right- and left-traveling components of two key quantities describing wave propagation in an acoustic tube: the pressure in the tube and the volume velocity in the tube. It turns out we can relate the right- and left-traveling components using relatively simple formulas. Using a combination of calculus, Newton's laws of motion, and the law of conservation of matter, it can be shown that the right-traveling pressure and volume velocity components are proportional (Eq. (7)), the proportionality constant being the wave impedance of the tube; the wave impedance is in turn given by a formula (Eq. (8)) involving the ambient fluid density in the tube, the velocity of wave propagation (see Equation (5)), and the cross-sectional area of the tube. Thus, the wave impedance relates pressure to volume velocity everywhere along a plane wave traveling to the right along the axis of an acoustic tube of given cross-sectional area. Similarly, an analogous relation (Eq. (9)) may be shown to hold for the left-traveling wave components. It is next interesting to consider what happens to a traveling pressure waveform in an acoustic tube when it encounters a radius mismatch. In other words, what happens when the waveform is traveling through an initial tube with one radius, and all of a sudden is transferred into a tube with a second, disparate radius? It turns out that part of the waveform will be reflected back into the first tube, and the strength of the reflection (Eq. (10)) is determined by the wave impedances of the first and second tube sections. Using the previous formulas, it may be further shown that for cylindrical tubes the reflectance can also be expressed (Eq. (11)) directly in terms of the radii of the first and second tubes.
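The formulas referenced above were rendered as images on the original page and did not survive text extraction. The sketch below spells out the standard relations the surrounding text describes (the exact form is an assumption on our part, not a reconstruction of the page's notation): a wave impedance equal to the fluid density times the wave speed divided by the cross-sectional area, and a pressure reflectance determined by the impedance mismatch.

```python
import math

def wave_impedance(rho, c, radius):
    """Wave impedance of a cylindrical acoustic tube: rho * c / A, with A = pi * r^2."""
    area = math.pi * radius ** 2
    return rho * c / area

def reflectance(rho, c, radius1, radius2):
    """Pressure reflection coefficient at a radius mismatch, assuming the standard
    impedance-mismatch form (R2 - R1) / (R2 + R1)."""
    R1 = wave_impedance(rho, c, radius1)
    R2 = wave_impedance(rho, c, radius2)
    return (R2 - R1) / (R2 + R1)

# Because the impedance scales as 1/r^2, the same coefficient can be written purely
# in terms of the two radii: (r1^2 - r2^2) / (r1^2 + r2^2).
rho, c = 1.2, 343.0   # ambient air density (kg/m^3) and sound speed (m/s)
r1, r2 = 0.01, 0.02   # radii of the first and second tube sections (m)
print(reflectance(rho, c, r1, r2))            # -0.6
print((r1**2 - r2**2) / (r1**2 + r2**2))      # -0.6, same value
```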
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9415698647499084, "perplexity": 604.0087943981704}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246644526.54/warc/CC-MAIN-20150417045724-00306-ip-10-235-10-82.ec2.internal.warc.gz"}
https://infoscience.epfl.ch/record/206884
Infoscience Thesis # Hydroacoustic Modeling of a Cavitation Vortex Rope for a Francis Turbine Hydraulic machines subject to off-design operation involve the presence of cavitating flow regimes in the draft tube. The cavitation vortex rope at part load conditions is described as an excitation source for the hydraulic system, and interactions between this excitation source and system eigenfrequency may result in resonance phenomena and induce a draft tube surge and electrical power swings. To accurately predict and simulate a part load resonance, proper modeling of the draft tube is critical. The presence of this cavitation vortex rope requires a numerical pipe element taking into account the complexity of the two-phase flow. Among the parameters describing the numerical model of the cavitating draft tube flow, three hydroacoustic parameters require a special attention. The first hydroacoustic parameter is called cavitation compliance. This dynamic parameter represents the variation of the cavitation volume with respect to a variation of pressure and implicitly defines the local wave speed in the draft tube. The second parameter corresponds to the bulk viscosity and is related to internal processes breaking a thermodynamic equilibrium between the cavitation volume and the surrounding liquid. The third parameter is the excitation source induced by the precessing vortex rope. The methodology to identify these hydroacoustic parameters is based on the direct link that exists between the natural frequency of the hydraulic system and the wave speed in the draft tube. First, the natural frequency is identified with the help of an external excitation system. Then, the wave speed is determined thanks to an accurate numerical model of the experimental hydraulic system. By applying this identification procedure for different values of Thoma number, it is possible to quantify the cavitation compliance and the void fraction of the cavitation vortex rope. In order to determine the energy dissipation induced by the cavitation volume, the experimental hydraulic system is excited at the natural frequency. With a Pressure-Time method, the amount of excitation energy is quantified and is injected into the numerical model. A spectral analysis of the forced harmonic response is used to identify the bulk viscosity and the pressure source induced by vortex rope precession. Thus, the identification of the hydroacoustic parameters requires the development of a new numerical draft tube model taking into account the divergent geometry and the convective terms of the momentum equation. Different numerical draft tube models are compared to determine the impact of convective and divergent geometry terms on identification of the hydroacoustic parameters. Furthermore, to predict the hydroacoustic parameters for non-studied operating conditions and to break free from the dependence upon the level setting of the Francis turbine, dimensionless numbers are proposed. They have the advantage of being independent from the selected numerical model and they define a behavior law of hydroacoustic parameters when the cavitation volume oscillates at resonance operating conditions. Finally, to investigate the stability operation of the prototype, the hydroacoustic parameters need to be transposed to the prototype conditions according to transposition laws. By assuming both Thoma similitude and Froude similitude conditions, transposition laws are developed and the hydroacoustic parameters are predicted for the prototype. 
Thesis, École polytechnique fédérale de Lausanne (EPFL), no. 6547 (2015). Doctoral program in Energy, Faculté des sciences et techniques de l'ingénieur, Institut de génie mécanique, Laboratoire de machines hydrauliques. Jury: Prof. J.R. Thome (chair); Prof. F. Avellan (thesis director); Dr A. Bergant, Dr J. Koutnik, Prof. A. Schleiss (examiners). Public defense: 2015-04-17.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8164165616035461, "perplexity": 1620.1754973734069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891543.65/warc/CC-MAIN-20180122213051-20180122233051-00773.warc.gz"}
http://www.rockphysicists.org/publications
### Recent Publications #### The Elastic Properties of Clay in Shales posted Aug 5, 2018, 10:50 AM by Colin Sayers [ updated Aug 5, 2018, 11:00 AM ] Authors: Colin Sayers and Lennert den Boer Abstract: The mechanical properties of clay minerals are important in many diverse scientific disciplines, including soil mechanics, civil engineering, materials science, and petroleum exploration. Rock physics provides a link between the elastic properties of rocks and their constitutive properties such as mineralogic composition, porosity, and pore‐fluid content. To accurately characterize shales, rock physics models must account for the anisotropic properties of clay minerals. Due to more compliant regions between clay particles, the elastic stiffness of clay in shales is significantly less than that of its constituent clay minerals. In this paper, the clay in shales is modeled as anisotropic clay platelets surrounded by a softer interparticle region consisting of clay‐bound water and interparticle contacts. Inverting for the elastic properties of this interparticle region indicates that its effective bulk modulus is like that of water. However, it has a nonzero effective shear modulus that is smaller by an order of magnitude, consistent with the expected shear modulus of clay‐bound water. Owing to its simplicity and robustness, it is anticipated that this model of shales, based on the properties of clay minerals and the interparticle medium, will find use in many rock physics applications, including seismic imaging, seismic inversion, and geomechanics. #### Laboratory micro-seismic signature of shear faulting and fault slip in shale posted Nov 29, 2017, 11:39 PM by IARP Admin [ updated Nov 29, 2017, 11:39 PM ] Authors: Joel Sarout, Yves Le Gonidec, Audrey Ougier-Simonin, Alexandre Schubnel, Yves Guéguen, David N. Dewhurst Abstract: This article reports the results of a triaxial deformation experiment conducted on a transversely isotropic shale specimen. This specimen was instrumented with ultrasonic transducers to monitor the evolution of the micro-seismic activity induced by shear faulting (triaxial failure) and subsequent fault slip at two different rates. The strain data demonstrate the anisotropy of the mechanical (quasi-static) compliance of the shale; the P-wave velocity data demonstrate the anisotropy of the elastic (dynamic) compliance of the shale. The spatio-temporal evolution of the micro-seismic activity suggests the development of two distinct but overlapping shear faults, a feature similar to relay ramps observed in large-scale structural geology. The shear faulting of the shale specimen appears quasi-aseismic, at least in the 0.5 MHz range of sensitivity of the ultrasonic transducers used in the experiment. Concomitantly, the rate of micro-seismic activity is strongly correlated with the imposed slip rate and the evolution of the axial stress. The moment tensor inversion of the focal mechanism of the high quality micro-seismic events recorded suggests a transition from a non-shear dominated to a shear dominated micro-seismic activity when the rock evolves from initial failure to larger and faster slip along the fault.
The frictional behaviour of the shear faults highlights the possible interactions between small asperities and slow slip of a velocity-strengthening fault, which could be considered as a realistic experimental analogue of natural observations of non-volcanic tremors and (very) low-frequency earthquakes triggered by slow slip events. #### Stress-dependent permeability and wave dispersion in tight cracked rocks: Experimental validation of simple effective medium models posted Nov 29, 2017, 11:37 PM by IARP Admin [ updated Nov 29, 2017, 11:38 PM ] Authors: Joel Sarout, Emilie Cazes, Claudio Delle Piane, Alessio Arena, Lionel Esteban Abstract: We experimentally assess the impact of microstructure, pore fluid, and frequency on wave velocity, wave dispersion, and permeability in thermally cracked Carrara marble under effective pressure up to 50 MPa. The cracked rock is isotropic, and we observe that (1) P and S wave velocities at 500 kHz and the low-strain (<10^-5) mechanical moduli at 0.01 Hz are pressure-dependent, (2) permeability decreases asymptotically toward a small value with increasing pressure, (3) wave dispersion between 0.01 Hz and 500 kHz in the water-saturated rock reaches a maximum of ~26% for S waves and ~9% for P waves at 1 MPa, and (4) wave dispersion virtually vanishes above ~30 MPa. Assuming no interactions between the cracks, effective medium theory is used to model the rock’s elastic response and its permeability. P and S wave velocity data are jointly inverted to recover the crack density and effective aspect ratio. The permeability data are inverted to recover the cracks’ effective radius. These parameters lead to a good agreement between predicted and measured wave velocities, dispersion and permeability up to 50 MPa, and up to a crack density of ~0.5. The evolution of the crack parameters suggests that three deformation regimes exist: (1) contact between cracks’ surface asperities up to ~10 MPa, (2) progressive crack closure between ~10 and 30 MPa, and (3) crack closure effectively complete above ~30 MPa. The derived crack parameters differ significantly from those obtained by analysis of 2-D electron microscope images of thin sections or 3-D X-ray microtomographic images of millimeter-size specimens. #### The elastic anisotropy of clay minerals posted Aug 15, 2016, 8:36 AM by IARP Admin Colin M. Sayers (Schlumberger, Houston, Texas, USA) and Lennert D. den Boer (Calgary, Alberta, Canada). The layered structure of clay minerals produces large elastic anisotropy due to the presence of strong covalent bonds within layers and weaker electrostatic bonds in between. Technical difficulties associated with small grain size preclude experimental measurement of single-crystal elastic moduli. However, theoretical calculations of the complete elastic tensors of several clay minerals have been reported, using either first-principle calculations based on density functional theory or molecular dynamics. Because of the layered microstructure, the elastic stiffness tensor obtained from such calculations can be approximated to good accuracy as a transversely isotropic (TI) medium. The TI-equivalent elastic moduli of clay minerals indicate that Thomsen’s anisotropy parameters ϵ and γ are large and positive, whereas δ is small or negative.
A least-squares inversion fitting an equivalent TI medium consisting of two isotropic layers to the elastic properties of clay minerals indicates that the shear modulus of the stiffest layer is considerably larger than that of the softest layer, consistent with the expected high compliance of the interlayer region in clay minerals. It is anticipated that the elastic anisotropy parameters derived from the best-fitting TI approximation to the elastic stiffness tensor of clay minerals will find applications in rock physics for seismic imaging, amplitude variation with offset analysis, and geomechanics. http://library.seg.org/doi/abs/10.1190/geo2016-0005.1 #### Measurements of elastic and electrical properties of an unconventional organic shale under differential loading posted Jul 27, 2015, 12:43 AM by IARP Admin
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8324725031852722, "perplexity": 3891.8175283821774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525659.27/warc/CC-MAIN-20190718145614-20190718171614-00286.warc.gz"}
https://www.physicsforums.com/threads/difference-in-planet-mass.92246/
# Difference in Planet Mass 1. Oct 4, 2005 ### Serena_Greene I think I have done this problem correctly. The mass of a robot is 6680 kg. This robot weighs 3070 N more on planet A than it does on planet B. Both planets have the same radius of 4.00 x 10^8 m. What is the difference Ma - Mb in the masses of these planets? I used this equation: F = G * (m1*m2)/(r^2), with F = 3070 N, m1 = robot, m2 = extra mass of planet A, r = 4.00 x 10^8 m. 3070 = 9.673 x 10^-11 * 6680 m2 / (4.00 x 10^8)^2 3070 = (2.786 x 10^-24) m2 1.1019 x 10^27 = m2 Is this correct or do I have it totally wrong? -Serena 2. Oct 4, 2005 ### Päällikkö I got a different answer; you seem to have typed it wrong into your calculator, the equation's the same as mine. EDIT: Never mind. I had used the gravitational constant you typed, which was wrong :). I got the same answer now. Last edited: Oct 4, 2005 3. Oct 4, 2005 ### El Hombre Invisible I tried it, though not using your technique, and your answer seems correct. Same answer from two different methods is usually a good sign. 5. Oct 4, 2005 ### Serena_Greene Thanks!! I just wanted to make sure before I typed it into GradePlus. It was correct (I had already used up a try as I couldn't get the entire number in, but found out I could use exponents). I asked my physics teacher if I did the problem correctly; he started to do the problem, and he couldn't do it. -Serena
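For reference, a quick numeric check of the accepted result (our own sketch, using the standard value of G rather than the mistyped constant discussed in the thread):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
m_robot = 6680.0   # kg
dF = 3070.0        # difference in the robot's weight between the planets, N
r = 4.00e8         # common planet radius, m

# F = G * m_robot * (Ma - Mb) / r^2  =>  Ma - Mb = F * r^2 / (G * m_robot)
dM = dF * r**2 / (G * m_robot)
print(f"Ma - Mb = {dM:.4e} kg")   # ~1.10e27 kg, matching the answer above
```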
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8853367567062378, "perplexity": 1685.2635926496994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948568283.66/warc/CC-MAIN-20171215095015-20171215115015-00513.warc.gz"}
https://www.physicsforums.com/threads/how-to-deal-with-non-uniform-cross-section-bending.308507/
# How to deal with non uniform cross section bending? 1. Apr 19, 2009 ### Vr6Fidelity How to deal with non-uniform cross section bending? How does one approach a solution for bending of a shaft having multiple diameters? I have one particular shaft in mind that has 11 diameters to get to the midpoint. I know the deflection at the midpoint, and I would like to calculate the force required to cause that deflection. Since the load is a point load, the moment diagram is a nice simple triangle. I have calculated I for all sections; I just need to know how to approach the solution. Some of the 11 sections are tapered. Any words from the wise? All the equations in my bag of tricks are for uniform cross sections. 2. Apr 20, 2009 ### ericgrau I'm too lazy to derive it at the moment. But if you learn the generic beam equation you can derive it for any cross section, even one that is continuously changing: http://en.wikipedia.org/wiki/Beam_equation It'll help you conceptually at the very least, even if you don't need to do the calculus to solve it. 3. Apr 20, 2009 ### Vr6Fidelity Ok, I worked on this till 5 am last night. Here is what I came up with; let me know if I have gone terribly wrong somewhere. The shaft has variable sections but only one point load in the center (imbalance force). Therefore the moment diagram is a rather simple triangle. Now here is where it gets interesting: if I break every section of shaft down into individual little shafts, I can apply the fraction of the total moment present at that section. Basically I non-dimensionalized the moment from zero to one. So then I use this moment to find the bending deflection in terms of theta in radians, where theta = (ML)/(2EI). That is the deflection of one end. Total angular deflection is 2 theta per section. Since there is no axial load, the length of the sections remains constant, so the deflection, if you were to keep one end from moving, would be: 2 theta * (pi/180) to go to degrees, then deflection = sin(degrees)*L where L is the section length (hypotenuse). So if you are following me, I now have the deflection for each section, and it seems valid. I then sum all the deflections to get the total deflection. I can now run this program in Excel and guess values of moment to achieve the known deflection. It seems to work extremely well. I can then take that moment I just figured out, and find the bending stress anywhere. Seem valid to you? 4. Apr 29, 2009 ### ericgrau Seems like it should be right, though I haven't double checked it thoroughly. Just remember that the beam equation is only valid for small deflections. For bending stress I assume you're using S = Mc/I. 5. Apr 29, 2009 ### Vr6Fidelity Yeah, it is right. I already submitted the paper and presented it.
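Following ericgrau's pointer to the generic beam equation, here is a rough numerical sketch (not the poster's spreadsheet method) that integrates y''(x) = M(x)/(E I(x)) for a stepped, simply supported shaft with a midspan point load; the section data are made up purely for illustration.

```python
import numpy as np

def midspan_deflection(P, L, E, sections, n=2001):
    """Deflection at midspan of a simply supported shaft of length L carrying a
    point load P at x = L/2, with piecewise-constant second moment of area.
    sections: list of (x_end, I) pairs covering [0, L] in order."""
    x = np.linspace(0.0, L, n)
    M = np.where(x <= L / 2, P * x / 2, P * (L - x) / 2)   # triangular moment diagram
    I = np.empty_like(x)
    start = 0.0
    for x_end, I_val in sections:
        I[(x >= start) & (x <= x_end)] = I_val
        start = x_end
    curvature = M / (E * I)
    # Integrate twice (trapezoid rule), then fix the integration constants so that
    # y(0) = 0 and y(L) = 0.
    slope = np.concatenate(([0.0], np.cumsum((curvature[1:] + curvature[:-1]) / 2 * np.diff(x))))
    y = np.concatenate(([0.0], np.cumsum((slope[1:] + slope[:-1]) / 2 * np.diff(x))))
    y -= x / L * y[-1]
    return y[n // 2]

# Example: 1 m steel shaft (E = 200 GPa), three sections with different I (m^4),
# 1 kN point load at midspan.  All numbers are invented for the demonstration.
sections = [(0.3, 2.0e-8), (0.7, 5.0e-8), (1.0, 2.0e-8)]
print(midspan_deflection(P=1000.0, L=1.0, E=200e9, sections=sections))
```

Given a measured midspan deflection, the same routine can be bisected over P to back out the force, which is what the original question asks for.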
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.944359540939331, "perplexity": 821.0689336975456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701148758.73/warc/CC-MAIN-20160205193908-00019-ip-10-236-182-209.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/11311/how-to-include-a-document-into-another-document?answertab=active
How to include a document into another document? I have two documents A and B. Both of them are separate documents, but document A also has to include document B. Now if I use \include{B} I get the following error: ! LaTeX Error: Can be used only in preamble. See the LaTeX manual or LaTeX Companion for explanation. Type H <return> for immediate help. ... l.1 \documentclass [11pt]{article} ? So how can I force the inclusion of document B with the "style" and "template" of document A? - Similar questions: tex.stackexchange.com/q/11221/2975 and tex.stackexchange.com/q/8198/2975. – Martin Scharrer♦ Feb 17 '11 at 16:11 Cut the content (the part between \begin{document}...\end{document}) of B.tex into a new file B-content.tex. Change B.tex to be: \documentclass{...} \begin{document} \include{B-content} \end{document} Then put \include{B-content} into A.tex. - So easy but useful. Thanks! – RoflcoptrException Feb 17 '11 at 15:55 You can use the standalone, docmute or subfiles package to make LaTeX ignore the second preamble. Simply load the standalone package in the main file and \input or \include the document. This is a good way if the to-be-included document just holds a picture which should also be compiled standalone. In this case, having main files for every picture file would be annoying. % A.tex \documentclass{article} \usepackage{standalone} \begin{document} % ... \input{B-content}% or \include % ... \end{document} % B.tex (for normal text) \documentclass{article} \begin{document} \end{document} or, if B should hold some diagram only (note the different class): % B.tex \documentclass{standalone} \begin{document} \end{document} - standalone looks like a very nice idea. I'll have to keep it in mind. – Matthew Leingang Feb 17 '11 at 18:56 @Matthew: Thanks, I wrote it because I have a lot of TikZ pictures and I hated the long re-compiling runs during the creation of the more complicated ones when they are inside a document. They are used multiple times across my papers, presentations and thesis anyway. Having extra main files for each was too cumbersome for me. Have a look at the options: you can collect all the "sub-preambles" automatically! – Martin Scharrer Feb 17 '11 at 19:02 As far as I know, \include simply inserts the text wherever it is used, so you can't have a preamble in document B. At a quick glance, I would create a wrapper document C and use \include{B} in both, after editing B so that it only contains your desired output text. - You can try using the combine class, but be warned this is not what LaTeX is designed for! An alternative that I haven't tried is to use the newclude package and write \includedoc{fileB.tex} This latter approach assumes that all of the packages, etc., you need are loaded by the first file. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9382168054580688, "perplexity": 2925.0883186883148}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705884968/warc/CC-MAIN-20130516120444-00012-ip-10-60-113-184.ec2.internal.warc.gz"}
http://crypto.stackexchange.com/questions/12328/help-with-linear-cryptanalysis
# Help with linear cryptanalysis I am new to linear cryptanalysis, so I decided to try to break a toy cipher that was designed to be vulnerable to linear cryptanalysis. Unfortunately, I can't get it to work no matter how hard I try. I've read the Wikipedia article and several papers, but they always seem vague on how to turn equations that hold over the sbox into ones that hold with high probability over the entire cipher, and I am stuck on that. What am I doing wrong? First off, the cipher (Python 2):

    # RustleJimmy 2 Block Cipher
    sbox = [ ((2 * i + 1) * 0x4d / 2) & 0xFF for i in range(256) ]
    sinv = sorted(range(256), key=lambda i: sbox[i])

    def T(block):
        # bit transpose of 8 bytes
        x = sum(block[i] << (8 * i) for i in xrange(8))
        t = (x ^ (x >> 7)) & 0x00AA00AA00AA00AAL
        x = x ^ t ^ (t << 7)
        t = (x ^ (x >> 14)) & 0x0000CCCC0000CCCCL
        x = x ^ t ^ (t << 14)
        t = (x ^ (x >> 28)) & 0x00000000F0F0F0F0L
        x = x ^ t ^ (t << 28)
        return [ (x >> (8 * i)) & 0xFF for i in xrange(8) ]

    def R(byte, n):
        return (byte >> n) | ((byte & ((1 << n) - 1)) << (8 - n))

    def encode(block, key):
        block = [ord(b) for b in block]
        key = [ord(b) for b in key]
        for i in xrange(8):
            block = [ block[j] ^ key[(i + j) & 0x7] for j in xrange(8) ]
            block = [ sbox[block[j]] for j in xrange(8) ]
            block = [ R(block[j], j) for j in xrange(8) ]
            block = T(block)
            block = [ block[j] ^ block[i] if i != j else block[j] for j in xrange(8) ]
        block = [ block[j] ^ key[j] for j in xrange(8) ]
        return ''.join(chr(b) for b in block)

It's a block cipher with a 64bit key that operates on 64 bits. There is no key scheduling; the entire key is used for each round. The sbox is very simple, in fact the three least significant bits are just a linear function of the input. Unfortunately, the linear portion of each round mixes and rotates all the bits so it is not obvious how to take advantage of this. Here is what I've tried so far. The sbox is given by $y = 77x + 38 \mod 256$ where x is the input and y is the output. Scaling and rearranging this gives $5y + 2 = 129x + 192$, allowing the equality to be expressed using only xors and 5 nonlinear carry bits. I believe 5 is the minimum possible since the fourth bit is nonlinear and it has to propagate the rest of the way. Each carry bit can be written using the majority function on three inputs. $$c_{xyz} = majority(x,y,z) = x \wedge y \oplus x \wedge z \oplus y \wedge z$$ This can also be written as the sum of a linear approximation and an error term. $$c_{xyz} = x \oplus y \oplus z \oplus 1 \oplus e_{xyz}$$ Where $e_{xyz}$ is 1 with probability $\frac{1}{4}$. Given these equations, plugging them into the full 8 round cipher and simplifying gives 64 linear equations relating ciphertext to plaintext and key bits. However, since these equations have error terms, they are not guaranteed to hold. Assuming the error terms are independent (they aren't but for simplicity I had to assume that), then the probability of an equation with $n$ error terms holding is $\frac{1}{2} + \frac{1}{2^n}$. Therefore, we need to find equations with very few error terms. Unfortunately, the equations produced above had 80-140 error terms each. Using the greedy algorithm to find linear equations with fewer errors resulted in a reduced set with 70-123 terms. Unfortunately, this means that the probability advantage is still only $2^{-70}$, meaning it is much slower than brute force. So at this point I am stuck. What am I doing wrong? With such a weak sbox and few rounds, it doesn't seem like it should be this hard to break the cipher.
- For this cipher, I suggest finding all possible linear approximations by simply enumerating them. If $S$ is the S-box, the bias of the linear approximation $\alpha \cdot x = \beta \cdot S(x)$ is given by $$b(\alpha,\beta) = |2 \Pr[\alpha \cdot x = \beta \cdot S(x)] - 1|.$$ Notice that you can compute $b(\alpha,\beta)$ for a single value of $\alpha,\beta$ with $2^8$ evaluations of the S-box. Also, there are only $2^{16}$ values of $\alpha,\beta$, so it is easy to enumerate all of them and compute $b(\alpha,\beta)$ for all of them. This should help you find high-quality approximations for the S-box. Your next step will be to piece these together into one-round characteristics and then multi-round characteristics. You can do that by hand, but for your cipher, you might want to look at Matsui's algorithm, which is designed to find the highest-bias linear characteristic for multiple rounds, given the biases for approximations of the S-box. As a starting point, it may be helpful to be familiar with the notion of an active S-box. An S-box is considered active in a particular linear characteristic if it is being approximated by some linear approximation other than the trivial approximation $\alpha=\beta=0$. The linear diffusion layer places some constraints on the set of active S-boxes in round $r+1$ as a function of the set of active S-boxes in round $r$. Generally speaking, the more active S-boxes, the lower the bias. (Non-active S-boxes are free: the approximation holds with probability 1 for them, i.e., with bias 1.) Serge Vaudenay has suggested the following approach for finding high-bias linear approximations: summarize a one-round linear characteristic by a vector $a \in \{0,1\}^8$, where $a_i=1$ if the $i$th S-box is active in that round. Now we can look at pairs $(a,a')$ where $a$ is the summary for a linear characteristic in one round and $a'$ is the summary for a linear characteristic in the next round. Not all pairs $(a,a')$ will be feasible, but some will be. In this way, we can build a graph where the vertices are elements of $\{0,1\}^8$ and there is an edge $a \to a'$ whenever the pair $(a,a')$ is feasible. Now we can look for a path of length 8 in this graph that minimizes the total Hamming weight of the vertices visited along this path; this will correspond to looking for an 8-round linear characteristic with the smallest possible number of total active S-boxes. More precisely, it gives us an upper bound on the bias of any such 8-round linear characteristic; if the number of active S-boxes in the path is small, there might be a good 8-round linear characteristic with high bias; but if the number of active S-boxes is too large, there's no hope for a good 8-round linear characteristic. We can even push this idea a bit further. We can annotate each edge $a \to a'$ with the log of the maximum bias of any 2-round approximation where $a$ represents the S-boxes active in the first round of that characteristic and $a'$ the S-boxes active in the second round. Then, we can look for a path of length 8, where we try to minimize the sum of the labels on the edges in the path. This corresponds to an all-pairs shortest-paths problem in a graph with 256 vertices, and can be solved efficiently using the Floyd-Warshall algorithm. To test whether the edge $a \to a'$ is feasible, here is a technique that might be helpful.
Let $\alpha,\alpha' \in \{0,1\}^{64}$ denote the linear approximation for the first round that we are considering, i.e., $\alpha \cdot x = \alpha' \cdot R_k(x)$ where $R$ is the round function. Note that $a$ is a deterministic function of $\alpha$; i.e., $a_i=0$ if and only if $\alpha_{8i}=\alpha_{8i+1}=\dots=\alpha_{8i+7}=0$. Now we can break $R_k$ into three pieces: the key xor, the S-box application, and a linear function, i.e., $R_k(x)=L(S(x\oplus k))$ where $S(\cdot)$ denotes parallel application of 8 S-boxes. Now both the key xor and the S-box preserve the set of active S-boxes, so we really want to look at the set of feasible linear approximations $\alpha,\alpha'$ for $L$, i.e., $\alpha \cdot x = \alpha' \cdot L(x)$. Given $a,a'$ it is easy to determine whether there exists $\alpha,\alpha'$ that are consistent with $a,a'$ and such that the approximation $\alpha \cdot x = \alpha' \cdot L(x)$ has non-zero bias. You can solve this using Gaussian elimination: for each $i$ such that $a_i=0$, add linear constraints $\alpha_{8i}=\alpha_{8i+1}=\dots=\alpha_{8i+7}=0$, and similarly for $a',\alpha'$; now treat the remaining bits of $\alpha,\alpha'$ as unknowns. Now notice that $\alpha \cdot x = \alpha' \cdot L(x)$ has bias 1 if $\alpha = L(\alpha')$, and bias 0 otherwise. This gives additional linear constraints on the unknowns. Now use Gaussian elimination to check whether there exists a non-zero solution to this set of linear equations. Or, you can look for multi-round characteristics by hand. That works, too. I am assuming you have read some basic tutorials on linear cryptanalysis, e.g., You can also read some advanced literature that establishes the theoretical foundations underlying linear cryptanalysis, e.g., The following paper introduces Matsui's algorithm for finding linear characteristics for DES: - I've read several of those papers, as well as Matsui's original paper. But I haven't seen anything better than what I'm already doing. I'm beginning to suspect that the cipher I'm trying to break just has too much mixing and too many rounds to be vulnerable to linear crpytanalysis, despite the weak sbox. Comparing the design to DES, which linear crypto was invented to attack, it seems that DES doesn't have the mixing stage (T and the extra xor) that rj2 does. With DES, it's just a straight permutation, but here the intermediate bits are being xored together. –  Antimony Dec 17 '13 at 2:21 @Antimony - yup, that's certainly possible! The only thing I couldn't tell was: what is the best characteristic you've gotten? How many rounds, and with what bias (or what probability)? It's possible that if you asked a new question giving that specific characteristic and asking if anyone can do better, maybe someone would be inspired to try to find a better one and see if they can beat what you got. Anyway, great question -- sorry I wasn't able to give a more specific answer focused on this particular cipher. –  D.W. Dec 17 '13 at 6:11 Using a more accurate probability calculation, my best equation has an estimated bias of around $2^{-74.6}$, which is obviously worse than brute force. –  Antimony Dec 17 '13 at 7:25
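A quick way to act on the enumeration suggested in this answer is to tabulate the S-box biases directly. The sketch below is my own illustration, not something posted in the thread: it reuses the S-box formula from the question, is written in Python 3 (the cipher listing above is Python 2), and simply brute-forces $b(\alpha,\beta)$ over all $2^{16}$ mask pairs (it takes a little while, but it is only a prototype).

```python
# Sketch (not from the thread): exhaustively tabulate the bias
# b(alpha, beta) = |2*Pr[alpha.x = beta.S(x)] - 1| for the S-box in the question.

sbox = [((2 * i + 1) * 0x4D // 2) & 0xFF for i in range(256)]
PAR = [bin(v).count("1") & 1 for v in range(256)]      # parity lookup table

def bias(alpha, beta):
    agree = sum(PAR[alpha & x] == PAR[beta & sbox[x]] for x in range(256))
    return abs(2.0 * agree / 256.0 - 1.0)

# Enumerate all mask pairs; beta = 0 only gives the trivial approximation.
table = [(bias(a, b), a, b) for a in range(256) for b in range(1, 256)]
table.sort(reverse=True)
print(table[:8])    # the strongest linear approximations of the S-box
```

Sorting the table makes the bias-1 approximations on the low output bits (the linear part of the S-box that the question points out) easy to spot, and those are the natural starting point for piecing together one-round and multi-round characteristics.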
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8267594575881958, "perplexity": 500.6509763161673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131304625.62/warc/CC-MAIN-20150323172144-00059-ip-10-168-14-71.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/83967-geometrical-application-differentiation.html
# Math Help - Geometrical Application of Differentiation 1. ## Geometrical Application of Differentiation Hey guys A window frame has the shape of a rectangle surmounted by a semi-circle. The perimeter of the frame is constant. Show that, for max. area, the height of the rectangle is equal to the radius of the semi-circle. Could someone please show me how to do this question? Thanx! 2. Ok, I took the time to work through this problem and got the solution. I don't want to just blurt out the steps and answers for ya, but I can hopefully point your way to understanding how to solve these kinds of things. First, you need to work out some equations for the problem. You have two equations you can put together from that info. You know an equation for the perimeter, and for the area. So first, find those. Just to get you on the path after that, this is a maximization problem, so you want to take the derivative of your two equations. For example, finding dA / dr (derivative of A (area) with respect to r (the radius). But take care with h (the height of the rectangle) that should be in the equations! If you've done implicit differentiation, then you should know how to handle this. Can you figure it out after this, as well? If you get stuck again, let me know and I'll coach you through to the end.
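If you want to check your final answer after working through the hints above, here is one way the algebra can go. This is just a sketch under the usual convention that the rectangle has width $2r$ and height $h$, with the semicircle of radius $r$ on top, and the perimeter taken as the base, the two sides and the semicircular arc:

$$P = 2h + 2r + \pi r \;(\text{constant}), \qquad A = 2rh + \tfrac{1}{2}\pi r^2.$$

Solving the perimeter equation for $h$ and substituting gives $A(r) = Pr - 2r^2 - \tfrac{1}{2}\pi r^2$, so $\frac{dA}{dr} = P - (4+\pi)r = 0$ gives $r = \frac{P}{4+\pi}$, and then $h = \tfrac{1}{2}\left(P - 2r - \pi r\right) = \tfrac{1}{2}\left((4+\pi)r - 2r - \pi r\right) = r$. Since $\frac{d^2A}{dr^2} = -(4+\pi) < 0$, this stationary point is a maximum, so at maximum area the height of the rectangle equals the radius of the semicircle.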
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9201920628547668, "perplexity": 247.21592684351828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646849.21/warc/CC-MAIN-20141024030046-00138-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.minet.uni-jena.de/Math-Net/reports/shadows/99-47report.html
On more general Lipschitz spaces Preprint series: 99-47, Analysis The paper is published: Z. Anal. Anwendungen, 19(3), 781-799, 2000. MSC: 26A16 Lipschitz (Holder) classes 46E35 Sobolev spaces and other spaces of smooth'' functions, embedding theorems, trace theorems 26A15 Continuity and related questions (modulus of continuity, semicontinuity, discontinuities, etc.), {For properties determined by Fourier coefficients, See 42A16; for those determined by approximation properties, See 41A25, 41A27} Abstract: The present paper deals with (logarithmic) Lipschitz spaces of type $Lip^{(1,-\alpha)}_{p,q}$, $1\leq p\leq\infty$, $0<q\leq\infty$, $\alpha>1/q$. We study their properties and derive some (sharp) embedding results. In that sense this paper can be regarded as some continuation and extension of some of our earlier papers, but there are also connections with some recent work of Triebel concerning Hardy inequalities and sharp embeddings. Recall that the nowadays almost classical' forerunner of investigations of this type is the Br\'ezis-Wainger result about the almost' Lipschitz continuity of elements of the Sobolev spaces $H^{1+n/p}_p(R^n)$ when $1<p<\infty$. Keywords: limiting embeddings, Lipschitz spaces, function
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9591305255889893, "perplexity": 2455.0822503202567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648113.87/warc/CC-MAIN-20180323004957-20180323024957-00793.warc.gz"}
http://math.stackexchange.com/questions/321546/why-is-mathbbr2-not-a-subset-and-or-a-subspace-of-mathbbr3
# Why is $\mathbb{R}^2$ not a subset and /or a subspace of $\mathbb{R}^3$? One thing this suggests--at least to me--is that the x-y plane and $\mathbb{R}^2$ are not necessarily equivalent. For example, I could define the following: $X = \left\{ \begin{bmatrix} x\\y\\z\end{bmatrix} x,y \in \mathbb{R} \land z = 0\right\}$. Am I wrong to think, one, that this is a subset of $\mathbb{R}^3$? As I write this it occurs to me that while scalar multiplication is closed under the above rules, addition doesn't pass the smell test for a subspace... so, OK, it's certainly not a subspace. I would welcome any insight readers of this query can provide. - What is true is that $\mathbb{R}^2$ is isomorphic to a subspace of $\mathbb{R}^3$. For instance via $(x,y)\longmapsto (x,y,0)$. – julien Mar 5 at 16:04 what kind of equivalence are you looking for ? it is certainly equivalent as a vector space (and also as a topological vector space) – magguu Mar 5 at 16:05 What you wrote is in fact a subspace of $\mathbb{R}^3$, namely the $x$-$y$ plane, and it is isomorphic to $\mathbb{R}^2$ (just not equal, as Asaf points out.) – Trevor Wilson Mar 5 at 16:06 Appreciate the clarifications -- implication for me is that we can arbitrarily constrain $z = 0$, even under addition. I guess that's obvious, but I wasn't sure: thanks, again. – user10756 Mar 5 at 16:14 The elements of $\Bbb R^2$ are vectors of two coordinates; and the elements of $\Bbb R^3$ are vectors of three coordinates. (One can easily think of those vectors as $2$-tuples and $3$-tuples, for example.) Assuming mathematics is consistent, $2\neq 3$. Therefore no element of $\Bbb R^2$ is an element of $\Bbb R^3$. It follows that $\Bbb R^2$ is not a subset of $\Bbb R^3$. And in order to be a subspace, one first has to be a subset. So it's not a subspace either. What you have defined as $X$ is isomorphic to $\Bbb R^2$, but just as well you could decide that $y$ is $0$, and the identification would still be natural. $X$ is a subset of $\Bbb R^3$ and indeed a subspace, but it is not $\Bbb R^2$ as a set, it is just isomorphic to it in a very obvious way. While isomorphism is an equivalence relation, and we often think of it almost as identity, it is still not set equality which is a stricter notion. - Thanks very much -- I'm beginning to see the light. – user10756 Mar 5 at 16:11 Uhh, that's a train. You're standing on the tracks. :-) – Asaf Karagila Mar 5 at 16:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.909977912902832, "perplexity": 247.21307067885039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703108201/warc/CC-MAIN-20130516111828-00020-ip-10-60-113-184.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/289236/why-resample-by-centering-vs-the-sample-estimate-not-null-parameter-value?noredirect=1
Why resample by centering vs. the sample estimate, not null parameter value? In a classic paper on conducting bootstrapped hypothesis tests, Hall & Wilson (1991) present the following guideline, which I am trying to understand: On first read, this seemed obvious: $$\widehat{\theta}^{*}$$ are being resampled under $$H_0$$, so are consistent for $$\theta_0$$, whereas $$\widehat{\theta}$$ is consistent for $$\theta$$. Thus, $$\widehat{\theta}^{*} - \widehat{\theta}$$ is consistent for $$\theta_0 - \theta$$, which makes sense. In contrast, $$\widehat{\theta}^{*} - \theta_0$$ is consistent for $$\theta_0 - \theta_0 = 0$$, which is not useful. But then I realized, on rereading the paragraph above the First Guideline and a subsequent example, that Hall & Wilson are computing $$\widehat{\theta}^{*}$$ by resampling directly from the sample, rather than under the null. In that case, I have no idea why they recommend resampling $$\widehat{\theta}^{*} - \widehat{\theta}$$, which is consistent for 0! References Hall, P., & Wilson, S. R. (1991). Two guidelines for bootstrap hypothesis testing. Biometrics, 757-762. The idea is that irrespective of the choice of $\theta_0$ the bootstrap distribution of $\widehat\theta^*$ - $\widehat\theta$ resembles the sampling distribution of $\widehat\theta$ - $\theta$. If you instead use $\theta_0$ you will be pulling the estimate closer to $\theta_0$ and thus lose power under the alternative. I think this is essentially what Hall and Wilson are saying but possibly a little simpler and hopefully clearer for you. • Thanks -- I think I understand. Is it correct to say that this algorithm allows us to conduct a hypothesis test under $H_0$ while avoiding the difficulties of resampling under $H_0$ (because we're just resampling from the original)? Jul 7 '17 at 2:42
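To see the guideline in action, the following is a small simulation sketch of my own (not from Hall & Wilson or the original thread). It takes the mean as the statistic $\widehat{\theta}$, draws a skewed sample, and builds the bootstrap reference distribution from $\widehat{\theta}^{*} - \widehat{\theta}$ while the observed discrepancy is still measured against $\theta_0$; the data, sample size, number of resamples and variable names are all assumptions made purely for illustration.

```python
# Sketch: bootstrap test of H0: theta = theta0 following the Hall & Wilson guideline --
# resample from the observed data, but center the bootstrap statistic at theta_hat
# (the sample estimate), not at theta0.
import numpy as np

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=40)   # a skewed (non-normal) sample, for illustration
theta0 = 2.0                                   # hypothesised value of the mean
theta_hat = x.mean()

B = 5000
boot = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=x.size, replace=True)
    boot[b] = xb.mean() - theta_hat            # theta*_b - theta_hat, NOT theta*_b - theta0

observed = theta_hat - theta0                  # the discrepancy actually being tested
p_value = np.mean(np.abs(boot) >= np.abs(observed))
print(f"theta_hat = {theta_hat:.3f}, two-sided bootstrap p-value ~ {p_value:.3f}")
```

Replacing `theta_hat` by `theta0` inside the loop is exactly what the First Guideline warns against: it drags the reference distribution toward the null and costs power when the alternative is true.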
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9774264097213745, "perplexity": 551.3730291147195}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304309.5/warc/CC-MAIN-20220123172206-20220123202206-00023.warc.gz"}
http://mathhelpforum.com/math-puzzles/87756-problem-counterfeit-coin.html
# Math Help - Problem: The counterfeit coin 1. ## Problem: The counterfeit coin There are 9 coins. 1 coin is counterfeit. You have a set of brass weighing scales (the ones that look like this: http://chantelt.files.wordpress.com/...over_white.jpg) You know that the counterfeit coin weighs considerably less than the others. If you placed the counterfeit coin one one end of the scale and a real coin on the other, you would notice the difference. The coins all look exactly the same. You cannot tell which coin is the lightest by simply holding them in your hands. The only way to tell is to use the scales. Explain how, in exactly TWO uses of the scales, the counterfeit coin can be found. 2. Split them in triples. Easy from there 3. Originally Posted by blueirony There are 9 coins. 1 coin is counterfeit. You have a set of brass weighing scales (the ones that look like this: http://chantelt.files.wordpress.com/...over_white.jpg) You know that the counterfeit coin weighs considerably less than the others. If you placed the counterfeit coin one one end of the scale and a real coin on the other, you would notice the difference. The coins all look exactly the same. You cannot tell which coin is the lightest by simply holding them in your hands. The only way to tell is to use the scales. Explain how, in exactly TWO uses of the scales, the counterfeit coin can be found. Originally Posted by Rebesques Split them in triples. Easy from there Right, Rebesques. Put two groups of three on the scale. If it's balanced, then the counterfeit coin is in the third group. If not balanced, then the counterfeit is on the side of the scale that tipped up. In either case, this leads you to one set of three coins, one of which is the bogus one. Pick any two and place them on each side of the scale. If it is balanced, then you are holding the bad coin. If it is not balanced, then it's pretty easy to tell where that bad boy is. 4. Both of you are correct
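For anyone who likes to sanity-check a weighing strategy by brute force, here is a small sketch of my own (the function names and the weights are made up for the example); it runs the split-into-triples strategy described above against every possible position of the light coin.

```python
# Sketch: confirm the two-weighing strategy finds the light coin for all 9 positions.

def weigh(coins, left, right):
    # -1: left pan lighter, +1: right pan lighter, 0: balanced
    l = sum(coins[i] for i in left)
    r = sum(coins[i] for i in right)
    return 0 if l == r else (-1 if l < r else 1)

def find_light(coins):
    groups = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
    # Weighing 1: triple vs triple; the lighter triple (or the third, if balanced) holds the fake.
    first = weigh(coins, groups[0], groups[1])
    suspect = groups[2] if first == 0 else (groups[0] if first == -1 else groups[1])
    # Weighing 2: coin vs coin inside the suspect triple.
    a, b, c = suspect
    second = weigh(coins, (a,), (b,))
    return c if second == 0 else (a if second == -1 else b)

for fake in range(9):
    coins = [10] * 9          # genuine coins weigh 10 units
    coins[fake] = 9           # the counterfeit is lighter
    assert find_light(coins) == fake
print("two weighings suffice for every position of the light coin")
```

The same idea generalizes: with n weighings of this kind you can isolate a single light coin among 3^n coins.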
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8625187277793884, "perplexity": 741.6922167527432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701165697.9/warc/CC-MAIN-20160205193925-00318-ip-10-236-182-209.ec2.internal.warc.gz"}
https://phys.libretexts.org/Courses/University_of_California_Davis/UCD%3A_Physics_9C__Electricity_and_Magnetism/1%3A_Electrostatic_Fields/1.7%3A_Using_Gauss's_Law
# 1.7: Using Gauss's Law

## Symmetry Avoids Integrals

The great irony of Gauss's law is that the surface integral looks incredibly daunting, but this law is only really useful because no integration actually needs to be performed. As we will see, we will be able to use this law to compute electric fields of distributions of charge in cases where some degree of symmetry is present. The basic approach is this: Construct an imaginary closed surface (called a gaussian surface) around some collection of charge, then apply Gauss's law for that surface to determine the electric field at that surface. This is a rather vague description, and glosses over a lot of important details, which we will learn through several examples. There are two ingredients to the symmetry that need to be present to make using Gauss's law so powerful:

1. A gaussian surface must exist where the electric field is either parallel or perpendicular to the surface vector. This makes the cosines in all the dot products equal to simply zero or one.
2. The electric field that passes through the parts of the gaussian surface where the flux is non-zero has a constant magnitude.

These two conditions allow us to avoid an integral entirely, because the $$\cos\theta$$ in the integral goes away, and the electric field magnitude can be taken out of the integral, leaving only an integral of $$dA$$, which is just the area of the surface. Then applying Gauss's law is simple.

## Field of an Infinite Plane of Charge

This is a problem we have already solved (Equation 1.3.22). We did it by computing the field of a disk of charge on the axis, then taking the limit as the radius of the disk goes to infinity. That was a lot of math! Let's see how we can do it with Gauss's law. It's clear that an infinite plane of positive charge must create a field that points away from, and perpendicular to, the plane in both directions. Let's choose as our gaussian surface a cylinder whose axis is perpendicular to the plane of charge, with a cross-sectional area $$A$$.

Figure 1.7.1 – Gaussian Surface for a Plane of Charge

One of the hardest ideas to grasp for students learning for the first time about how to use Gauss's law seems to be the idea that the gaussian surface is something we construct ourselves as a problem-solving tool. There is no actual surface present, nor is there a specific unique surface that must be used. 
To make the solution as simple as possible, the surface should have the two properties given above, and the trick to these problems is conceiving of a surface that does this. We note that the electric field only passes through the ends of the cylinder, which means that there is no flux through the sides. Also, the field that passes through the ends is parallel to the area vector, so $$\cos\theta=1$$ everywhere on that surface. The electric field strength is the same value everywhere on the surface, so it can be pulled out of the integral, which then gives simply the area of the end of the cylinder. The flux is out (positive) at both ends and equal, so they provide equal contributions to the total flux. The total flux out of the cylinder then is simply: $\Phi_E = \cancelto{0}{\Phi_E\left(sides\right)} + \Phi_E\left(left\;end\right) + \Phi_E\left(right\;end\right) \;\;\; \Rightarrow \;\;\;\Phi_E =2EA$ Now we apply Gauss's law. The amount of charge enclosed in this cylinder is the surface density of the charge multiplied by the area cut out of the plane by the cylinder (like a cookie-cutter), which is clearly equal to $$A$$, the area of the ends of the cylinder. Applying Gauss's law gives: $\Phi_E = \dfrac{Q_{encl}}{\epsilon_o} \;\;\; \Rightarrow \;\;\; 2EA = \dfrac{\sigma A}{\epsilon_o} \;\;\; \Rightarrow \;\;\; E = \dfrac{\sigma}{2\epsilon_o}$ This is exactly the answer we got before! Notice that the final answer comes out to be independent of the length of the cylinder, which means that the field is uniform, and it comes out to be independent of the area of the cylinder as well. One might well ask, "What if the cylinder didn't have straight sides? That is, what if it bulged in the middle, causing it to enclose more charge? Won't this give a different answer?" The answer is that if the sides of the cylinder aren't straight, then the electric field will pierce the gaussian surface through the sides as well as the ends. The flux through the ends would be the same as before, and the additional flux through the sides would account for the additional enclosed charge. ## Field Outside an Infinite Charged Conducting Plane We have already solved this problem as well (Equation 1.5.6). Solving it with Gauss's law is almost identical to the case above, with one exception: We don't know what the field looks like on both sides of the conductor – we only know that one side of it is charged. But that's fine, because this time we choose our gaussian cylinder so that one end surface is outside the conductor, and the other is inside the metal. Figure 1.7.2 – Gaussian Surface for a Conducting Plane of Charge The computation follows exactly as before, with one exception: We know that conductors have zero electric field inside the metal (we are assuming electrostatics here), so there is no electric field flux through that end of the cylinder. The enclosed charge is the same as before, so we get: $\Phi_E = \dfrac{Q_{encl}}{\epsilon_o} \;\;\; \Rightarrow \;\;\; EA = \dfrac{\sigma A}{\epsilon_o} \;\;\; \Rightarrow \;\;\; E = \dfrac{\sigma}{\epsilon_o}$ Once again, the same answer that we got previously. But there is additional value in this solution that we didn't have before. In our previous approach to this, we made some specific assumptions about the shape of the conducting slab. With Gauss's law, we can even work with a curved surface, for the following reason: When a surface is curved, that curvature is only noticeable when a sufficient amount of that surface is taken into account (e.g. 
the Earth's surface appears to be flat until you get far enough away from it). In this gauss's law approach, we can make the cross-sectional area of the cylinder as arbitrarily small as we like, and the answer doesn't change. As soon as we make the cross-sectional area "small enough" that the curved conducting surface is effectively flat (i.e. the electric field is constant over the entire end surface of the cylinder), then the answer obtained applies. This means that this answer applies at every conducting surface, if the density is evaluated at a specific position on the surface. In other words, if the charge density on the surface of a conductor at position $$x$$ is $$\sigma\left(x\right)$$, then the electric field magnitude at that same position in space is: $E\left(x\right)=\dfrac{\sigma\left(x\right)}{\epsilon_o}$ An as we already found, the field is perpendicular to the conducting surface at that point). ## Field of an Infinite Line of Charge Yet another problem we have already solved! As we will see, this one is different from the previous two, in that the field will end up depending upon the dimensions of our gaussian surface. This gives us a field that is not uniform (which it isn't!). Once again, the trick is to define a gaussian surface where the field lines pass through parts of it at right angles, and other parts not at all. The obvious choice is therefore a cylinder. Figure 1.7.3 – Gaussian Surface for an Infinite Line of Charge We know from symmetry arguments we have already made in the past that the field points radially outward from the line, which means that the field lines don't pass through the ends of the cylinder, contributing nothing to the total flux. Though the curved surface of the cylinder, the electric field is perpendicular everywhere, and since the cylinder is centered at the line of charge, the field strength is the same everywhere. The total flux is therefore the electric field strength at the cylinder wall multiplied by its area: $\Phi_E = \cancelto{0}{\Phi_E\left(top\right)} + \cancelto{0}{\Phi_E\left(bottom\right)} + \Phi_E\left(sides\right) \;\;\; \Rightarrow \;\;\;\Phi_E =EA=2\pi rlE$ The enclosed charge is the charge contained between the two ends of the cylinder, which is the linear charge density multiplied by the length of the segment, which is the length of the cylinder. Applying Gauss's law therefore gives: $\Phi_E = \dfrac{Q_{encl}}{\epsilon_o} \;\;\; \Rightarrow \;\;\; 2\pi rlE = \dfrac{\lambda \;l}{\epsilon_o} \;\;\; \Rightarrow \;\;\; E = \dfrac{\lambda}{2\pi\epsilon_o\;r}$ Again this is in agreement with the answer previously obtained (Equation 1.3.21). ## Fields Within Charge Distributions The reader should not get the impression that electric fields only exist outside of charge distributions, though so far every example has been of this variety. Indeed Gauss's law is very useful for finding fields within charge distributions, and the process is really no different from what is outlined above. Consider the case of a sphere of charge with a uniform density $$\rho$$ and a radius $$R$$. We can use Gauss's law to compute the electric field at points within the region of the charge distribution ($$r<R$$), as well as outside the sphere ($$r>R$$). 
The latter calculation is as simple as those above – the field has spherical symmetry (is radially outward), so we choose a spherical gaussian surface (through which the field will pass orthogonally, and on which the field strength is constant), giving: $EA_{sphere}=\dfrac{Q_{encl}}{\epsilon_o} \;\;\; \Rightarrow \;\;\; E\left(r\right) = \dfrac{Q_{encl}}{4\pi\epsilon_or^2}$ Yes, the field looks exactly like that of a point charge! This will be true for the empty space outside of all spherically symmetric charge distributions, even if the charge density varies with respect to $$r$$. As we are not given the value of $$Q$$, we are not yet finished with this problem. The density is constant, so the total charge is just the density multiplied by the volume of the charge. Note that this is not the volume of our gaussian surface, which resides outside the sphere, so: $E\left(r\right) = \dfrac{\rho V}{4\pi\epsilon_or^2} = \dfrac{\rho \frac{4}{3}\pi R^3}{4\pi\epsilon_or^2} = \dfrac{\rho R^3}{3\epsilon_or^2}$ Okay, so what about within the charge distribution? The solution is performed in precisely the same way, except that now the spherical gaussian surface has a radius $$r$$ that is less than $$R$$. So how does this change the answer? Well, there is less charge enclosed than in the previous case. Specifically, this time the entire gaussian surface is filled with charge. Plugging in this new, smaller volume gives: $E\left(r\right) = \dfrac{\rho V}{4\pi\epsilon_or^2} = \dfrac{\rho \frac{4}{3}\pi r^3}{4\pi\epsilon_or^2} = \dfrac{\rho}{3\epsilon_o}r$ Rather than getting weaker with an inverse-square dependence as it gets farther from the center, this field actually gets stronger linearly. This happens until $$r$$ reaches the outer surface of the sphere of charge, then after that it follows the point-charge-like inverse-square weakening behavior. Whenever one solves a problem that includes multiple regions like this one (one region being inside the charge, and the other outside the charge), it is a good idea to check to make sure that the field is continuous at the boundary. Indeed, in this case, if we plug $$r=R$$ into both the interior and exterior solutions, we get the same result. We will see that this is also sometimes used as a condition that we impose to help us solve the problem. Let's take a moment here to demonstrate how problems where we are looking for fields within charge distributions can also be solved using the local form of Gauss's law. Using this method to solve for fields in empty space is fraught with mathematical nuance that we will avoid, but for regions containing charge it is quite workable, and perhaps even preferable in some cases. Returning to the uniform sphere of charge, the spherical symmetry suggests that we write the divergence of the spherically-symmetric field in spherical coordinates. For vector fields that are only functions of $$r$$ we have: $\overrightarrow \nabla \cdot \overrightarrow E\left(r\right) = \dfrac{1}{r^2}\dfrac{d}{dr}\left[r^2 E\left(r\right)\right]$ We now apply Gauss's law and integrate. Note that this is an indefinite integral, which requires the introduction of an unknown constant of integration. To solve for this constant, we will need to know the boundary condition for the charge distribution. This is a universal feature of this method. 
$\dfrac{1}{r^2}\dfrac{d}{dr}\left[r^2 E\right] = \dfrac{\rho}{\epsilon_o} \;\;\; \Rightarrow \;\;\; r^2 E = \dfrac{\rho}{\epsilon_o} \int r^2dr = \dfrac{\rho r^3}{3\epsilon_o}+\beta \;\;\; \Rightarrow \;\;\; E\left(r\right) = \dfrac{\rho}{3\epsilon_o}r+\dfrac{\beta}{r^2}$ Using the solution for outside the charge that we found above, and plugging in $$r=R$$ (the boundary), we find that our constant of integration comes out to be zero in this case. Note that we end up with the same field that we found using the integral version. One thing to note about these two methods is that when the density is not constant, an integral has to be performed either way. Either the charge density appears in the integration of the divergence, or it appears in an integral to compute the charge enclosed within the volume enclosing (note that in this particular case of constant density we only had to multiply the density by the volume, but we will not always be so lucky). Example $$\PageIndex{1}$$ A very long insulating cylinder is hollow with an inner radius of $$a$$ and an outer radius of $$b$$. Within the insulating material the volume charge density is given by: $$\rho(R) = \alpha/R$$, where $$\alpha$$ is a positive constant and $$R$$ is the distance from the axis of the cylinder. Choose appropriate gaussian surfaces and use Gauss’s law to find the electric field (magnitude and direction) everywhere. Solution There are three distinct regions: ($$0<r<a$$), ($$a<r<b$$), and ($$b<r<\infty$$). For all of these regions, the radial symmetry of the charge distribution ensures that wherever there is electric field, it must point radially outward or inward, and its magnitude must be the same at every point at any fixed radius. A gaussian surface in the shape of a cylinder of radius $$r$$ centered within the empty center region would therefore result in a flux of EA, where A is the area of the curved part of this surface, since the electric field is parallel to the area vector everywhere and constant in magnitude. We take each region in turn: *** $$0<r<a$$ *** The enclosed charge is zero, and since the area isn’t zero, the electric field must be zero for every $$r$$ in that empty region. *** $$a<r<b$$ *** The gaussian surface has a radius $$r$$ and a length $$l$$. The total electric flux is therefore: $\Phi_E=EA=2\pi rlE \nonumber$ To apply Gauss's law, we need the total charge enclosed by the surface. We have the density function, so we need to integrate it over the volume within the gaussian surface to get the charge enclosed. We use a volume in cylindrical coordinates ($$dV=RdR\;d\theta\;dz$$), and the limits of integration are: $$R:a\rightarrow r$$, $$\theta:0\rightarrow 2\pi$$, $$z:0\rightarrow l$$: $Q_{encl} = \int \rho dV = \int\limits_0^l dz \int\limits_0^{2\pi} d\theta \int\limits_a^r \dfrac{\alpha}{R} RdR = 2\pi \alpha l \left(r-a\right) \nonumber$ Applying Gauss's law gives: $E\left(r\right) = \dfrac{Q_{encl}}{\epsilon_o A} = \dfrac{2\pi \alpha l \left(r-a\right)}{\epsilon_o 2\pi r l} = \boxed{\dfrac{\alpha}{\epsilon_o}\left(1-\dfrac{a}{r}\right)} \nonumber$ *** $$b<r<\infty$$ *** With the gaussian surface now outside the entire charge distribution, the enclosed charge distribution is all of the charge. We can re-use the work above by simply changing the upper limit in the integral for enclosed charge from $$r$$ to $$b$$. 
This gives: $Q_{encl} = 2\pi \alpha l \left(b-a\right) \;\;\; \Rightarrow \;\;\; E\left(r\right) = \boxed{\dfrac{\alpha}{\epsilon_o}\left(\dfrac{b-a}{r}\right)}\nonumber$ Example $$\PageIndex{2}$$ Repeat the previous example for the outer two regions using the local form of Gauss's law. You can assume that you have already determined that $$E=0$$ in the hollow cavity, and use this as a boundary condition. Solution In cylindrical coordinates, the divergence of a vector field that is only a function of the distance from the $$z$$-axis is given by: $\overrightarrow \nabla \cdot \overrightarrow E\left(r\right) = \dfrac{1}{r}\dfrac{d}{dr}\left[rE\left(r\right)\right] \nonumber$ Now for each of the two regions we apply Gauss's law: *** $$a<r<b$$ *** We are given the function of the charge density in this region, so plugging that into the divergence formula gives: $\overrightarrow \nabla \cdot \overrightarrow E\left(r\right) = \dfrac{\rho}{\epsilon_o} \;\;\; \Rightarrow \;\;\; \dfrac{1}{r}\dfrac{d}{dr}\left[rE\left(r\right)\right] = \dfrac{\alpha}{\epsilon_o r} \;\;\; \Rightarrow \;\;\; \dfrac{d}{dr}\left[rE\left(r\right)\right] = \dfrac{\alpha}{\epsilon_o}\nonumber$ Now perform the indefinite integral (don't forget the constant of integration! – I will call it $$\beta$$): $rE\left(r\right) = \int \dfrac{\alpha}{\epsilon_o}dr = \dfrac{\alpha}{\epsilon_o}r + \beta \;\;\; \Rightarrow \;\;\; E\left(r\right) = \dfrac{\alpha}{\epsilon_o}+\dfrac{\beta}{r} \nonumber$ The boundary condition at $$r=a$$ requires that the electric field is continuous there, which means that it must equal zero there. This allows us to solve for the constant of integration: $E\left(a\right) = 0 = \dfrac{\alpha}{\epsilon_o}+\dfrac{\beta}{a} \;\;\; \Rightarrow \;\;\; \beta = -\dfrac{\alpha a}{\epsilon_o} \nonumber$ Plugging this back in gives us the electric field in the region of the insulator, which agrees with the answer from the previous example: $E\left(r\right) = \dfrac{\alpha}{\epsilon_o}\left(1-\dfrac{a}{r}\right) \nonumber$ *** $$b<r<\infty$$ *** There is no charge in this region, so the charge density is zero. Plugging this into the divergence formula gives: $\overrightarrow \nabla \cdot \overrightarrow E\left(r\right) = 0 \;\;\; \Rightarrow \;\;\; \dfrac{d}{dr}\left[rE\left(r\right)\right] = 0 \;\;\; \Rightarrow \;\;\; rE\left(r\right)=constant =\beta \;\;\; \Rightarrow \;\;\; E\left(r\right) = \dfrac{\beta}{r} \nonumber$ Once again we need to apply a boundary condition to determine $$\beta$$. In this case, we match the solution outside the cylinder to that inside the insulator region at $$r=b$$: $E\left(b\right)=\dfrac{\alpha}{\epsilon_o}\left(1-\dfrac{a}{b}\right) = \dfrac{\beta}{b} \;\;\; \Rightarrow \;\;\; \beta = \dfrac{\alpha}{\epsilon_o}\left(b-a\right) \;\;\; \Rightarrow \;\;\; E\left(r\right) = \dfrac{\alpha}{\epsilon_o}\left(\dfrac{b-a}{r}\right)\nonumber$ This again agrees with the answer obtained above. ## Hollow Conducting Shells Another common type of problem one can solve with Gauss's law involves no symmetry. A hollow conducting shell (of any shape) has an interesting property: Since the electric field within the metal of such a shell must vanish, then constructing a gaussian surface within the metal, we can see that there must be zero net flux through that surface. This means that the charge enclosed by that shell must be zero. But what if an electric charge is placed in the hollow space? How can Gauss's law then be satisfied? The only way is for charge in the conductor to migrate. 
If there is a positive charge in the hollow space, then an equal amount negative charge moves to the inside surface of the conducing shell, bringing the total charge within the gaussian surface to zero. If the shell was originally neutrally-charged, then the positive charge abandoned by the migrating negative charge resides on the outer surface of the shell. Example $$\PageIndex{3}$$ A closed, hollow conductor contains a smaller, closed hollow conductor and a point charge of $$+1Q$$ (see the diagram). Free charge trapped within the interior space of the smaller conductor is unknown, but the smaller conductor itself carries a net charge of $$-3Q$$, and the larger conductor carries a net charge of $$+5Q$$. If a charge of $$-2Q$$ is found on the outside surface of the larger conductor, find how much charge resides on the inside surface of the smaller conductor. Assume all the charges on the conductors are at rest at equilibrium. Solution The total charge on the outer conductor must reside on its surfaces, so if $$-2Q$$ is on the outer surface, then there must be $$+7Q$$ on its inner surface. Now construct a gaussian surface within the metal of the outer conductor. The zero electric field within the conductor (the charges are static) results in zero flux out of this gaussian surface, which means that there must be no net charge enclosed. The enclosed charge comes in many pieces, and is the sum of the charge on the inner surface of the outside conductor ($$+7Q$$), the free charge outside the smaller conductor ($$+1Q$$), the total charge on the smaller conductor ($$-3Q$$), and the unknown free charge within the smaller conductor. For all of these to add up to zero, the unknown charge must be $$-5Q$$. There is also no flux through the inner conductor, so the charge enclosed within gaussian surface constructed within its metal must also be zero. Now that we know the previously-unknown charge is $$-5Q$$, there must be a charge of $$+5Q$$ on the inner surface of the smaller conductor. This page titled 1.7: Using Gauss's Law is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Tom Weideman directly on the LibreTexts platform.
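As a quick numerical sanity check on the hollow insulating cylinder worked above (Example 1), the sketch below (my addition, not part of the original page) integrates the enclosed charge directly and compares $$Q_{encl}/(2\pi\epsilon_o r l)$$ with the closed-form fields; the parameter values, units and helper names are arbitrary choices for the test.

```python
# Sketch: numerically verify the piecewise field for rho(R) = alpha/R between R = a and R = b.
import math
from scipy.integrate import quad

eps0, alpha, a, b, length = 1.0, 1.0, 1.0, 3.0, 1.0      # arbitrary test values

def q_enclosed(r):
    # charge inside a coaxial Gaussian cylinder of radius r and length `length`
    if r <= a:
        return 0.0
    upper = min(r, b)
    q, _ = quad(lambda R: (alpha / R) * 2 * math.pi * R * length, a, upper)
    return q

def E_gauss(r):
    return q_enclosed(r) / (eps0 * 2 * math.pi * r * length)

def E_formula(r):
    if r < a:
        return 0.0
    if r < b:
        return (alpha / eps0) * (1 - a / r)
    return (alpha / eps0) * (b - a) / r

for r in (0.5, 1.5, 2.5, 3.5, 10.0):
    assert abs(E_gauss(r) - E_formula(r)) < 1e-6
print("Gauss's-law integration matches the closed-form field in all three regions")
```

The same check can be repeated for the uniform sphere by swapping in a spherical Gaussian surface and the volume element $$4\pi r^2\,dr$$.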
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9764723181724548, "perplexity": 168.60085156441332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706291.88/warc/CC-MAIN-20221126112341-20221126142341-00759.warc.gz"}
http://repository.unm.edu/handle/1928/282/browse?order=ASC&rpp=20&sort_by=1&etal=-1&offset=20&type=title
Now showing items 21-40 of 229 • #### [2012-04-13]Common, multiple and parametric Lyapunov functions for a class of hybrid dynamical systems  This paper considers Lyapunov stability of discontinuous dynamical systems. It is assumed that discontinuities in the system dynamics are caused by some internal (e.g. component failures), and/or external (e.g. controller ... • #### [2012-03-29]Communicating with microwave-propelled sails  We describe a communication channel for a microwave-propelled sail, a novel concept for a deep-space scientific probe. We suggest techniques to recover the great loss introduced by the large distances, and we have conducted ... • #### [2012-03-24]Complex Networked Control Systems  This special section focuses on the study of network architectures and their formation as well as on the description of dynamical processes that take place over networks. A common thread throughout the five articles is the ... • #### [2012-04-06]Computational complexity of determining resource loops in re-entrant flow lines  This paper presents a comparison study of the computational complexity of the general job shop protocol and the more structured flow line protocol in a flexible manufacturing system. It is shown that the representative ... • #### [2012-03-20]Conditions for tracking in networked control systems  In this paper we obtain information theoretical conditions for tracking in linear time-invariant control systems. We consider the particular case where the closed loop contains a channel in the feedback loop. The mutual ... • #### [2012-04-12]Constructive function approximation: theory and practice  In this paper we study the theoretical limits of finite constructive convex approximations of a given function in a Hilbert space using elements taken from a reduced subset. We also investigate the trade-off between the ... • #### [2012-04-12]CONTINUOUS AND DISCRETE TIME SPR DESIGN USING FEEDBACK  This paper presents necessary and sufficient conditions for the existence of a feedback compensator that will render a given continuous-time or discrete-time linear system SPR. When these conditions hold, the controller ... • #### [2012-01-26]Control in Computing Systems: Part I  This is the first part of a paper that provides an overview of some applications of control theory to computing systems. With the advent of cloud computing and more affordable computing infrastructures, computing engineers ... • #### [2012-01-26]A control theory approach on the design of a Marx generator network  A Marx generator is a well-known type of electrical circuit first described by Erwin Otto Marx in 1924. It has been utilized in numerous applications in pulsed power with resistive or capacitive loads. To-date the vast ... • #### [2012-04-26]Coordination of Multiple Agents in 2D using an Internet-Like Protocol  This work presents an Internet-Like Protocol (ILP) to coordinate the formation of n second-order agents in a two dimensional (2D) space. The trajectories are specified trough via points and a desired formation at each ... • #### [2012-03-24]Creating Online Graduate Engineering Degrees at the University of New Mexico  This paper describes the motivation, strategies, and implementation details that lead to the creation of online graduate-level degree programs in the Department of Electrical & Computer Engineering at the University of New ... 
• #### [2012-03-24]Data Rates Conditions for Network Control System Stabilization  In this paper we present sufficient conditions on the rate of a packet network to guarantee asymptotic stability of unstable discrete LTI system with linear state feedback control. Two types of network control systems are ... • #### [2012-04-07]Delay effects on static output feedback stabilization  This paper addresses conditions for characterizing static output feedback controllers including delays for some proper (finite-dimensional) transfer functions. The interest of such study is in controlling systems which can ... • #### [2012-04-19]Delayed positive feedback can stabilize oscillatory systems  This paper expands on a method proposed in [1] for stabilizing oscillatory system with positive, delayed feedback. The closed-loop system obtained is shown (using the Nyquist criterion) to be stable for a range of delays. • #### [2012-04-19]Design of strictly positive real, fixed-order dynamic compensators  The authors present sufficient conditions for the design of strictly positive real (SPR), fixed-order dynamic compensators. The primary motivation for designing SPR compensators is for application to positive real (PR) ... • #### [2006-03-10]Digital Signal Computers and Processors  University Libraries MSC05 3020 1 University of New Mexico Albuquerque NM 87131 505.277.9100
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8044093251228333, "perplexity": 1966.5987972739167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375094629.80/warc/CC-MAIN-20150627031814-00097-ip-10-179-60-89.ec2.internal.warc.gz"}
http://www.talkstats.com/threads/high-correlation-but-different-median-scores-how-can-this-be-possible.71008/
# High correlation but different median scores. How can this be possible? #### awkwardquark ##### New Member I am writing a research paper. We asked the same patients to fill out a form at two different times. Data is non-normal. I used Spearman's correlation, which gave me a moderate correlation between the two scores. I also used weighted kappa and this too gave me a moderate agreement between both scores. But when I used the Wilcoxon signed-rank test, it told me the medians were statistically different. How can I explain this result if all the other questionnaires had low agreement, low correlation and same median scores? #### Karabiner ##### TS Contributor But when I used the Wilcoxon signed-rank test, it told me the medians were statistically different. The Wilcoxon is not a test for medians, but anyway - it is important to note that there is no connection between the size of a correlation and degree of (dis)similarity of central tendencies. E.g. look at these 5 pairs: 1 11 2 12 3 13 4 14 5 15 There is a perfect correlation between the first and second column, but the means are largely apart. They march in step, but on different levels. With kind regards Karabiner
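Karabiner's five-pair example is easy to reproduce numerically. The short sketch below is my own addition (it leans on `scipy.stats`, which the thread never mentions): it shows a Spearman correlation of 1 alongside a constant ten-point shift in location, which is what a paired test such as the Wilcoxon reacts to.

```python
# Sketch: perfect rank correlation, yet systematically different locations.
from scipy.stats import spearmanr, wilcoxon

first  = [1, 2, 3, 4, 5]
second = [11, 12, 13, 14, 15]

rho, _ = spearmanr(first, second)
print("Spearman rho:", rho)            # 1.0 -- the pairs 'march in step'

stat, p = wilcoxon(first, second)      # paired test on the differences (all equal to -10)
print("Wilcoxon signed-rank p-value:", p)
```

With only five pairs the attainable p-values are limited, but the point stands: correlation measures co-movement, not agreement in level, so a strong correlation and a clear paired difference can coexist.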
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.859649658203125, "perplexity": 1287.252572703264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125949036.99/warc/CC-MAIN-20180427041028-20180427061028-00401.warc.gz"}
https://kintali.wordpress.com/2010/11/20/type-sensitive-depth-and-karchmer-wigderson-games/
# Type Sensitive Depth and Karchmer Wigderson Games

Throughout this post, we will be considering circuits over the basis $\{\vee,\wedge,\neg\}$ where $\{\vee,\wedge\}$-gates have fanin 2 and $\neg$-gates are only applied to input variables. Let $f : \{0,1\}^n \rightarrow \{0,1\}$ be a boolean function on $n$ variables and $G_n$ be a circuit computing $f$. For a gate $g$, let $g_l$ and $g_r$ be the sub-circuits whose outputs are inputs to $g$. Let $d(G_n)$ be the depth of circuit $G_n$ and $d(f)$ be the minimum depth of a circuit computing $f$.

Karchmer and Wigderson [KW'90] showed an equivalence between circuit depth and a related problem in communication complexity. It is a simple observation that we can designate the two players as an "and-player" and an "or-player". Let $S_0, S_1 \subseteq \{0,1\}^n$ such that $S_0 \cap S_1 = \emptyset$. Consider the communication game between two players ($P_{\wedge}$ and $P_{\vee}$), where $P_{\wedge}$ gets $x \in S_1$ and $P_{\vee}$ gets $y \in S_0$. The goal of the players is to find a coordinate $i$ such that $x_i \neq y_i$. Let $C(S_1,S_0)$ represent the minimum number of bits they have to communicate in order for both to agree on such a coordinate.

Karchmer-Wigderson Theorem : For every function $f : \{0,1\}^n \rightarrow \{0,1\}$ we have $d(f) = C(f^{-1}(1),f^{-1}(0))$.

Karchmer and Wigderson used the above theorem to prove that 'monotone circuits for connectivity require super-logarithmic depth'. Let $C_{\wedge}(S_1,S_0)$ (resp. $C_{\vee}(S_1,S_0)$) represent the minimum number of bits that $P_{\wedge}$ (resp. $P_{\vee}$) has to communicate. We can define type-sensitive depths of a circuit as follows. Let $d_{\wedge}(G_n)$ (resp. $d_{\vee}(G_n)$) represent the AND-depth (resp. OR-depth) of $G_n$.

AND-depth : AND-depth of an input gate is defined to be zero. AND-depth of an AND gate $g$ is max($d_{\wedge}(g_l), d_{\wedge}(g_r)$) + 1. AND-depth of an OR gate $g$ is max($d_{\wedge}(g_l), d_{\wedge}(g_r)$). AND-depth of a circuit $G_n$ is the AND-depth of its output gate. OR-depth is defined analogously.

Let $d_{\wedge}(f)$ (resp. $d_{\vee}(f)$) be the minimum AND-depth (resp. OR-depth) of a circuit computing $f$.

Observation : For every function $f : \{0,1\}^n \rightarrow \{0,1\}$ we have that $C_{\wedge}(f^{-1}(1),f^{-1}(0))$ corresponds to the AND-depth and $C_{\vee}(f^{-1}(1),f^{-1}(0))$ corresponds to the OR-depth of the circuit constructed by Karchmer-Wigderson.

Open Problems :

• Can we prove explicit non-trivial lower bounds on $d_{\wedge}(f)$ (or $d_{\vee}(f)$) of a given function $f$ ? This sort of "asymmetric" communication complexity is partially addressed in [MNSW'98].

• A suitable notion of uniformity in communication games is to be defined to address such lower bounds. More on this in future posts.

References :

• [KW'90] Mauricio Karchmer and Avi Wigderson : Monotone circuits for connectivity require super-logarithmic depth. SIAM Journal on Discrete Mathematics, 3(2):255–265, 1990.

• [MNSW'98] Peter Bro Miltersen, Noam Nisan, Shmuel Safra, Avi Wigderson: On Data Structures and Asymmetric Communication Complexity. J. Comput. Syst. Sci. 57(1): 37-49 (1998)
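To make the AND-depth and OR-depth recursions above concrete, here is a small sketch (my own illustration; the nested-tuple circuit encoding is just an assumption made for the example) that computes depth, AND-depth and OR-depth of a formula over the basis $\{\vee,\wedge,\neg\}$ with negations on inputs only.

```python
# Sketch: compute (depth, AND-depth, OR-depth) of a fanin-2 formula.
# Gates are nested tuples: ("and", l, r), ("or", l, r), ("var", i), ("not", i).

def depths(gate):
    kind = gate[0]
    if kind in ("var", "not"):            # (possibly negated) inputs have depth 0
        return (0, 0, 0)
    _, left, right = gate
    d_l, and_l, or_l = depths(left)
    d_r, and_r, or_r = depths(right)
    d = max(d_l, d_r) + 1
    if kind == "and":
        return (d, max(and_l, and_r) + 1, max(or_l, or_r))
    return (d, max(and_l, and_r), max(or_l, or_r) + 1)   # "or" gate

# Example: (x1 AND x2) OR ((NOT x3) AND x4)
circuit = ("or",
           ("and", ("var", 1), ("var", 2)),
           ("and", ("not", 3), ("var", 4)))
print(depths(circuit))    # (2, 1, 1)
```

In the communication-game picture, the AND-depth tracks the bits sent by $P_{\wedge}$ and the OR-depth the bits sent by $P_{\vee}$, which mirrors the observation recorded above.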
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 49, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9680196642875671, "perplexity": 669.1043219965657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608648.25/warc/CC-MAIN-20170526071051-20170526091051-00188.warc.gz"}
https://madhavamathcompetition.com/category/nature-of-mathematics/page/2/
# Miscellaneous questions: part II: solutions to tutorial practice for preRMO and RMO

Refer to the blog questions from a few days before:

Question 1: Let $a_{1}, a_{2}, \ldots, a_{10}$ be ten real numbers such that each is greater than 1 and less than 55. Prove that there are three among the given numbers which form the lengths of the sides of a triangle.

Without loss of generality, we may take $1 < a_{1} \leq a_{2} \leq \ldots \leq a_{10} < 55$ — call this relation (i). Let, if possible, no three of the given numbers be the lengths of the sides of a triangle. (That is, no three satisfy the triangle inequality. Note that when we say three numbers a, b and c satisfy the triangle inequality, it means all the following three inequalities have to hold simultaneously: $a+b>c$, $a+c>b$ and $b+c>a$.)

We will consider the triplets $a_{i}, a_{i+1}, a_{i+2}$ for $1 \leq i \leq 8$. As these numbers do not form the lengths of the sides of a triangle, the sum of the smallest two numbers should not exceed the largest number, that is, $a_{i}+a_{i+1} \leq a_{i+2}$. Hence, we get the following set of inequalities:

$i=1$ gives $a_{1}+a_{2} \leq a_{3}$ giving $2 < a_{3}$.

$i=2$ gives $a_{2}+a_{3} \leq a_{4}$ giving $3 < a_{4}$.

$i=3$ gives $a_{3}+a_{4} \leq a_{5}$ giving $5 < a_{5}$.

$i=4$ gives $a_{4}+a_{5} \leq a_{6}$ giving $8 < a_{6}$.

$i=5$ gives $a_{5}+a_{6} \leq a_{7}$ giving $13 < a_{7}$.

$i=6$ gives $a_{6}+a_{7} \leq a_{8}$ giving $21 < a_{8}$.

$i=7$ gives $a_{7}+a_{8} \leq a_{9}$ giving $34 < a_{9}$.

$i=8$ gives $a_{8}+a_{9} \leq a_{10}$ giving $55 < a_{10}$, contradicting the basic hypothesis.

Hence, there exist three numbers among the given numbers which form the lengths of the sides of a triangle.

Question 2: In a collection of 1234 persons, any two persons are mutual friends or enemies. Each person has at most 3 enemies. Prove that it is possible to divide the collection into two parts such that each person has at most 1 enemy in his sub-collection.

Let C denote the collection of the given 1234 persons. Let $\{ C_{1}, C_{2}\}$ be a partition of C. Let $e(C_{1})$ denote the total number of enemy pairs in $C_{1}$, and let $e(C_{2})$ denote the total number of enemy pairs in $C_{2}$. Let $e(C_{1}, C_{2})= e(C_{1})+e(C_{2})$ denote the total number of enemy pairs corresponding to the partition $\{ C_{1}, C_{2}\}$ of C. Note that $e(C_{1}, C_{2})$ is an integer greater than or equal to zero. Hence, by the Well-Ordering Principle, there exists a partition having the least value of $e(C_{1}, C_{2})$.

Claim: This is "the" required partition.

Proof: If not, without loss of generality, suppose there is a person P in $C_{1}$ having at least 2 enemies in $C_{1}$. Construct a new partition $\{D_{1}, D_{2}\}$ of C as follows: $D_{1}=C_{1}-\{ P \}$ and $D_{2}=C_{2} \cup \{P\}$ (that is, move P to the other part; since P has at most 3 enemies in all and at least 2 of them are in $C_{1}$, at most 1 of them lies in $C_{2}$). Now, $e(D_{1}, D_{2})=e(D_{1})+e(D_{2}) \leq \{ e(C_{1})-2\} + \{ e(C_{2})+1\}=e(C_{1}, C_{2})-1$. Hence, $e(D_{1}, D_{2}) < e(C_{1}, C_{2})$, contradicting the minimality of $e(C_{1}, C_{2})$. QED.

Problem 3: A barrel contains 2n balls, numbered 1 to 2n. Choose three balls at random, one after the other, and with the balls replaced after each draw. What is the probability that the three-element sequence obtained has the properties that the smallest element is odd and that only the smallest element, if any, is repeated?

The total number of possible outcomes is $N=2n \times 2n \times 2n=8n^{3}$. To find the total number of favourable outcomes we proceed as follows: Let a be any odd integer such that $1 \leq a \leq 2n-1$ and let us count the sequences having a as the least element.

(i) There is only one sequence $(a,a,a)$ with a repeated thrice. 
(ii) There are $2n-a$ sequences of the form $(a,a,b)$ with $a < b \leq 2n$. For each such sequence there are three distinct permutations possible. Hence, there are in all $3(2n-a)$ sequences with a repeated twice. (iii) When $n>1$, for values of a satisfying $1 \leq a \leq (2n-3)$, sequences of the form $(a,b,c)$ with $a < b < c \leq 2n$ are possible and the number of such sequences is $r=1+2+3+\ldots+(2n-a-1)=\frac{1}{2}(2n-a)(2n-a-1)$. For each such sequence, there are six distinct permutations possible. Hence, there are $6r=3(2n-a)(2n-a-1)$ sequences in this case. Hence, for odd values of a between 1 and $2n-1$, the total counts of possibilities $S_{1}$, $S_{2}$, $S_{3}$ in the above cases are, respectively: $S_{1}=1+1+1+\ldots+1=n$, $S_{2}=3(1+3+5+\ldots+(2n-1))=3n^{2}$, $S_{3}=3(2 \times 3 + 4 \times 5 + \ldots+ (2n-2)(2n-1))=n(n-1)(4n+1)$. Hence, the total number A of favourable outcomes is $A=S_{1}+S_{2}+S_{3}=n+3n^{2}+n(n-1)(4n+1)=4n^{3}$. Hence, the required probability is $\frac{A}{N} = \frac{4n^{3}}{8n^{3}} = \frac{1}{2}$. QED. Cheers, Nalin Pithwa # Miscellaneous questions: part I : solutions: tutorial practice preRMO and RMO The following questions were presented in an earlier blog (the questions are reproduced here) along with solutions. Please compare your attempts/partial attempts with these solutions…that is the way to learn: Problem 1: The sixty four squares of a chess board are filled with positive integers one on each in such a way that each integer is the average of the integers in the neighbouring squares. (Two squares are neighbours if they share a common edge or vertex. Thus, a square can have 8,5 or 3 neighbours depending on its position.) Show that all the sixty four entries are in fact equal. Solution 1: Consider the smallest value among the 64 entries on the board. Since it is the average of the surrounding numbers, all those numbers must be equal to this number as it is the smallest. This gives some more squares with the smallest value. Continue in this way till all the squares are covered. Problem 2: Let T be the set of all triples $(a,b,c)$ of integers such that $1 \leq a < b < c \leq 6$. For each triple $(a,b,c)$ in T, take the product abc. Add all these products corresponding to all triples in T. Prove that the sum is divisible by 7. Solution 2: For every triple $(a,b,c)$ in T, the triple $(7-c,7-b,7-a)$ is in T and these two are distinct as $7 \neq 2b$. Pairing off $(a,b,c)$ with $(7-c,7-b,7-a)$ for each $(a,b,c) \in T$, 7 divides $abc+(7-a)(7-b)(7-c)$, since $(7-a)(7-b)(7-c) \equiv -abc \pmod{7}$. As the two products in each pair add up to a multiple of 7, the whole sum is divisible by 7. Problem 3: In a class of 25 students, there are 17 cyclists, 13 swimmers, and 8 weight lifters and no one is all the three. In a certain math examination 6 students got grades D or E. If the cyclists, swimmers and weight lifters all got grade B or C, determine the number of students who got grade A. Also, find the number of cyclists who are swimmers. Solution 3: Let S denote the set of all 25 students in the class, X the set of swimmers in S, Y the set of weight lifters in S, and Z the set of all cyclists. Since students in $X\bigcup Y \bigcup Z$ all get grades B and C, and six students get grades D or E, the number of students in $X\bigcup Y \bigcup Z \leq 25-6=19$. Now assign one point to each of the 17 cyclists, 13 swimmers and 8 weight lifters. Thus, a total of 38 points would be assigned among the students in $X \bigcup Y \bigcup Z$. Note that no student can have more than 2 points as no one is all three (swimmer, cyclist and weight lifter).
Then, we should have $X \bigcup Y \bigcup Z \geq 19$ as otherwise 38 points cannot be accounted for. (For example, if there were only 18 students in $X \bigcup Y \bigcup Z$ the maximum number of points that could be assigned to them would be 36.) Therefore, $X \bigcup Y \bigcup Z=19$ and each student in $X \bigcup Y \bigcup Z$ is in exactly 2 of the sets X, Y and Z. Hence, the number of students getting grade $A=25-19-6=0$, that is, no student gets A grade. Since there are $19-8=11$ students who are not weight lifters, all these 11 students must be both swimmers and cyclists. (Similarly, there are 2 who are both swimmers and weight lifters and 6 who are both cyclists and weight lifters.) Problem 4: Five men A, B, C, D, E are wearing caps of black or white colour without each knowing the colour of his cap. It is known that a man wearing a black cap always speaks the truth while a man wearing a white cap always lies. If they make the following statements, find the colour of the cap worn by each of them: A: I see three black and one white cap. B: I see four white caps. C: I see one black and three white caps. D: I see four black caps. Solution 4: Suppose E is wearing a white cap. Then, D is lying and hence must be wearing a white cap. Since D and E both have white caps, A is lying and hence, he must be wearing a white cap. If C is speaking the truth, then C must be having a black cap and B must be wearing a black cap as observed by C. But then B must observe a black cap on C. Hence, B must be lying. This implies that B is wearing a white cap, which is a contradiction to C’s statement. On the other hand, if C is lying, then C must be wearing a white cap. Thus, A, C, D and E are wearing white caps, which makes B’s statement true. But, then B must be wearing a black cap and this makes C’s statement correct, which is again a contradiction. Thus, E must be wearing a black cap. This implies that B is lying and hence, must be having a white cap. But then D is lying (he sees B’s white cap) and hence, must be having a white cap. Since B and D have white caps, A is not saying the truth. Hence, A must also be wearing a white cap. These together imply that C is truthful. Hence, C must be wearing a black cap. Thus, we have the following: A: white cap; B: white cap; C: black cap; D: white cap; E: black cap. Problem 5: Let f be a bijective function from the set $A=\{ 1,2,3,\ldots,n\}$ to itself. Show that there is a positive integer $M>1$ such that $f^{M}(i)=f(i)$ for each $i \in A$. Here $f^{M}$ denotes the composite function $f \circ f \circ \ldots \circ f$ repeated M times. Solution 5: Let us recall the following properties of a bijective function: a) If $f:A \rightarrow A$ is a bijective function, then there is a unique bijective function $g: A \rightarrow A$ such that $f \circ g = g \circ f=I_{A}$, the identity function on A. The function g is called the inverse of f and is denoted by $f^{-1}$. Thus, $f \circ f^{-1}=I_{A}=f^{-1}\circ f$. b) $f \circ I_{A} = f = I_{A} \circ f$. c) If f and g are bijections from A to A, then so are $g \circ f$ and $f \circ g$. d) If f, g, h are bijective functions from A to A and $f \circ g = f \circ h$, then $g=h$. Apply $f^{-1}$ at left to both sides to obtain $g=h$. Coming to the problem at hand, since A has n elements, we see that there are only finitely many (in fact, n!) bijective functions from A to A, as each bijective function f gives a permutation of $\{ 1,2,3,\ldots, n\}$ by taking $\{ f(1),f(2), \ldots, f(n)\}$.
Since f is a bijective function from A to A, so is each of the functions in the sequence: $f^{2}, f^{3}, \ldots, f^{n}, \ldots$ All these cannot be distinct, since there are only finitely many bijective functions from A to A. Hence, for some two distinct positive integers m and n, $m > n$, say, we must have $f^{m}=f^{n}$. If $n=1$, we take $M=m$, to obtain the result. If $n>1$, multiply both sides by $(f^{-1})^{n-1}$ to get $f^{m-n+1}=f$. We take $M=m-n+1$ to get the relation $f^{M}=f$ with $M>1$. Note that this means $f^{M}(i)=f(i)$ for all $i \in A$. QED. Problem 6: Show that there exists a convex hexagon in the plane such that: a) all its interior angles are equal b) its sides are 1,2,3,4,5,6 in some order. Solution 6: Let ABCDEF be an equiangular hexagon with side lengths 1,2,3,4,5,6 in some order. We may assume without loss of generality that $AB=1$. Let $BC=a, CD=b, DE=c, EF=d, FA=e$. Since the sum of all angles of a hexagon is equal to $(6-2) \times 180=720 \deg$, it follows that each interior angle must be equal to $720/6=120\deg$. Let us take A as the origin, the positive x-axis along AB and the perpendicular at A to AB as the y-axis. We use the vector method: writing each vector in components $(x,y)$, we then have: $\overline{AB}=(1,0)$ $\overline{BC}=(a\cos 60\deg, a\sin 60\deg)$ $\overline{CD}=(b\cos{120\deg},b\sin{120\deg})$ $\overline{DE}=(c\cos{180\deg},c\sin{180\deg})=(-c,0)$ $\overline{EF}=(d\cos{240\deg},d\sin{240\deg})$ $\overline{FA}=(e\cos{300\deg},e\sin{300\deg})$ This is because these vectors are inclined to the positive x axis at angles 0, 60 degrees, 120 degrees, 180 degrees, 240 degrees, 300 degrees respectively. Since the sum of all these six vectors is $\overline{0}$, it implies that $1+\frac{a}{2}-\frac{b}{2}-c-\frac{d}{2}+\frac{e}{2}=0$ and $(a+b-d-e)\frac{\sqrt{3}}{2}=0$. That is, $a-b-2c-d+e+2=0$….call this I and $a+b-d-e=0$….call this II. Since $\{a,b,c,d,e\}=\{2,3,4,5,6\}$, in view of (II), we have $\{a,b\}=\{2,5\}, \{d,e\}=\{3,4\}, c=6$….(i) $\{a,b\}=\{3,6\}, \{d,e\}=\{4,5\}, c=2$…(ii) $\{a,b\}=\{2,6\}, \{d,e\}=\{3,5\}, c=4$…(iii) The possibility that $\{a,b\}=\{3,4\}, \{d,e\}=\{2,5\}$ in (i), for instance, need not be considered separately, because we can reflect the figure about $x=\frac{1}{2}$ and interchange these two sets. Case (i): Here $(a-b)-(d-e)=2c-2=10$. Since $a-b=\pm 3, d-e=\pm 1$, this is not possible. Case (ii): Here $(a-b)-(d-e)=2c-2=2$. This is satisfied by $(a,b,d,e)=(6,3,5,4)$. Case (iii): Here $(a-b)-(d-e)=2c-2=6$. This is satisfied by $(a,b,d,e)=(6,2,3,5)$. Hence, we have (essentially) two different solutions: $(1,6,3,2,5,4)$ and $(1,6,2,4,3,5)$. It may be verified that I and II are both satisfied by these sets of values. Aliter: Embed the hexagon in an appropriate equilateral triangle, whose sides consist of some sides of the hexagon. Solutions to the remaining problems from that blog will have to be tried by the student. Cheers, Nalin Pithwa. # Miscellaneous questions: part II: tutorial practice for preRMO and RMO Problem 1: Let $a_{1}, a_{2}, \ldots, a_{10}$ be ten real numbers such that each is greater than 1 and less than 55. Prove that there are three among the given numbers which form the lengths of the sides of a triangle. Problem 2: In a collection of 1234 persons, any two persons are mutual friends or enemies. Each person has at most 3 enemies. Prove that it is possible to divide this collection into two parts such that each person has at most 1 enemy in his subcollection. Problem 3: A barrel contains 2n balls numbered 1 to 2n.
Choose three balls at random, one after the other, and with the balls replaced after each draw. What is the probability that the three element sequence obtained has the properties that the smallest element is odd and that only the smallest element, if any, is repeated? That’s all, folks !! You will need to churn a lot…!! In other words, learn to brood now…learn to think for a long time on a single hard problem … Regards, Nalin Pithwa # Miscellaneous questions: Part I: tutorial practice for preRMO and RMO Problem 1: The sixty four squares of a chess board are filled with positive integers one on each in such a way that each integer is the average of the integers on the neighbouring squares. (Two squares are neighbours if they share a common edge or vertex. Thus, a square can have 8,5 or 3 neighbours depending on its position). Show that all sixty four entries are in fact equal. Problem 2: Let T be the set of all triples (a,b,c) of integers such that $1 \leq a < b < c \leq 6$. For each triple (a,b,c) in T, take the product abc. Add all these products corresponding to all triples in T. Prove that the sum is divisible by 7. Problem 3: In a class of 25 students, there are 17 cyclists, 13 swimmers, and 8 weight lifters and no one is all the three. In a certain mathematics examination, 6 students got grades D or E. If the cyclists, swimmers and weight lifters all got grade B or C, determine the number of students who got grade A. Also, find the number of cyclists who are swimmers. Problem 4: Five men A, B, C, D, E are wearing caps of black or white colour without each knowing the colour of his cap. It is known that a man wearing a black cap always speaks the truth while a man wearing a white cap always lies. If they make the following statements, find the colour of the cap worn by each of them: A: I see three black and one white cap. B: I see four white caps. C: I see one black and three white caps. D: I see four black caps. Problem 5: Let f be a bijective (one-one and onto) function from the set $A=\{ 1,2,3,\ldots,n\}$ to itself. Show that there is a positive integer $M>1$ such that $f^{M}(i)=f(i)$ for each $i \in A$. Note that $f^{M}$ denotes the composite function $f \circ f \circ f \ldots \circ f$ repeated M times. Problem 6: Show that there exists a convex hexagon in the plane such that: a) all its interior angles are equal b) its sides are 1,2,3,4,5,6 in some order. Problem 7: There are ten objects with total weight 20, each of the weights being a positive integer. Given that none of the weights exceeds 10, prove that the ten objects can be divided into two groups that balance each other when placed on the pans of a balance. Problem 8: In each of the eight corners of a cube, write +1 or -1 arbitrarily. Then, on each of the six faces of the cube write the product of the numbers written at the four corners of that face. Add all the fourteen numbers so written down. Is it possible to arrange the numbers +1 and -1 at the corners initially so that this final sum is zero? Problem 9: Given the seven element set $A = \{ a,b,c,d,e,f,g\}$ find a collection T of 3-element subsets of A such that each pair of elements from A occurs in exactly one of the subsets of T. Try these !!
Regards, Nalin Pithwa # Towards Baby Analysis: Part I: INMO, IMO and CMI Entrance $\bf{Reference: \hspace{0.1in}Introductory \hspace{0.1in} Real Analysis: \hspace{0.1in} Kolmogorov \hspace{0.1in} and \hspace{0.1in} Fomin; \hspace{0.1in}Dover \hspace{0.1in }Publications}$ $\bf{Equivalence \hspace{0.1in} of \hspace{0.1in} Sets \hspace{0.1in} The \hspace{0.1in}Power \hspace{0.1in }of \hspace{0.1in }a \hspace{0.1in}Set}$ $\bf{Section 1}$: $\bf{Finite \hspace{0.1in} and \hspace{0.1in} infinite \hspace{0.1in} sets}$ The set of all vertices of a given polyhedron, the set of all prime numbers less than a given number, and the set of all residents of NYC (at a given time) have a certain property in common, namely, each set has a definite number of elements which can be found in principle, if not in practice. Accordingly, these sets are all said to be $\it{finite}$.$\it{Clearly \hspace{0.1in} we \hspace{0.1in}can \hspace{0.1in} be \hspace{0.1in} sure \hspace{0.1in} that \hspace{0.1in} a \hspace{0.1in} set \hspace{0.1in}is \hspace{0.1in}finite \hspace{0.1in} without \hspace{0.1in} knowing \hspace{0.1in} the \hspace{0.1in} number \hspace{0.1in} of elements \hspace{0.1in}in \hspace{0.1in}it.}$ On the other hand, the set of all positive integers, the set of all points on the line, the set of all circles in the plane, and the set of all polynomials with rational coefficients have a different property in common, namely, $\it{if \hspace{0.1in } we \hspace{0.1in}remove \hspace{0.1in} one \hspace{0.1in} element \hspace{0.1in}from \hspace{0.1in}each \hspace{0.1in}set, \hspace{0.1in}then \hspace{0.1in}remove \hspace{0.1in}two \hspace{0.1in}elements, \hspace{0.1in}three \hspace{0.1in}elements, \hspace{0.1in}and \hspace{0.1in}so \hspace{0.1in}on, \hspace{0.1in}there \hspace{0.1in}will \hspace{0.1in}still \hspace{0.1in}be \hspace{0.1in}elements \hspace{0.1in}left \hspace{0.1in}in \hspace{0.1in}the \hspace{0.1in}set \hspace{0.1in}in \hspace{0.1in}each \hspace{0.1in}stage}$. Accordingly, sets of these kind are called $\it{infinite}$ sets. Given two finite sets, we can always decide whether or not they have the same number of elements, and if not, we can always determine which set has more elements than the other. It is natural to ask whether the same is true of infinite sets. In other words, does it make sense to ask, for example, whether there are more circles in the plane than rational points on the line, or more functions defined in the interval [0,1] than lines in space? As will soon be apparent, questions of this kind can indeed be answered. To compare two finite sets A and B, we can count the number of elements in each set and then compare the two numbers, but alternatively, we can try to establish a $\it{one-\hspace{0.1in}to-\hspace{0.1in}one \hspace{0.1in}correspondence}$ between the elements of set A and set B, that is, a correspondence such that each element in A corresponds to one and only element in B, and vice-versa. It is clear that a one-to-one correspondence between two finite sets can be set up if and only if the two sets have the same number of elements. For example, to ascertain if or not the number of students in an assembly is the same as the number of seats in the auditorium, there is no need to count the number of students and the number of seats. We need merely observe whether or not there are empty seats or students with no place to sit down. 
If the students can all be seated with no empty seats left, that is, if there is a one-to-one correspondence between the set of students and the set of seats, then these two sets obviously have the same number of elements. The important point here is that the first method (counting elements) works only for finite sets, while the second method (setting up a one-to-one correspondence) works for infinite sets as well as for finite sets. $\bf{Section 2}$: $\bf{Countable \hspace{0.1in} Sets}$. The simplest infinite set is the set $\mathscr{Z^{+}}$ of all positive integers. An infinite set is called $\bf{countable}$ if its elements can be put into one-to-one correspondence with those of $\mathscr{Z^{+}}$. In other words, a countable set is a set whose elements can be numbered $a_{1}, a_{2}, a_{3}, \ldots, a_{n}, \ldots$. By an $\bf{uncountable}$ set we mean, of course, an infinite set which is not countable. We now give some examples of countable sets: $\bf{Example 1}$: The set $\mathscr{Z}$ of all integers, positive, negative, or zero is countable. In fact, we can set up the following one-to-one correspondence between $\mathscr{Z}$ and the set $\mathscr{Z^{+}}$ of all positive integers: (0,1), (-1,2), (1,3), (-2,4), (2,5), and so on. More explicitly, we associate the non-negative integer $n \geq 0$ with the odd number $2n+1$, and the negative integer $n<0$ with the even number $2|n|$, that is, $n \leftrightarrow (2n+1)$, if $n \geq 0$, and $n \in \mathscr{Z}$; $n \leftrightarrow 2|n|$, if $n<0$, and $n \in \mathscr{Z}$. $\bf{Example 2}$: The set of all positive even numbers is countable, as shown by the obvious correspondence $n \leftrightarrow 2n$. $\bf{Example 3}$: The set $2, 4, 8, \ldots, 2^{n}, \ldots$ is countable, as shown by the obvious correspondence $n \leftrightarrow 2^{n}$. $\bf{Example 4}$: The set $\mathscr{Q}$ of rational numbers is countable. To see this, we first note that every rational number $\alpha$ can be written as a fraction $\frac{p}{q}$ with a positive denominator $q>0$. (Of course, p and q are integers.) Call the sum $|p|+q$ the “height” of the rational number $\alpha$. For example, $\frac{0}{1}=0$ is the only rational number of height 1, $\frac{-1}{1}$, $\frac{1}{1}$ are the only rational numbers of height 2, $\frac{-2}{1}$, $\frac{-1}{2}$, $\frac{1}{2}$, $\frac{2}{1}$ are the only rational numbers of height 3, and so on. We can now arrange all rational numbers in order of increasing “height” (with the numerators increasing in each set of rational numbers of the same height). In other words, we first count the rational numbers of height 1, then those of height 2 (suitably arranged), then those of height 3 (suitably arranged), and so on. In this way, we assign every rational number a unique positive integer, that is, we set up a one-to-one correspondence between the set $\mathscr{Q}$ of all rational numbers and the set $\mathscr{Z^{+}}$ of all positive integers. $\it{Next \hspace{0.1in}we \hspace{0.1in} prove \hspace{0.1in}some \hspace{0.1in}elementary \hspace{0.1in}theorems \hspace{0.1in}involving \hspace{0.1in}countable \hspace{0.1in}sets}$ $\bf{Theorem1}$. $\bf{Every \hspace{0.1in} subset \hspace{0.1in}of \hspace{0.1in}a \hspace{0.1in}countable \hspace{0.1in}set \hspace{0.1in}is \hspace{0.1in}countable}$. $\bf{Proof}$ Let set A be countable, with elements $a_{1}, a_{2}, a_{3}, \ldots$, and let set B be a subset of A. Among the elements $a_{1}, a_{2}, a_{3}, \ldots$, let $a_{n_{1}}, a_{n_{2}}, a_{n_{3}}, \ldots$ be those in the set B.
If the set of numbers $n_{1}, n_{2}, n_{3}, \ldots$ has a largest number, then B is finite. Otherwise, B is countable (consider the one-to-one correspondence $i \leftrightarrow a_{n_{i}}$). $\bf{QED.}$ $\bf{Theorem2}$ $\bf{The \hspace{0.1in}union \hspace{0.1in}of \hspace{0.1in}a \hspace{0.1in}finite \hspace{0.1in}or \hspace{0.1in}countable \hspace{0.1in}number \hspace{0.1in}of \hspace{0.1in}countable \hspace{0.1in}sets \hspace{0.1in}A_{1}, A_{2}, A_{3}, \ldots \hspace{0.1in}is \hspace{0.1in}itself \hspace{0.1in}countable.}$ $\bf{Proof}$ We can assume that no two of the sets $A_{1}, A_{2}, A_{3}, \ldots$ have any elements in common, since otherwise we could consider the sets $A_{1}$, $A_{2}-A_{1}$, $A_{3}-(A_{1}\bigcup A_{2})$, $\ldots$, instead, which are countable by Theorem 1, and have the same union as the original sets. Suppose we write the elements of $A_{1}, A_{2}, A_{3}, \ldots$ in the form of an infinite table $\begin{array}{ccccc} a_{11} & a_{12} & a_{13} & a_{14} &\ldots \\ a_{21} & a_{22} & a_{23} & a_{24} & \ldots \\ a_{31} & a_{32} & a_{33} & a_{34} & \ldots \\ a_{41} & a_{42} & a_{43} & a_{44} & \ldots \\ \ldots & \ldots & \ldots & \ldots & \ldots \end{array}$ where the elements of the set $A_{1}$ appear in the first row, the elements of the set $A_{2}$ appear in the second row, and so on. We now count all the elements in the above array “diagonally”; that is, first we choose $a_{11}$, then $a_{12}$, then move diagonally down to the left, picking $a_{21}$, then move down vertically picking up $a_{31}$, then move diagonally up to the right picking up $a_{22}$, next pick up $a_{13}$ and so on ($a_{14}, a_{23}, a_{32}, a_{41}$) as per the pattern shown: $\begin{array}{cccccccc} a_{11} & \rightarrow & a_{12} & \hspace{0.1in} & a_{13} & \rightarrow & a_{14} & \ldots \\ \hspace{0.1in} & \swarrow & \hspace{0.1in} & \nearrow & \hspace{0.1in} & \swarrow & \hspace{0.1in} & \hspace{0.1in}\\ a_{21} & \hspace{0.1in} & a_{22} & \hspace{0.1in} & a_{23} & \hspace{0.1in} & a_{24} & \ldots \\ \downarrow & \nearrow & \hspace{0.1in} & \swarrow & \hspace{0.1in} & \hspace{0.1in} & \hspace{0.1in} & \hspace{0.1in}\\ a_{31} & \hspace{0.1in} & a_{32} & \hspace{0.1in} & a_{33} & \hspace{0.1in} & a_{34} & \ldots \\ \hspace{0.1in} & \swarrow & \hspace{0.1in} & \hspace{0.1in} & \hspace{0.1in} & \hspace{0.1in} & \hspace{0.1in} & \hspace{0.1in}\\ a_{41} & \hspace{0.1in} & a_{42} & \hspace{0.1in} & a_{43} & \hspace{0.1in} & a_{44} & \ldots\\ \ldots & \hspace{0.1in} & \ldots & \hspace{0.1in} & \ldots & \hspace{0.1in} & \ldots & \hspace{0.1in} \end{array}$ It is clear that this procedure associates a unique number to each element in each of the sets $A_{1}, A_{2}, \ldots$ thereby establishing a one-to-one correspondence between the union of the sets $A_{1}, A_{2}, \ldots$ and the set $\mathscr{Z^{+}}$ of all positive integers. $\bf{QED.}$ $\bf{Theorem3}$ $\bf{Every \hspace{0.1in}infinite \hspace{0.1in}set \hspace{0.1in}has \hspace{0.1in}a \hspace{0.1in}countable \hspace{0.1in}subset.}$ $\bf{Proof}$ Let M be an infinite set and $a_{1}$ any element of M. Being infinite, M contains an element $a_{2}$ distinct from $a_{1}$, an element $a_{3}$ distinct from both $a_{2}$ and $a_{1}$, and so on. Continuing this process (which can never terminate due to “shortage” of elements, since M is infinite), we get a countable subset $A= \{ a_{1}, a_{2}, a_{3}, \ldots, a_{n}, \ldots\}$ of the set $M$. $\bf{QED.}$ $\bf{Remark}$ Theorem 3 shows that countable sets are the “smallest” infinite sets.
The question of whether there exist uncountable (infinite) sets will be considered below. $\bf{Section3}$ $\bf{Equivalence \hspace{0.1in} of \hspace{0.1in} sets}$ We arrived at the notion of a countable set M by considering one-to-one correspondences between set M and the set $\mathscr{Z^{+}}$ of all positive integers. More generally, we can consider one-to-one correspondences between any two sets M and N. $\bf{Definition}$ Two sets M and N are said to be $\bf{equivalent}$ (written $M \sim N$) if there is a one-to-one correspondence between the elements of M and the elements of N. The concept of equivalence is applicable both to finite and infinite sets. Two finite sets are equivalent if and only if they have the same number of elements. We can now define a countable set as a set equivalent to the set $\mathscr{Z^{+}}$ of all positive integers. It is clear that two sets which are each equivalent to a third set are equivalent to each other, and in particular that any two countable sets are equivalent. $\bf{Example1}$ The sets of points in any two closed intervals $[a,b]$ and $[c,d]$ are equivalent; you can “see” a one-to-one correspondence by drawing the following diagram: draw cd as the base of a triangle, and let the third vertex of the triangle be O. Draw a line segment “ab” above the base of the triangle, with “a” lying on one side of the triangle and “b” on the other side. Two points p and q then correspond to each other if and only if they lie on the same ray emanating from the point O, which is the point in which the extensions of the line segments ac and bd intersect. $\bf{Example2}$ The set of all points z in the complex plane is equivalent to the set of all points $\alpha$ on a sphere. In fact, a one-to-one correspondence $z \leftrightarrow \alpha$ can be established by using stereographic projection, with the North Pole of the sphere as the point of projection. $\bf{Example3}$ The set of all points x in the open unit interval $(0,1)$ is equivalent to the set of all points y on the whole real line. For example, the formula $y=\frac{1}{\pi}\arctan{x}+\frac{1}{2}$ establishes a one-to-one correspondence between these two sets. The last example and the examples in Section 2 show that an infinite set is sometimes equivalent to one of its proper subsets. For example, there are “as many” positive integers as integers of arbitrary sign, there are “as many” points in the interval $(0,1)$ as on the whole real line, and so on. This fact is characteristic of all infinite sets (and can be used to define such sets) as shown by: $\bf{Theorem4}$ $\bf{Every \hspace{0.1in} infinite \hspace{0.1in} set \hspace{0.1in}is \hspace{0.1in} equivalent \hspace{0.1in} to \hspace{0.1in}one \hspace{0.1in}of \hspace{0.1in}its \hspace{0.1in}proper \hspace{0.1in}subsets.}$ $\bf{Proof}$ According to Theorem 3, every infinite set M contains a countable subset. Let this subset be $A=\{a_{1}, a_{2}, a_{3}, \ldots, a_{n}, \ldots \}$ and partition A into two countable subsets $A_{1}=\{a_{1}, a_{3}, a_{5}, \ldots \}$ and $A_{2}=\{a_{2}, a_{4}, a_{6}, \ldots \}$. Obviously, we can establish a one-to-one correspondence between the countable subsets A and $A_{1}$ (merely let $a_{n} \leftrightarrow a_{2n-1}$). This correspondence can be extended to a one-to-one correspondence between the sets $A \bigcup (M-A)=M$ and $A_{1} \bigcup (M-A)=M-A_{2}$ by simply assigning x itself to each element $x \in M-A$. But $M-A_{2}$ is a proper subset of M. $\bf{QED}$.
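The height ordering in Example 4 and the diagonal traversal in Theorem 2 are, in effect, small enumeration algorithms. The following sketch is my addition (not part of the Kolmogorov and Fomin text being quoted): it lists the rationals in order of increasing height, with numerators increasing within each height, exactly as described above; the cut-off max_height is an arbitrary illustrative choice.

```python
from math import gcd

def rationals_by_height(max_height):
    # Enumerate rationals p/q (q > 0, in lowest terms) in order of increasing
    # "height" |p| + q, with numerators increasing within each height.
    for h in range(1, max_height + 1):
        for p in range(-(h - 1), h):      # |p| <= h - 1, so q = h - |p| >= 1
            q = h - abs(p)
            if gcd(abs(p), q) == 1:       # keep only fractions in lowest terms
                yield (p, q)

for index, (p, q) in enumerate(rationals_by_height(4), start=1):
    print(f"{index}: {p}/{q}")
# Output begins: 0/1 (height 1); -1/1, 1/1 (height 2); -2/1, -1/2, 1/2, 2/1 (height 3); ...
```

The printed index is exactly the positive integer that the height ordering assigns to each rational, which is the one-to-one correspondence with the positive integers claimed in Example 4.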
More later, to be continued, Regards, Nalin Pithwa # Find a flaw in this proof: RMO and PRMO tutorial What ails the following proof that all the elements of a finite set are equal? The following is the “proof”: All elements of a set with no elements are equal, so make the induction assumption that any set with n elements has all its elements equal. In a set with n+1 elements, the first n and the last n are equal by the induction assumption. They overlap, so all are equal, completing the induction. End of “proof”. Regards, Nalin Pithwa # Problem Solving approach: based on George Polya’s opinion: Useful for RMO/INMO, IITJEE maths preparation I have prepared the following write-up based on George Polya’s classic reference mentioned below: UNDERSTANDING THE PROBLEM First. “You have to understand the problem.” What is the unknown? What are the data? What is the condition? Is it possible to satisfy the condition? Is the condition sufficient to determine the unknown? Or is it insufficient? Or redundant? Or contradictory? Draw a figure/diagram. Introduce a suitable notation. Separate the various parts of the condition. Can you write them down? Second. DEVISING A PLAN. “Find the connection between the data and the unknown. You may be obliged to consider auxiliary problems if an immediate connection cannot be found. You should eventually obtain a plan for the solution.” Have you seen it before? Or have you seen the problem in a slightly different form? Do you know a related problem? Do you know a theorem that could be useful? Look at the unknown! And try to think of a familiar problem having the same or a similar unknown. Here is a problem related to yours and solved before. Could you use it? Could you use its result? Could you use its method? Should you restate it differently? Go back to definitions. If you cannot solve the proposed problem, try to solve some related problem. Could you imagine a more accessible related problem? A more general problem? A more special problem? An analogous problem? Could you solve a part of the problem? Keep only a part of the condition, drop the other part; how far is the unknown then determined, how can it vary? Could you derive something useful from the data? Could you think of other data appropriate to determine the unknown? Could you change the unknown or the data, or both, if necessary, so that the new unknown and the new data are nearer to each other? Did you use all the data? Did you use the whole condition? Have you taken into account all essential notions involved in the problem? Third. CARRYING OUT THE PLAN. Carrying out your plan of the solution, check each step. Can you clearly see that the step is correct? Can you prove that it is correct? Fourth. LOOKING BACK. Examine the solution. Can you check the result? Can you check the argument? Can you derive the result differently? Can you see it at a glance? Can you see the result, or the method, for some other problem? ************************************************************************** Reference: How to Solve It: A New Aspect of Mathematical Method — George Polya. https://www.amazon.in/How-Solve-Aspect-Mathematical-Method/dp/4871878309/ref=sr_1_1?crid=2DXC1EM1UVCPW&keywords=how+to+solve+it+george+polya&qid=1568334366&s=books&sprefix=How+to+solve%2Caps%2C275&sr=1-1 The above simple “plan” can be useful even to crack problems from a famous classic, Problem Solving Strategies, by Arthur Engel, a widely-used text for training for RMO, INMO and IITJEE Advanced Math also, perhaps.
Reference: Problem-Solving Strategies by Arthur Engel; available on Amazon India # Concept of order in math and real world 1. Rise and Shine algorithm: This is crazy-sounding, but quite a perfect example of the need for “order” in the real-world: when we get up in the morning, we first clean our teeth, finish all other ablutions, then go to the bathroom and first we have to remove our pyjamas/pajamas and then the shirt, and then enter the shower; we do not first enter the shower and then remove the pyjamas/shirt !! 🙂 2. On the number line, as we go from left to right: $a < b$; that is, any real number to the left of another real number is always “less than” the number to the right. (Note that whereas the real numbers form an “ordered field”, the complex numbers are only “partially ordered”…we will continue this discussion further later.) 3. Dictionary order 4. Alphabetical order (the letters $A \hspace{0.1in} B \ldots Z$ in English). 5. Telephone directory order 6. So a service like JustDial certainly uses “order” quite intensely: let us say that you want to find the clinic landline telephone number of Dr Mrs Prasad in Jayanagar 4th Block, Bengaluru: we first narrow JustDial to “Location” (Jayanagar 4th Block, Bengaluru), then narrow to “doctors/surgeons” as the case may be, and then check, in alphabetical order, for the name of Dr Mrs Prasad. So, we clearly see that the “concept” and “actual implementation” of order (in databases) greatly speeds up the search for the exact information we want. 7. So, also in math, we have the concept of ordered pair; in Cartesian geometry, $(a,b)$ means that the first component $a \in X-axis$ and $b \in Y-axis$. This order is generalized to complex numbers in the complex plane or Argand’s diagram. 8. There is “order” in human “relations” also: let us say $(x,y)$ represents x (as father) and y (as son). Clearly, the father is “first” and the son is “second”. 9. So, also any “tree” has a “natural order”: seed first, then roots, then branches. Regards, Nalin Pithwa. # Why do we need proofs? In other words, difference between a mathematician, physicist and a layman Yes, I think it is a very nice question, which kids ask me. Why do we need proofs? Well, here is a detailed explanation (I am not mentioning the reference I use here lest it may intimidate my young enthusiastic, hard working students or readers. In other words, the explanation is not my own; I do not claim credit for this…In other words, I am just sharing what I am reading in the book…) Here it goes: What exactly is the difference between a mathematician, a physicist, and a layman? Let us suppose that they all start measuring the angles of hundreds of triangles of various shapes, find the sum in each case and keep a record. Suppose the layman finds that with one or two exceptions, the sum in each case comes out to be 180 degrees. He will ignore the exceptions and say “the sum of the three angles in a triangle is 180 degrees.” A physicist will be more cautious in dealing with the exceptional cases. He will examine them more carefully. If he finds that the sum in them is somewhere between 179 degrees and 180 degrees, say, then he will attribute the deviation to experimental errors. He will then state a law: The sum of the three angles of any triangle is 180 degrees. He will then watch happily as the rest of the world puts his law to test and finds that it holds good in thousands of different cases, until somebody comes up with a triangle in which the law fails miserably.
The physicist now has to withdraw his law altogether or else replace it by some other law which holds good in all cases tried. Even this new law may have to be modified at a later date. And, this will continue without end. A mathematician will be the fussiest of all. If there is even a single exception he will refrain from saying anything. Even when millions of triangles are tried without a single exception, he will not state it as a theorem that the sum of the three angles in ANY triangle is 180 degrees. The reason is that there are infinitely many different types of triangles. To generalize from a million to infinity is as baseless to a mathematician as to generalize from one to a million. He will at the most make a conjecture and say that there is strong evidence suggesting that the conjecture is true. But that is not the same thing as proving a theorem. The only proof acceptable to a mathematician is the one which follows from earlier theorems by sheer logical implications (that is, statements of the form: if P, then Q). For example, such a proof follows easily from the theorem that an external angle of a triangle is the sum of the other two internal angles. The approach taken by the layman or the physicist is known as the inductive approach whereas the mathematician’s approach is called the deductive approach. In the former, we make a few observations and generalize. In the latter, we deduce from something which is already proven. Of course, a question can be raised as to the basis on which this supporting theorem is proved. The answer will be some other theorem. But then the same question can be asked about the other theorem. Eventually, a stage is reached where a certain statement cannot be proved from any other earlier proved statement(s) and must, therefore, be taken for granted to be true. Such a statement is known as an axiom or a postulate. Each branch of math has its own axioms or postulates. For example, one of the axioms of geometry is that through two distinct points, there passes exactly one line. The whole beautiful structure of geometry is based on 5 or 6 axioms such as this one. Every theorem in plane geometry or Euclid’s Geometry can be ultimately deduced from these axioms. PS: One of the most famous American presidents, Abraham Lincoln, had read, understood and solved all of Euclid’s books (The Elements) by burning midnight oil, night after night, to “sharpen his mental faculties”. And, of course, there is another famous story (true story) of how Albert Einstein as a very young boy got completely “addicted” to math by reading Euclid’s proof of why the three medians of a triangle are concurrent…(you can Google it up, of course). Regards, Nalin Pithwa
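To make the deductive step mentioned above concrete, here is the short derivation that the cited exterior-angle theorem yields. This is a sketch added by me, not part of the original post, and the angle notation is mine.

```latex
% Deducing the angle sum of a triangle ABC from the exterior-angle theorem.
\begin{align*}
\text{exterior angle at } C &= \angle A + \angle B && \text{(exterior-angle theorem)}\\
\text{exterior angle at } C + \angle C &= 180^{\circ} && \text{(angles on a straight line)}\\
\Longrightarrow \quad \angle A + \angle B + \angle C &= 180^{\circ}.
\end{align*}
```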
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 291, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9624462723731995, "perplexity": 523.0204454113017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737233.51/warc/CC-MAIN-20200807231820-20200808021820-00324.warc.gz"}
http://austinrochford.com/posts/2014-02-08-memorylessness-exponential.html
Imagine you’re a teller at a bank. No customers have been arriving, so you’re bored, and decide to investigate the distribution of the time it takes for each customer to arrive (you’re a very analytically-minded bank teller). To this end, you decide to track the number of customers that arrive in the next hour. Forty-five minutes in, you are getting impatient, as no customers have arrived. At this point, what is the probability that a single customer will arrive before the end of the hour? It seems reasonable that this probability should be the same as that of an arrival during the first fifteen minutes of the experiment. This post is devoted to showing the remarkable fact that this reasonable and seemingly small assumption about the distribution of interarrival times actually completely specifies their probability distribution (along with the mean of the interarrival times). Let $$T$$ be the arrival time of the first customer. The situation in the introduction leads to the identity $P(T > 60 | T > 45) = P(T > 15).$ This identity generalizes to $P(T > s + t | T > t) = P(T > s),$ for $$s, t > 0$$. Any distribution which satisfies this requirement is called memoryless. In this post, we will show that the exponential distribution is the only (continuous) memoryless distribution. We can rewrite this identity as \begin{align*} \frac{P(T > s + t \textrm{ and } T > t)}{P(T > t)} & = P(T > s), \\ P(T > s + t) & = P(T > s) P(T > t), \end{align*} since $$T > s + t$$ implies $$T > t$$. It is this identity connecting addition and multiplication that leads to the exponential distribution. To begin to see this, let’s calculate $$P(T > 2)$$: $P(T > 2) = P(T > 1 + 1) = P(T > 1) P(T > 1) = P(T > 1)^2.$ Similarly, $$P(T > 3) = P(T > 1)^3$$, and for any natural number $$n$$, $$P(T > n) = P(T > 1)^n$$. For reasons which will become clear, define $$\lambda = - \ln P(T > 1)$$, so that $$P(T > 1) = e^{-\lambda}$$, and $$P(T > n) = e^{-\lambda n}$$. Continuing on, let’s calculate $$P(T > \frac{1}{2})$$: $e^{-\lambda} = P\left(T > 1\right) = P\left(T > \frac{1}{2} + \frac{1}{2}\right) = P\left(T > \frac{1}{2}\right)^2,$ so $$P(T > \frac{1}{2}) = \exp(-\frac{\lambda}{2})$$. This sort of calculation can be extended to any rational number $$\frac{m}{n}$$ as follows: $e^{-\lambda m} = P(T > m) = P\left(T > \underbrace{\frac{m}{n} + \cdots + \frac{m}{n}}_{n \textrm{ times}}\right) = P\left(T > \frac{m}{n}\right)^n,$ so $$P(T > \frac{m}{n}) = \exp(-\lambda \frac{m}{n})$$. All that remains is to extend this result to the irrational numbers. Fortunately, the rational numbers are dense in the reals, so every irrational number $$t$$ is the limit of an increasing sequence of rational numbers, $$(q_i)$$. Additionally, the survival function, $$t \mapsto P(T > t)$$, must be left-continuous, so $P(T > t) = \lim P(T > q_i) = \lim \exp(-\lambda q_i) = e^{-\lambda t},$ since the exponential function is continuous. Now that we know that $$P(T > t) = e^{-\lambda t}$$ for all $$t > 0$$, we see that this is exactly the survival function of an exponentially distributed random variable with mean $$\lambda^{-1}$$. It is truly astounding that such a seemingly small assumption about the arrival time completely specifies its distribution. The memorylessness of the exponential distribution is immensely important in the study of queueing theory, as it implies that the homogeneous Poisson process has stationary increments. Tags: Probability
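As a quick numerical sanity check of the memorylessness identity derived above (my addition, not part of the original post), the sketch below draws exponential samples and compares the conditional probability P(T > s + t | T > t) with P(T > s); the rate, the values of s and t, and the sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 1.0 / 20.0                                   # rate: mean interarrival time of 20 minutes
T = rng.exponential(scale=1.0 / lam, size=1_000_000)

s, t = 15.0, 45.0
p_conditional = (T > s + t).sum() / (T > t).sum()  # estimate of P(T > s + t | T > t)
p_unconditional = (T > s).mean()                   # estimate of P(T > s)

print(p_conditional, p_unconditional)              # the two estimates should nearly agree
print(np.exp(-lam * s))                            # exact value e^(-lambda * s) for comparison
```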
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9796788096427917, "perplexity": 204.01146023794098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917127681.84/warc/CC-MAIN-20170423031207-00406-ip-10-145-167-34.ec2.internal.warc.gz"}
https://science.sciencemag.org/content/332/6032/925.summary
Perspective | Atmospheric Science # Subtropical Rainfall and the Antarctic Ozone Hole Science 20 May 2011: Vol. 332, Issue 6032, pp. 925-926 DOI: 10.1126/science.1206834 ## Summary For more than 100 years, researchers have understood that ozone in the stratosphere, the atmospheric layer between 10 and 50 km above Earth's surface, plays an important role in absorbing ultraviolet radiation and protecting life on Earth (1). In 1985, scientists and the public became alarmed when Farman et al. (2) reported that, during the Antarctic spring, stratospheric ozone concentrations over the continent were declining by as much as 50%, indicating the presence of a polar “ozone hole.” Implementation of the 1987 Montreal protocol, an international agreement that phased out the use of some chlorofluorocarbons and other compounds that destroy stratospheric ozone, has led to the first stage of recovery (3). Researchers, however, had not widely recognized the ozone hole's impact on the climate of the troposphere (the lowest 10 km of the atmosphere) until recent observational (4) and state-of-the-art climate modeling studies (5-8). These studies showed that ozone depletion has a large influence during the Antarctic summer, when it drives a major air current called the mid-latitude westerly jet to a higher latitude, closer to Antarctica; this reduces sea level pressure over the continent, cooling much of the continental interior, coinciding with a warming of the Antarctic Peninsula. On page 951 of this issue, Kang et al. (9) expand our understanding of ozone depletion's impact on climate. Using a series of carefully designed climate model experiments, they show that ozone-induced climate change is not confined just to the vicinity of Antarctica but extends over much of the Southern Hemisphere, even reaching the tropics, where it appears to have resulted in increased summer precipitation in the subtropics.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8099798560142517, "perplexity": 2954.730819335574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735964.82/warc/CC-MAIN-20200805183003-20200805213003-00257.warc.gz"}
http://superuser.com/questions/340650/type-math-formulas-in-microsoft-word-the-latex-way
# Type math formulas in Microsoft Word the LaTeX way? I wonder if there is some free solutions for typing math formulas the LaTeX way in Microsoft Word document (Office 2007)? - Does the internal formula writer not work for you? It's pretty handy whenever I go to do integrals and/or summations. –  kobaltz Sep 28 '11 at 4:58 It is very slow compared to Latex. –  Tim Sep 28 '11 at 5:46 Wouldn't it be simple to write everything in LaTeX? –  N.N. Sep 28 '11 at 8:55 –  Tobias Kienzler Oct 22 '12 at 13:28 The question asked about "typing math formulas in LaTeX way in Microsoft word document (Office 2007)." But the responses and comments to date actually answer a different question -- how to embed an actual LaTeX processor into Word. Very few people realize that the built-in equation editor in Word 2007 actually understands LaTeX-style equation entry. Simply insert a new equation, and then type LaTeX into it. As you type, Word will built up a graphical representation of the equation. Once it appears in the GUI, you can no longer edit it as LaTeX. Word does not have an embedded TeX processor -- it's just doing pattern matching to convert simple LaTeX syntax into the native equation format. You should therefore not expect to get perfect fidelity for super-complex LaTeX equations. However, it's enough for probably anyone but a mathematician, and it's a lot faster than clicking elements with the mouse. The Word 2007 equation editor also has a linear equation entry format, which is fairly intuitive and does not require familiarity with LaTeX. For example, typing in (a+b)/(c+d) will result in a nicely-formatted fraction. Notes: 1. Many scientific journals will not accept Word documents with the new equation format -- even if you save as .doc instead of .docx. 2. This doesn't turn Word into LaTeX. It just does the equations themselves, and nothing else. You don't even get equation numbering. - Key point is LaTeX style, as usual ms has their own 'best' way of doing things. +1 for the post, it does make it easier if you are stuck with word. –  BAR Oct 1 '13 at 3:59 You can use Latex in Word. It provides macros for Microsoft Word that allow the use of LaTeX input to create equations images in both inline and display modes without having to install any software on the local computer. As far as I know, this is the only free alternative to the paid programs like Aurora and TexPoint. For Office 2007, go to Latex in Word Project Page on Source Forge, and click on Word 2007 under Files. LaTeX in Word is a GPL-licensed tool that allows equations to be used in Microsoft Word documents. The client-side of the program is implemented as VBA macros in the document "LaTeXinWord_v_0_3_1.docm" along with instructions. Hence, this file contains the source code, implementation, and documentation. - I prefer TeXsword over Latex in Word (http://sourceforge.net/projects/texsword/). It has all the features of Latex in Word, plus gives handling of equation references. And it doesn't require the Internet connection, which I see as a feature not a limitation: MikeTeX isn't that big after all, and having the LaTeX locally allows you typing your document when traveling. - Write your Math formulas in LaTeX → Transform the LaTeX formulas in MathML Code → Copy/Paste the MathML Code in Word (after paste click CTRL and then T). Voila! 
### EXAMPLE: Let's take this formula as an example: This is the LaTeX source code for the above formula: 0 \leq \lim_{n\to \infty}\frac{n!}{(2n)!} \leq \lim_{n\to \infty} \frac{n!}{(n!)^2} = \lim_{k \to \infty, k = n!}\frac{k}{k^2} = \lim_{k \to \infty}\frac{1}{k} = 0. Now open an editor and put the above source code between the $$ … $$ signs like this: <!DOCTYPE html> <html> <script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> <title>tex texample</title> <body> $$0 \leq \lim_{n\to \infty}\frac{n!}{(2n)!} \leq \lim_{n\to \infty} \frac{n!}{(n!)^2} = \lim_{k \to \infty, k = n!}\frac{k}{k^2} = \lim_{k \to \infty}\frac{1}{k} = 0.$$ </body> </html> Save the file as an .html file and open it with a browser like Chrome. Right Click on the Formula and Choose Show MathML As → MathML Code. <math xmlns="http://www.w3.org/1998/Math/MathML" display="block"> <mn>0</mn> <mo>&#x2264;<!-- ≤ --></mo> <munder> <mo form="prefix" movablelimits="true">lim</mo> <mrow class="MJX-TeXAtom-ORD"> <mi>n</mi> <mo stretchy="false">&#x2192;<!-- → --></mo> <mi mathvariant="normal">&#x221E;<!-- ∞ --></mi> </mrow> </munder> <mfrac> <mrow> <mi>n</mi> <mo>!</mo> </mrow> <mrow> <mo stretchy="false">(</mo> <mn>2</mn> <mi>n</mi> <mo stretchy="false">)</mo> <mo>!</mo> </mrow> </mfrac> <mo>&#x2264;<!-- ≤ --></mo> <munder> <mo form="prefix" movablelimits="true">lim</mo> <mrow class="MJX-TeXAtom-ORD"> <mi>n</mi> <mo stretchy="false">&#x2192;<!-- → --></mo> <mi mathvariant="normal">&#x221E;<!-- ∞ --></mi> </mrow> </munder> <mfrac> <mrow> <mi>n</mi> <mo>!</mo> </mrow> <mrow> <mo stretchy="false">(</mo> <mi>n</mi> <mo>!</mo> <msup> <mo stretchy="false">)</mo> <mn>2</mn> </msup> </mrow> </mfrac> <mo>=</mo> <munder> <mo form="prefix" movablelimits="true">lim</mo> <mrow class="MJX-TeXAtom-ORD"> <mi>k</mi> <mo stretchy="false">&#x2192;<!-- → --></mo> <mi mathvariant="normal">&#x221E;<!-- ∞ --></mi> <mo>,</mo> <mi>k</mi> <mo>=</mo> <mi>n</mi> <mo>!</mo> </mrow> </munder> <mfrac> <mi>k</mi> <msup> <mi>k</mi> <mn>2</mn> </msup> </mfrac> <mo>=</mo> <munder> <mo form="prefix" movablelimits="true">lim</mo> <mrow class="MJX-TeXAtom-ORD"> <mi>k</mi> <mo stretchy="false">&#x2192;<!-- → --></mo> <mi mathvariant="normal">&#x221E;<!-- ∞ --></mi> </mrow> </munder> <mfrac> <mn>1</mn> <mi>k</mi> </mfrac> <mo>=</mo> <mn>0.</mn> </math> Now Copy/Paste the MathML Code in Word 2013 (or 2007) and click sequentially CTRL and then T (Paste Options: keep the text only) or go to the small Ctrl image at the end of the MathML Code you pasted and select the option manually. This is how the formula looks at the end in Word 2013: - The answer mentioned above is correct, but there is also a built-in shortcut, which is Math AutoCorrect. It is much like LaTeX. By default it's inactive, but you can activate it, and it is really helpful if you want to write big equations. For example, if you want to type H2 you just have to type H_2, etc.; many more options are available; for instance, a character following the caret (^) sign will be converted to a superscript. Many such shortcuts are covered in this video, or you can simply search for How to insert mathematical equation (like LaTeX) in Ms-Office: Tips and tricks on youtube. This method will be especially helpful if you are fast at typing. Moreover, it will save the time lost while switching between keyboard and mouse and searching for the proper option in Word. - Taoyue mentioned that in his answer two years ago, with screenshots. –  Ben Voigt Dec 6 '14 at 0:05
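If you prefer to script the LaTeX-to-MathML step rather than going through a browser, a command-line converter can produce the same kind of MathML to paste into Word. The sketch below is my own addition (not from the answers above) and assumes pandoc is installed and that its --mathml option is available; the formula string is just the example used earlier.

```python
import subprocess

latex_formula = r"$\lim_{k \to \infty}\frac{1}{k} = 0$"

# Assumption: pandoc is on the PATH and supports the --mathml flag,
# which renders TeX math in the input as MathML in the HTML output.
result = subprocess.run(
    ["pandoc", "--from=markdown", "--to=html", "--mathml"],
    input=latex_formula,
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # HTML containing a <math>...</math> element you can copy into Word
```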
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8936052918434143, "perplexity": 3253.142527291103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115856041.43/warc/CC-MAIN-20150124161056-00055-ip-10-180-212-252.ec2.internal.warc.gz"}
https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhysicsResultsFSQ12020
# Measurement of the underlying event activity in pp collisions at the LHC using leading tracks at √s = 7 TeV and comparison with √s = 0.9 TeV ## Abstract A measurement of the underlying event activity is performed on proton-proton collisions for √s = 0.9 TeV and 7 TeV at CMS. The charged particles in the azimuthal region transverse to the leading track, with pT > 0.5 GeV/c and within |η| < 0.8, are studied. The aim of the study is to compare the measurements with various MC predictions to get extra constraints for the tuning parameters, as well as to get a better understanding of the underlying event phenomena. A significant increase in the average multiplicity and the average scalar sum of transverse momenta (ΣpT) in the transverse region is observed up to a leading track pT of a few GeV/c, followed by a slower rate of increase. Both quantities are observed to increase with √s. ## Results at √s=7 TeV The corrected measurements of charged particles with pT > 0.5 GeV/c and |η| < 0.8 in the transverse region, 60° < |Δφ| < 120°, as a function of the pT of the leading track: the average multiplicity per unit of pseudorapidity and per radian (upper left), and the average scalar ΣpT per unit of pseudorapidity and per radian (upper right). The inner error bars represent the statistical uncertainty, and the outer error bars show the systematic and statistical uncertainties added in quadrature. Predictions of PYTHIA6 (tunes Z1 and D6T), PYTHIA8 tune 1 and the default tune of Herwig++ are compared to the data, and the ratios of the MC predictions over the data are presented in the lower row. ## Results for √s=0.9 TeV Fully corrected measurements of charged particles with pT > 0.5 GeV/c and |η| < 0.8 in the transverse region, 60° < |Δφ| < 120°: (upper left) the average multiplicity, and (upper right) the average scalar ΣpT, per unit of pseudorapidity and per radian, as functions of the leading track pT, for data at √s = 0.9 TeV. The inner error bars represent the statistical uncertainty, and the outer error bars represent systematic and statistical uncertainties added in quadrature. The lower plots show the ratios of the MC predictions over the data, as functions of the leading track pT, with pT > 0.5 GeV/c and |η| < 0.8 in the transverse region, for the average multiplicity (lower left) and the average scalar ΣpT (lower right). The inner bands correspond to the average statistical uncertainty and the outer bands correspond to the total experimental uncertainty (statistical and systematic uncertainties added in quadrature). The outer error bands are hardly visible since, in the regions of high leading track pT shown in the results, the statistical uncertainty dominates. ## Center-of-mass energy dependence Fully corrected measurements of charged particles with pT > 0.5 GeV/c and |η| < 0.8 in the transverse region: the average multiplicity (upper left plot), and the average scalar ΣpT (upper right plot), per unit of pseudorapidity and per radian, as functions of the leading track pT, for data at √s = 0.9 TeV and √s = 7 TeV. The inner error bars represent statistical uncertainty, and the outer error bars represent systematic and statistical uncertainties added in quadrature, except for correlated systematic uncertainty sources. The lower row shows the ratio of the hadronic activity in the transverse region for the two center-of-mass energies: the average multiplicity (lower left plot) and the average scalar ΣpT (lower right plot).
## Comparison with ALICE

Fully corrected measurements of charged particles with pT > 0.5 GeV/c and |η| < 0.8 in the transverse region compared to ALICE data: (left plots) the average multiplicity, and (right plots) the average scalar sum of pT, per unit of pseudorapidity and per radian, as functions of the leading-track pT, for data at √s = 0.9 TeV (lower row) and √s = 7 TeV (upper row).

A link to the document: http://cdsweb.cern.ch/record/1478982?ln=en
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9591765999794006, "perplexity": 4854.753265329082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361723.15/warc/CC-MAIN-20210228175250-20210228205250-00024.warc.gz"}
https://cs.stackexchange.com/questions/117648/is-there-a-model-of-zf%C2%ACc-where-some-program-always-terminates-but-has-no-loop-va
# Is there a model of ZF¬C where some program always terminates but has no loop variant? Wikipedia has a proof that every loop that terminates has a loop variant—a well-founded relation on the state space such that each iteration of the loop results in a state that is less than the previous iteration's state under the relation. Here, well-founded refers to the usual classical definition of a well-founded relation: every nonempty subset has a minimal element. The proof given in the linked article is as follows: 1. Let the loop variant be the "iteration" relation, i.e. the reflexive transitive closure of the transition relation. 2. Since the loop always terminates, the loop variant has no infinite descending chains. 3. Apply the axiom of choice to conclude that the loop variant is well-founded. My question is about step 3. Using the full, uncountable axiom of choice here feels like swatting a fly with an atom bomb. Elsewhere on Wikipedia, we have the following: Equivalently, assuming the axiom of dependent choice, a relation is well-founded if it contains no countable infinite descending chains: that is, there is no infinite sequence $$x_0, x_1, x_2, \dots$$ of elements of $$X$$ such that $$x_{n+1}\ R\ x_n$$ for every natural number $$n$$. So the much weaker axiom of dependent choice is sufficient. It seems like it might be possible to weaken this assumption further. The state space of a computer program is not an arbitrary set from the entire Von Neumann universe of ZF. Maybe countable choice suffices, since the state space of any program is countable? On the other hand, if dependent choice is required and countable choice will not suffice, then (assuming ZF is consistent) there must exist a model of ZF + countable choice where there is some program that (a) always terminates, (b) has an iteration relation with no infinite descending chains, yet (c) has no well-founded loop variant. This seems deeply weird. My question is: 1. Is there a model of ZF where a program always terminates but has no loop variant? 2. If the answer to 1 is yes, then what is the weakest choice principle that, when added to ZF, changes the answer to no? 3. If the answer to 1 is yes, is it possible to write down an explicit example of such a program (a la Harvey Friedman's explicit formulas equivalent to the strengths of ordinals), or does such a program necessarily correspond to a non-standard natural number? • This sounds a tad advanced for this site. – Yuval Filmus Nov 26 '19 at 23:36 • @Yuval I may re-ask it at cstheory if it gets no traction here. But I thought it would basically be a simple reference answer if someone with the right knowledge sees it. – Aaron Rotenberg Nov 27 '19 at 1:04 I think you are really asking a question about the definition of the notion of well-foundedness. I think the notion of loop variants is a bit of a red herring here: I would argue that any reasonable definition of well-foundedness should enable proving that a loop is terminating iff there is a well-founded relation which acts as a variant for it, almost as a tautology. The issue is that the classical definition of a well-founded order $$<$$ on $$X$$: There are no infinite sequences $$x_1>x_2>x_3>\ldots$$ is not a very nice definition, either from a constructive standpoint or when one is uncomfortable with the use of the axiom of (dependent, thanks Andrej!) choice. 
Assuming the latter, this definition is equivalent to the much nicer definition: Every non-empty subset $$P\subseteq X$$ has a minimal element, that is, some $$x\in P$$ such that $$y \not< x$$ for every $$y\in P$$. This definition is already much nicer, and I think it can be used to prove the variant lemma without choice. Finally, the constructive version of well-foundedness is this: For every $$P\subseteq X$$, if for every $$x\in X$$, $$\{\ y\ |\ y < x\ \}\subseteq P$$ implies $$x\in P$$, then $$P = X$$. This definition seems more unwieldy, but it is actually the one you want: it enables induction over well-founded orders, without using either choice or excluded middle. Assuming excluded middle, it is equivalent to the previous one. Finally, the business in the wikipedia article about ordinals is not really necessary for any technical analysis of termination, and, in addition, choice is not required if you define $$\omega_1$$ to be the order type of the set of countable ordinals (ordered by the prefix relation). • The first Wikipedia article I linked to uses termination to prove your definition 1, then applies the axiom of choice to that to prove definition 2. How would one go from termination to your definitions 2 or 3 without using any choice principle? – Aaron Rotenberg Nov 27 '19 at 15:25 • You probably mean to say that every inhabited subset has a minimal element? – Andrej Bauer Nov 27 '19 at 16:11 • If memory serves me right, to prove all three equivalent we need excluded middle and Dependent Choice. – Andrej Bauer Nov 27 '19 at 16:11 • @AndrejBauer good catch! And yes, I think Dependent Choice is correct, at least intuitively. – cody Nov 27 '19 at 19:29
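As a concrete illustration of the variant lemma discussed above (this snippet is my addition, not part of the thread, and the function name is made up), here is a terminating loop together with an explicit variant witnessing its termination:

```python
# Euclid's algorithm: a loop that always terminates, with an explicit
# loop variant. The variant maps each state (a, b) to the natural number b;
# b strictly decreases on every iteration (since a % b < b), and the usual
# order < on the naturals is well-founded, so the loop cannot run forever.
def euclid_gcd(a, b):
    a, b = abs(a), abs(b)
    while b != 0:
        a, b = b, a % b   # new b = a % b < old b
    return a

print(euclid_gcd(252, 105))  # 21
```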
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 17, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9591661691665649, "perplexity": 306.7368228481484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370524604.46/warc/CC-MAIN-20200404165658-20200404195658-00514.warc.gz"}
https://www.physicsforums.com/threads/wave-on-a-string-problem.213272/
# Wave on a string problem

1. ### Sdawg1969 27

1. The problem statement, all variables and given/known data
If y(x,t) = (6.0 mm) sin(kx + (600 rad/s)t + Φ) describes a wave travelling along a string, how much time does any given point on the string take to move between displacements y = +2.0 mm and y = -2.0 mm?

2. Relevant equations
I think y(t) = ym sin(ωt)?

3. The attempt at a solution
Well, if I plug in 2.0 mm for y, 6.00 mm for ym and 600 rad/s for ω, I come up with the equation 2.0 mm = 6.00 mm sin(600 rad/s · t). Where do I go from here? Are my assumptions correct so far? Other things as I am thinking: 600 rad/s is about 95.5 Hz, so each complete cycle from +6 mm to -6 mm should take 0.01 s or so, so my answer should be less than that.

Last edited: Feb 5, 2008

2. ### Sdawg1969 27

anyone?

3. ### Sdawg1969 27

I think I have it- if I then take sin^-1(2/6) = 600·T1 and sin^-1(-2/6) = 600·T2, then ΔT is T1 - T2? Does anyone have any input here?

4. ### Sdawg1969 27

pretty sure I've got it-

y1(x,t) = ym · sin(kx + 600t1 + Φ)
2.00 mm = 6.00 mm · sin(kx + 600t1 + Φ)
sin^-1(1/3) = kx + 600t1 + Φ

y2(x,t) = ym · sin(kx + 600t2 + Φ)
-2.00 mm = 6.00 mm · sin(kx + 600t2 + Φ)
sin^-1(-1/3) = kx + 600t2 + Φ

so... sin^-1(1/3) - sin^-1(-1/3) = (kx + 600t1 + Φ) - (kx + 600t2 + Φ)
and... sin^-1(1/3) - sin^-1(-1/3) = 600t1 - 600t2
finally, (sin^-1(1/3) - sin^-1(-1/3)) / 600 = t1 - t2

Do the math and Δt is 0.00113 s. I think this is solved.

5. ### Puchinita5 169

No one ever responded to this guy's problem, and now I'm actually trying to solve this as well. I tried the method he ended up using but I am not getting a correct answer. Although his method for the most part looks right and makes sense to me, the only thing I figure would be the problem is that x isn't a constant, so they should cancel, I don't think... but I'm not sure what else to do with so many unknown variables... any help???
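A quick numerical check of the final answer in post 4 (this snippet is my own addition, not part of the thread):

```python
import math

# Time for the phase of y = (6 mm) sin(kx + (600 rad/s) t + phi) to move
# from sin(phase) = +1/3 to sin(phase) = -1/3: the phase must change by
# arcsin(1/3) - arcsin(-1/3) = 2*arcsin(1/3), at a rate of 600 rad/s.
delta_t = 2 * math.asin(1.0 / 3.0) / 600.0
print(round(delta_t, 5))  # 0.00113 (seconds)
```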
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8674635291099548, "perplexity": 1684.9022375443833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207926828.58/warc/CC-MAIN-20150521113206-00316-ip-10-180-206-219.ec2.internal.warc.gz"}
https://mvngu.wordpress.com/category/symbolic-computation/
### Archive Archive for the ‘symbolic computation’ Category ## Sage Math weekly Facebook update: 2010-09-08 Here is this week’s summary for the Sage Math Facebook page: • 335 monthly active users; 17 since last week • 1,752 people like this; 15 since last week • 13 wall posts and comments this week; no change since last week • 460 visits this week; 87 since last week ## Typesetting Sage code listings in LaTeX 27 June 2010 1 comment I needed to typeset Sage code listings in a LaTeX document. I have used the listings package before, but I want to customize with colours for keywords, comments, strings, etc. Borrowing some customization from William Stein’s ICMS 2010 paper, I think I have my customization just about right. The relevant code snippet for the preamble is: %% For typesetting code listings \usepackage{listings} \lstdefinelanguage{Sage}[]{Python} {morekeywords={False,sage,True},sensitive=true} \lstset{ frame=none, showtabs=False, showspaces=False, showstringspaces=False, keywordstyle={\ttfamily\color{dbluecolor}\bfseries}, stringstyle={\ttfamily\color{dgraycolor}\bfseries}, language=Sage, basicstyle={\fontsize{10pt}{10pt}\ttfamily}, aboveskip=0.3em, belowskip=0.1em, numbers=left, numberstyle=\footnotesize } \definecolor{dblackcolor}{rgb}{0.0,0.0,0.0} \definecolor{dbluecolor}{rgb}{0.01,0.02,0.7} \definecolor{dgreencolor}{rgb}{0.2,0.4,0.0} \definecolor{dgraycolor}{rgb}{0.30,0.3,0.30} \newcommand{\dblue}{\color{dbluecolor}\bf} \newcommand{\dred}{\color{dredcolor}\bf} \newcommand{\dblack}{\color{dblackcolor}\bf} And here’s an example \begin{lstlisting} sage: R.<x> = ZZ[] sage: type(R.an_element()) <type 'sage.rings...Polynomial_integer_dense_flint'> sage: R.<x,y> = ZZ[] sage: type(R.an_element()) <type 'sage.rings...MPolynomial_libsingular'> sage: R = PolynomialRing(ZZ, 'x', implementation='NTL') sage: type(R.an_element()) # this is a comment <type 'sage.rings...Polynomial_integer_dense_ntl'> sage: def abc(): ... """ ... This should be a very long comment. ... That should span multiple lines. ... To illustrate what colour Sage comments look like. ... To get a feel for the color when rendered using LaTeX. ... """ ... return 2 \end{lstlisting} This renders as follows: ## Pushing towards 90% doctest coverage for Sage 5.0 This is an edited version of my post to sage-devel. One of the main goals of the upcoming Sage 5.0 release is to get doctest coverage of the Sage library up to at least 90%. As of Sage 4.4.4.alpha0, the overall weighted coverage is 82.7%. To get a sense of which modules in the Sage library need work on their coverage scores, you could use the coverage script as follows: $./sage -coverage /path/to/module.py[x] Or you could do the following to get the coverage scores of all modules, including a coverage summary: $ ./sage -coverageall You might be interested in knowing which modules have a certain coverage percentage, in which case you could save the output of -coverageall to a text file and then grep that file for certain coverage scores. At this repository is a script to generate various types of coverage analysis reports. You can also find the script here. The script currently supports the following reports 1. The coverage summary of all modules. 2. Modules with 100% coverage. 3. Modules with zero coverage. 4. Modules with between 1% and 9% coverage. 5. Modules with between 10% and 19% coverage. 6. Modules with between 20% and 29% coverage. 7. Modules with between 30% and 39% coverage. 8. Modules with between 40% and 49% coverage. 9. 
Modules with between 50% and 59% coverage. 10. Modules with between 60% and 69% coverage. 11. Modules with between 70% and 79% coverage. 12. Modules with between 80% and 89% coverage. 13. Modules with between 90% and 99% coverage. Each report has links to detailed reports for individual modules. To run the script, copy it to the SAGE_ROOT of a Sage source or binary installation and do [mvngu@sage sage-4.4.4.alpha0]$./coverage-status.py Coverage report of all modules... Summary of doctest coverage... Modules with 0% coverage... Modules with 100% coverage... Coverage reports within certain ranges... Detailed coverage report for all modules... Format the detailed coverage reports... Format the summary reports... Generate index.html... And you’re done. Here is a report generated by the script. The idea is to provide an overview of which modules need work. I’d be interested to know what other types of doctest coverage reports people would like to see. Comments, suggestions, critiques, etc. are welcome. ## Hill cipher in Sage 2 June 2010 1 comment Let’s first consider how Hill cipher encryption is commonly presented in introductory texts on cryptography or even Wikipedia. Let ${M}$ be a ${3 \times 3}$ invertible matrix over ${\mathbf{Z}_{26}}$ and let ${P}$ be a ${3 \times n}$ matrix also over ${\mathbf{Z}_{26}}$. We call ${M}$ the encryption key and ${P}$ is referred to as the plaintext. The ciphertext ${C}$ corresponding to ${P}$ is given by $\displaystyle C = MP \pmod{26}.$ According to this scheme of encryption, given $\displaystyle M = \begin{bmatrix} 6 & 24 & 1 \\ 13 & 16 & 10 \\ 20 & 17 & 15 \end{bmatrix} \ \ \ \ \ (1)$ and $\displaystyle P = \begin{bmatrix} 0 \\ 2 \\ 19 \end{bmatrix} \ \ \ \ \ (2)$ then the ciphertext is $\displaystyle C = \begin{bmatrix} 6 & 24 & 1 \\ 13 & 16 & 10 \\ 20 & 17 & 15 \end{bmatrix} \begin{bmatrix} 0 \\ 2 \\ 19 \end{bmatrix} = \begin{bmatrix} 15 \\ 14 \\ 7 \end{bmatrix}.$ Hill cipher encryption in Sage works differently from that presented above. If ${M}$ is the encryption matrix key and ${P}$ is the plaintext matrix, then the ciphertext is the matrix ${PM}$. Here, ${M}$ is still a square (${3 \times 3}$) matrix and ${P}$ is an ${n \times 3}$ matrix where the entries are filled from left to right, top to bottom. According to this scheme of encryption, with ${M}$ and ${P}$ as in (1) and (2), respectively, we get $\displaystyle C = P^T M = \begin{bmatrix} 0 & 2 & 19 \end{bmatrix} \begin{bmatrix} 6 & 24 & 1 \\ 13 & 16 & 10 \\ 20 & 17 & 15 \end{bmatrix} = \begin{bmatrix} 16 & 17 & 19 \end{bmatrix}.$ Or using Sage: sage: version() Sage Version 4.4.1, Release Date: 2010-05-02 sage: H = HillCryptosystem(AlphabeticStrings(), 3) sage: M = Matrix(IntegerModRing(26), [[6,24,1], [13,16,10], [20,17,15]]) sage: P = H.encoding("ACT") sage: H.enciphering(M, P) QRT ## Sage 4.4.2 release schedule 7 May 2010 Leave a comment Sage 4.4.2 is intended to be a little release on the way to Sage 5.0. I’m devoting a week or so to managing the release of Sage 4.4.2. Here’s a proposed release schedule I posted to sage-release: • Sage 4.4.2.alpha0 — release 10th May 2010 • Sage 4.4.2.rc0 — feature freeze; release 15th May 2010 • Anything critical to get Sage 4.4.2.final to stabilize • Sage 4.4.2.final — release 18th May 2010 I’m being too optimistic here with the above schedule. But we’ll see how things go. ## Binary trees and branch cut 1 May 2010 1 comment The topic and content of this post originate from a question to sage-support. 
I’m posting the essential responses here so it doesn’t get lost in the sage-support archive. Problem How do you construct a binary tree in Sage? If T is a binary tree, how do you cut off a branch of that tree? Solution There is as yet no class for representing binary trees in Sage. However, you could use either the classes Graph or DiGraph to construct a graph T and then use the method T.is_tree() to determine whether or not T is a tree. There is also a balanced tree generator. Also missing is a method to determine whether or not a tree is binary. That can be remedied by defining your own function to test the number of children a vertex has. Using Graph to construct a tree, and then test that tree to see that it is binary, is rather difficult because unless you label the vertices to indicate their parents, you don’t know which vertex is a child of which other vertex. In general, I prefer using DiGraph to construct a tree T and then use the method T.neighbors_out() in testing whether or not T is a binary tree. The reason is that in a digraph that represents a tree, you can think of the out-neighbors of a vertex as being the children of that vertex. Here is an example demonstrating the construction of a binary tree rooted at vertex v. By definition, a vertex in a binary tree has at most 2 children. The session below uses this definition to test whether or not a tree is binary. sage: T = DiGraph({"v": ["a", "w"], ....: "w": ["x", "y"], ....: "x": ["c", "b"], ....: "y": ["z", "d"], ....: "z": ["f", "e"]}) sage: T.vertices() ['a', 'b', 'c', 'd', 'e', 'f', 'v', 'w', 'x', 'y', 'z'] sage: T.edges(labels=None) [('v', 'a'), ('v', 'w'), ('w', 'x'), ('w', 'y'), ('x', 'b'), ('x', 'c'), ('y', 'd'), ('y', 'z'), ('z', 'e'), ('z', 'f')] sage: T.is_tree() True sage: def is_binary_tree(tree): ....: for v in tree.vertex_iterator(): ....: if len(tree.neighbors_out(v)) > 2: ....: return False ....: return True ....: sage: is_binary_tree(T) True Nathann Cohen offered another way to test that a graph is a binary tree. sage: def is_binary_tree(g): ....: if g.is_tree() and max(g.degree()) == 3 and g.degree().count(2) == 1: ....: return True ....: return False ....: sage: is_binary_tree(T) True Once you have determined the root vertex of a branch that you want to cut off, you could use breadth-first search (or depth-first search) to determine all vertices in that branch. Again, assume that your binary tree T is represented using the DiGraph class and V is a list of vertices in the branch you want to cut off. You can use the method T.delete_vertices() to cut off that branch. Deleting a vertex v not only deletes v, but also all edges incident on that vertex. Say you have constructed your tree as in the above session and you have determined that the vertex y is the root of the branch you want to cut off. Here is how you can cut off that branch: sage: V = list(T.breadth_first_search("y")) sage: V ['y', 'd', 'z', 'e', 'f'] sage: T.delete_vertices(V) sage: T.vertices() ['a', 'b', 'c', 'v', 'w', 'x'] sage: T.edges(labels=None) [('v', 'a'), ('v', 'w'), ('w', 'x'), ('x', 'b'), ('x', 'c')] ## Adding Mercurial patches before building Sage 8 March 2010 Leave a comment The following problem and accompanying solution were posted to sage-devel. I have polished them up a bit and put them here so they won’t be buried in the huge sage-devel mailing list. Problem You have a number of Mercurial patches that you want to apply to a Sage source distribution. However, you don’t want to go through the following work flow: 1. 
Build a Sage source distribution. 2. Apply your patches to the Sage library. 3. Produce a new source tarball based on your patched Sage source distribution. 4. Compile your newly wrapped up modified Sage source tarball. The key problem is that you don't want to run through two separate compilation processes. Solution The following solution uses the queues extension of Mercurial. In a Sage source tarball, the Sage library is wrapped up as the package SAGE_ROOT/spkg/standard/sage-x.y.z.spkg Uncompress that bzip2 compressed tarball to get a directory named SAGE_ROOT/spkg/standard/sage-x.y.z/ If you want, you could delete SAGE_ROOT/spkg/standard/sage-x.y.z.spkg Now patch the Sage library as if it had been built and found under SAGE_ROOT/devel:

$ cd SAGE_ROOT/spkg/standard/sage-x.y.z/
$ hg qimport /URL/or/path/to/patch.patch
$ hg qpush
$ <repeat-previous-two-commands-as-often-as-required>

The command

$ hg qapplied

should return a list of patches you have applied using "hg qpush". Once you are happy that you have applied all necessary patches, wrap up the patched Sage library:

$ pwd
SAGE_ROOT/spkg/standard/sage-x.y.z/
$ hg qfinish -a
$ cd ..

And see this section of the Developer's Guide to find out how to package the newly patched Sage library, i.e. the directory SAGE_ROOT/spkg/standard/sage-x.y.z/ Now you should have SAGE_ROOT/spkg/standard/sage-x.y.z.spkg so you could delete SAGE_ROOT/spkg/standard/sage-x.y.z/ Finally, proceed to build Sage from source.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 24, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9905850291252136, "perplexity": 1955.2509162329065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824543.20/warc/CC-MAIN-20171021024136-20171021044136-00336.warc.gz"}
http://pseudomonad.blogspot.com/2011/12/old-symmetry.html
## Friday, December 2, 2011 ### Old Symmetry As in the tetractys qutrit path count, the hexagonal symmetries of 20th century physics include a doubling of points at the centre of the hexagon. Although clearly discrete, such hexagons are usually interpreted in terms of the continuum representation theory. Observe how the quantum numbers $q$ and $s$ take values $-1$, $0$, $1$. The charge $q = \pm 1$ is now associated to a triplet of ribbon twists. Similarly, rotation of a Furey hexagon (Fano plane) gives color, and a tripled hexagon gives generations. There are four index hexagons in the tetractys: three at the corners and one in the centre. One easily triples the state count on the baryon octet, accounting for quarks. This octet count now agrees with the tetractys.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8201037645339966, "perplexity": 2131.880670015202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320209.66/warc/CC-MAIN-20170624013626-20170624033626-00224.warc.gz"}
https://idiocy.org/gaussian-prime-spirals.html
# Gaussian prime spirals

Gaussian integers are complex numbers where both the real and imaginary parts are integers. So $$3+7i$$ is a Gaussian integer, but $$1/2+3i$$ isn't. Pretty straight-forward, right? A Gaussian prime is, funnily enough, a Gaussian integer that is also a prime. Because complex numbers are a bit weird, the rules for determining whether any Gaussian integer is prime are a little convoluted. Basically it boils down to:

• if either the real or imaginary part is $$0$$ the other part must be a prime of the form $$4n+3$$. Or:
• the real and imaginary parts squared and added together must be a prime not of the form $$4n+3$$.

A prime of the form $$4n+3$$ simply means that if you subtract three and divide by four the resulting number should be an integer. One thing to note about the Gaussian primes is that they are symmetrical around both the real and imaginary axes. This means that since $$13+8i$$ is a prime, so are all $$±13±8i$$. It's also worth noting that all $$±8±13i$$ are primes too. Lots of symmetry!

But what's this got to do with spirals, you ask. Well, nothing as far as I can see since they're not really spirals, but that seems to be the name that's stuck. Start at $$8+13i$$, which is a prime, and repeatedly add $$1$$ until you reach another prime, $$10+13i$$, then turn left, so now you're adding $$i$$, and continue until you reach another prime, $$10+17i$$, and turn left again ($$-1$$) and continue repeating; you'll eventually end up tracing over your previous path:

1. $$8+13i$$
2. $$10+13i$$
3. $$10+17i$$
4. $$8+17i$$
5. $$8+13i$$ (same as 1)
6. $$10+13i$$ (same as 2)
7. etc.

It seems to be a rule that you will always end up tracing a loop, although it's usually more complex than the square in the example above. For example $$-12-7i$$ looks like this: The "spirals" are quite attractive and nicely symmetrical. Or at least all the ones I've seen have been.

Some numbers to try:
• $$-12-7i$$
• $$3+5i$$
• $$5+23i$$
• $$12+28i$$
• $$1+63i$$

Note: The script on the original page generates the spiral and displays it in the browser, but it can take quite a long time if the spiral is large, for example $$232+277i$$. My intention was to include a corner count that updated as the spiral was generated so you could tell it was actually doing something, but battling the CSS on this page has put me off for the time being. I'll try and add it later.
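For readers who want to reproduce the walk without the interactive widget, here is a rough Python sketch of the primality rule and the left-turning walk described above. This is not the page's own script; the function names and the exact stopping rule (stop when a position-heading pair repeats) are my own choices.

```python
# Gaussian prime test and the left-turning walk ("spiral") from the text.
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_gaussian_prime(z):
    a, b = int(z.real), int(z.imag)
    if a == 0:
        return is_prime(abs(b)) and abs(b) % 4 == 3
    if b == 0:
        return is_prime(abs(a)) and abs(a) % 4 == 3
    return is_prime(a * a + b * b)  # norm prime, never of the form 4n+3

def spiral(start):
    """Walk from a Gaussian prime, turning left at each prime reached,
    until a (position, heading) state repeats; return the corner points."""
    z, step = start, 1
    seen = set()
    corners = []
    while (z, step) not in seen:
        seen.add((z, step))
        z += step
        if is_gaussian_prime(z):
            corners.append(z)
            step *= 1j  # turn left
    return corners

print(spiral(8 + 13j))  # [(10+13j), (10+17j), (8+17j), (8+13j)]
```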
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8122196793556213, "perplexity": 788.8316638758494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647556.43/warc/CC-MAIN-20180321004405-20180321024405-00325.warc.gz"}
http://mathhelpforum.com/number-theory/133066-indicies-print.html
# Indices

• Mar 10th 2010, 02:41 AM
tim_mannire
Indices
Hi everyone. I know this may seem a little basic for university study, but we are currently doing revision and I'm stuck on this question: Simplify the expression 3^x+y-15^y/3^y-2
• Mar 10th 2010, 06:25 AM
Sudharaka
Quote: Originally Posted by tim_mannire
Hi everyone. I know this may seem a little basic for university study, but we are currently doing revision and I'm stuck on this question: Simplify the expression 3^x+y-15^y/3^y-2
Dear tim, Do you mean, $\frac{3^{x}+y-15^{y}}{3^{y}-2}$
• Mar 10th 2010, 11:30 AM
tim_mannire
yes, but the first term is 3 to the power of (x+y), i.e. 3^(x+y)
• Mar 10th 2010, 11:36 AM
icemanfan
About the only thing I can think of is this: $\frac{3^{x+y} - 15^y}{3^y - 2} = \frac{3^y(3^x - 5^y)}{3^y - 2}$
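A quick numerical spot-check (my own addition, not part of the thread) that the factorization in the last reply really is an identity:

```python
import math

# Claim: (3**(x+y) - 15**y) / (3**y - 2) == 3**y * (3**x - 5**y) / (3**y - 2),
# which follows from 3**(x+y) = 3**x * 3**y and 15**y = 3**y * 5**y.
x, y = 2.3, 1.7  # arbitrary test values
lhs = (3 ** (x + y) - 15 ** y) / (3 ** y - 2)
rhs = 3 ** y * (3 ** x - 5 ** y) / (3 ** y - 2)
print(math.isclose(lhs, rhs))  # True
```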
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.972762405872345, "perplexity": 2590.5797725052635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718987.23/warc/CC-MAIN-20161020183838-00485-ip-10-171-6-4.ec2.internal.warc.gz"}
https://lonati.di.unimi.it/?page=lfa
# Linguaggi Formali e Automi ## Publications ### Journal articles 1. S. Crespi Reghizzi, V. Lonati, D. Mandrioli, M. Pradella, Toward a theory of input-driven locally parsable languages, Theoretical Computer Science, 658: 105-121 (2017) 2. V. Lonati, D. Mandrioli, F. Panella, M. Pradella. Operator Precedence Languages: Their Automata-Theoretic and Logic Characterization, SIAM Journal on Computing (SICOMP), 2015, Vol. 44, No. 4, pp. 1026-1088. 3. Strategies to scan pictures with automata based on Wang tiles. R.A.I.R.O. Theoretical Informatics and Applications, Vol 45(1): 163-180, 2011. Abstract: Wang automata are devices for picture language recognition recently introduced by us, which characterize the class REC of recognizable picture languages. Thus, Wang automata are equivalent to tiling systems or online tessellation acceptors, and are based like Wang systems on labeled Wang tiles. The present work focuses on scanning strategies, to prove that the ones Wang automata are based on are those following four kinds of movements: boustrophedonic, "L-like", "U-like", and spirals. 4. Deterministic recognizability of picture languages with Wang automata. Discrete Mathematics and Theoretical Computer Science, Vol 12(4): 73-94, 2010. The paper includes the results presented at MFCS 2009 and SOFSEM 2010. Abstract: We present a model of automaton for picture language recognition, called Wang automaton, which is based on labeled Wang tiles. Wang automata combine features of both online tessellation acceptors and 4-way automata: as in online tessellation acceptors, computation assigns states to each picture position; as in 4-way automata, the input head visits the picture moving from one pixel to an adjacent one, according to some scanning strategy. Wang automata recognize the class REC, i.e. they are equivalent to tiling systems or online tessellation acceptors, and hence strictly more powerful than 4-way automata. We also introduce a natural notion of determinism for Wang automata, and study the resulting class, extending the more traditional approach of diagonal-based determinism, used e.g. by deterministic tiling systems. In particular, we prove that the concept of row (or column) ambiguity defines the class of languages recognized by Wang automata directed by boustrophedonic scanning strategies. 5. A. Bertoni, M. Goldwurm, and V. Lonati. The Complexity of Unary Tiling Recognizable Picture Languages: Nondeterministic and Unambiguous Cases. Fundamenta Informaticae, Vol. 91(2): 231-249, 2009. The paper includes the results presented at STACS 2007. Abstract: In this paper we consider the classes REC1 and UREC1 of unary picture languages that are tiling recognizable and unambiguously tiling recognizable, respectively. By representing unary pictures by quasi-unary strings we characterize REC1 (resp. UREC1) as the class of quasi-unary languages recognized by nondeterministic (resp. unambiguous) linearly space-bounded one-tape Turing machines with constraint on the number of head reversals. We apply such a characterization in two directions. First we prove that the binary string languages encoding tiling recognizable unary square languages lie between NTime(2^n) and NTime(4^n); by separation results, this implies there exists a non-tiling recognizable unary square language whose binary representation is a language in NTime(4^n \log n).
In the other direction, by means of results on picture languages, we are able to compare the power of deterministic, unambiguous and nondeterministic one-tape Turing machines that are linearly space-bounded and have constraint on the number of head reversals. 6. P. Boldi, V. Lonati, R. Radicioni, M. Santini, The Number of Convex Permutominoes. Information and Computation, Vol. 206(9-10): 1074-1083, 2008. The paper includes the results presented at LATA 2007. Abstract: Permutominoes are polyominoes defined by suitable pairs of permutations. In this paper we provide a formula to count the number of convex permutominoes of given perimeter. To this aim we define the transform of a generic pair of permutations, we characterize the transform of any pair defining a convex permutomino, and we solve the counting problem in the transformed space. 7. P. Boldi, V. Lonati, M. Santini, S. Vigna. Graph fibrations, graph isomorphism, and PageRank. R.A.I.R.O. Theoretical Informatics and Applications, Vol. 40: 227-253, 2006. Abstract: PageRank is a ranking method that assigns scores to web pages using the limit distribution of a random walk on the web graph. A fibration of graphs is a morphism that is a local isomorphism of in-neighbourhoods, much in the same way a covering projection is a local isomorphism of neighbourhoods. We show that a deep connection relates fibrations and Markov chains with restart, a particular kind of Markov chains that include the PageRank one as a special case. This fact provides constraints on the values that PageRank can assume. Using our results, we show that a recently defined class of graphs that admit a polynomial-time isomorphism algorithm based on the computation of PageRank is really a subclass of fibration-prime graphs, which possess simple, entirely discrete polynomial-time isomorphism algorithms based on classical techniques for graph isomorphism. We discuss efficiency issues in the implementation of such algorithms for the particular case of web graphs, in which O(n) space occupancy (where n is the number of nodes) may be acceptable, but O(m) is not (where m is the number of arcs). 8. M. Goldwurm, and V. Lonati. Pattern statistics and Vandermonde matrices. Theoretical Computer Science, Vol. 356: 153-169, 2006. The paper includes the results presented at STACS 2005. Abstract: In this paper we determine some limit distributions of pattern statistics in rational stochastic models. We present a general approach to analyze these statistics in rational models having an arbitrary number of strongly connected components. We explicitly establish the limit distributions in most significant cases; they are characterized by a family of unimodal density functions defined by means of confluent Vandermonde matrices. 9. A. Bertoni, C. Choffrut, M. Goldwurm, and V. Lonati. Local limit properties for pattern statistics and rational models. Theory of Computing Systems, Vol. 39: 209-235, 2006. The paper includes the results presented at Stacs 2004 and Dlt 2004. Abstract: Motivated by problems of pattern statistics, we study the limit distribution of the random variable counting the number of occurrences of the symbol a in a word of length n chosen at random in {a,b}*, according to a probability distribution defined via a rational formal series s with positive real coefficients. Our main result is a local limit theorem of Gaussian type for these statistics under the hypothesis that s is a power of a primitive series. 
This result is obtained by showing a general criterion for (Gaussian) local limit laws of sequences of integer random variables. To prove our result we also introduce and analyze a notion of symbol-periodicity for irreducible matrices, whose entries are polynomials over positive semirings; the properties we prove on this topic extend the classical Perron-Frobenius theory of non-negative real matrices. As a further application we obtain some asymptotic evaluations of the maximum coefficient of monomials of given size for rational series in two commutative variables. 10. D. de Falco, M. Goldwurm, V. Lonati. Frequency of symbol occurrences in bicomponent stochastic models. Theoretical Computer Science, Vol. 327(3): 269-300, 2004. The paper includes the results presented at Words 2003 and DLT 2003. Abstract: We give asymptotic estimates of the frequency of occurrences of a symbol in a random word generated by any bicomponent stochastic model. More precisely, we consider the random variable Y_n representing the number of occurrences of a given symbol in a word of length n generated at random; the stochastic model is defined by a rational formal series r having a linear representation with two primitive components. This model includes the case when r is the product or the sum of two primitive rational formal series. We obtain asymptotic evaluations for the mean value and the variance of Y_n and its limit distribution. 11. A. Bertoni, C. Choffrut, M. Goldwurm, and V. Lonati. On the number of occurrences of a symbol in words of regular languages. Theoretical Computer Science, Vol. 302(1-3): 431-456, 2003. Abstract: We study the random variable Y_n representing the number of occurrences of a symbol a in a word of length n chosen at random in a regular language L contained in {a,b}*, where the random choice is defined via a nonnegative rational formal series r of support L. Assuming that the transition matrix associated with r is primitive we obtain asymptotic estimates for the mean value and the variance of Y_n and present a central limit theorem for its distribution. Under a further condition on such a matrix, we also derive an asymptotic approximation of the discrete Fourier transform of Y_n that allows us to prove a local limit theorem for Y_n. Further consequences of our analysis concern the growth of the coefficients in rational formal series; in particular, it turns out that, for a wide class of regular languages L, the maximum number of words of length n in L having the same number of occurrences of a given symbol is of the order of growth L^n / n^(1/2) for some constant L. ### International conference proceedings 1. S. Crespi Reghizzi, V. Lonati, D. Mandrioli, M. Pradella. Locally Chain-Parsable Languages. In Proceedings MFCS 2015, 40th International Symposium on Mathematical Foundations of Computer Science, Milan, August 24-28, 2015. Lecture Notes in Computer Science, Vol 9234: 154-166, 2015. 2. V. Lonati, D. Mandrioli, F. Panella, M. Pradella. First-order Logic Definability of Free Languages. In Proceedings CSR 2015, 10th International Computer Science Symposium in Russia, Listvyanka, July 13-17, 2015. Lecture Notes in Computer Science, Vol 9139, pp. 310-324. 3. V. Lonati, D. Mandrioli, F. Panella, M. Pradella. Free Grammars and Languages. In Proceedings ICTCS 2013, 14th Italian Conference on Theoretical Computer Science, Palermo, September 9-11, 2013. 4. F. Panella, M. Pradella, V. Lonati, D. Mandrioli. Operator Precedence omega-languages.
In Proceedings DLT 2013, 17th International Conference on Developments in Language Theory, Paris, June 18-21, 2013. Lecture Notes in Computer Science, Vol 7907: 396-408, 2013. Abstract: Recent literature extended the analysis of omega-languages from the regular ones to various classes of languages with "visible syntax structure", such as visibly pushdown languages (VPLs). Operator precedence languages (OPLs), instead, were originally defined to support deterministic parsing and exhibit interesting relations with these classes of languages: OPLs strictly include VPLs, enjoy all relevant closure properties and have been characterized by a suitable automata family and a logic notation. We introduce here operator precedence omega-languages (omega-OPLs), investigating various acceptance criteria and their closure properties. Whereas some properties are natural extensions of those holding for regular languages, others require novel investigation techniques. Application-oriented examples show the gain in expressiveness and verifiability offered by omega-OPLs w.r.t. smaller classes. 5. V. Lonati, D. Mandrioli, M. Pradella. Logic Characterization of Invisibly Structured Languages: the Case of Floyd Languages. In Proceedings SOFSEM 2013, 39th International Conference on Current Trends in Theory and Practice of Computer Science, January 26-31, 2013. Lecture Notes in Computer Science, Vol 7741: 307-318, 2013. Abstract: Operator precedence grammars define a classical Boolean and deterministic context-free language family (called Floyd languages or FLs). FLs have been shown to strictly include the well-known Visibly Pushdown Languages, and enjoy the same nice closure properties. In this paper we provide a complete characterization of FLs in terms of a suitable Monadic Second-Order Logic. Traditional approaches to logic characterization of formal languages refer explicitly to the structures over which they are interpreted - e.g., trees or graphs - or to strings that are isomorphic to the structure, as in parenthesis languages. In the case of FLs, instead, the syntactic structure of input strings is "invisible" and must be reconstructed through parsing. This requires that logic formulae encode some typical context-free parsing actions, such as shift-reduce ones. 6. V. Lonati, D. Mandrioli, M. Pradella. Automata and Logic for Floyd Languages. In proceedings ICTCS 2012, 13th Italian Conference on Theoretical Computer Science. Varese, Italy, September 19-21, 2012. 7. Towards more expressive 2D deterministic automata. In Proceedings CIAA 2011, International Conference on Implementation and Application of Automata, July 12-16, 2011. Lecture Notes in Computer Science, Vol 6807: 225-237, 2011. Abstract: REC defines an important class of picture languages that is considered a 2D analogue of regular languages. In this paper we recall some of the most expressive operational approaches to define deterministic subclasses of REC. We summarize their main characteristics and properties and try to understand if it is possible to combine their main features to define a larger deterministic subclass. We conclude by proposing a convenient generalization based on automata and study some of its formal properties. 8. V. Lonati, D. Mandrioli, M. Pradella. Precedence Automata and Languages. In Proceedings CSR 2011, 6th International Computer Science Symposium in Russia, June 14-18, 2011. Lecture Notes in Computer Science, Vol 6651: 291-304, 2011.
Abstract: Operator precedence grammars define a classical Boolean and deterministic context-free family (called Floyd languages or FLs). FLs have been shown to strictly include the well-known visibly pushdown languages, and enjoy the same nice closure properties. We introduce here Floyd automata, an equivalent operational formalism for defining FLs. This also makes it possible to extend the class to deal with infinite strings, for instance to perform model checking. 9. Picture recognizability with automata based on Wang tiles. In Proceedings SOFSEM 2010, 36th International Conference on Current Trends in Theory and Practice of Computer Science. Czech Republic, January 23-29, 2010. Lecture Notes in Computer Science, Vol. 5901: 576-587, 2010. Abstract: We introduce a model of automaton for picture language recognition which is based on tiles and is called Wang automaton, since its description relies on the notation of Wang systems. Wang automata combine features of both online tessellation acceptors and 4-way automata: as in online tessellation acceptors, computation assigns states to each picture position; as in 4-way automata, the input head visits the picture moving from one pixel to an adjacent one, according to some scanning strategy. We prove that Wang automata recognize the class REC, i.e. they are equivalent to tiling systems or online tessellation acceptors, and hence strictly more powerful than 4-way automata. We also consider a very natural notion of determinism for Wang automata, and study the resulting class, comparing it with other deterministic classes considered in the literature, like DREC and Snake-DREC. 10. Deterministic recognizability of picture languages by Wang automata. In proceedings ICTCS 2009, 11th Italian Conference on Theoretical Computer Science. Cremona, Italy, September 28-30, 2009. 11. Snake-Deterministic Tiling Systems. In proceedings MFCS 2009, 34th International Symposium on Mathematical Foundations of Computer Science, Novy Smokovec (Slovakia), August 24-28, 2009. Lecture Notes in Computer Science, Vol. 5734: 549-560. Abstract: The concept of determinism, while clear and well assessed for string languages, is still a matter of research as far as picture languages are concerned. We introduce here a new kind of determinism, called snake, based on the boustrophedonic scanning strategy, that is a natural scanning strategy used by many algorithms on 2D arrays and pictures. We consider a snake-deterministic variant of tiling systems, which defines the so-called Snake-DREC class of languages. Snake-DREC properly extends the more traditional approach of diagonal-based determinism, used e.g. by deterministic tiling systems, and by online tessellation automata. Our main result is showing that the concept of snake-determinism of tiles coincides with row (or column) unambiguity. 12. P. Boldi, V. Lonati, M. Santini, R. Radicioni. The Number of Convex Permutominoes. In proceedings LATA 2007, 1st International Conference on Language and Automata Theory and Applications, Tarragona (Spain) March 29 - April 4, 2007. Abstract: Permutominoes are polyominoes defined by suitable pairs of permutations. In this paper we provide a formula to count the number of convex permutominoes of given perimeter. To this aim we define the transform of a generic pair of permutations, we characterize the transform of any pair defining a convex permutomino, and we solve the counting problem in the transformed space. 13. A. Bertoni, M. Goldwurm, and V. Lonati.
On the complexity of unary tiling-recognizable picture languages. In proceedings STACS 2007, Aachen (Germany) February 22-24, 2007. Lecture Notes in Computer Science, Vol. 4393: 381-392. Abstract: We give a characterization, in terms of computational complexity, of the family REC_U of unary picture languages that are tiling recognizable. We introduce quasi-unary strings to represent unary pictures and we prove that any unary two-dimensional language L is in REC_U if and only if the set of all quasi-unary strings encoding the elements of L is recognizable by a one-tape nondeterministic Turing machine that is space and head-reversal linearly bounded. In particular, the result implies that the family of binary string languages corresponding to tiling-recognizable square languages lies between NTime(2^n) and NTime(4^n). This also implies the existence of a nontiling-recognizable unary square language that corresponds to a binary string language recognizable in nondeterministic time O(4^n log n). 14. M. Goldwurm, and V. Lonati. Pattern occurrences in multicomponent models. In proceedings STACS 2005, Stuttgart (Germany) February 24-26, 2005. Lecture Notes in Computer Science, Vol. 3404, 680-692. Abstract: In this paper we determine some limit distributions of pattern statistics in rational stochastic models, defined by means of nondeterministic weighted finite automata. We present a general approach to analyze these statistics in rational models having an arbitrary number of connected components. We explicitly establish the limit distributions in the most significant cases; these are characterized by a family of unimodal density functions defined by polynomials over adjacent intervals. 15. C. Choffrut, M. Goldwurm, and V. Lonati. On the maximum coefficients of rational formal series in commuting variables. In proceedings DLT'04, Auckland (New Zealand), December 13-17 2004. Lecture Notes in Computer Science, Vol. 3340: 114-126, 2004. Abstract: We study the growth function of an (R+)-rational formal series in two commuting variables, defined, for every n in N, as the maximum of the coefficients of monomials of size n. We give a rather general sufficient condition under which the growth function of such a series is of the order n^(k/2) L^n for any integer k>-2 and any positive real L. Our analysis is related to the study of limit distributions in pattern statistics. In particular, we prove a general criterion for establishing Gaussian local limit laws for sequences of discrete positive random variables. 16. A. Bertoni, C. Choffrut, M. Goldwurm, and V. Lonati. Local limit distributions in pattern statistics: beyond the Markovian model. In proceedings STACS 2004, Montpellier (France), March 25-27, 2004. Lecture Notes in Computer Science, Vol. 2996: 117-128, 2004. Abstract: Motivated by problems of pattern statistics, we study the limit distribution of the random variable counting the number of occurrences of the symbol `a' in a word of length n chosen at random in {a,b}*, according to a probability distribution defined via a finite automaton equipped with positive real weights. We determine the local limit distribution of such a quantity under the hypothesis that the transition matrix naturally associated with the finite automaton is primitive. Our probabilistic model extends the Markovian models traditionally used in the literature on pattern statistics.
This result is obtained by introducing a notion of symbol-periodicity for irreducible matrices whose entries are polynomials in one variable over an arbitrary positive semiring. This notion and the related results we prove are of interest in their own right, since they extend classical properties of the Perron-Frobenius theory for non-negative real matrices. 17. D. de Falco, M. Goldwurm, and V. Lonati. Pattern statistics in bicomponent stochastic models. In Proceedings Words '03, Turku (Finland), September 10-13, 2003. Turku (Finland), TUCS General Publication vol. 27: 344-357. Abstract: We give asymptotic estimates of the frequency of occurrences of a symbol in a random word generated by any (non-ergodic) bicomponent stochastic model. More precisely, we consider the random variable Y_n representing the number of occurrences of a given symbol in a word of length n generated at random; the stochastic model is defined by a rational formal series r having a linear representation with two primitive components. This model includes the case when r is the product or the sum of two primitive rational formal series. We obtain asymptotic evaluations for the mean and the variance of Y_n and its limit distribution. These results improve the analysis presented in a recent work dealing with the particular case where r is the product of two primitive rational formal series. 18. D. de Falco, M. Goldwurm, and V. Lonati. Frequency of symbol occurrences in simple non-primitive stochastic models. In Proceedings 7th D.L.T. Conference, Szeged (Hungary), July 7-11, 2003. Lecture Notes in Computer Science, Vol. 2710, pp. 242-253, 2003. Abstract: We study the random variable Y_n representing the number of occurrences of a given symbol in a word of length n generated at random. The stochastic model we assume is a simple non-ergodic model defined by the product of two primitive rational formal series, which form two distinct ergodic components. We obtain asymptotic evaluations for the mean and the variance of Y_n and its limit distribution. It turns out that there are two main cases: if one component is dominant and non-degenerate we get a Gaussian limit distribution; if the two components are equipotent and have different leading terms of the mean, we get a uniform limit distribution. Other particular limit distributions are obtained in the case of a degenerate dominant component and in the equipotent case when the leading terms of the expectation values are equal. 19. A. Bertoni, C. Choffrut, M. Goldwurm, and V. Lonati. Asymptotic evaluation in a regular language of the number of words of given length with a fixed number of occurrences of a symbol. Abstract in Proceedings Words '01, Palermo (Italy), September 17-21, 2001, edited by the Università di Palermo. Preliminary version of the paper published in TCS in 2003. ### PhD thesis, reports, unpublished manuscripts 1. V. Lonati, D. Mandrioli, M. Pradella. Logic Characterization of Floyd Language. April 2012. Extended version of the paper accepted for presentation at SOFSEM 2013. 2. V. Lonati, D. Mandrioli, M. Pradella. Precedence Automata and Languages. December 2010. Extended version of the paper presented at CSR 2010. 3. P. Boldi, Violetta Lonati, M. Santini, R. Radicioni, S. Vigna. Tree Language Determinism, Ambiguity and Typing: towards a Uniform Approach. Internal report 320-08, Università degli Studi di Milano, Dipartimento di Scienze dell'Informazione (2008).
Abstract: Regular tree languages can be defined as the homomorphic image of a tree language generated by an extended context-free grammar -- much as a regular string language can be seen as the homomorphic image of a local language. The typing problem consists in finding the preimages of any tree in a given regular tree language. From the application point of view, regular tree languages have been recently rediscovered as an XML specification mechanism, and typing turns out to be a fundamental tool for document validation. In this paper we provide a systematic approach to the typing problem, by identifying several levels of determinism of tree grammars that are in turn reflected into the structure of the descending automata recognizing languages.

4. A. Lonati, V. Lonati, M. Santini. Modeling and transforming a multilingual technical lexicon for conservation-restoration using XML. Internal report 311-07, Università degli Studi di Milano, Dipartimento di Scienze dell'Informazione (2007).
5. Pattern statistics in rational models. November 2004. PhD dissertation. Advisors: Profs. A. Bertoni and M. Goldwurm.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.866451621055603, "perplexity": 1613.101666919291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358842.4/warc/CC-MAIN-20211129194957-20211129224957-00236.warc.gz"}
https://squaremaths.com/gifted-and-bored/
Mathematics education uncovered and recovered

N is in Year 5.

“What are you doing at school now?”
“Umm, we are still converting fractions to decimals and back”
“Show me an example”
“Well, it’s something like $$\mathrm{\frac{1}{2}=0.5}$$, $$\mathrm{\frac{1}{4}=0.25}$$, $$\mathrm{\frac{7}{10}=0.7}$$”
“I see. Can you write $$\mathrm{\frac{1}{8}}$$ and $$\mathrm{\frac{3}{40}}$$ as decimals?”
He can. Without much effort.
“The interesting thing is that some numbers cannot be written as fractions, and as decimals they are infinitely long. Have you heard about a number called $$\pi$$?”
“Yes, I have. It’s very big, isn’t it?”
“No, the number itself is not big, it’s just a little over 3. But if you try to write it down, it looks like this: 3.141592653589793… The decimal tail is infinite, and there is no way to write it down as a fraction. That’s why we use either approximations like 3.14 or the Greek letter $$\pi$$ to represent the number.”
“And what goes after …9793…?”
“I don’t remember the digits that follow, but you can easily find millions of digits online.”
“Wow… So”, he pauses, “why are we doing this”, he points at the piece of paper with $$\mathrm{\frac{1}{8}}$$ written on it, “if there is something like that in the world?” His finger moves along the chain of digits of $$\pi$$.
Why, indeed.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8206885457038879, "perplexity": 1181.1098344872173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506988.10/warc/CC-MAIN-20200402143006-20200402173006-00465.warc.gz"}
https://xianblog.wordpress.com/tag/controlled-mcmc/
## mixture modelling for testing hypotheses

Posted in Books, Statistics, University life on January 4, 2019 by xi'an

## high-dimensional stochastic simulation and optimisation in image processing [day #1]

Posted in pictures, Statistics, Travel, Uncategorized, University life, Wines on August 29, 2014 by xi'an

Even though I flew through Birmingham (and had to endure the fundamental randomness of trains in Britain), I managed to reach the “High-dimensional Stochastic Simulation and Optimisation in Image Processing” conference location (in Goldney Hall Orangery) in due time to attend the (second) talk by Christophe Andrieu. He started with an explanation of the notion of controlled Markov chain, which reminded me of our early and famous-if-unpublished paper on controlled MCMC. (The label “controlled” was inspired by Peter Green who pointed out to us the different meanings of controlled in French [meaning checked or monitored] and in English. We use it here in the English sense, obviously.) The main focus of the talk was on the stability of controlled Markov chains. With of course connections with our controlled MCMC of old, for instance the case of the coerced acceptance probability. Which happened to be not that stable! With the central tool being Lyapounov functions. (Making me wonder whether or not it would make sense to envision the meta-problem of adaptively estimating the adequate Lyapounov function from the MCMC outcome.) As I had difficulties following the details of the convex optimisation talks in the afternoon, I eloped to work on my own and returned to the posters & wine session, where the small number of posters allowed for the proper amount of interaction with the speakers! Talking about the relevance of variational Bayes approximations and of possible tools to assess it, about the use of new metrics for MALA and of possible extensions to Hamiltonian Monte Carlo, about Bayesian modellings of fMRI and of possible applications of ABC in this framework. (No memorable wine to make the ‘Og!) Then a quick if reasonably hot curry and it was already bed-time after a rather long and well-filled day!

## advanced Markov chain Monte Carlo methods

Posted in Books, Statistics, University life on December 5, 2011 by xi'an

This book, Advanced Markov Chain Monte Carlo Methods: Learning from Past Samples, by Faming Liang, Chuanhai Liu, and Raymond Carroll, appeared last year and has been sitting on my desk all this time, patiently (?) waiting for a review. When I received it, I took a brief look at it (further than the cool cover!) and then decided I needed more than that to write a useful review! Here are my impressions on Advanced Markov Chain Monte Carlo Methods after a deeper read. (I have not read any other review in the main statistical journals so far.) The title, Advanced Markov Chain Monte Carlo Methods, is a clear warning on the level of the book: “advanced”, it certainly is!!! By page 85, the general description of MCMC simulation methods is completed, including perfect sampling and reversible jump MCMC, and the authors engage into a detailed description of highly specialised topics of their choice: Auxiliary variables (Chap. 4), Population-based MCMC (Chap. 5), Dynamic weighting (Chap. 6), Stochastic approximation Monte Carlo (Chap. 7), and MCMC with adaptive proposals (Chap. 8).
The book is clearly inspired by the numerous papers the authors have written in those areas, especially Faming Liang. (The uneven distribution of the number of citations per year with peaks in 2000 and 2009 reflects this strong connection.) While the book attempts at broadening the spectrum by including introductory sections, and discussing other papers, it remains nonetheless that this centred focus of the book reduces its potential readership to graduate students and researchers who could directly work on the original papers. I would thus hesitate in teaching my graduate students from this book, given that they only attend a single course on Monte Carlo methods.

## Andrew gone NUTS!

Posted in pictures, R, Statistics, University life on November 24, 2011 by xi'an

Matthew Hoffman and Andrew Gelman have posted a paper on arXiv entitled “The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo”, developing an improvement on the Hamiltonian Monte Carlo algorithm called NUTS (!). Here is the abstract:

Hamiltonian Monte Carlo (HMC) is a Markov chain Monte Carlo (MCMC) algorithm that avoids the random walk behavior and sensitivity to correlated parameters that plague many MCMC methods by taking a series of steps informed by first-order gradient information. These features allow it to converge to high-dimensional target distributions much more quickly than simpler methods such as random walk Metropolis or Gibbs sampling. However, HMC’s performance is highly sensitive to two user-specified parameters: a step size ε and a desired number of steps L. In particular, if L is too small then the algorithm exhibits undesirable random walk behavior, while if L is too large the algorithm wastes computation. We introduce the No-U-Turn Sampler (NUTS), an extension to HMC that eliminates the need to set a number of steps L. NUTS uses a recursive algorithm to build a set of likely candidate points that spans a wide swath of the target distribution, stopping automatically when it starts to double back and retrace its steps. Empirically, NUTS performs at least as efficiently as and sometimes more efficiently than a well tuned standard HMC method, without requiring user intervention or costly tuning runs. We also derive a method for adapting the step size parameter ε on the fly based on primal-dual averaging. NUTS can thus be used with no hand-tuning at all. NUTS is also suitable for applications such as BUGS-style automatic inference engines that require efficient “turnkey” sampling algorithms.

Now, my suspicious and pessimistic nature always makes me wary of universality claims! I completely acknowledge the difficulty in picking the number of leapfrog steps L in the Hamiltonian algorithm, since the theory behind does not tell anything useful about L. And the paper is quite convincing in its description of the NUTS algorithm, being moreover beautifully written. As indicated in the paper, the “doubling” solution adopted by NUTS is reminding me of Radford Neal’s procedure in his Annals of Statistics paper on slice sampling. (The NUTS algorithm also relies on a slice sampling step.) However, besides its ability to work as an automatic Hamiltonian methodology, I wonder about the computing time (and the “unacceptably large amount of memory”, p.12) required by the doubling procedure: 2^j is growing fast with j! (If my intuition is right, the computing time should increase rather quickly with the dimension.
And I do not get the argument within the paper that the costly part is the gradient computation: it seems to me the gradient must be computed for all of the 2^j points.) The authors also mention delayed rejection à la Tierney and Mira (1999) and the scheme reminded me a wee of the pinball sampler we devised a while ago with Kerrie Mengersen. Choosing the discretisation step ε is more “traditional”, using the stochastic approximation approach we set in our unpublished-yet-often-quoted tech report with Christophe Andrieu. (I do not think I got the crux of the “dual averaging” for optimal calibration on p.11) The illustration through four benchmarks [incl. a stochastic volatility model!] is quite convincing as well, with (unsurprisingly) great graphical tools. A final grumble: that the code is “only” available in the proprietary language Matlab! Now, I bet some kind of Rao-Blackwellisation is feasible with all the intermediate simulations!

Posted in Statistics, University life on November 9, 2011 by xi'an

Maxime Lenormand, Franck Jabot and Guillaume Deffuant have just posted on arXiv a paper about a refinement of the ABC-PMC algorithm we developed with Marc Beaumont, Jean-Marie Cornuet, and Jean-Michel Marin. The authors state in their introduction that ABC-PMC presents two shortcomings which are particularly problematic for costly to simulate complex models.

First, the sequence of tolerance levels ε_1,…,ε_T has to be provided to the ABC algorithm. In practice, this implies to do preliminary simulations of the model, a step which is computationally costly for complex models. Furthermore, a badly chosen sequence of tolerance levels may inflate the number of simulations required to reach a given precision as we will see below. A second shortcoming of the PMC-ABC algorithm is that it lacks a criterion to decide whether it has converged. The final tolerance level ε_T may be too large for the ABC approach to satisfactorily approximate the posterior distribution of the model. Inversely, a larger ε_T may be sufficient to obtain a good approximation of the posterior distribution, hence sparing a number of model simulations.

shortcomings which I thought were addressed by the ABC-SMC algorithm of Pierre Del Moral, Arnaud Doucet and Ajay Jasra [not referenced in the current paper], the similar algorithm of Chris Drovandi and Tony Pettitt, and our recent paper with Jean-Michel Marin, Pierre Pudlo and Mohammed Sedki [presented at ABC in London, submitted to Statistics and Computing a few months ago, but alas not available on-line for unmentionable reasons linked to the growing dysfunctionality of one co-author…!]. It is correct that we did not address the choice of the ε_t's in the original paper, even though we already used an on-line selection as a quantile of the current sample of distances. In essence, given the fundamentally non-parametric nature of ABC, the tolerances ε_t should always be determined from the simulated samples, as regular bandwidths. The paper essentially proposes the same scheme as in Del Moral et al., before applying it to the toy example of Sisson et al. (PNAS, 2007) and to a more complex job dynamic model in central France. Were I to referee this paper, I would thus suggest that the authors incorporate a comparison with both papers of Del Moral et al. and of Drovandi and Pettitt to highlight the methodological novelty of their approach.
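A minimal Python sketch of such an on-line, quantile-based tolerance choice (the quantile level q is an arbitrary tuning value, and the distances are simulated here only for illustration):

```python
import numpy as np

def next_tolerance(distances, q=0.5):
    """Choose the next ABC tolerance as the q-quantile of the current sample of distances."""
    return np.quantile(distances, q)

# distances between simulated and observed summary statistics at the current iteration
rng = np.random.default_rng(1)
distances = np.abs(rng.normal(size=1000))
print(next_tolerance(distances))   # becomes the tolerance for the next ABC-PMC step
```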
## Robust adaptive Metropolis algorithm [arXiv:1011.4381]

Posted in R, Statistics on November 23, 2010 by xi'an

Matti Vihola has posted a new paper on arXiv about adaptive (random walk) Metropolis-Hastings algorithms. The update in the (lower diagonal) scale matrix is

$S_nS_n^\text{T}=S_{n-1}\left(\mathbf{I}_d-\eta_n[\alpha_n-\alpha^\star]\dfrac{U_nU_n^\text{T}}{||U_n||^2}\right)S^\text{T}_{n-1}$

where

• $\alpha_n$ is the current acceptance probability and $\alpha^\star$ the target acceptance rate;
• $U_n$ is the current random noise for the proposal, $Y_n=X_{n-1}+S_{n-1}U_n$;
• $\eta_n$ is a step size sequence decaying to zero.

The spirit of the adaptation is therefore a Robbins-Monro type adaptation of the covariance matrix in the random walk, with a target acceptance rate. It follows the lines Christophe Andrieu and I had drafted in our [most famous!] unpublished paper, Controlled MCMC for optimal sampling. The current paper shows that the fixed point for $S_n$ is proportional to the scale of the target if the latter is elliptically symmetric (but does not establish a sufficient condition for convergence). It concludes with a Law of Large Numbers for the empirical average of the $f(X_n)$ under rather strong assumptions (on f, the target, and the matrices $S_n$). The simulations run on formalised examples show a clear improvement over the existing adaptive algorithms (see above) and the method is implemented within Matti Vihola’s Grapham software. I presume Matti will present this latest work during his invited talk at Adap’skiii.

PS: Took me at least 15 minutes to spot the error in the above LaTeX formula, ending up with S^\text{T}_{n−1}: Copy-pasting from the pdf file had produced an unconventional minus sign in n−1 that was impossible to spot!

On Monday, Paul Fearnhead and Benjamin Taylor reposted on arXiv a paper about adaptive SMC. It is as well since I had missed the first posting on Friday. While the method has some similarities with our earlier work on population Monte Carlo methods with Olivier Cappé, Randal Douc, Arnaud Guillin and Jean-Michel Marin, there are quite novel and interesting features in this paper! First, the paper is firmly set within a sequential setup, as in Chopin (2002, Biometrika) and Del Moral, Doucet and Jasra (2006, JRSS B). This means considering a sequence of targets corresponding to likelihoods with increasing datasets. We mentioned this case as a possible implementation of population Monte Carlo but never truly experimented with this. Fearnhead and Taylor do set their method within this framework, using a sequence of populations (or particle systems) aimed at this moving sequence of targets. The second major difference is that, while they also use a mixture of transition kernels as their proposal (or importance functions) and while they also aim at optimising the parameters of those transitions (parameters that I would like to dub cyberparameters to distinguish them from the parameters of the statistical model), they do not update those cyberparameters in a deterministic way, as we do. On the opposite, they build a non-parametric approximation $\pi_t(h)$ to the distribution of those cyberparameters and simulate from those approximations at each step of the sequential algorithm, using a weight $f(\theta^{(j)}_{t-1},\theta^{(j)}_t)$ that assesses the quality of the move from $\theta^{(j)}_{t-1}$ to $\theta^{(j)}_{t}$, based on the simulated $h^{(j)}_t$.
I like very much this aspect of the paper, in that the cyberparameters are part of the dynamics in the stochastic algorithm, a point I tried to implement since the (never published) controlled MCMC paper with Christophe Andrieu. As we do in our paper now published in Statistics and Computing, they further establish that this method is asymptotically improving the efficiency criterion at each step of the sequential procedure. The paper concludes with an extensive simulation study where Fearnhead and Taylor show that their implementation outperforms random walk with adaptive steps. (I am not very happy with their mixture example in that they resort to an ordering on the means…)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 14, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8353093266487122, "perplexity": 1201.0912645938479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669276.41/warc/CC-MAIN-20191117192728-20191117220728-00378.warc.gz"}
http://math.stackexchange.com/questions/101745/lower-bounding-angles-in-integer-lattices
# Lower-Bounding angles in integer Lattices

Given an $n \times n$ integer grid I choose any two grid points $a,b$, draw a line $l$ through $a$ and $b$ and measure the angle between $l$ and a horizontal line. I can do this for any grid point pair and I'll get a set $A$ of angles. I am interested in finding the largest angle $\alpha$ s.t. every element in $A$ is an integer multiple of $\alpha$. My question is if it is possible to give a good lower bound on $\alpha$? Maybe this goes into the topic of approximating non-integer numbers using Lattices?

- Why should such an $\alpha$ exist? – Aryabhata Jan 23 '12 at 19:22
- Adding to Aryabhata's comment: Such an $\alpha$ exists if and only if all the angles involved are rational multiples of a common angle, and hence iff all the angles are rational multiples of each other. – Srivatsan Jan 23 '12 at 19:31

Draw a vertical line. The angle is then $\pi/2$ radians, or equivalently $90$ degrees. Now draw the line through $(0,0)$ and $(2,1)$. The angle is now (in radians) $\arctan(1/2)$. But it is known that $\arctan(1/2)$ and $\pi/2$ are incommensurable, meaning that there is no $\alpha$ such that each is an integer multiple of $\alpha$. So (except when our grid is very tiny) there cannot be an $\alpha$ that satisfies your conditions. Let $\theta$ be (in degrees) a rational angle, or equivalently (in radians) a rational multiple of $\pi$. Then if $\tan\theta$ exists and is rational, we must have $\tan\theta=0$ or $\tan\theta=\pm 1$. This result goes back to Lambert, and was the main component of his proof that $\pi$ is irrational.

- Thank you for your interesting answer. Maybe it is possible to find such an $\alpha$ if we don't restrict all our angles to be exact multiples of $\alpha$ but we just demand that every angle is $\epsilon$-close to a multiple of $\alpha$ for some fixed $\epsilon$? – stefan Jan 23 '12 at 19:50
- @stefan: Certainly it can be done, even I can do it, crudely. Let $\alpha=10^{-6}$ (radians). Then all our angles are within $10^{-6}$ of an integer multiple of $\alpha$. The question is whether we can do much better, where the $\epsilon$ is much smaller than $1/n$. That is a possibly difficult problem in simultaneous diophantine approximations. – André Nicolas Jan 23 '12 at 20:01
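As a purely numerical illustration of the set $A$ in the question (the grid size below is an arbitrary choice, and floating point is only good enough for a quick look), one can enumerate the distinct angles of a small grid:

```python
import math
from itertools import combinations

def grid_angles(n):
    """Angles (mod pi) between the line through two grid points and the horizontal."""
    pts = [(x, y) for x in range(n) for y in range(n)]
    angles = set()
    for (x1, y1), (x2, y2) in combinations(pts, 2):
        angles.add(math.atan2(y2 - y1, x2 - x1) % math.pi)
    return sorted(angles)

A = grid_angles(4)
print(len(A), [round(a, 4) for a in A[:6]])   # number of distinct angles, first few values
```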
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9234010577201843, "perplexity": 151.24203874824212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.chemistrynotesinfo.com/2015/06/pattern-for-single-paper-mcq-test-NET-Chemistry.html
## UGC NET Chemistry (CY) Exam Pattern

• The MCQ test paper of chemistry shall carry a maximum of 200 marks.
• The exam shall be for a duration of 3 hours.
• The question paper shall be divided into three parts, i.e. Part 'A', Part 'B', Part 'C'.

Part-wise description of the paper

• Part 'A' shall be common to all subjects. This part shall be a test containing a maximum of 20 questions of General Aptitude. The candidates shall be required to answer any 15 questions of two marks each. The total marks allocated to this section shall be 30 out of 200.
• Part 'B' shall contain subject-related conventional MCQs. The total marks allocated to this section shall be 70 out of 200. The maximum number of questions to be attempted shall be in the range of 20-35.
• Part 'C' shall contain higher value questions that may test the candidate's knowledge of scientific concepts and/or application of the scientific concepts. The questions shall be of analytical nature where a candidate is expected to apply the scientific knowledge to arrive at the solution to the given scientific problem. The total marks allocated to this section shall be 100 out of 200.
• Negative marking for wrong answers.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9358296990394592, "perplexity": 2391.0133487180415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991514.63/warc/CC-MAIN-20210518191530-20210518221530-00591.warc.gz"}
https://brilliant.org/problems/weve-sprung-a-leak/
# We've Sprung a Leak!

Calculus Level 3

A conical reservoir with its vertex pointing downward has a radius of 10 feet and a depth of 20 feet. Suddenly, a leak springs and water begins to empty the cone at its vertex and into an empty cylindrical basin with radius 6 feet and height 40 feet. At the moment the depth of the reservoir reaches 16 feet and is decreasing by 2 feet per minute, how fast is the height of water in the basin changing?
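A sketch of one standard way to set up the related rates (this is an illustrative working, not the site's hidden solution; all quantities in feet and minutes): by similar triangles the water in the cone has radius $r = h/2$, so $V_c = \tfrac{1}{3}\pi r^2 h = \tfrac{\pi h^3}{12}$ and $\tfrac{dV_c}{dt} = \tfrac{\pi h^2}{4}\tfrac{dh}{dt}$. At $h = 16$ with $\tfrac{dh}{dt} = -2$ this gives $\tfrac{dV_c}{dt} = -128\pi$, so the basin gains water at $128\pi$ cubic feet per minute. For the cylinder, $V_b = \pi\cdot 6^2 H = 36\pi H$, hence $36\pi \tfrac{dH}{dt} = 128\pi$ and $\tfrac{dH}{dt} = \tfrac{32}{9} \approx 3.6$ feet per minute.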
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8462085127830505, "perplexity": 580.363419762771}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424721.74/warc/CC-MAIN-20170724042246-20170724062246-00483.warc.gz"}
http://mathhelpforum.com/differential-equations/157142-solve-bernoulli-equation.html
# Thread: Solve the Bernoulli equation.

1. ## Solve the Bernoulli equation.

$\frac{dy}{dx}+xy=xy^3$
I have no idea how to do these. And we *have* to do it as a Bernoulli equation. Initial values: (0, 1/3)

2. Originally Posted by A Beautiful Mind
$\frac{dy}{dx}+xy=xy^3$
I have no idea how to do these. And we *have* to do it as a Bernoulli equation. Initial values: (1/3, 0)
The procedure is given here (and no doubt it's in your classnotes and textbook too): Bernoulli differential equation - Wikipedia, the free encyclopedia
What part of applying this procedure are you stuck on?

3. Well, my answer I get before using the initial conditions is:
$y = \sqrt{1+Ce^\frac{-x^2}{2}}$
$w'+xw=x$, where $w = \frac{1}{y^2}$
$w' = -3y^{-4}$
$w'=\frac{-3}{y^{4}}$
$e^{\frac{x^2}{2}}$ is the integrating factor...the rest I tried to put on here, but the stuff keeps getting messed up in the latex.

4. Originally Posted by A Beautiful Mind
Well, my answer I get before using the initial conditions is:
$y = \sqrt{1+Ce^\frac{-x^2}{2}}$
http://www.wolframalpha.com/input/?i...+xy+%3D+xy%5E3
The DE is separable in fact. Are you required to do it as a Bernoulli?

5. Yes, we're required to do it as a Bernoulli. I noticed it was separable too, but she's very strict about this.

6. I still need help with this...I'm getting nowhere. And my teacher's office hours are cancelled. And the tutors can't do above Calc III. :/

7. Originally Posted by A Beautiful Mind
Well, my answer I get before using the initial conditions is:
$y = \sqrt{1+Ce^\frac{-x^2}{2}}$
$w'+xw=x$, Mr F says: This is wrong.
where $w = \frac{1}{y^2}$ Mr F says: Yes, this is the correct substitution.
$w' = -3y^{-4}$ Mr F says: This is wrong.
$w'=\frac{-3}{y^{4}}$
$e^{\frac{x^2}{2}}$ is the integrating factor...the rest I tried to put on here, but the stuff keeps getting messed up in the latex.
There are a number of things you are not doing correctly. Have you read any references on the required technique, e.g. http://en.wikipedia.org/wiki/Bernoul...ntial_equation. Please show all your work, every step, so that it can be reviewed.

8. $\frac{dy}{dx}+xy=xy^3$
$n = 3$
$u = y^{1-3}$
$u = y^{-2}$
$\frac{du}{dx} = -2y^{-3}$
$\frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}$
$\frac{1}{-2y^{-3}}\frac{du}{dx}+xy=xy^3$
$\frac{du}{dx}-2y^{-2}=-2x$
Getting integrating factor.
$P(x) = -2x$
$e^{\int-2x} = e^{-x^2}$
$e^{-x^2}\frac{du}{dx}+(-2y^{-2}xe^{-x^2}) = -2xe^{-x^2}$
Did the integration in my head. You integrate $[e^{-x^2}u]'$ and $-2xe^{-x^2}$. It's easy to see on the right side the -2x term will cancel out by u-substitution.
$e^{-x^2}u = e^{-x^2}+C$
Sub in the u.
$\frac{e^{-x^2}}{y^2} = e^{-x^2}+C$
$\frac{1}{y^2}=1+Ce^{x^2}$
Plug in initial values to this equation. y(0) = 1/3
$\frac{1}{(1/3)^2} = 1+Ce^0$
$9 = 1+C$
$C= 8$
$\frac{1}{y^2}= 1+8e^{x^2}$
And yes, I read your first reference to wikipedia, but I got the first answer I posted yesterday as that weird square root. Then, I tried looking for examples.

9. Originally Posted by A Beautiful Mind
$\frac{dy}{dx}+xy=xy^3$
$n = 3$
$u = y^{1-3}$
$u = y^{-2}$
$\frac{du}{dx} = -2y^{-3}$
$\frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}$
$\frac{1}{-2y^{-3}}\frac{du}{dx}+xy=xy^3$
Mr F says: All OK to here (although there are parts of your working I find difficult to understand)
$\frac{du}{dx}-2y^{-2}=-2x$
Mr F says: This is wrong. There is a missing x. It should be $\displaystyle \frac{du}{dx} - 2xy^{-2} = -2x$. From which you get $\displaystyle \frac{du}{dx} - 2ux = -2x$.
Getting integrating factor.
$P(x) = -2x$
$e^{\int-2x} = e^{-x^2}$
$e^{-x^2}\frac{du}{dx}+(-2y^{-2}xe^{-x^2}) = -2xe^{-x^2}$
Did the integration in my head. You integrate $[e^{-x^2}u]'$ and $-2xe^{-x^2}$. It's easy to see on the right side the -2x term will cancel out by u-substitution.
$e^{-x^2}u = e^{-x^2}+C$
Sub in the u.
$\frac{e^{-x^2}}{y^2} = e^{-x^2}+C$
$\frac{1}{y^2}=1+Ce^{x^2}$
Plug in initial values to this equation. y(0) = 1/3
$\frac{1}{(1/3)^2} = 1+Ce^0$
$9 = 1+C$
$C= 8$
$\frac{1}{y^2}= 1+8e^{x^2}$
And yes, I read your first reference to wikipedia, but I got the first answer I posted yesterday as that weird square root. Then, I tried looking for examples.
OK, this final answer is correct. But a lot of your working presents as wrong even though you finally get the correct answer. You will lose marks for that.

10. Sorry, the x reappears in the next step, though. I subbed $u = y^{-2}$ though in that part.
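A quick computer-algebra check of the final answer (a minimal SymPy sketch, assuming SymPy is available) confirming that $y = 1/\sqrt{1+8e^{x^2}}$ satisfies both the ODE and the initial condition:

```python
import sympy as sp

x = sp.symbols('x')
y = 1 / sp.sqrt(1 + 8 * sp.exp(x**2))        # solution implied by 1/y^2 = 1 + 8 e^{x^2}

# Residual of y' + x*y - x*y^3 should simplify to zero
residual = sp.simplify(sp.diff(y, x) + x * y - x * y**3)
print(residual)          # 0
print(y.subs(x, 0))      # 1/3, matching the initial condition y(0) = 1/3
```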
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 55, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9161854982376099, "perplexity": 1114.8067112645529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608120.92/warc/CC-MAIN-20170525180025-20170525200025-00139.warc.gz"}
https://mlnotebook.github.io/post/nn-more-maths/
A Simple Neural Network - Vectorisation

Simplifying the NN maths ready for coding

The third in our series of tutorials on Simple Neural Networks. This time, we’re looking a bit deeper into the maths, specifically focusing on vectorisation. This is an important step before we can translate our maths into a functioning script in Python. So we’ve been through the maths of a neural network (NN) using back propagation and taken a look at the different activation functions that we could implement. This post will translate the mathematics into Python which we can piece together at the end into a functioning NN!

Forward Propagation

Let’s remind ourselves of our notation from our 2 layer network in the maths tutorial:

• I is our input layer
• J is our hidden layer
• $w_{ij}$ is the weight connecting the $i^{\text{th}}$ node in $I$ to the $j^{\text{th}}$ node in $J$
• $x_{j}$ is the total input to the $j^{\text{th}}$ node in $J$

So, assuming that we have three features (nodes) in the input layer, the input to the first node in the hidden layer is given by:

$$x_{1} = \mathcal{O}_{1}^{I} w_{11} + \mathcal{O}_{2}^{I} w_{21} + \mathcal{O}_{3}^{I} w_{31}$$

Let’s generalise this for any connected nodes in any layer: the input to node $j$ in layer $l$ is:

$$x_{j} = \mathcal{O}_{1}^{l-1} w_{1j} + \mathcal{O}_{2}^{l-1} w_{2j} + \mathcal{O}_{3}^{l-1} w_{3j}$$

But we need to be careful and remember to put in our bias term $\theta$. In our maths tutorial, we said that the bias term was always equal to 1; now we can try to understand why. We could just add the bias term onto the end of the previous equation to get:

$$x_{j} = \mathcal{O}_{1}^{l-1} w_{1j} + \mathcal{O}_{2}^{l-1} w_{2j} + \mathcal{O}_{3}^{l-1} w_{3j} + \theta_{j}$$

If we think more carefully about this, what we are really saying is that “an extra node in the previous layer, which always outputs the value 1, is connected to the node $j$ in the current layer by some weight $w_{4j}$”, i.e. $1 \cdot w_{4j}$:

$$x_{j} = \mathcal{O}_{1}^{l-1} w_{1j} + \mathcal{O}_{2}^{l-1} w_{2j} + \mathcal{O}_{3}^{l-1} w_{3j} + 1 \cdot w_{4j}$$

By the magic of matrix multiplication, we should be able to convince ourselves that:

$$x_{j} = \begin{pmatrix} w_{1j} &w_{2j} &w_{3j} &w_{4j} \end{pmatrix} \begin{pmatrix} \mathcal{O}_{1}^{l-1} \\ \mathcal{O}_{2}^{l-1} \\ \mathcal{O}_{3}^{l-1} \\ 1 \end{pmatrix}$$

Now, let’s be a little more explicit, consider the input $x$ to the first two nodes of the layer $J$:

\begin{align} x_{1} &= \begin{pmatrix} w_{11} &w_{21} &w_{31} &w_{41} \end{pmatrix} \begin{pmatrix} \mathcal{O}_{1}^{l-1} \\ \mathcal{O}_{2}^{l-1} \\ \mathcal{O}_{3}^{l-1} \\ 1 \end{pmatrix} \\[0.5em] x_{2} &= \begin{pmatrix} w_{12} &w_{22} &w_{32} &w_{42} \end{pmatrix} \begin{pmatrix} \mathcal{O}_{1}^{l-1} \\ \mathcal{O}_{2}^{l-1} \\ \mathcal{O}_{3}^{l-1} \\ 1 \end{pmatrix} \end{align}

Note that the second matrix is constant between the input calculations as it is only the output values of the previous layer (including the bias term).
This means (again by the magic of matrix multiplication) that we can construct a single vector containing the input values $x$ to the current layer:

$$\begin{pmatrix} x_{1} \\ x_{2} \end{pmatrix} = \begin{pmatrix} w_{11} & w_{21} & w_{31} & w_{41} \\ w_{12} & w_{22} & w_{32} & w_{42} \end{pmatrix} \begin{pmatrix} \mathcal{O}_{1}^{l-1} \\ \mathcal{O}_{2}^{l-1} \\ \mathcal{O}_{3}^{l-1} \\ 1 \end{pmatrix}$$

This is an $\left(n \times m+1 \right)$ matrix multiplied with an $\left(m +1 \times 1 \right)$ where:

• $n$ is the number of nodes in the current layer $l$
• $m$ is the number of nodes in the previous layer $l-1$

Let’s generalise - the vector of inputs to the $n$ nodes in the current layer from the $m$ nodes in the previous layer is:

$$\begin{pmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{pmatrix} = \begin{pmatrix} w_{11} & w_{21} & \cdots & w_{(m+1)1} \\ w_{12} & w_{22} & \cdots & w_{(m+1)2} \\ \vdots & \vdots & \ddots & \vdots \\ w_{1n} & w_{2n} & \cdots & w_{(m+1)n} \\ \end{pmatrix} \begin{pmatrix} \mathcal{O}_{1}^{l-1} \\ \mathcal{O}_{2}^{l-1} \\ \mathcal{O}_{3}^{l-1} \\ 1 \end{pmatrix}$$

or:

$$\mathbf{x_{J}} = \mathbf{W_{IJ}} \mathbf{\vec{\mathcal{O}}_{I}}$$

In this notation, the output from the current layer $J$ is easily written as:

$$\mathbf{\vec{\mathcal{O}}_{J}} = \sigma \left( \mathbf{W_{IJ}} \mathbf{\vec{\mathcal{O}}_{I}} \right)$$

Where $\sigma$ is the activation or transfer function chosen for this layer which is applied elementwise to the product of the matrices. This notation allows us to very efficiently calculate the output of a layer which reduces computation time. Additionally, we are now able to extend this efficiency by making our network consider all of our input examples at once. Remember that our network requires training (many epochs of forward propagation followed by back propagation) and as such needs training data (preferably a lot of it!). Rather than consider each training example individually, we vectorise each example into a large matrix of inputs. Our weights $\mathbf{W_{IJ}}$ connecting the layer $I$ to layer $J$ are the same no matter which input example we put into the network: this is fundamental as we expect that the network would act the same way for similar inputs i.e. we expect the same neurons (nodes) to fire based on the similar features in the input. If 2 input examples gave the outputs $\mathbf{\vec{\mathcal{O}}_{I_{1}}}$ and $\mathbf{\vec{\mathcal{O}}_{I_{2}}}$ from the nodes in layer $I$ to a layer $J$ then the outputs from layer $J$, $\mathbf{\vec{\mathcal{O}}_{J_{1}}}$ and $\mathbf{\vec{\mathcal{O}}_{J_{2}}}$, can be written:

$$\begin{pmatrix} \mathbf{\vec{\mathcal{O}}_{J_{1}}} \\ \mathbf{\vec{\mathcal{O}}_{J_{2}}} \end{pmatrix} = \sigma \left(\mathbf{W_{IJ}}\begin{pmatrix} \mathbf{\vec{\mathcal{O}}_{I_{1}}} & \mathbf{\vec{\mathcal{O}}_{I_{2}}} \end{pmatrix} \right) = \sigma \left(\mathbf{W_{IJ}}\begin{pmatrix} \begin{bmatrix}\mathcal{O}_{I_{1}}^{1} \\ \vdots \\ \mathcal{O}_{I_{1}}^{m} \end{bmatrix} \begin{bmatrix}\mathcal{O}_{I_{2}}^{1} \\ \vdots \\ \mathcal{O}_{I_{2}}^{m} \end{bmatrix} \end{pmatrix} \right) = \sigma \left(\begin{pmatrix} \mathbf{W_{IJ}}\begin{bmatrix}\mathcal{O}_{I_{1}}^{1} \\ \vdots \\ \mathcal{O}_{I_{1}}^{m} \end{bmatrix} & \mathbf{W_{IJ}} \begin{bmatrix}\mathcal{O}_{I_{2}}^{1} \\ \vdots \\ \mathcal{O}_{I_{2}}^{m} \end{bmatrix} \end{pmatrix} \right)$$

For the $m$ nodes in the input layer.
Which may look hideous, but the point is that all of the training examples that are input to the network can be dealt with simultaneously because each example becomes another column in the input vector and a corresponding column in the output vector.

In summary, for forward propagation:

• All $n$ training examples with $m$ features (input nodes) are put into column vectors to build the input matrix $I$, taking care to add the bias term to the end of each.
• All weight vectors that connect the $m+1$ nodes in the layer $I$ to the $n$ nodes in layer $J$ are put together in a weight-matrix:

$$\mathbf{I} = \left( \begin{bmatrix} \mathcal{O}_{I_{1}}^{1} \\ \vdots \\ \mathcal{O}_{I_{1}}^{m} \\ 1 \end{bmatrix} \begin{bmatrix} \mathcal{O}_{I_{2}}^{1} \\ \vdots \\ \mathcal{O}_{I_{2}}^{m} \\ 1 \end{bmatrix} \begin{bmatrix} \cdots \\ \cdots \\ \ddots \\ \cdots \end{bmatrix} \begin{bmatrix} \mathcal{O}_{I_{n}}^{1} \\ \vdots \\ \mathcal{O}_{I_{n}}^{m} \\ 1 \end{bmatrix} \right) \ \ \ \ \mathbf{W_{IJ}} = \begin{pmatrix} w_{11} & w_{21} & \cdots & w_{(m+1)1} \\ w_{12} & w_{22} & \cdots & w_{(m+1)2} \\ \vdots & \vdots & \ddots & \vdots \\ w_{1n} & w_{2n} & \cdots & w_{(m+1)n} \\ \end{pmatrix}$$

• We perform $\mathbf{W_{IJ}} \mathbf{I}$ to get the vector $\mathbf{\vec{\mathcal{O}}_{J}}$ which is the output from each of the nodes in layer $J$.

Back Propagation

To perform back propagation there are a couple of things that we need to vectorise. The first is the error on the weights when we compare the output of the network $\mathbf{\vec{\mathcal{O}}_{K}}$ with the known target values:

$$\mathbf{T_{K}} = \begin{bmatrix} t_{1} \\ \vdots \\ t_{k} \end{bmatrix}$$

A reminder of the formulae:

$$\delta_{k} = \mathcal{O}_{k} \left( 1 - \mathcal{O}_{k} \right) \left( \mathcal{O}_{k} - t_{k} \right), \ \ \ \ \delta_{j} = \mathcal{O}_{j} \left( 1 - \mathcal{O}_{j} \right) \sum_{k \in K} \delta_{k} W_{jk}$$

Where $\delta_{k}$ is the error on the weights to the output layer and $\delta_{j}$ is the error on the weights to the hidden layers. We also need to vectorise the update formulae for the weights and bias:

$$W + \Delta W \rightarrow W, \ \ \ \ \theta + \Delta\theta \rightarrow \theta$$

Vectorising the Output Layer Deltas

Let’s look at the output layer delta: we need a subtraction between the outputs and the target which is multiplied by the derivative of the transfer function (sigmoid). Well, the subtraction between two matrices is straightforward:

$$\mathbf{\vec{\mathcal{O}}_{K}} - \mathbf{T_{K}}$$

but we need to consider the derivative. Remember that the output of the final layer is:

$$\mathbf{\vec{\mathcal{O}}_{K}} = \sigma \left( \mathbf{W_{JK}}\mathbf{\vec{\mathcal{O}}_{J}} \right)$$

and the derivative can be written:

$$\sigma ^{\prime} \left( \mathbf{W_{JK}}\mathbf{\vec{\mathcal{O}}_{J}} \right) = \mathbf{\vec{\mathcal{O}}_{K}}\left( 1 - \mathbf{\vec{\mathcal{O}}_{K}} \right)$$

Note: This is the derivative of the sigmoid as evaluated at each of the nodes in the layer $K$. It is acting elementwise on the inputs to layer $K$. Thus it is a column vector with the same length as the number of nodes in layer $K$.

Put the derivative and subtraction terms together and we get:

$$\mathbf{\vec{\delta}_{K}} = \sigma^{\prime}\left( \mathbf{W_{JK}}\mathbf{\vec{\mathcal{O}}_{J}} \right) * \left( \mathbf{\vec{\mathcal{O}}_{K}} - \mathbf{T_{K}}\right)$$

Again, the derivatives are being multiplied elementwise with the results of the subtraction. Now we have a vector of deltas for the output layer $K$!
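At this point everything needed for the forward pass and the output-layer deltas is in place, so here is a minimal NumPy sketch of both; it is only an illustration (the layer sizes, random weights and placeholder targets are assumed), with the full implementation left to the next post.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
m, n_hidden, n_out, n_examples = 3, 4, 2, 5        # assumed sizes for illustration

# Input matrix I: one column per example, with a bias row of ones appended
O_I = np.vstack([rng.random((m, n_examples)), np.ones((1, n_examples))])

W_IJ = rng.standard_normal((n_hidden, m + 1))       # weights I -> J (bias weight included)
W_JK = rng.standard_normal((n_out, n_hidden + 1))   # weights J -> K (bias weight included)

# Forward pass: O_J = sigma(W_IJ @ O_I), then append the bias row for the next layer
O_J = sigmoid(W_IJ @ O_I)
O_J_b = np.vstack([O_J, np.ones((1, n_examples))])
O_K = sigmoid(W_JK @ O_J_b)

# Output-layer deltas: sigma'(x_K) * (O_K - T), with sigma'(x_K) = O_K * (1 - O_K)
T = rng.random((n_out, n_examples))                 # placeholder targets
delta_K = O_K * (1.0 - O_K) * (O_K - T)
print(O_K.shape, delta_K.shape)                     # (2, 5) (2, 5): one column per example
```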
Things aren’t so straightforward for the deltas in the hidden layers. Let’s visualise what we’ve seen:

Figure 1: NN showing the weights and outputs in vector form along with the target values for layer $K$

Vectorising the Hidden Layer Deltas

We need to vectorise:

$$\delta_{j} = \mathcal{O}_{j} \left( 1 - \mathcal{O}_{j} \right) \sum_{k \in K} \delta_{k} W_{jk}$$

Let’s deal with the summation. We’re multiplying each of the deltas $\delta_{k}$ in the output layer (or more generally, the subsequent layer could be another hidden layer) by the weight $w_{jk}$ that pulls them back to the node $j$ in the current layer before adding the results. For the first node in the hidden layer:

$$\sum_{k \in K} \delta_{k} W_{jk} = \delta_{k}^{1}w_{11} + \delta_{k}^{2}w_{12} + \delta_{k}^{3}w_{13} = \begin{pmatrix} w_{11} & w_{12} & w_{13} \end{pmatrix} \begin{pmatrix} \delta_{k}^{1} \\ \delta_{k}^{2} \\ \delta_{k}^{3}\end{pmatrix}$$

Notice the weights? They pull the delta from each output layer node back to the first node of the hidden layer. In forward propagation, we considered these weights as going out from multiple nodes to a single node, rather than, as here, a single node receiving contributions from multiple nodes. Combine this summation with the multiplication by the activation function derivative:

$$\delta_{j}^{1} = \sigma^{\prime} \left( x_{j}^{1} \right) \begin{pmatrix} w_{11} & w_{12} & w_{13} \end{pmatrix} \begin{pmatrix} \delta_{k}^{1} \\ \delta_{k}^{2} \\ \delta_{k}^{3} \end{pmatrix}$$

remembering that the input to the $\text{1}^\text{st}$ node in the layer $J$ is:

$$x_{j}^{1} = \mathbf{W_{I1}}\mathbf{\vec{\mathcal{O}}_{I}}$$

What about the $\text{2}^\text{nd}$ node in the hidden layer?

$$\delta_{j}^{2} = \sigma^{\prime} \left( x_{j}^{2} \right) \begin{pmatrix} w_{21} & w_{22} & w_{23} \end{pmatrix} \begin{pmatrix} \delta_{k}^{1} \\ \delta_{k}^{2} \\ \delta_{k}^{3} \end{pmatrix}$$

This is looking familiar, hopefully we can be confident based upon what we’ve done before to say that:

$$\begin{pmatrix} \delta_{j}^{1} \\ \delta_{j}^{2} \end{pmatrix} = \begin{pmatrix} \sigma^{\prime} \left( x_{j}^{1} \right) \\ \sigma^{\prime} \left( x_{j}^{2} \right) \end{pmatrix} * \begin{pmatrix} w_{11} & w_{12} & w_{13} \\ w_{21} & w_{22} & w_{23} \end{pmatrix} \begin{pmatrix}\delta_{k}^{1} \\ \delta_{k}^{2} \\ \delta_{k}^{3} \end{pmatrix}$$

We’ve seen a version of this weights matrix before when we did the forward propagation vectorisation. In this case though, look carefully - as we mentioned, the weights are not in the same places; in fact, the weight matrix has been transposed from the one we used in forward propagation. This makes sense because we’re going backwards through the network now! This is useful because it means there is very little extra calculation needed here - the matrix we need is already available from the forward pass, but just needs transposing. We can call the weights in back propagation here $\mathbf{ W_{KJ}}$ as we’re pulling the deltas from $K$ to $J$.
\begin{align} \mathbf{W_{KJ}} &= \begin{pmatrix} w_{11} & w_{12} & \cdots & w_{1n} \\ w_{21} & w_{22} & \cdots & w_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ w_{(m+1)1} & w_{(m+1)2} & \cdots & w_{(m+1)n} \end{pmatrix} , \ \ \ \mathbf{W_{JK}} = \begin{pmatrix} w_{11} & w_{21} & \cdots & w_{(m+1)1} \\ w_{12} & w_{22} & \cdots & w_{(m+1)2} \\ \vdots & \vdots & \ddots & \vdots \\ w_{1n} & w_{2n} & \cdots & w_{(m+1)n} \\ \end{pmatrix} \\[0.5em] \mathbf{W_{KJ}} &= \mathbf{W^{\intercal}_{JK}} \end{align}

And so, the vectorised equations for the output layer and hidden layer deltas are:

\begin{align} \mathbf{\vec{\delta}_{K}} &= \sigma^{\prime}\left( \mathbf{W_{JK}}\mathbf{\vec{\mathcal{O}}_{J}} \right) * \left( \mathbf{\vec{\mathcal{O}}_{K}} - \mathbf{T_{K}}\right) \\[0.5em] \mathbf{ \vec{ \delta }_{J}} &= \sigma^{\prime} \left( \mathbf{ W_{IJ} \mathcal{O}_{I} } \right) * \mathbf{ W^{\intercal}_{JK}} \mathbf{ \vec{\delta}_{K}} \end{align}

Let’s visualise what we’ve seen:

Figure 2: The NN showing the delta vectors

Vectorising the Update Equations

Finally, now that we have the vectorised equations for the deltas (which required us to get the vectorised equations for the forward pass) we’re ready to get the update equations in vector form. Let’s recall the update equations:

\begin{align} \Delta W &= -\eta \ \delta_{l} \ \mathcal{O}_{l-1} \\ \Delta\theta &= -\eta \ \delta_{l} \end{align}

Ignoring the $-\eta$ for now, we need to get a vector form for $\delta_{l} \ \mathcal{O}_{l-1}$ in order to get the update to the weights. We have the matrix of weights:

$$\mathbf{W_{JK}} = \begin{pmatrix} w_{11} & w_{21} & w_{31} \\ w_{12} & w_{22} & w_{32} \\ \end{pmatrix}$$

Suppose we are updating the weight $w_{21}$ in the matrix. We’re looking to find the product of the output from the second node in $J$ with the delta from the first node in $K$.

$$\Delta w_{21} = \delta_{K}^{1} \mathcal{O}_{J}^{2}$$

Considering this example, we can write the matrix for the weight updates as:

$$\Delta \mathbf{W_{JK}} = \begin{pmatrix} \delta_{K}^{1} \mathcal{O}_{J}^{1} & \delta_{K}^{1} \mathcal{O}_{J}^{2} & \delta_{K}^{1} \mathcal{O}_{J}^{3} \\ \delta_{K}^{2} \mathcal{O}_{J}^{1} & \delta_{K}^{2} \mathcal{O}_{J}^{2} & \delta_{K}^{2} \mathcal{O}_{J}^{3} \end{pmatrix} = \begin{pmatrix} \delta_{K}^{1} \\ \delta_{K}^{2}\end{pmatrix} \begin{pmatrix} \mathcal{O}_{J}^{1} & \mathcal{O}_{J}^{2}& \mathcal{O}_{J}^{3} \end{pmatrix}$$

Generalising this into vector notation and including the learning rate $\eta$, the update for the weights in layer $J$ is:

$$\Delta \mathbf{W_{JK}} = -\eta \mathbf{ \vec{ \delta }_{K}} \mathbf{ \vec { \mathcal{O} }_{J}}$$

Similarly, we have the update to the bias term:

$$\Delta \vec{\theta} = -\eta \mathbf{ \vec{ \delta }_{K}}$$

So the bias term is updated just by taking the deltas straight from the nodes in the subsequent layer (with the negative factor of learning rate).
In summary, for back propagation, the equations we need in vector form are:

\begin{align} \mathbf{\vec{\delta}_{K}} &= \sigma^{\prime}\left( \mathbf{W_{JK}}\mathbf{\vec{\mathcal{O}}_{J}} \right) * \left( \mathbf{\vec{\mathcal{O}}_{K}} - \mathbf{T_{K}}\right) \\[0.5em] \mathbf{ \vec{ \delta }_{J}} &= \sigma^{\prime} \left( \mathbf{ W_{IJ} \mathcal{O}_{I} } \right) * \mathbf{ W^{\intercal}_{JK}} \mathbf{ \vec{\delta}_{K}} \end{align}

\begin{align} \mathbf{W_{JK}} + \Delta \mathbf{W_{JK}} &\rightarrow \mathbf{W_{JK}}, \ \ \ \Delta \mathbf{W_{JK}} = -\eta \mathbf{ \vec{ \delta }_{K}} \mathbf{ \vec { \mathcal{O} }_{J}} \\[0.5em] \vec{\theta} + \Delta \vec{\theta} &\rightarrow \vec{\theta}, \ \ \ \Delta \vec{\theta} = -\eta \mathbf{ \vec{ \delta }_{K}} \end{align}

With $*$ representing an elementwise multiplication between the matrices.

What's next?

Although this kind of mathematics can be tedious and sometimes hard to follow (and probably with numerous notation mistakes… please let me know if you find them!), it is necessary in order to write a quick, efficient NN. Our next step is to implement this setup in Python.
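As a rough preview of that Python step, the summary equations translate into NumPy roughly as in the sketch below. This is only an illustrative sketch, not the final implementation: layer sizes are assumed, and the bias row is handled by dropping the last column of $\mathbf{W_{JK}}$ when pulling the deltas back, a bookkeeping detail the equations above gloss over.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_step(O_I, T, W_IJ, W_JK, eta=0.1):
    """One vectorised forward + backward pass. O_I already contains the bias row of ones."""
    n_examples = O_I.shape[1]
    O_J = sigmoid(W_IJ @ O_I)                                  # hidden-layer outputs
    O_J_b = np.vstack([O_J, np.ones((1, n_examples))])         # append bias row for next layer
    O_K = sigmoid(W_JK @ O_J_b)                                # network outputs

    delta_K = O_K * (1 - O_K) * (O_K - T)                      # output-layer deltas
    delta_J = O_J * (1 - O_J) * (W_JK[:, :-1].T @ delta_K)     # hidden deltas via W_JK transposed

    W_JK -= eta * delta_K @ O_J_b.T                            # weight (and bias) updates
    W_IJ -= eta * delta_J @ O_I.T
    return W_IJ, W_JK
```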
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9948915243148804, "perplexity": 613.7962663139778}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572161.46/warc/CC-MAIN-20220815054743-20220815084743-00348.warc.gz"}
http://www.smashcompany.com/technology/skip-lists-a-probabilistic-alternative-to-balanced-trees
# Skip Lists: A Probabilistic Alternative to Balanced Trees (written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: [email protected] Interesting: Skip lists are a data structure that can be used in place of balanced trees. Skip lists use probabilistic balancing rather than strictly enforced balancing and as a result the algorithms for insertion and deletion in skip lists are much simpler and significantly faster than equivalent algorithms for balanced trees A node that has k forward pointers is called a level k node. If every (2i)th node has a pointer 2i nodes ahead, then levels of nodes are distributed in a simple pattern: 50% are level 1, 25% are level 2, 12.5% are level 3 and so on. What would happen if the levels of nodes were chosen randomly, but in the same proportions (e.g., as in Figure 1e)? A node’s i’th forward
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9576842784881592, "perplexity": 3879.6913503909996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865863.76/warc/CC-MAIN-20180523235059-20180524015059-00222.warc.gz"}
http://mathschallenge.net/view/snapped_pole
Snapped Pole

Problem

A vertical pole $AB$ measuring $5$ metres snaps at point $C$. The pole remains in contact at $C$ and the top of the pole touches the ground at point $T$, a distance of $3$ metres from $A$.

Find the length $AC$, the point where the pole snapped.

Problem ID: 362 (28 Oct 2009)   Difficulty: 2 Star
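A sketch of the standard approach (an illustrative working, not the site's posted solution): the broken top section keeps its length, so $CT = CB = 5 - AC$, and triangle $ACT$ is right-angled at $A$. Hence $AC^2 + 3^2 = (5 - AC)^2$, so $AC^2 + 9 = 25 - 10\,AC + AC^2$, giving $10\,AC = 16$ and $AC = 1.6$ metres.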
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9691640734672546, "perplexity": 1402.6098259736425}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292608.25/warc/CC-MAIN-20160823195812-00163-ip-10-153-172-175.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/high-voltage-to-bring-electricity-to-the-cities.305315/
# High Voltage to bring electricity to the cities

1. Apr 6, 2009

### DaTario

Hi All, Is there a simple explanation for the fact that in power stations (where electric energy is created) the voltage applied to the conducting wires, which will carry the energy to the city, is much higher than 110 V ? Thank you all Best Wishes DaTario

2. Apr 6, 2009

### Staff: Mentor

Yes, resistance losses are a function of amperage, so if you have a certain power requirement, using a higher voltage enables lower amperage and lower resistance losses.

3. Apr 6, 2009

### b.shahvir

Hi, for the same power requirement, increasing the applied voltage will cause current to drop significantly. This is economical for the power utility companies as they will be saving on money invested in electrical equipment as reduction in amps will, e.g., reduce the size of transmission line conductors. Also, reduced amps results in reduced line losses as rightly pointed out by Russ. You can play with Ohm's law by putting higher values of voltage in the eqn. P = V x I keeping power 'P' as constant. You will find that as voltage increases, amps decreases for same power requirement! Regards, Shahvir

4. Apr 6, 2009

### Bob S

Also, there is actually power loss in long interstate lines due to radiation (think horizontal antennas) at 60 Hz. The radiation is proportional to current, so raising the voltage reduces the radiation.

5. Apr 16, 2009

### DaTario

Ok, first of all, thank you for the response. Now, let me just touch one point in this discussion. The Joule effect gives us an expression for the power loss due to the passage of current through a conductor (P = R i^2). But there seems to exist the power consumed by city users of electricity, which should be given by a different expression. The expression P = U i is an equivalent form of P = R i^2 (at least it seems to be). I am a bit confused in articulating these two formulas (namely: P = R i^2 and P = U i) as in both cases I guess they address the power loss by heating (Joule effect). Thank you once more, Best wishes, DaTario

6. Apr 16, 2009

### Bob S

The two formulas for power consumed by city users are equivalent. The joule heating is the same. 1 watt = 1 joule per second. 1 kilowatt-hour is 1000 watts for 1 hour.

7. Apr 24, 2009

### DaTario

Ok, let me check if I have correctly understood this: By raising the voltage, for a given power requirement, the current assumes a low value and this is the key concept, as a low value for current represents a low value for energy losses due to the Joule effect. Is it correct? What about capacitive and inductive effects? Thank you DaTario

8. Apr 25, 2009

### b.shahvir

Your assumptions are correct! Low current means lower voltage drops across the capacitive and inductive components of the transmission line, as a result, lower losses in the lines itself....hence a more efficient power transfer to load.

9. Apr 25, 2009

### Bob S

There is one inductive effect that the utilities have to deal with that does not come from their own equipment. Nearly every electric motor appears to be slightly inductive to the utility. This includes electric motors in refrigerators and washing machines in homes. Unless corrected at the local level, the utility has to generate and transmit extra current to compensate for this. This is the basis for terms like "power factor" and "VA" (vs. watt) load. Our power meters measure only kilowatt-hours, and not KVA-hours, but some industrial companies have to pay a surcharge if they have too large (actually too small) a power factor.

10.
Apr 25, 2009 ### b.shahvir How does the 'self-inductance' of the transmission line itself contribute to power loss? .....as self-inductance is a reactive (wattless) component and power loss is due to active (wattful) components of a transmission line! plz elaborate on this concept. Thanx. Last edited: Apr 26, 2009 11. Apr 26, 2009 ### DaTario Nice question! Thank you for the debate. DaTario 12. Apr 27, 2009 ### b.shahvir The following are my thoughts on the topic; In my opinion, power loss in transmission line means when the receiving end voltage Vr is less than the sending end voltage Vs. This happens as Vs is dropped across line components resulting in reduced voltage Vr. Reduction in Vr accounts for power loss. The power loss is due to series line resistance, skin effect, series line loop inductance, shunt leakage capacitances (in long lines) shunt inductances (load side), power loss due to radiation and induction, etc. Presently my topic of interest is power loss due to series line inductance which causes reduced Vr and hence in my opinion is equally responsible for power loss akin to series line resistance and skin effect. Suitable guidance for the same will be greatly appreciated. Thanx. Know someone interested in this topic? Share this thread via Reddit, Google+, Twitter, or Facebook Similar Discussions: High Voltage to bring electricity to the cities
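The central point of the thread is that, for a fixed delivered power, the resistive line loss R I^2 falls with the square of the transmission voltage. The small numerical sketch below makes this concrete; the load power, line resistance and voltage levels are assumed illustrative values, not figures taken from the discussion above.

```python
# Illustrative only: same delivered power at three different transmission voltages.
def line_loss(power_w, voltage_v, line_resistance_ohm):
    current_a = power_w / voltage_v            # I = P / V for a fixed power requirement
    return line_resistance_ohm * current_a**2  # Joule loss P_loss = R * I^2

P = 1.0e6   # 1 MW delivered to the city (assumed)
R = 5.0     # total line resistance in ohms (assumed)

for V in (110.0, 11e3, 110e3):
    I = P / V
    print(f"V = {V:9.0f} V   I = {I:9.1f} A   loss = {line_loss(P, V, R):14.1f} W")
# Raising the voltage by a factor of 1000 cuts the I^2 R loss by a factor of 10^6,
# which is why transmission happens at tens or hundreds of kilovolts rather than 110 V.
```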
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8588122129440308, "perplexity": 2263.4095637032888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824894.98/warc/CC-MAIN-20171021190701-20171021210701-00246.warc.gz"}
https://link.springer.com/article/10.1007/s11786-020-00495-9
## 1 Introduction Although known earlier to Dodgson [8] and JordanFootnote 1 (see Durand [9]), the fraction-free method for exact matrix computations became well known because of its application by Bareiss [1] to the solution of a linear system over $$\mathbb {Z}$$, and later over an integral domain [2]. He implemented fraction-free Gaussian elimination of the augmented matrix $$[A\ B]$$, and kept all computations in $$\mathbb {Z}$$ until a final division step. Since, in linear algebra, equation solving is related to the matrix factorizations LU and QR, it is natural that fraction-free methods would be extended later to those factorizations. The forms of the factorizations, however, had to be modified from their floating-point counterparts in order to retain purely integral data. The first proposed modifications were based on inflating the initial data until all divisions were guaranteed to be exact, see for example Lee and Saunders [17], Nakos et al. [21] and Corless and Jeffrey [7]. This strategy, however, led to the entries in the L and U matrices becoming very large, and an alternative form was presented in Zhou and Jeffrey [26], and is described below. Similarly, fraction-free Gram–Schmidt orthogonalization and QR factorization were studied in Erlingsson et al. [10] and Zhou and Jeffrey [26]. Further extensions have addressed fraction-free full-rank factoring of non-invertible matrices and fraction-free computation of the Moore–Penrose inverse [15]. More generally, applications exist in areas such as the Euclidean algorithm, and the Berlekamp–Massey algorithm [16]. More general domains are possible, and here we consider matrices over a principal ideal domain $$\mathbb {D}$$. For the purpose of giving illustrative examples and conducting computational experiments, matrices over $$\mathbb {Z}$$ and $$\mathbb {Q}[x]$$ are used, because these domains are well established and familiar to readers. We emphasize, however, that the methods here apply for all principal ideal domains, as opposed to methods that target specific domains, such as Giesbrecht and Storjohann [12] and Pauderis and Storjohann [24]. The shift from equation solving to matrix factorization has the effect of making visible the intermediate results, which are not displayed in the original Bareiss implementation. Because of this, it becomes apparent that the columns and rows of the L and U matrices frequently contain common factors, which otherwise pass unnoticed. We consider here how these factors arise, and what consequences there are for the computations. Our starting point is a fraction-free form for LU decomposition [15]: given a matrix A over $$\mathbb {D}$$, \begin{aligned} A = P_r LD^{-1} U P_c, \end{aligned} where L and U are lower and upper triangular matrices, respectively, D is a diagonal matrix, and the entries of L, D, and U are from $$\mathbb {D}$$. The permutation matrices $$P_r$$ and $$P_c$$ ensure that the decomposition is always a full-rank decomposition, even if A is rectangular or rank deficient; see Sect. 2. The decomposition is computed by a variant of Bareiss’s algorithm [2]. In Sect. 6, the $$L D^{-1} U$$ decomposition also is the basis of a fraction-free QR decomposition. The key feature of Bareiss’s algorithm is that it creates factors which are common to every element in a row, but which can then be removed by exact divisions. We refer to such factors, which appear predictably owing to the decomposition algorithm, as “systematic factors”. 
There are, however, other common factors which occur with computable probability, but which depend upon the particular data present in the input matrix. We call such factors “statistical factors”. In this paper we discuss the origins of both kinds of common factors and show that we can predict a nontrivial proportion of them from simple considerations. Once the existence of common factors is recognized, it is natural to consider what consequences, if any, there are for the computation, or application, of the factorizations. Some consequences we shall consider include a lack of uniqueness in the definition of the LU factorization, and whether the common factors add significantly to the sizes of the elements in the constituent factors. This in turn leads to questions regarding the benefits of removing common factors, and what computational cost is associated with such benefits. A synopsis of the paper is as follows. After recalling Bareiss’s algorithm, the $$L D^{-1} U$$ decomposition, and the algorithm from Jeffrey [15] in Sect. 2, we establish, in Sect. 3, a relation between the systematic common row factors of U and the entries in the Smith–Jacobson normal form of the same input matrix A. In Sect. 4 we propose an efficient way of identifying some of the systematic common row factors introduced by Bareiss’s algorithm; these factors can then be easily removed by exact division. In Sect. 5 we present a detailed analysis concerning the expected number of statistical common factors in the special case $$\mathbb {D}=\mathbb {Z}$$, and we find perfect agreement with our experimental results. We conclude that the factors make a measurable contribution to the element size, but they do not impose a serious burden on calculations. In Sect. 6 we investigate the QR factorization. In this context, the orthonormal Q matrix used in floating point calculations is replaced by a $$\Theta$$ matrix, which is left-orthogonal, i.e. $$\Theta ^t\Theta$$ is diagonal, but $$\Theta \Theta ^t$$ is not. We show that, for a square matrix A, the last column of $$\Theta$$, as calculated by existing algorithms, is subject to an exact division by the determinant of A, with a possibly significant reduction in size. Throughout the paper, we employ the following notation. We assume, unless otherwise stated, that the ring $$\mathbb {D}$$ is an arbitrary principal ideal domain. We denote the set of all m-by-n matrices over $$\mathbb {D}$$ by $$\mathbb {D}^{m\times n}$$. We write $${\mathbf {1}}_{n}$$ for the n-by-n identity matrix and $$\mathbf {0}_{m\times n}$$ for the m-by-n zero matrix. We shall usually omit the subscripts if no confusion is possible. For $$A \in \mathbb {D}^{m\times n}$$ and $$1 \le i \le m$$, $${A}_{i,*}$$ is the ith row of A. Similarly, $${A}_{*,j}$$ is the jth column of A for $$1 \le j \le n$$. If $$1 \le i_1 < i_2 \le m$$ and $$1 \le j_1 < j_2 \le n$$, we use $${A}_{{i_{1}\ldots i_{2}},{j_{1}\ldots j_{2}}}$$ to refer to the submatrix of A made up from the entries of the rows $$i_1$$ to $$i_2$$ and the columns $$j_1$$ to $$j_2$$. Given elements $$a_1,\ldots ,a_n \in \mathbb {D}$$, with $${{\,\mathrm{diag}\,}}(a_1,\ldots ,a_n)$$ we refer to the diagonal matrix that has $$a_j$$ as the entry at position (jj) for $$1 \le j \le n$$. We will use the same notation for block diagonal matrices. We denote the set of all column vectors of length m with entries in $$\mathbb {D}$$ by $$\mathbb {D}^{m}$$ and that of all row vectors of length n by $$\mathbb {D}^{1\times n}$$. 
If $$\mathbb {D}$$ is a unique factorization domain and $$v = (v_1,\ldots ,v_n) \in \mathbb {D}^{1\times n}$$, then we set $$\gcd (v) = \gcd (v_1,\ldots ,v_n)$$. Moreover, with $$d \in \mathbb {D}$$ we write $$d \mid v$$ if $$d \mid v_1 \wedge \cdots \wedge d \mid v_n$$ (or, equivalently, if $$d \mid \gcd (v)$$). We also use the same notation for column vectors. We will sometimes write column vectors $$w \in \mathbb {D}^{m}$$ with an underline $$\underline{w}$$ and row vectors $$v \in \mathbb {D}^{1\times n}$$ with an overline $$\overline{v}$$ if we want to emphasize the specific type of vector. ## 2 Bareiss’s Algorithm and the $$L D^{-1} U$$ Decomposition For the convenience of the reader, we start by recalling Bareiss’s algorithm [2]. Let $$\mathbb {D}$$ be an integral domainFootnote 2, and let $$A \in \mathbb {D}^{n\times n}$$ be a matrix and $$b \in \mathbb {D}^{n}$$ be a vector. Bareiss modified the usual Gaussian elimination with the aim of keeping all calculations in $$\mathbb {D}$$ until the final step. If this is done naïvely then the entries increase in size exponentially. Bareiss used results from Sylvester and Jordan to reduce this to linear growth. Bareiss defined the notationFootnote 3 \begin{aligned} A^{(k)}_{ij} = \det \begin{bmatrix} A_{1,1} &{}\quad \cdots &{}\quad A_{1,k} &{}\quad A_{1,j} \\ \vdots &{}\quad \ddots &{}\quad \vdots &{}\quad \vdots \\ A_{k,1} &{}\quad \cdots &{}\quad A_{k,k} &{}\quad A_{k,j} \\ A_{i,1} &{}\quad \cdots &{}\quad A_{i,k} &{}\quad A_{i,j} \end{bmatrix}\ , \end{aligned} (2.1) for $$i>k$$ and $$j>k$$, and with special cases $$A_{i,j}^{(0)}=A_{ij}$$ and $$A_{0,0}^{(-1)}=1$$. We start with division-free Gaussian elimination, which is a simple cross-multiplication scheme, and denote the result after k steps by $$A^{[k]}_{ij}$$. We assume that any pivoting permutations have been completed and need not be considered further. The result of one step is \begin{aligned} A^{[1]}_{i,j}= A_{1,1}A_{i,j}-A_{i,1}A_{1,j} =\det \begin{bmatrix} A_{1,1} &{}\quad A_{1,j} \\ A_{i,1} &{}\quad A_{i,j} \end{bmatrix} = A^{(1)}_{i,j}\ , \end{aligned} (2.2) and the two quantities $$A^{[1]}_{i,j}$$ and $$A^{(1)}_{i,j}$$ are equal. A second step, however, leads to \begin{aligned} A^{[2]}_{i,j}= A^{[1]}_{2,2}A^{[1]}_{i,j}-A^{[1]}_{i,2}A^{[1]}_{2,j} = A_{1,1}\det \begin{bmatrix} A_{1,1} &{}\quad A_{1,2} &{}\quad A_{1,j} \\ A_{2,1} &{}\quad A_{2,2} &{}\quad A_{2,j} \\ A_{i,1} &{}\quad A_{i,2} &{}\quad A_{i,j} \end{bmatrix} = A_{1,1} A^{(2)}_{i,j}\ . \end{aligned} (2.3) Thus, as stated in Sect. 1, simple cross-multiplication introduces a systematic common factor in all entries $$i,j>2$$. This effect continues for general k (see [2]), and leads to exponential growth in the size of the terms. Since the systematic factor is known, it can be removed by an exact division, and then the terms grow linearly in size. Thus Bareiss’s algorithm is \begin{aligned} A^{(k+1)}_{i,j} =\frac{1}{A^{(k-1)}_{k,k}}\left( A^{(k)}_{k+1,k+1} A^{(k)}_{i,j}-A^{(k)}_{i,k+1}A^{(k)}_{k+1,j} \right) \ , \end{aligned} (2.4) and the division is exact. The elements of the reduced matrix are thus minors of A. The main interest for Bareiss was to advocate a ‘two-step’ method, wherein one proceeds from step k to step $$k+2$$ directly, rather than by repeated Gaussian steps. The two-step method claims improved efficiency, but the results obtained are the same, and we shall not consider it here. In Jeffrey [15], Bareiss’s algorithm was used to obtain a fraction-free variant of the LU factorization of A. 
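As a quick check of Eq. (2.3), which says that two naïve cross-multiplication steps introduce the systematic factor $$A_{1,1}$$ that the exact division in (2.4) later removes, the following short SymPy sketch verifies the identity for a generic 3-by-3 matrix. The use of SymPy and the variable names are ours, purely for illustration; the paper itself works over an arbitrary integral domain.

```python
# Symbolic check of Eq. (2.3): A^{[2]}_{3,3} equals A_{1,1} times the 3x3 minor A^{(2)}_{3,3}.
import sympy as sp

A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"a{i + 1}{j + 1}"))

# one division-free step: A^{[1]}_{i,j} = A_{1,1} A_{i,j} - A_{i,1} A_{1,j}
A1 = {(i, j): A[0, 0] * A[i, j] - A[i, 0] * A[0, j] for i in (1, 2) for j in (1, 2)}

# second division-free step, applied to the reduced 2x2 block
A2_33 = A1[(1, 1)] * A1[(2, 2)] - A1[(2, 1)] * A1[(1, 2)]

# for a 3x3 matrix the minor A^{(2)}_{3,3} is det(A), so the difference must vanish
print(sp.expand(A2_33 - A[0, 0] * A.det()))   # prints 0
```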
We quote the main result from that paper here as Theorem 1. The idea behind the factorization is that schemes which inflate the initial matrix A, such as Lee and Saunders [17] and Nakos et al. [21] and Corless and Jeffrey [7] do not avoid the quotient field, but merely move the divisors to the other side of the defining equation, at the cost of significant inflation. In any subsequent application, the divisors will have to move back, and the inflation will have to be reversed. In contrast, the present factorization isolates the divisors in an explicit inverse matrix. The matrices $$P_r, L, D, U, P_c$$ appearing in the decomposition below contain only elements from $$\mathbb {D}$$, but the inverse of D,if it were evaluated, would have to contain elements from the quotient field. By expressing the factorization in a form containing $$D^{-1}$$ unevaluated, all calculations can stay within $$\mathbb {D}$$. ### Theorem 1 (Jeffrey [15, Thm. 2]). A rectangular matrix A with elements from an integral domain $$\mathbb {D}$$, having dimensions $$m\times n$$ and rank r, may be factored into matrices containing only elements from $$\mathbb {D}$$ in the form \begin{aligned} A = P_r L D^{-1} U P_c = P_r \begin{pmatrix} \mathcal {L} \\ \mathcal {M} \end{pmatrix} D^{-1} \begin{pmatrix} \mathcal {U}&\quad \mathcal {V} \end{pmatrix} P_c \end{aligned} where the permutation matrix $$P_r$$ is $$m\times m$$; the permutation matrix $$P_c$$ is $$n\times n$$; $$\mathcal {L}$$ is $$r\times r$$, lower triangular and has full rank: \begin{aligned} \mathcal {L} = \begin{bmatrix} A^{(0)}_{1,1} \\ A^{(0)}_{2,1} &{}{}\quad A^{(1)}_{2,2} \\ \vdots &{}{}\quad \vdots &{}{}\quad \ddots \\ A^{(0)}_{r,1} &{}{}\quad A^{(1)}_{r,2} &{}{}\quad \cdots &{}{}\quad A^{(r-1)}_{r,r} \\ \end{bmatrix}\ ; \end{aligned} (2.5) $$\mathcal {M}$$ is $$(m-r)\times r$$ and could be null; $$\mathcal {U}$$ is $$r\times r$$ and upper triangular, while $$\mathcal {V}$$ is $$r\times (n-r)$$ and could be null: \begin{aligned} \mathcal {U} = \begin{bmatrix} A^{(0)}_{1,1} &{}{}\quad A^{(0)}_{1,2} &{}{}\quad \cdots &{}{}\quad A^{(0)}_{1,r} \\ &{}{}\quad A^{(1)}_{2,2} &{}{}\quad \cdots &{}{}\quad A^{(1)}_{2,r} \\ &{}{}\quad &{}{}\quad \ddots &{}{}\quad \vdots \\ &{}{}\quad &{}{}\quad &{}{}\quad A^{(r-1)}_{r,r} \\ \end{bmatrix}\ . \end{aligned} (2.6) Finally, the D matrix is \begin{aligned} D^{-1} = \begin{bmatrix} A^{(-1)}_{0,0} A^{(0)}_{1,1} &{}\quad \\ &{}\quad A^{(0)}_{1,1} A^{(1)}_{2,2} &{} \\ &{}\quad &{}\quad \ddots \\ &{}\quad &{}\quad &{}\quad A^{(n-2)}_{n-1,n-1}A^{(n-1)}_{n,n} \\ \end{bmatrix}^{-1}\ . \end{aligned} (2.7) ### Remark 2 It is convenient to call the diagonal elements $$A^{(k-1)}_{k,k}$$ pivots. They drive the pivoting strategy, which determines $$P_r$$, and they are used for the exact-division step (2.4) in Bareiss’s algorithm. ### Remark 3 As in numerical linear algebra, the $$LD^{-1}U$$ decomposition can be stored in a single matrix, since the diagonal (pivot) elements need only be stored once. The proof of Theorem 1 given in Jeffrey [15] outlines an algorithm for the computation of the $$L D^{-1} U$$ decomposition. The algorithm is a variant of Bareiss’s algorithm [1], and yields the same U. The difference is that Jeffrey [15] also explains how to obtain L and D in a fraction-free way. ### Algorithm 4 ($$LD^{-1}U$$ decomposition) Input:: A matrix $$A \in \mathbb {D}^{m\times n}$$. Output:: The $$LD^{-1}U$$ decomposition of A as in Theorem 1. 1. 1. 
Initialize $$p_0 = 1$$, $$P_r = {\mathbf {1}}_{m}$$, $$L = \mathbf {0}_{m\times m}$$, $$U = A$$ and $$P_c = {\mathbf {1}}_{n}$$. 2. 2. For each $$k = 1,\ldots ,\min \{m,n\}$$: 1. (a) Find a non-zero pivot $$p_k$$ in $${U}_{{k\ldots m}{k\ldots n}}$$ and bring it to position (kk) recording the row and column swaps in $$P_r$$ and $$P_c$$. Also apply the row swaps to L accordingly. If no pivot is found, then set $$r = k$$ and exit the loop. 2. (b) Set $$L_{k,k} = p_k$$ and $$L_{i,k} = U_{i,k}$$ for $$i=k+1,\ldots ,m$$. Eliminate the entries in the kth column and below the kth row in U by cross-multiplication; that is, for $$i > k$$ set $${U}_{i,*}$$ to $$p_k {U}_{i,*} - U_{ik} {U}_{k,*}$$. 3. (c) Perform division by $$p_{k-1}$$ on the rows beneath the kth in U; that is, for $$i > k$$ set $${U}_{i,*}$$ to $${U}_{i,*} / p_{k-1}$$. Note that the divisions will be exact. 3. 3. If r is not set yet, set $$r = \min \{m,n\}$$. 4. 4. If $$r < m$$, then trim the last $$m-r$$ columns from L as well as the last $$m-r$$ rows from U. 5. 5. Set $$D = {{\,\mathrm{diag}\,}}(p_1, p_1 p_2, \ldots , p_{r-1} p_r)$$. 6. 6. Return $$P_r$$, L, D, U, and $$P_c$$. The algorithm does not specify the choice of pivot in step 2a. Conventional wisdom (see, for example, Geddes et al. [11]) is that in exact algorithms choosing the smallest possible pivot (measured in a way suitable for $$\mathbb {D}$$) will lead to the smallest output sizes. We have been able to confirm this experimentally in Middeke and Jeffrey [18] for $$\mathbb {D}= \mathbb {Z}$$ where size was measured as the absolute value. In step 2c the divisions are guaranteed to be exact. Thus, an implementation can use more efficient procedures for this step if available (for example, for big integers using mpz_divexact in the gmp library which is based on Jebelean [14] instead of regular division). One of the goals of the present paper is to discuss improvements to the decomposition explained above. Throughout this paper we shall use the term $$L D^{-1} U$$ decomposition to mean exactly the decomposition from Theorem 1 as computed by Algorithm 4. For the variations of this decomposition we introduce the following term: ### Definition 5 (Fraction-Free LU Decomposition). For a matrix $$A \in \mathbb {D}^{m\times n}$$ of rank r we say that $$A = P_r L D^{-1} U P_c$$ is a fraction-free LU decomposition if $$P_r \in \mathbb {D}^{m\times m}$$ and $$P_c \in \mathbb {D}^{n\times n}$$ are permutation matrices, $$L \in \mathbb {D}^{m\times r}$$ has $$L_{ij} = 0$$ for $$j > i$$ and $$L_{ii} \ne 0$$ for all i, $$U \in \mathbb {D}^{r\times n}$$ has $$U_{ij} = 0$$ for $$i > j$$ and $$U_{ii} \ne 0$$ for all i, and $$D \in \mathbb {D}^{r\times r}$$ is a diagonal matrix (with full rank). We will usually refer to matrices $$L \in \mathbb {D}^{m\times r}$$ with $$L_{ij} = 0$$ for $$j > i$$ and $$L_{ii} \ne 0$$ for all i as lower triangular and to matrices $$U \in \mathbb {D}^{r\times n}$$ with $$U_{ij} = 0$$ for $$i > j$$ and $$U_{ii} \ne 0$$ for all i as upper triangular even if they are not square. As mentioned in the introduction, Algorithm 4 does result in common factors in the rows of the output U and the columns of L. In the following sections, we will explore methods to explain and predict those factors. The next result asserts that we can cancel all common factors which we find from the final output. This yields a fraction-free LU decomposition of A where the size of the entries of U (and L) are smaller than in the $$L D^{-1} U$$ decomposition. 
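Before stating that result, here is a small Python sketch of Algorithm 4 over $$\mathbb {Z}$$, given purely for illustration. It is simplified relative to the algorithm above: it assumes a square matrix of full rank whose leading principal minors are all non-zero, so the permutations $$P_r$$, $$P_c$$ and the rank-trimming steps are omitted; the function name and the test matrix are ours.

```python
# Simplified fraction-free LD^{-1}U decomposition (Algorithm 4) for integer matrices,
# assuming no row or column permutations are ever needed.
from fractions import Fraction

def ldu_fraction_free(A):
    n = len(A)
    U = [row[:] for row in A]             # U starts as a working copy of A
    L = [[0] * n for _ in range(n)]
    pivots = [1]                          # p_0 = 1
    for k in range(n):
        p_k = U[k][k]
        assert p_k != 0, "this sketch assumes pivoting is never required"
        L[k][k] = p_k
        for i in range(k + 1, n):
            L[i][k] = U[i][k]
            for j in range(n):            # cross-multiply, then divide by the previous pivot
                U[i][j] = (p_k * U[i][j] - L[i][k] * U[k][j]) // pivots[-1]   # exact division
        pivots.append(p_k)
    D = [pivots[k] * pivots[k + 1] for k in range(n)]   # D_{k,k} = p_{k-1} p_k
    return L, D, U

A = [[8, 49, 45], [-10, -77, -19], [51, 18, -81]]
L, D, U = ldu_fraction_free(A)
recon = [[sum(Fraction(L[i][k], D[k]) * U[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
print(all(recon[i][j] == A[i][j] for i in range(3) for j in range(3)))   # True: A = L D^{-1} U
```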
### Corollary 6 Given a matrix $$A \in \mathbb {D}^{m\times n}$$ with rank r and its standard $$L D^{-1} U$$ decomposition $$A = P_r L D^{-1} U P_c$$, if $$D_U = {{\,\mathrm{diag}\,}}(d_1,\ldots ,d_r)$$ is a diagonal matrix with $$d_k \mid U_{k,*}$$ for $$k = 1, \ldots , r$$, then setting $$\hat{U} = D_U^{-1} U$$ and $$\hat{D} = D D_U^{-1}$$, where both matrices are fraction-free, we have the decomposition $$A = P_r L \hat{D}^{-1} \hat{U} P_c$$. ### Proof By Theorem 1, the diagonal entries of U are the pivots chosen during the decomposition and they also divide the diagonal entries of D. Thus, any common divisor of $$U_{k,*}$$ will also divide $$D_{k,k}$$ and therefore both $$\hat{U}$$ and $$\hat{D}$$ are fraction-free. We can easily check that $$A = P_r L D^{-1} D_U D_U^{-1} U = P_r L \hat{D}^{-1} \hat{U} P_c$$. $$\square$$ ### Remark 7 If we predict common column factors of L we can cancel them in the same way. However, if we have already canceled factors from U, then there is no guarantee that $$d \mid L_{*,k}$$ implies $$d \mid \hat{D}_{k,k}$$. Thus, in general we can only cancel $$\gcd (d, \hat{D}_{k,k})$$ from $$L_{*,k}$$ (if $$\mathbb {D}$$ allows greatest common divisors). The same holds mutatis mutandis if we cancel the factors from L first. It will be an interesting discussion for future research whether it is better to cancel as many factors as possible from U or to cancel them from L. ## 3 LU and the Smith–Jacobson Normal Form This section explains a connection between “systematic factors” (that is, common factors which appear in the decomposition due to the algorithm being used) and the Smith–Jacobson normal form. For Smith’s normal form, see [5, 20], and for Jacobson’s generalization, see [22]. Given a matrix A over a principal ideal domain $$\mathbb {D}$$, we study the decomposition $$A=P_rLD^{-1}UP_c$$. For simplicity, from now on we consider the decomposition in the form $$P_r^{-1} A P_c^{-1} = L D^{-1} U.$$ The following theorem connecting the $$LD^{-1}U$$ decomposition with the Smith–Jacobson normal form can essentially be found in [2]. ### Theorem 8 Let the matrix $$A \in \mathbb {D}^{n\times n}$$ have the Smith–Jacobson normal form $$S = {{\,\mathrm{diag}\,}}(d_1,\ldots ,d_n)$$ where $$d_1,\ldots ,d_n \in \mathbb {D}$$. Moreover, let $$A = L D^{-1} U$$ be an $$L D^{-1} U$$ decomposition of A without permutations. Then for $$k=1,\ldots ,n$$ \begin{aligned} d_k^* = \prod _{j=1}^k d_j \mid U_{k,*} \quad \text {and}\quad d_k^* \mid L_{*,k}. \end{aligned} ### Remark 9 The values $$d_1^*, \ldots , d_n^*$$ are known in the literature as the determinantal divisors of A. ### Proof The diagonal entries of the Smith–Jacobson normal form are quotients of the determinantal divisors [20, II.15], i.e., $$d_1^* = d_1$$ and $$d_k = d^*_k/d^*_{k-1}$$ for $$k=2,\ldots ,n$$. Moreover, $$d_k^*$$ is the greatest common divisor of all $$k\times k$$ minors of A for each $$k=1,\ldots ,n$$. The entries of U and L, however, are k-by-k minors of A, as displayed in (2.5) and (2.6). $$\square$$ From Theorem 8, we obtain the following result. ### Corollary 10 The kth determinantal divisor $$d_k^*$$ can be removed from the kth row of U (since it divides $$D_{k,k}$$ by Corollary 6) and also $$d_{k-1}^*$$ can be removed from the kth column of L because $$d_{k-1}^* \mid d_k^*$$ and $$d_j^*$$ divides the jth pivot for $$j = k-1,k$$. Thus, $$d_{k-1}^* d_k^* \mid D_{k,k}$$. We illustrate this with an example using the polynomials over the finite field with three elements as our domain $$\mathbb {Z}_3[t]$$.
Let $$A \in \mathbb {Z}_3[t]^{4\times 4}$$ be the matrix \begin{aligned} A = \begin{pmatrix} 2 t^{2} + t + 1 &{}\quad 0 &{}\quad t^{2} + 2 t &{}\quad 2 t^{3} + 2 t^{2} + 2 t + 2 \\ t^{3} + t^{2} + 2 t + 1 &{}\quad t^{2} &{}\quad 0 &{}\quad 2 t^{3} + t^{2} + 2 \\ t^{4} + t^{3} + t + 2 &{}\quad t^{3} + 2 t^{2} + t &{}\quad 2 t^{3} + t^{2} + t &{}\quad 2 t^{2} + t + 1 \\ 2 t &{}\quad t &{}\quad 2 t &{}\quad t^{2} + 2 t \end{pmatrix}. \end{aligned} Computing the regular (that is, not fraction-free) LU decomposition yields $$A = L_0 U_0$$ where \begin{aligned} L_0 = \begin{pmatrix} 1 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \frac{- t^{3} - t^{2} + t - 1}{t^{2} - t - 1} &{}\quad 1 &{}\quad 0 &{}\quad 0 \\ \frac{- t^{4} - t^{3} - t + 1}{t^{2} - t - 1} &{}\quad \frac{t^{2} - t + 1}{t} &{}\quad 1 &{}\quad 0 \\ \frac{t}{t^{2} - t - 1} &{}\quad \frac{1}{t} &{}\quad \frac{t^{4} - t^{3} - t^{2} + t - 1}{t^{4} - t^{3} - t^{2} - 1} &{}\quad 1 \end{pmatrix} \end{aligned} and \begin{aligned} U_0 = \begin{pmatrix} - t^{2} + t + 1 &{}\quad 0 &{}\quad t^{2} - t &{}\quad -t^{3} - t^{2} - t - 1 \\ 0 &{}\quad t^{2} &{}\quad \frac{t^{5} + t^{3} - t^{2} - t}{t^{2} - t - 1} &{}\quad \frac{- t^{6} + t^{4} + t^{3} + t}{t^{2} - t - 1} \\ 0 &{}\quad 0 &{}\quad \frac{- t^{4} + t^{3} + t^{2} + 1}{t^{2} - t - 1} &{}\quad \frac{t^{5} - t^{4} + t^{3} - t^{2} - t - 1}{t^{2} - t - 1} \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad \frac{t^{2} - t}{t^{4} - t^{3} - t^{2} - 1} \end{pmatrix}. \end{aligned} On the other hand, the $$L D^{-1} U$$ decomposition for A is $$A = L D^{-1} U$$ where \begin{aligned} L = \begin{pmatrix} - (t^2 - t - 1) &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ t^3 + t^2 - t + 1 &{}\quad - t^2 (t^2 - t - 1) &{}\quad 0 &{}\quad 0 \\ (t^2 + 1) (t^2 + t - 1) &{}\quad - t (t + 1)^2 (t^2 - t - 1) &{}\quad (t + 1) t^2 (t^3 + t^2 + t - 1) &{}\quad 0 \\ - t &{}\quad - t (t^2 - t - 1) &{}\quad t^2 (t^4 - t^3 - t^2 + t - 1) &{}\quad (t - 1) t^3 \end{pmatrix}, \end{aligned} \begin{aligned} D= & {} {{\,\mathrm{diag}\,}}\bigl (- (t^2 - t - 1),\; t^2 (t^2 - t - 1)^2,\\&\quad - (t + 1) t^4 (t^2 - t - 1) (t^3 + t^2 + t - 1),\;(t + 1) (t - 1) t^5 (t^3 + t^2 + t - 1)\bigr ) \end{aligned} and \begin{aligned} U = \begin{pmatrix} - (t^2 - t - 1) &{}\quad 0 &{}\quad t (t - 1) &{}\quad - (t + 1) (t^2 + 1) \\ 0 &{}\quad - t^2 (t^2 - t - 1) &{}\quad - t (t - 1) (t^3 + t^2 - t + 1) &{}\quad t (t^5 - t^3 - t^2 - 1) \\ 0 &{}\quad 0 &{}\quad (t + 1) t^2 (t^3 + t^2 + t - 1) &{}\quad - t^2 (t^5 - t^4 + t^3 - t^2 - t - 1) \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad (t - 1) t^3 \end{pmatrix} \end{aligned} (showing the entries completely factorised). The Smith–Jacobson normal form of A is \begin{aligned} {{\,\mathrm{diag}\,}}\bigl (1, t, t, t (t-1)\bigr ); \end{aligned} and thus the determinantal divisors are $$d_1^* = 1$$, $$d_2^* = t$$, $$d_3^* = t^2$$, and $$d_4^* = t^3 (t-1)$$. As we can see, $$d_j^*$$ does indeed divide the jth row of U and the jth column of L for $$j=1,2,3,4$$. Moreover, $$d_1^* d_2^* = t$$ divides $$D_{2,2}$$, $$d_2^* d_3^* = t^3$$ divides $$D_{3,3}$$, and $$d_1^* d_2^* = t^5 (t-1)$$ divides $$D_{4,4}$$. ## 4 Efficient Detection of Factors When considering the output of Algorithm 4, we find an interesting relation between the entries of L and U which can be exploited in order to find “systematic” common factors in the $$L D^{-1} U$$ decomposition. Theorem 11 below predicts a divisor of the common factor in the kth row of U, by looking at just three entries of L. 
Likewise, we obtain a divisor of the common factor of the kth column of L from three entries of U. As in the previous section, let $$\mathbb {D}$$ be a principal ideal domain. We remark that for general principal ideal domains the theorem below is more of a theoretical result. Depending on the specific domain $$\mathbb {D}$$, actually computing the greatest common divisors might not be easy (or even possible). The theorem becomes algorithmic, if we restrict $$\mathbb {D}$$ to be (computable) Euclidean domain. For other domains, the statement is still valid; but it is left to the reader to check whether algorithms for computing greatest common divisors exist. ### Theorem 11 Let $$A\in \mathbb {D}^{m\times n}$$ and let $$P_r L D^{-1} U P_c$$ be the $$L D^{-1} U$$ decomposition of A. Then \begin{aligned} \Bigl . \frac{ \gcd (L_{k-1,k-1}, L_{k,k-1}) }{ \gcd (L_{k-1,k-1}, L_{k,k-1}, L_{k-2,k-2}) } \;\Bigm |\; {U}_{k,*} \Bigr . \end{aligned} and \begin{aligned} \Bigl . \frac{ \gcd (U_{k-1,k-1}, U_{k-1,k}) }{ \gcd (U_{k-1,k-1}, U_{k-1,k}, U_{k-2,k-2}) } \;\Bigm |\; {L}_{*,k} \Bigr . \end{aligned} for $$k=2,\ldots ,m-1$$ (where we use $$L_{0,0} = U_{0,0} = 1$$ for $$k = 2$$). ### Proof Suppose that during Bareiss’s algorithm after $$k-1$$ iterations we have reached the following state \begin{aligned} A^{(k-1)} = \begin{pmatrix} T &{}\quad \underline{*} &{}\quad \underline{*} &{}\quad {\varvec{*}}\\ \overline{0} &{}\quad p &{}\quad * &{}\quad \overline{*} \\ \overline{0} &{}\quad 0 &{}\quad a &{}\quad \overline{v} \\ \overline{0} &{}\quad 0 &{}\quad b &{}\quad \overline{w} \\ \mathbf {0} &{}\quad \underline{0} &{}\quad \underline{*} &{}\quad {\varvec{*}}\end{pmatrix}\ , \end{aligned} where T is an upper triangular matrix, $$p,a,b \in \mathbb {D}$$, $$\overline{v}, \overline{w} \in \mathbb {D}^{1\times n-k-1}$$ and the other overlined quantities are row vectors and the underlined quantities are column vectors. Assume that $$a \ne 0$$ and that we choose it as a pivot. Continuing the computations we now eliminate b (and the entries below) by cross-multiplication \begin{aligned} A^{(k-1)} \leadsto \begin{pmatrix} T &{}{}\quad \underline{*} &{}{}\quad \underline{*} &{}{}\quad {\varvec{*}}\\ \overline{0} &{}{}\quad p &{}{}\quad * &{}{}\quad \overline{*} \\ \overline{0} &{}{}\quad 0 &{}{}\quad a &{}{}\quad \overline{v} \\ \overline{0} &{}{}\quad 0 &{}{}\quad 0 &{}{}\quad a\overline{w} - b\overline{v} \\ \mathbf {0} &{}{}\quad \underline{0} &{}{}\quad \underline{0} &{}{}\quad {\varvec{*}}\end{pmatrix}. \end{aligned} Here, we can see that any common factor of a and b will be a factor of every entry in that row, i. e., $$\gcd (a,b) \mid a\overline{w} - b\overline{v}$$. However, we still have to carry out the exact division step. This leads to \begin{aligned} A^{(k-1)} \leadsto \begin{pmatrix} T &{}\quad \underline{*} &{}\quad \underline{*} &{}\quad {\varvec{*}}\\ \overline{0} &{}\quad p &{}\quad * &{}\quad \overline{*} \\ \overline{0} &{}\quad 0 &{}\quad a &{}\quad \overline{v} \\ \overline{0} &{}\quad 0 &{}\quad 0 &{}\quad \frac{1}{p}(a\overline{w} - b\overline{v}) \\ \mathbf {0} &{}\quad \underline{0} &{}\quad \underline{0} &{}\quad {\varvec{*}}\end{pmatrix} = A^{(k)}. \end{aligned} The division by p is exact. Some of the factors in p might be factors of a or b while others are hidden in $$\overline{v}$$ or $$\overline{w}$$. However, every common factor of a and b which is not also a factor of p will still be a common factor of the resulting row. In other words, \begin{aligned} \Bigl . 
\frac{\gcd (a,b)}{\gcd (a,b,p)} \;\Bigm |\; \frac{1}{p}(a\overline{w} - b\overline{v}) \Bigr .. \end{aligned} In fact, the factors do not need to be tracked during the $$L D^{-1} U$$ reduction but can be computed afterwards: All the necessary entries a, b and p of $$A^{(k-1)}$$ will end up as entries of L. More precisely, we shall have $$p = L_{k-2,k-2}$$, $$a = L_{k-1,k-1}$$ and $$b = L_{k,k-1}$$. Similar reasoning can be used to predict common factors in the columns of L. Here, we have to take into account that the columns of L are made up from entries in U during each iteration of the computation. $$\square$$ As a typical example consider the matrix \begin{aligned} A = \begin{pmatrix} 8 &{}\quad 49 &{}\quad 45 &{}\quad -77 &{}\quad 66 \\ -10 &{}\quad -77 &{}\quad -19 &{}\quad -52 &{}\quad 48 \\ 51 &{}\quad 18 &{}\quad -81 &{}\quad 31 &{}\quad 69 \\ -97 &{}\quad -58 &{}\quad 37 &{}\quad 41 &{}\quad 22 \\ -60 &{}\quad 0 &{}\quad -25 &{}\quad -18 &{}\quad -92 \end{pmatrix}. \end{aligned} This matrix has a $$L D^{-1} U$$ decomposition with \begin{aligned} L = \begin{pmatrix} 8 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ -10 &{}\quad -126 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 51 &{}\quad -2355 &{}\quad 134076 &{}\quad 0 &{}\quad 0 \\ -97 &{}\quad 4289 &{}\quad -233176 &{}\quad -28490930 &{}\quad 0 \\ -60 &{}\quad 2940 &{}\quad -148890 &{}\quad -53377713 &{}\quad 11988124645 \end{pmatrix} \end{aligned} and with \begin{aligned} U = \begin{pmatrix} 8 &{}\quad 49 &{}\quad 45 &{}\quad -77 &{}\quad 66 \\ 0 &{}\quad -126 &{}\quad 298 &{}\quad -1186 &{}\quad 1044 \\ 0 &{}\quad 0 &{}\quad 134076 &{}\quad -414885 &{}\quad 351648 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad -28490930 &{}\quad 55072620 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 11988124645 \end{pmatrix}. \end{aligned} Note that in this example pivoting is not needed, i.e., we have $$P_r = P_c = {\mathbf {1}}$$. The method outlined in Theorem 11 correctly predicts the common factor 2 in the second row of U, the factor 3 in the third row and the factor 2 in the fourth row. However, it does not detect the additional factor 5 in the fourth row of U. The example also provides an illustration of the proof of Theorem 8: The entry $$-414885$$ of U at position (3, 4) is given by the determinant of the submatrix \begin{aligned} \begin{pmatrix} 8 &{}\quad 49 &{}\quad -77 \\ -10 &{}\quad -77 &{}\quad -52 \\ 51 &{}\quad 18 &{}\quad 31 \\ \end{pmatrix} \end{aligned} consisting of the first three rows and columns 1, 2 and 4 of A. In this particular example, however, the Smith–Jacobson normal form of the matrix A is $${{\,\mathrm{diag}\,}}(1,1,1,1,11988124645)$$ which does not yield any information about the common factors. Given Theorem 11, one can ask how good this prediction actually is. Concentrating on the case of integer matrices, the following Theorem 12 shows that with this prediction we do find a common factor in roughly a quarter of all rows. Experimental data suggest a similar behavior for matrices containing polynomials in $$\mathbb {F}_p[x]$$ where p is prime. Moreover, these experiments also showed that the prediction was able to account for $$40.17\%$$ of all the common prime factors (counted with multiplicity) in the rows of U.Footnote 4 ### Theorem 12 For random integers $$a,b,p \in \mathbb {Z}$$ the probability that the formula in Theorem 11 predicts a non-trivial common factor is \begin{aligned} \mathrm {P}\Bigl (\frac{\gcd (a,b)}{\gcd (p,a,b)} \ne 1\Bigr ) = 6 \frac{\zeta (3)}{\pi ^2} \approx 26.92\%. 
\end{aligned} ### Proof The following calculation is due to Hare [13] and Winterhof [25]: First note that the probability that $$\gcd (a,b) = n$$ is $$1/n^2$$ times the probability that $$\gcd (a,b) = 1$$. Summing up all of these probabilities gives \begin{aligned} \sum _{n=1}^\infty \mathrm {P}\bigl (\gcd (a,b) = n\bigr ) = \sum _{n=1}^\infty \frac{1}{n^2} \mathrm {P}\bigl (\gcd (a,b) = 1\bigr ) = \mathrm {P}\bigl (\gcd (a,b) = 1\bigr ) \frac{\pi ^2}{6}. \end{aligned} As this sum must be 1, this gives that the $$\mathrm {P}\bigl (\gcd (a,b) = 1\bigr ) = 6/\pi ^2$$, and the $$\mathrm {P}\bigl (\gcd (a,b) = n\bigr ) = 6/(\pi ^2 n^2)$$. Given that $$\gcd (a,b) = n$$, the probability that $$n \mid c$$ is 1/n. So the probability that $$\gcd (a,b) = n$$ and that $$\gcd (p,a,b) = n$$ is $$6/(\pi ^2 n^3)$$. So $$\mathrm {P}\bigl (\gcd (a,b)/\gcd (p,a,b) = 1\bigr )$$ is \begin{aligned} \sum _{n=1}^\infty \mathrm {P}\bigl (\gcd (a,b) = n \text { and } \gcd (p,a,b) = n\bigr ) = \sum _{n=1}^\infty \frac{6}{\pi ^2 n^3} = 6 \frac{\zeta (3)}{\pi ^2}. \end{aligned} $$\square$$ There is another way in which common factors in integer matrices can arise. Let d be any number. Then for random ab the probability that $$d \mid a+b$$ is 1/d. That means that if $$v,w \in \mathbb {Z}^{1\times n}$$ are vectors, then $$d \mid v + w$$ with a probability of $$1/d^n$$. This effect is noticeable in particular for small numbers like $$d = 2,3$$ and in the last iterations of the $$L D^{-1} U$$ decomposition when the number of non-zero entries in the rows has shrunk. For instance, in the second last iterations we only have three rows with at most three non-zero entries each. Moreover, we know that the first non-zero entries of the rows cancel during cross-multiplication. Thus, a factor of 2 appears with a probability of $$25\%$$ in one of those rows, a factor of 3 with a probability of $$11.11\%$$. In the example above, the probability for the factor 5 to appear in the fourth row was $$4\%$$. ## 5 Expected Number of Factors In this section, we provide a detailed analysis of the expected number of common “statistical” factors in the rows of U, in the case when the input matrix A has integer entries, that is, $$\mathbb {D}=\mathbb {Z}$$. We base our considerations on a “uniform” distribution on $$\mathbb {Z}$$, e.g., by imposing a uniform distribution on $$\{-n,\dots ,n\}$$ for very large n. However, the only relevant property that we use is the assumption that the probability that a randomly chosen integer is divisible by p is 1/p. We consider a matrix $$A=(A_{i,j})_{1\le i,j\le n}\in \mathbb {Z}^{n\times n}$$ of full rank. The assumption that A be square is made for the sake of simplicity; the results shown below immediately generalize to rectangular matrices. As before, let U be the upper triangular matrix from the $$LD^{-1}U$$ decomposition of A: \begin{aligned} U = \begin{pmatrix} U_{1,1} &{}\quad U_{1,2} &{}\quad \dots &{}\quad U_{1,n} \\ 0 &{}\quad U_{2,2} &{}\quad \dots &{}\quad U_{2,n} \\ \vdots &{}\quad &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad \dots &{}\quad &{}\quad U_{n,n} \end{pmatrix}. \end{aligned} Define \begin{aligned} g_k := \gcd (U_{k,k},U_{k,k+1},\dots ,U_{k,n}) \end{aligned} to be the greatest common divisor of all entries in the kth row of U. Counting (with multiplicities) all the prime factors of $$g_1,\dots ,g_{n-1}$$, one gets the plot that is shown in Fig. 1; $$g_n$$ is omitted as it contains only the single nonzero entry $$U_{n,n}=\det (A)$$. 
Our goal is to give a probabilistic explanation for the occurrence of these common factors, whose number seems to grow linearly with the dimension of the matrix. As we have seen in the proof of Theorem 8, the entries $$U_{k,\ell }$$ can be expressed as minors of the original matrix A: \begin{aligned} U_{k,\ell } = \det \begin{pmatrix} A_{1,1} &{}\quad A_{1,2} &{}\quad \dots &{}\quad A_{1,k-1} &{}\quad A_{1,\ell } \\ A_{2,1} &{}\quad A_{2,2} &{}\quad \dots &{}\quad A_{2,k-1} &{}\quad A_{2,\ell } \\ \vdots &{}\quad \vdots &{}\quad &{}\quad \vdots &{}\quad \vdots \\ A_{k,1} &{}\quad A_{k,2} &{}\quad \dots &{}\quad A_{k,k-1} &{}\quad A_{k,\ell } \end{pmatrix}. \end{aligned} Observe that the entries $$U_{k,\ell }$$ in the kth row of U are all given as determinants of the same matrix, where only the last column varies. For any integer $$q\ge 2$$ we have that $$q\mid g_k$$ if q divides all these determinants. A sufficient condition for the latter to happen is that the determinant \begin{aligned} h_k := \det \begin{pmatrix} A_{1,1} &{}\quad \dots &{}\quad A_{1,k-1} &{}\quad 1 \\ A_{2,1} &{}\quad \dots &{}\quad A_{2,k-1} &{}\quad x \\ \vdots &{}\quad \vdots &{}\quad &{}\quad \vdots \\ A_{k,1} &{}\quad \dots &{}\quad A_{k,k-1} &{}\quad x^{k-1} \end{pmatrix} \end{aligned} is divisible by q as a polynomial in $$\mathbb {Z}[x]$$, i.e., if q divides the content of the polynomial $$h_k$$. We now aim at computing how likely it is that $$q\mid h_k$$ when q is fixed and when the matrix entries $$A_{1,1},\dots ,A_{k,k-1}$$ are chosen randomly. Since q is now fixed, we can equivalently study this problem over the finite ring $$\mathbb {Z}_q$$, which means that the matrix entries are picked randomly and uniformly from the finite set $$\{0,\dots ,q-1\}$$. Moreover, it turns out that it suffices to answer this question for prime powers $$q=p^j$$. The probability that all $$k\times k$$-minors of a randomly chosen $$k\times (k+1)$$-matrix are divisible by $$p^j$$, where p is a prime number and $$j\ge 1$$ is an integer, is given by \begin{aligned} P_{p,j,k} := 1-\Bigl (1+p^{1-j-k}\,\frac{p^k-1}{p-1}\Bigr )\prod _{i=0}^{k-1}\bigl (1-p^{-j-i}\bigr ), \end{aligned} which is a special case of Brent and McKay [3, Thm. 2.1]. Note that this is exactly the probability that $$h_{k+1}$$ is divisible by $$p^j$$. Recalling the definition of the q-Pochhammer symbol \begin{aligned} (a;q)_k := \prod _{i=0}^{k-1} (1-aq^i),\quad (a;q)_0 := 1, \end{aligned} the above formula can be written more succinctly as \begin{aligned} P_{p,j,k} := 1-\Bigl (1+p^{1-j-k}\,\frac{p^k-1}{p-1}\Bigr )\Bigl (\frac{1}{p^j};\frac{1}{p}\Bigr )_k. \end{aligned} Now, an interesting observation is that this probability does not, as one could expect, tend to zero as k goes to infinity. Instead, it approaches a nonzero constant that depends on p and j (see Table 1): \begin{aligned} P_{p,j,\infty } := \lim _{k\rightarrow \infty } P_{p,j,k} = 1-\Bigl (1+\frac{p^{1-j}}{p-1}\Bigr )\Bigl (\frac{1}{p^j};\frac{1}{p}\Bigr )_\infty \end{aligned} Using the probability $$P_{p,j,k}$$, one can write down the expected number of factors in the determinant $$h_{k+1}$$, i.e., the number of prime factors in the content of the polynomial $$h_{k+1}$$, counted with multiplicities: \begin{aligned} \sum _{p\in \mathbb {P} }\sum _{j=1}^\infty P_{p,j,k}, \end{aligned} where $$\mathbb {P} =\{2,3,5,\dots \}$$ denotes the set of prime numbers. 
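The following small sketch (ours, for illustration only) evaluates the closed form for $$P_{p,j,k}$$ given above and shows numerically that it indeed approaches the nonzero limit $$P_{p,j,\infty }$$ rather than tending to zero; the case $$p=2$$, $$j=1$$ is shown.

```python
# Evaluate P_{p,j,k} = 1 - (1 + p^{1-j-k}(p^k - 1)/(p - 1)) * prod_{i=0}^{k-1} (1 - p^{-j-i})
# with exact rational arithmetic and watch it converge as k grows.
from fractions import Fraction

def P(p, j, k):
    pre = 1 + Fraction(p**k - 1, p - 1) * Fraction(1, p**(j + k - 1))
    prod = Fraction(1)
    for i in range(k):
        prod *= 1 - Fraction(1, p**(j + i))
    return 1 - pre * prod

for k in (1, 2, 3, 5, 10, 20):
    print(k, float(P(2, 1, k)))
# the values approach roughly 0.4224, the limit P_{2,1,infinity}, instead of tending to zero
```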
The inner sum can be simplified as follows, yielding the expected multiplicity $$M_{p,k}$$ of a prime factor p in $$h_{k+1}$$: \begin{aligned} M_{p,k} := \sum _{j=1}^\infty P_{p,j,k}= & {} \sum _{j=1}^\infty \biggl (1-\Bigl (1+p^{1-j-k}\,\frac{p^k-1}{p-1}\Bigr )\Bigl (\frac{1}{p^j};\frac{1}{p}\Bigr )_{\!k}\biggr ) \\= & {} -\sum _{j=1}^\infty \biggl (\Bigl (\frac{1}{p^j};\frac{1}{p}\Bigr )_{\!k}-1\biggr )-p^{1-k}\frac{p^k-1}{p-1}\sum _{j=1}^\infty \frac{1}{p^j}\Bigl (\frac{1}{p^j};\frac{1}{p}\Bigr )_{\!k} \\= & {} -\sum _{j=1}^\infty \sum _{i=1}^k (-1)^i p^{-ij-i(i-1)/2} \left[ \begin{array}{l} {k} \\ {i} \end{array} \right] _{1/p}\! -p^{1-k}\,\frac{p^k-1}{p-1} \frac{p^k}{p^{k+1}-1} \\= & {} \sum _{i=1}^k \frac{(-1)^{i-1}}{p^{i(i-1)/2}(p^i-1)} \left[ \begin{array}{l} {k} \\ {i} \end{array} \right] _{1/p} \! + \frac{1}{p^{k+1}-1} - \frac{1}{p-1} \end{aligned} In this derivation we have used the expansion formula of the q-Pochhammer symbol in terms of the q-binomial coefficient \begin{aligned} \left[ \begin{array}{l} {n}\\ {k} \end{array} \right] _{q} := \frac{\bigl (1-q^n\bigr )\bigl (1-q^{n-1}\bigr )\cdots \bigl (1-q^{n-k+1}\bigr )}{\bigl (1-q^k\bigr )\bigl (1-q^{k-1}\bigr )\cdots \bigl (1-q\bigr )}, \end{aligned} evaluated at $$q=1/p$$. Moreover, the identity that is used in the third step, \begin{aligned} \sum _{j=1}^\infty \frac{1}{p^j}\Bigl (\frac{1}{p^j};\frac{1}{p}\Bigr )_{\!k} = \frac{p^k}{p^{k+1}-1}, \end{aligned} is certified by rewriting the summand as \begin{aligned} \frac{1}{p^j}\Bigl (\frac{1}{p^j};\frac{1}{p}\Bigr )_{\!k} = t_{j+1} - t_j \quad \text {with}\quad t_j = \frac{p^k(p^{1-j}-1)}{p^{k+1}-1}\Bigl (\frac{1}{p^j};\frac{1}{p}\Bigr )_{\!k} \end{aligned} and by applying a telescoping argument. Hence, when we let k go to infinity, we obtain \begin{aligned} M_{p,\infty } = \lim _{k\rightarrow \infty } \sum _{j=1}^\infty P_{p,j,k} = \sum _{i=1}^\infty \frac{(-1)^{i-1}}{p^{i(i-1)/2}(p^i-1)} \frac{\bigl (p^{-i-1};p^{-1}\bigr )_\infty }{\bigl (p^{-1};p^{-1}\bigr )_\infty } - \frac{1}{p-1}. \end{aligned} Note that the sum converges quickly, so that one can use the above formula to compute an approximation for the expected number of factors in $$h_{k+1}$$ when k tends to infinity \begin{aligned} \sum _{p\in \mathbb {P} } M_{p,\infty } \approx 0.89764, \end{aligned} which gives the asymptotic slope of the function plotted in Figure 1. As discussed before, the divisibility of $$h_k$$ by some number $$q\ge 2$$ implies that the greatest common divisor $$g_k$$ of the kth row is divisible by q, but this is not a necessary condition. It may happen that $$h_k$$ is not divisible by q, but nevertheless q divides each $$U_{k,\ell }$$ for $$k\le \ell \le n$$. The probability for this to happen is the same as the probability that the greatest common divisor of $$n-k+1$$ randomly chosen integers is divisible by q. The latter obviously is $$q^{-(n-k+1)}$$. Thus, in addition to the factors coming from $$h_k$$, one can expect \begin{aligned} \sum _{p\in \mathbb {P} }\sum _{j=1}^\infty \frac{1}{p^{j(n-k+1)}} = \sum _{p\in \mathbb {P}}\frac{1}{p^{n-k+1}-1} \end{aligned} many prime factors in $$g_k$$. 
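Truncating the sums over p and j and taking a large fixed k gives a quick numerical check of the asymptotic slope quoted above; the cut-offs below are arbitrary but already reproduce the constant to about three digits. The helper re-evaluates the closed form for $$P_{p,j,k}$$ in floating point and is, again, only an illustrative sketch of ours.

```python
# Approximate sum_{p prime} M_{p,infinity} by truncation: primes below 2000,
# j up to 40, and a fixed large k as a stand-in for the limit k -> infinity.
from sympy import primerange

def P(p, j, k):
    pre = 1.0 + (p**(1 - j) - p**(1 - j - k)) / (p - 1)   # = 1 + p^{1-j-k}(p^k - 1)/(p - 1)
    prod = 1.0
    for i in range(k):
        prod *= 1.0 - p**(-(j + i))
    return 1.0 - pre * prod

slope = sum(P(p, j, 60) for p in primerange(2, 2000) for j in range(1, 41))
print(slope)   # approximately 0.897, in line with the constant 0.89764 quoted above
```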
Summarizing, the expected number of prime factors in the rows of the matrix U is \begin{aligned} F(n)= & {} {} \sum _{k=2}^{n-1} \sum _{p\in \mathbb {P} } M_{p,k-1} + \sum _{k=1}^{n-1} \sum _{p\in \mathbb {P} } \frac{1}{p^{n-k+1}-1} \\ {}= & {} {} \sum _{p\in \mathbb {P} } \biggl (\sum _{k=0}^{n-2}M_{p,k} + \sum _{k=0}^{n-2} \frac{1}{p^{k+2}-1} \biggr ) \\ {}= & {} {} \sum _{p\in \mathbb {P} } \sum _{k=0}^{n-2} \biggl (\sum _{i=1}^k \frac{(-1)^{i-1}}{p^{i(i-1)/2}(p^i-1)} \left[ \begin{array}{l} {k} \\ {i} \end{array} \right] _{1/p}\!+ \frac{1}{p^{k+2}-1} + \frac{1}{p^{k+1}-1} - \frac{1}{p-1}\biggr ). \end{aligned} From the discussion above, it follows that for large n this expected number can be approximated by a linear function as follows: \begin{aligned} F(n) \approx 0.89764\,n - 1.53206. \end{aligned} ## 6 QR Decomposition The QR decomposition of a matrix A is defined by $$A=QR$$, where Q is an orthonormal matrix and R is an upper triangular matrix. In its standard form, this decomposition requires algebraic extensions to the domain of A, but a fraction-free form is possible. The modified form given in [26] is $$QD^{-1}R$$, and is proved below in Theorem 15. In [10], an exact-division algorithm for a fraction-free Gram-Schmidt orthogonal basis for the columns of a matrix A was given, but a complete fraction-free decomposition was not considered. We now show that the algorithms in [10] and in [26] both lead to a systematic common factor in their results. We begin by considering a fraction-free form of the Cholesky decomposition of a symmetric matrix. See [23, Eqn (3.70)] for a description of the standard form, which requires algebraic extensions to allow for square roots, but which are avoided here. This section assumes that $$\mathbb {D}$$ has characteristic 0; this assumption is needed in order to ensure that $$A^t A$$ has full rank. ### Lemma 13 Let $$A \in \mathbb {D}^{n\times n}$$ be a symmetric matrix such that its $$L D^{-1} U$$ decomposition can be computed without permutations; then we have $$U = L^t$$, that is, \begin{aligned} A = L D^{-1} L^t. \end{aligned} ### Proof Compute the decomposition $$A = L D^{-1} U$$ as in Theorem 1. If we do not execute item 4 of Algorithm 4, we obtain the decomposition \begin{aligned} A = \tilde{L} \tilde{D}^{-1} \tilde{U} = \begin{pmatrix} \mathcal {L} &{}\quad {\mathbf {0}} \\ \mathcal {M} &{}\quad {\mathbf {1}} \end{pmatrix} \begin{pmatrix} D &{}\quad {\mathbf {0}} \\ {\mathbf {0}} &{}\quad {\mathbf {1}} \end{pmatrix}^{-1} \begin{pmatrix} \mathcal {U} &{}\quad \mathcal {V} \\ {\mathbf {0}} &{}\quad {\mathbf {0}} \end{pmatrix}. \end{aligned} Then because A is symmetric, we obtain \begin{aligned} \tilde{L} \tilde{D}^{-1} \tilde{U} = A = A^t = \tilde{U}^t \tilde{D}^{-1} \tilde{L}^t \end{aligned} The matrices $$\tilde{L}$$ and $$\tilde{D}$$ have full rank which implies \begin{aligned} \tilde{U} (\tilde{L}^t)^{-1} \tilde{D} = \tilde{D} \tilde{L}^{-1} \tilde{U}^t. \end{aligned} Examination of the matrices on the left hand side reveals that they are all upper triangular. Therefore also their product is an upper triangular matrix. Similarly, the right hand side is a lower triangular matrix and the equality of the two implies that they must both be diagonal. Cancelling $$\tilde{D}$$ and rearranging the equation yields $$\tilde{U} = (\tilde{L}^{-1} \tilde{U}^t) \tilde{L}^t$$ where $$\tilde{L}^{-1} \tilde{U}^t$$ is diagonal. This shows that the rows of $$\tilde{U}$$ are just multiples of the rows of $$\tilde{L}^t$$. 
However, we know that the first r diagonal entries of $$\tilde{U}$$ and $$\tilde{L}$$ are the same, where r is the rank of $$\tilde{U}$$. This yields \begin{aligned} \tilde{L}^{-1} \tilde{U}^t = \begin{pmatrix} {\mathbf {1}}_{r} &{}\quad {\mathbf {0}} \\ {\mathbf {0}} &{}\quad {\mathbf {0}} \end{pmatrix}, \end{aligned} and hence, when we remove the unnecessary last $$n-r$$ rows of $$\tilde{U}$$ and the last $$n-r$$ columns of $$\tilde{L}$$ (as suggested in Jeffrey [15]), we remain with $$U = L^t$$. $$\square$$ As another preliminary to the main theorem, we need to delve briefly into matrices over ordered rings. Following, for example, the definition in [6, Sect. 8.6] an ordered ring is a (commutative) ring $$\mathbb {D}$$ with a strict total order > such that $$x > x'$$ together with $$y > y'$$ implies $$x + y > x' + y'$$ and also $$x > 0$$ together with $$y > 0$$ implies $$x y > 0$$ for all $$x, x', y, y' \in \mathbb {D}$$. As Cohn [6, Prop. 8.6.1] shows, such a ring must always be a domain, and squares of non-zero elements are always positive. Thus, the inner product of two vectors $$a, b \in \mathbb {D}^{m}$$ defined by $$(a,b) \mapsto a^t \,b$$ must be positive definite. This implies that given a matrix $$A \in \mathbb {D}^{m\times n}$$ the Gram matrix $$A^t A$$ is positive semi-definite. If we additionally require the columns of A to be linearly independent, then $$A^t A$$ becomes positive definite. ### Lemma 14 Let $$\mathbb {D}$$ be an ordered domain and let $$A \in \mathbb {D}^{n\times n}$$ be a symmetric and positive definite matrix. Then the $$L D^{-1} U$$ decomposition of A can be computed without using permutations. ### Proof By Sylvester’s criterion (see Theorem 22 in the “Appendix”) a symmetric matrix is positive definite if and only if its leading principal minors are positive. However, by Remark 2 and Equation 2.1, these are precisely the pivots that are used during Bareiss’s algorithm. Hence, permutations are not necessary. $$\square$$ If we consider domains which are not ordered, then the $$L D^{-1} U$$ decomposition of $$A^t A$$ will usually require permutations: Consider, for example, the Gaussian integers $$\mathbb {D}= \mathbb {Z}[i]$$ and the matrix \begin{aligned} A = \begin{pmatrix} 1 &{}\quad i \\ i &{}\quad 0 \end{pmatrix}. \end{aligned} Then \begin{aligned} A^t A = \begin{pmatrix} 0 &{}\quad i \\ i &{}\quad -1 \end{pmatrix}; \end{aligned} and Bareiss’s algorithm must begin with a row or column permutationFootnote 5. We are now ready to discuss the fraction-free QR decomposition. The theorem below makes two major changes to Zhou and Jeffrey [26, Thm. 8]: first, we add that $$\Theta ^t \Theta$$ is not just any diagonal matrix but actually equal to D. Secondly, the original theorem did not require the domain $$\mathbb {D}$$ to be ordered, which means that the proof cannot work. ### Theorem 15 Let $$A \in \mathbb {D}^{m\times n}$$ with $$n\le m$$ and with full column rank where $$\mathbb {D}$$ is an ordered domain. Then the partitioned matrix $$(A^t A \mid A^t)$$ has $$LD^{-1}U$$ decomposition \begin{aligned} (A^t A \mid A^t) = R^t D^{-1} (R \mid \Theta ^t), \end{aligned} where $$\Theta ^t \Theta = D$$ and $$A = \Theta D^{-1} R$$. ### Proof By Lemma 14, we can compute an $$L D^{-1} U$$ decomposition of $$A^t A$$ without using permutations; and by Theorem 13, the decomposition must have the shape \begin{aligned} A^t A = R^t D^{-1} R. 
\end{aligned} Applying the same row transformations to $$A^t$$ yields a matrix $$\Theta ^t$$, that is, we obtain $$(A^t A \mid A^t) = R^t D^{-1} (R \mid \Theta ^t).$$ As in the proof of Zhou and Jeffrey [26, Thm. 8], we easily compute that $$A = \Theta D^{-1} R$$ and that $$\Theta ^t \Theta = D^t (R^{-1})^t A^t A R^{-1} D = D^t (R^{-1})^t R^t D^{-1} R R^{-1} D = D.$$ $$\square$$ For example, let $$A\in \mathbb {Z}[x]^{3\times 3}$$ be the matrix \begin{aligned} A=\begin{pmatrix} x &{}\quad 1 &{}\quad 2 \\ 2 &{}\quad 0 &{}\quad -x \\ x &{}\quad 1 &{}\quad x + 1 \end{pmatrix}. \end{aligned} Then the $$LD^{-1}U$$ decomposition of $$A^tA=R^tD^{-1}R$$ is given by \begin{aligned} R= & {} \begin{pmatrix} 2 (x^2+2) &{}\quad 2 x &{}\quad x (x+1) \\ 0 &{}\quad 8 &{}\quad 4 (x^2+x+3) \\ 0 &{}\quad 0 &{}\quad 4 (x-1)^2 \end{pmatrix},\\ D= & {} \begin{pmatrix} 2 (x^2+2) &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 16 (x^2+2) &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 32 (x-1)^2 \end{pmatrix}, \end{aligned} and we obtain for the QR decomposition $$A = \Theta D^{-1} R$$: \begin{aligned} \Theta = \begin{pmatrix} x &{}\quad 4 &{}\quad -4 (x-1) \\ 2 &{}\quad -4 x &{}\quad 0 \\ x &{}\quad 4 &{}\quad 4 (x-1) \end{pmatrix}. \end{aligned} We see that the $$\Theta D^{-1} R$$ decomposition has some common factor in the last column of $$\Theta$$. This observation is explained by the following theorem. ### Theorem 16 With full-rank $$A \in \mathbb {D}^{n\times n}$$ and $$\Theta$$ as in Theorem 15, we have for all $$i=1,\ldots ,n$$ that \begin{aligned} \Theta _{in} = (-1)^{n+i} \det \limits _{i,n} A \cdot \det A \end{aligned} where $$\det _{i,n}A$$ is the (in) minor of A. ### Proof We use the notation from the proof of Theorem 15. From $$\Theta D^{-1} R = A$$ and $$\Theta ^t \Theta =D$$ we obtain \begin{aligned} \Theta ^t A = \Theta ^t \Theta D^{-1} R = R. \end{aligned} Thus, since A has full rank, $$\Theta ^t = R A^{-1}$$ or, equivalently, \begin{aligned} \Theta = (R A^{-1})^t = (A^{-1})^t R^t = (\det A)^{-1} ({\text {adj}} A)^t R^t \end{aligned} where $${\text {adj}} A$$ is the adjoint matrix of A. Since $$R^t$$ is a lower triangular matrix with $$\det A^t A = (\det A)^2$$ at position (nn), the claim follows. $$\square$$ For the other columns of $$\Theta$$ we can state the following. ### Theorem 17 The kth determinantal divisor $$d_k^*$$ of A divides the kth column of $$\Theta$$ and the kth row of R. Moreover, $$d_{k-1}^* d_k^*$$ divides $$D_{k,k}$$ for $$k \ge 2$$. ### Proof We first show that the kth determinantal divisor $$\delta _k^*$$ of $$(A^t A \mid A^t)$$ is the same as $$d_k^*$$. Obviously, $$\delta _k^* \mid d_k^*$$ since all minors of A are also minors of the right block $$A^t$$ of $$(A^t A \mid A^t)$$. Consider now the left block $$A^t A$$. We have by the Cauchy–Binet theorem [4, § 4.6] \begin{aligned} \det \limits _{I,J} (A^t A) = \sum _{\genfrac{}{}{0.0pt}{}{K \subseteq \{1,\ldots ,n\}}{|K| = q}} (\det \limits _{K,I} A) (\det \limits _{K,J} A) \end{aligned} where $$I, J \subseteq \{1,\ldots ,n\}$$ with $$|I| = |J| = q \ge 1$$ are two index sets and $$\det _{I,J} M$$ denotes the minor for these index sets of a matrix M. Thus, $$(d_k^*)^2$$ divides any minor of $$A^t A$$ since it divides every summand on the right hand side; and we see that $$d_k^* \mid \delta _k^*$$. Now, we use Theorems 15 and 8 to conclude that $$d_k^*$$ divides the kth row of $$(R \mid \Theta ^t)$$ and hence the kth row of R and the kth column of $$\Theta$$. 
Moreover, $$D_{k,k} = R_{k-1,k-1} R_{k,k}$$ for $$k \ge 2$$ by Theorem 1 which implies $$d_{k-1}^* d_k^* \mid D_{k,k}$$. $$\square$$ Knowing that there is always a common factor, we can cancel it, which leads to a fraction-free QR decomposition of smaller size. ### Theorem 18 For a square matrix A, a reduced fraction-free QR decomposition is $$A=\hat{\Theta }\hat{D}^{-1}\hat{R}$$, where $$S={\text {diag}}(1,1,\ldots ,\det A)$$ and $$\hat{\Theta }= \Theta S^{-1}$$, and $$\hat{R}=S^{-1}R$$. In addition, $$\hat{D}=S^{-1}DS^{-1}=\hat{\Theta }^t \hat{\Theta }$$. ### Proof By Theorem 16, $$\Theta S^{-1}$$ is an exact division. The statement of the theorem then follows from $$A=\Theta S^{-1} S D^{-1} S S^{-1} R$$. $$\square$$ If we apply Theorem 18 to our previous example, we obtain the simpler QR decomposition, where the factor $$\det A=-2(x-1)$$ has been removed. \begin{aligned} \begin{pmatrix} x &{}\quad 4 &{}\quad 2 \\ 2 &{}\quad -4 x &{} 0 \\ x &{}\quad 4 &{}\quad -2 \end{pmatrix}\; \begin{pmatrix} 2 (x^2+2) &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 16 (x^2+2) &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 8 \end{pmatrix}^{\!-1} \begin{pmatrix} 2 (x^2+2) &{}\quad 2 x &{}\quad x (x+1) \\ 0 &{}\quad 8 &{}\quad 4 (x^2+x+3) \\ 0 &{}\quad 0 &{}\quad -2 (x-1) \end{pmatrix}. \end{aligned} The properties of the QR-decomposition are strong enough to guarantee a certain uniqueness of the output. ### Theorem 19 Let $$A \in \mathbb {D}^{n\times n}$$ have full rank. Let $$A = \Theta D^{-1} R$$ the decomposition from Theorem 15; and let $$A = \tilde{\Theta } \tilde{D}^{-1} \tilde{R}$$ be another decomposition where $$\tilde{\Theta }, \tilde{D}, \tilde{R} \in \mathbb {D}^{n\times n}$$ are such that $$\tilde{D}$$ is a diagonal matrix, $$\tilde{R}$$ is an upper triangular matrix and $$\Delta = \tilde{\Theta }^t \tilde{\Theta }$$ is a diagonal matrix. Then $$\Theta ^t \tilde{\Theta }$$ is also a diagonal matrix and $$\tilde{R} = (\Theta ^t \tilde{\Theta })^{-1} \tilde{D} R$$. ### Proof We have \begin{aligned} \tilde{\Theta } \tilde{D}^{-1} \tilde{R} = \Theta D^{-1} R \qquad \text {and thus}\qquad \Theta ^t \tilde{\Theta } \tilde{D}^{-1} \tilde{R} = \Theta ^t \Theta D^{-1} R = R. \end{aligned} Since R and $$\tilde{R}$$ have full rank, this is equivalent to \begin{aligned} \Theta ^t \tilde{\Theta } = R \tilde{R}^{-1} \tilde{D}. \end{aligned} Note that all the matrices on the right hand side are upper triangular. Similarly, we can compute that \begin{aligned} \tilde{\Theta }^t \Theta D^{-1} R = \tilde{\Theta }^t \tilde{\Theta } \tilde{D}^{-1} \tilde{R} = \Delta \tilde{D}^{-1} \tilde{R} \end{aligned} which implies $$\tilde{\Theta }^t \Theta = \Delta \tilde{D}^{-1} \tilde{R} R^{-1} D.$$ Hence, also $$\tilde{\Theta }^t \Theta = (\Theta ^t \tilde{\Theta })^t$$ is upper triangular and consequently $$\tilde{\Theta }^t \Theta = T$$ for some diagonal matrix T with entries from $$\mathbb {D}$$. We obtain $$R = T \tilde{D}^{-1} \tilde{R}$$ and thus $$\tilde{R} = T^{-1} \tilde{D} R$$. $$\square$$
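To illustrate Theorem 15, the following sketch computes a fraction-free QR decomposition of a small integer matrix by running Bareiss elimination on the augmented matrix $$(A^t A \mid A^t)$$; the code and the test matrix are ours, and no pivoting is assumed to be necessary (which Lemma 14 guarantees here, since $$A^t A$$ is positive definite over $$\mathbb {Z}$$).

```python
# Fraction-free QR via Theorem 15: eliminate (A^T A | A^T) with Bareiss steps;
# the left block becomes R and the transposed right block becomes Theta.
import sympy as sp

def fraction_free_qr(A):
    n = A.shape[1]
    W = (A.T * A).row_join(A.T)          # the augmented matrix (A^T A | A^T)
    prev = sp.Integer(1)
    for k in range(n):
        piv = W[k, k]
        for i in range(k + 1, n):
            W[i, :] = (piv * W[i, :] - W[i, k] * W[k, :]) / prev   # exact division
        prev = piv
    R = W[:, :n]
    Theta = W[:, n:].T
    D = sp.diag(*[R[k, k] * (R[k - 1, k - 1] if k > 0 else 1) for k in range(n)])
    return Theta, D, R

A = sp.Matrix([[1, 2, 0], [3, 1, 1], [0, 1, 4]])
Theta, D, R = fraction_free_qr(A)
print(A == Theta * D.inv() * R)            # True:  A = Theta D^{-1} R
print(Theta.T * Theta == D)                # True:  Theta^t Theta = D
```

The last column of Theta produced this way still contains the systematic factor det A described in Theorem 16; dividing it out as in Theorem 18 gives the reduced decomposition.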
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9988964200019836, "perplexity": 1140.9275038695494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00739.warc.gz"}
https://www.vedantu.com/question-answer/if-a-is-nonsingular-matrix-of-order-3-and-adja-class-12-maths-cbse-5ee9f310f9a05a3f5d55348b
Question # If A is a non-singular matrix of order 3 and $|\text{adj}\,A| = |A|^{K}$, then write the value of K. Hint: To solve this problem one must know the formula $|\text{adj}\,A| = |A|^{n-1}$, where n is the order of the matrix. Applying it immediately gives the value of K. The given equation is $|\text{adj}\,A| = |A|^{K}$, where A is a non-singular matrix of order 3. For a non-singular matrix of order n we know that $|\text{adj}\,A| = |A|^{n-1}$. Here the order is 3, so n = 3. Then $|\text{adj}\,A| = |A|^{K} = |A|^{n-1} = |A|^{3-1} = |A|^{2}$, so $|A|^{K} = |A|^{2}$. Hence, the value of K is 2. Note: Whenever you face such problems you need to know this key formula of matrices and determinants: $|\text{adj}\,A| = |A|^{n-1}$, where n is the order of the matrix.
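As a quick numerical sanity check of the identity used here, one can verify $|\text{adj}\,A| = |A|^{n-1}$ for an arbitrary non-singular 3×3 matrix. The sketch below uses SymPy and a made-up example matrix; both are assumptions for illustration only.

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 3, 1],
               [1, 0, 4]])        # any non-singular matrix of order 3

n = A.shape[0]                    # n = 3
lhs = A.adjugate().det()          # |adj A|
rhs = A.det() ** (n - 1)          # |A|^(n-1) = |A|^2, i.e. K = 2
print(lhs, rhs)                   # both print 625 for this example
assert lhs == rhs
```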
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9740194082260132, "perplexity": 93.21629694616679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588284.71/warc/CC-MAIN-20211028100619-20211028130619-00446.warc.gz"}
http://www.ipm.ac.ir/ViewPaperInfo.jsp?PTID=7328&school=Physic
## School of Physics Paper IPM / Physic / 7328 Title: A Dirac Particle in a Complex Potential Author(s): K. Saaidi Status: Preprint Journal: Year: 2003 Supported by: IPM Abstract: It has been observed that a quantum mechanical theory need not be Hermitian to have a real spectrum. In this paper we obtain the eigenvalues of a charged Dirac particle in a complex static and spherically symmetric potential. Furthermore, we study the complex Morse and complex Coulomb potentials.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.846024215221405, "perplexity": 1599.9525803245365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369054.89/warc/CC-MAIN-20210304113205-20210304143205-00276.warc.gz"}
http://mathhelpforum.com/calculus/181470-volume-integral-3-formulas-mmn-15-3a.html
# Thread: volume integral with 3 formulas mmn 15 3a 1. ## volume integral with 3 formulas mmn 15 3a find the volume enclosed by z=0, x=y and y^2+z^2=x. I am having trouble drawing it. I know I should look at the shadow of it on the x-y plane and integrate over it 2. Originally Posted by transgalactic find the volume enclosed by z=0, x=y and y^2+z^2=x. I am having trouble drawing it. I know I should look at the shadow of it on the x-y plane and integrate over it No, you shouldn't. y^2+ z^2= x is a paraboloid with axis along the x-axis, not the y-axis. For this problem you should project onto the yz-plane. Or, you can just "change" the problem- swap x and y and find the volume of the region bounded by x= 0, z= y, and z= x^2+ y^2. For that, you project onto the xy-plane. 3. I can't imagine it 4. I would approach it like this: $\iint_D\int_{y^2+z^2}^{y}dxdydz$ where $D: y^2+z^2\leq y; z\geq0$ 5. the main problem is knowing what y^2+ z^2= x looks like; I only know what y^2+ x^2= z looks like 6. ## Re: volume integral with 3 formulas mmn 15 3a Originally Posted by HallsofIvy No, you shouldn't. y^2+ z^2= x is a paraboloid with axis along the x-axis, not the y-axis. For this problem you should project onto the yz-plane. Or, you can just "change" the problem- swap x and y and find the volume of the region bounded by x= 0, z= y, and z= x^2+ y^2. For that, you project onto the xy-plane. I can project onto any plane I want and build the integral appropriately; I just can't imagine this shape. If I can find out what the shape looks like then it will be solved very fast, but imagining a paraboloid cut by the x=0 plane and z=0 is a very hard thing to draw and imagine
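Following the setup suggested in post #4 (the shadow D: y^2 + z^2 ≤ y with z ≥ 0, and x running from the paraboloid up to the plane x = y), the volume can be computed symbolically. The snippet below is a sketch in SymPy using polar coordinates in the (y, z)-plane; it assumes post #4's region is the intended one.

```python
import sympy as sp

r, theta = sp.symbols('r theta', nonnegative=True)

# Shadow on the (y, z)-plane: y^2 + z^2 <= y with z >= 0.
# With y = r*cos(theta), z = r*sin(theta) this becomes r <= cos(theta), theta in [0, pi/2].
# For each (y, z), x runs from the paraboloid x = y^2 + z^2 = r^2 up to the plane x = y.
integrand = (r*sp.cos(theta) - r**2) * r      # (upper - lower) times the polar Jacobian r

V = sp.integrate(integrand, (r, 0, sp.cos(theta)), (theta, 0, sp.pi/2))
print(sp.simplify(V))                          # pi/64 for this region
```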
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8961994051933289, "perplexity": 2291.6607893253367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948513611.22/warc/CC-MAIN-20171211144705-20171211164705-00319.warc.gz"}
https://www.tsp.chalmers.se/research/phd-projects/
# Past PhD theses You can read more about the projects and the research conducted on Chalmers Research, or click the direct links in the items below. Three-nucleon forces (3NFs) are necessary to accurately describe the properties of atomic nuclei. These forces arise naturally together with two-nucleon forces (2NFs) when constructing nuclear interactions using chiral effective field theories (χEFTs) of quantum chromodynamics. Unlike phenomenological nuclear interaction models, χEFT promises a handle on the theoretical uncertainty in our description of the nuclear interaction. Recently, methods from Bayesian statistics have emerged to quantify this theoretical truncation error in physical predictions based on chiral interactions. Alongside quantifying the truncation error, the low-energy constants (LECs) of the chiral interactions must be inferred using selected experimental data. In this regard, the abundant sets of experimentally measured nucleon-nucleon (NN) and nucleon-deuteron (Nd) scattering cross sections serve as natural starting points to condition such inferences on. Unfortunately, the high computational cost incurred when solving the Faddeev equations for Nd scattering has thus far hampered Bayesian parameter estimation of LECs from such data. In this thesis, I present the results from a two-part systematic investigation of the wave-packet continuum discretisation (WPCD) method for reliably approximating two- and three-nucleon (NNN) scattering states with an aim towards a quantitative Bayesian analysis in the NNN continuum. In the first part, I explore the possibilities of using graphics processing units to utilise the inherent parallelism of the WPCD method, focusing on solving the Lippmann-Schwinger equation. In the second part, I use the WPCD method to solve the Faddeev equations for Nd scattering and analyse the reliability of the approximations of the WPCD method. This allows me to quantify the posterior predictive distributions for a range of low-energy neutron-deuteron cross sections conditioned on NN scattering data and NN interactions up to fourth order in χEFT. It is well established that three-nucleon forces (3NFs) are necessary for achieving realistic and accurate descriptions of atomic nuclei. In particular, such forces arise naturally when using chiral effective field theories (χEFT). However, due to the huge computational complexity associated with the inclusion of 3NFs in many-body methods they are often approximated or neglected completely. In this thesis, three different methods to include the physics of 3NFs in the ab initio no-core shell-model(NCSM) have been implemented and tested. In the first method, we approximate the 3NFs as effective two-body operators by exploiting Wick’s theorem to normal order the 3NF relative a harmonic-oscillator Slater determinant reference state and discarding the remaining three-body term. We explored the performance of this single-reference normal-ordered two-body approximation on the ground-state energies of the two smallest closed-core nuclei, 4He and 16O, in particular focusing on consequences of the breaking of translational symmetry. The second approach is a full implementation of 3NFs in a new NCSM code, named JupiterNCSM, that we provide as an open-source research software. We have validated and benchmarked JupiterNCSM against other codes and we have specifically used it to investigate theeffects of different 3NFs on light p-shell nuclei 6He and 6Li. 
Finally, we implement the eigenvector continuation (EVC) method to emulate the response of ground-state energies of the aforementioned A = 6 nuclei to variations in the low-energy constants of χEFT that parametrize the 3NFs. In this approach, the full Hamiltonian is projected onto a small subspace that is constructed from a few selected eigenvectors. These training vectors are computed with JupiterNCSM in a large model space for a small set of parameter values. This thesis provides the first EVC-based emulation of nuclei computed with a Slater-determinant basis. After the training phase, we find that EVC predictions offer a very high accuracy and more than seven orders of magnitude computational speedup. As a result we are able to perform rigorous statistical inferences to explore the effects of 3NFs in nuclear many-body systems. The desire of discovery is an anthropic need which characterises and connects the human being over the eras. In particular, observing the sky is an instinctive drive exerted by the curiosity of the mysteries which it retains. At the present time, the tremendous advances in the exploration of space have opened even more challenges than back in the days. One of the most urgent question is unveiling the nature of dark matter (DM). As stated by Neta A. Bahcall (Professor at Princeton University), “Cosmology has revealed an amazing universe, filled with a “dark sector” that composes 95% of the energy density of our cosmos […]” (Dark matter universe, PNAS, 2015). About one-third of this dark sector is associated to an invisible and still undetected form of matter, the so-called dark mat- ter, whose gravitational effect manifests at all cosmological scales. Both theoretical and experimental observations based on ordinary gravity reinforced the evidences for the existence of DM, since its first appearance in the pioneering calculations of F. Zwicky (1933). This PhD project explores the hypothesis that DM is made of new particles beyond the standard model. More specifically, it focuses on those DM particles which are trapped into the galactic gravitational field and populate the galactic halo. If DM interacts with ordinary particles, extremely sensitive detectors operating in very low-background envi- ronments, are expected to detect galactic DM particles scattering off their target material. This widely employed experimental technique is known as DM direct detection and it is the focus of my studies, where I consider the further hypothesis that DM interacts with atomic nuclei. The research I conducted during my PhD program consists of two main parts: the first part focused on purely phenomenology aspects of the DM direct detection (namely on the DM annual modulation treated using a non-relativistic effective theory and on the scattering of spin-1 DM particles off polarised nuclei) and the second one is more closely connected to experimental applications. The latter has been strongly stimulated by my collaboration with the two DM direct detection experiments CRESST and COSINUS. For CRESST, I compute the DM-nucleus cross-section for the conventional spin-dependent interactions, used to analyse the data collected with a prototype Li-based detector module, and I derive some prospects for a time dependent analysis of CRESST-III data, using a sta- tistical frequentist approach based on Monte Carlo simulations. 
For COSINUS, I provide a significant extension of the pulse shape model currently used by CRESST and COSINUS in order to explain experimental observations related to the COSINUS detector response. Finally, I contribute to ongoing studies on the phonon propagation in NaI crystals based on solid state physics. This PhD thesis has been oriented to fill the gap between theoretical and experimental efforts in the DM field. This approach has facilitated the exchange of expertise, has driven the trend of my research and has stimulated the development of the ideas and methods described in this PhD thesis. The scientific method implies a dynamical relationship between experiment and theory. Indeed, experimental results are understood through theories, which themselves are of less value until confronted with experiment. In this thesis I study this relationship by quantifying two key properties of theories: theoretical uncertainties and predictive power. Specifically I investigate chiral effective field theory and the precision and accuracy by which it reproduces and predicts low-energy nuclear observables. I estimate both statistical and systematic uncertainties. The conclusion is that the latter, which in my approximation originate from omitted higher-order terms in the chiral expansion, are much larger than the former. In relation to this, I investigate the order-by-order convergence up to fourth order in the chiral expansion. I find that predictions generally improve with increasing order, while the additional low-energy constants (LECs) of the interaction make it more difficult to fully constrain the theory. Furthermore, in order to accurately reproduce properties of heavier nuclei I see indications that it is necessary to include selected experimental data from such systems directly in the fitting of the interaction. In order to perform these studies I have developed accurate and efficient methods as well as computer codes for the calculation of observables. In particular, the application of automatic differentiation for derivative calculations is shown to be crucial for the minimization procedure. These developments open up new avenues for future studies. For example, it is now possible to do extensive sensitivity analyses of the experimental data and the model; to investigate the power counting from a data perspective; and incorporate more experimental data in the fitting procedure.
For one-proton halo nuclei we derive formalism for S- and P-wave systems, which we exemplify by studying the one-proton halo states 17F* and 8B, respectively. Of particular significance are: (i) our calculation of the radiative capture cross section of 16O(p,gamma)17F* to fifth order in the S-wave system and (ii) our derivation of a leading-order correlation between the charge radius of 8B and the threshold S-factor of 7Be(p,gamma)8B for the P-wave system. Our alternative power counting for halo nuclei with a heavy core leads to a new organizational principle that demotes the naive leading-order contributions to the charge radius for neutron halos. Additionally, in this new power counting we include the finite-size effects of the constituents explicitly into the field theory and derive how their finite sizes contribute to the charge radius of S- and P-wave one-neutron and one-proton halo states. For two-neutron halo systems we derive the field-theory integral equations to study both bound and resonant states. We apply the formalism to the 0+ channel of 6He. In this three-body field theory we include both the 3/2- and the 1/2- channels of the alpha-n subsystem, together with the 0+ channel of the n-n part. Furthermore, we include the relevant three-body interactions and analyze, in particular, the renormalization of the system. In this thesis we present the ab initio no-core shell model (NCSM) and use this framework to study light atomic nuclei with realistic nucleon-nucleon interactions. In particular, we present results for radii and ground-state energies of systems with up to twelve nucleons. Since the NCSM uses a finite harmonic oscillator basis, we need to apply corrections to compute basis-independent results. The derivation, application, and analysis of such corrections constitute important results that are presented in this thesis. Furthermore, we compute three-body overlap functions from microscopic wave functions obtained in the NCSM in order to study the onset of clusterization in many-body systems. In particular, we study the Borromean two-neutron halo state in 6He by computing the overlap function < 6He(0+)|4He(0+) + n + n >. We can thereby demonstrate that the clusterization is driven by the Pauli principle. Finally, we develop state-of-the-art computational tools to efficiently extract one- and two-body transition densities from microscopic wave functions. These quantities are important properties of many-body systems and are keys to compute structural observables. In this work we study the core-swelling effect in 6He by computing the average distance between nucleons.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8736938238143921, "perplexity": 1032.3243843788803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710974.36/warc/CC-MAIN-20221204140455-20221204170455-00300.warc.gz"}
https://www.arxiv-vanity.com/papers/hep-ex/0007057/
# A Search for B→τν (CLEO Collaboration) July 13, 2021 ###### Abstract We report results of a search for in a sample of 9.7 million charged meson decays. The search uses both and decay modes of the , and demands exclusive reconstruction of the companion decay to suppress background. We set an upper limit on the branching fraction at 90% confidence level. With slight modification to the analysis we also establish at 90% confidence level. preprint: CLNS 00/1674 CLEO 00-10 T. E. Browder, Y. Li, J. L. Rodriguez, H. Yamamoto, T. Bergfeld, B. I. Eisenstein, J. Ernst, G. E. Gladding, G. D. Gollin, R. M. Hans, E. Johnson, I. Karliner, M. A. Marsh, M. Palmer, C. Plager, C. Sedlack, M. Selen, J. J. Thaler, J. Williams, K. W. Edwards, R. Janicek, P. M. Patel, A. J. Sadoff, R. Ammar, A. Bean, D. Besson, R. Davis, N. Kwak, X. Zhao, S. Anderson, V. V. Frolov, Y. Kubota, S. J. Lee, R. Mahapatra, J. J. O’Neill, R. Poling, T. Riehle, A. Smith, C. J. Stepaniak, J. Urheim, S. Ahmed, M. S. Alam, S. B. Athar, L. Jian, L. Ling, M. Saleem, S. Timm, F. Wappler, A. Anastassov, J. E. Duboscq, E. Eckhart, K. K. Gan, C. Gwon, T. Hart, K. Honscheid, D. Hufnagel, H. Kagan, R. Kass, T. K. Pedlar, H. Schwarthoff, J. B. Thayer, E. von Toerne, M. M. Zoeller, S. J. Richichi, H. Severini, P. Skubic, A. Undrus, S. Chen, J. Fast, J. W. Hinson, J. Lee, D. H. Miller, E. I. Shibata, I. P. J. Shipsey, V. Pavlunin, D. Cronin-Hennessy, A.L. Lyon, E. H. Thorndike, C. P. Jessop, H. Marsiske, M. L. Perl, V. Savinov, D. Ugolini, X. Zhou, T. E. Coan, V. Fadeyev, Y. Maravin, I. Narsky, R. Stroynowski, J. Ye, T. Wlodek, M. Artuso, R. Ayad, C. Boulahouache, K. Bukin, E. Dambasuren, S. Karamov, G. Majumder, G. C. Moneti, R. Mountain, S. Schuh, T. Skwarnicki, S. Stone, G. Viehhauser, J.C. Wang, A. Wolf, J. Wu, S. Kopp, A. H. Mahmood, S. E. Csorna, I. Danko, K. W. McLean, Sz. Márka, Z. Xu, R. Godang, K. Kinoshita,***Permanent address: University of Cincinnati, Cincinnati, OH 45221 I. C. Lai, S. Schrenk, G. Bonvicini, D. Cinabro, S. McGee, L. P. Perera, G. J. Zhou, E. Lipeles, S. P. Pappas, M. Schmidtler, A. Shapiro, W. M. Sun, A. J. Weinstein, F. Würthwein,Permanent address: Massachusetts Institute of Technology, Cambridge, MA 02139. D. E. Jaffe, G. Masek, H. P. Paar, E. M. Potter, S. Prell, V. Sharma, D. M. Asner, A. Eppich, T. S. Hill, R. J. Morrison, R. A. Briere, T. Ferguson, H. Vogel, B. H. Behrens, W. T. Ford, A. Gritsan, J. Roy, J. G. Smith, J. P. Alexander, R. Baker, C. Bebek, B. E. Berger, K. Berkelman, F. Blanc, V. Boisvert, D. G. Cassel, M. Dickson, P. S. Drell, K. M. Ecklund, R. Ehrlich, A. D. Foland, P. Gaidarev, L. Gibbons, B. Gittelman, S. W. Gray, D. L. Hartill, B. K. Heltsley, P. I. Hopman, C. D. Jones, D. L. Kreinick, M. Lohner, A. Magerkurth, T. O. Meyer, N. B. Mistry, E. Nordberg, J. R. Patterson, D. Peterson, D. Riley, J. G. Thayer, P. G. Thies, D. Urner, B. Valant-Spaight, A. Warburton, P. Avery, C. Prescott, A. I. Rubiera, J. Yelton, J. Zheng, G. Brandenburg, A. Ershov, Y. S. Gao, D. Y.-J. Kim, and R. 
Wilson University of Hawaii at Manoa, Honolulu, Hawaii 96822 University of Illinois, Urbana-Champaign, Illinois 61801 Carleton University, Ottawa, Ontario, Canada K1S 5B6 and the Institute of Particle Physics, Canada McGill University, Montréal, Québec, Canada H3A 2T8 and the Institute of Particle Physics, Canada Ithaca College, Ithaca, New York 14850 University of Kansas, Lawrence, Kansas 66045 University of Minnesota, Minneapolis, Minnesota 55455 State University of New York at Albany, Albany, New York 12222 Ohio State University, Columbus, Ohio 43210 University of Oklahoma, Norman, Oklahoma 73019 Purdue University, West Lafayette, Indiana 47907 University of Rochester, Rochester, New York 14627 Stanford Linear Accelerator Center, Stanford University, Stanford, California 94309 Southern Methodist University, Dallas, Texas 75275 Syracuse University, Syracuse, New York 13244 University of Texas, Austin, TX 78712 University of Texas - Pan American, Edinburg, TX 78539 Vanderbilt University, Nashville, Tennessee 37235 Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061 Wayne State University, Detroit, Michigan 48202 California Institute of Technology, Pasadena, California 91125 University of California, San Diego, La Jolla, California 92093 University of California, Santa Barbara, California 93106 Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 Cornell University, Ithaca, New York 14853 University of Florida, Gainesville, Florida 32611 Harvard University, Cambridge, Massachusetts 02138 The purely leptonic decay of the meson offers a clean probe of the weak decay process. The branching fraction $$B(B\to\ell\nu)=\frac{G_F^2\, m_B\, m_\ell^2}{8\pi}\left(1-\frac{m_\ell^2}{m_B^2}\right)^2 f_B^2\, |V_{ub}|^2\, \tau_B,$$ exhibits simple dependence on the meson decay constant and the magnitude of the quark mixing matrix element . The dependence on lepton mass () arises from helicity conservation and heavily suppresses the rate to light leptons. In the system this means is favored over or final states. Nevertheless, the expected branching fraction is small and the presence of additional neutrinos in the final state significantly weakens the experimental signature. In the context of the Standard Model, a crisp determination of CKM parameters may be obtained in principle by comparing with the difference in heavy and light neutral masses[1], $$\Delta m_d=\frac{G_F^2}{6\pi^2}\,\eta_B\, m_B\, m_W^2\, f_B^2\, B_B\, S_0(x_t)\, |V_{td}|^2,$$ a quantity which is known from mixing measurements[2] to considerable precision: . In this comparison the dependence on the poorly known decay constant drops out, and one obtains[3] $$B(B\to\tau\nu)=\left((4.08\pm0.24)\times 10^{-4}\right)\left|\frac{V_{ub}}{V_{td}}\right|^2.$$ The range is set by current theoretical uncertainties. Given a sufficiently precise experimental measurement of the branching fraction, this relationship could be used to map out an allowed zone in the plane of Wolfenstein and parameters[4] that is roughly similar to that determined by measurements of , but subject to a different mix of statistical, systematic, and theoretical uncertainties[5]. Alternatively, if is obtained from other measurements in the system, then the determination of may be viewed as a measurement of the decay constant . This may be the only way to measure . Looking beyond the Standard Model, the rate is sensitive to effects from charged Higgs bosons and may be used to set a limit on charged Higgs mass. The sensitivity is greatest for large values of the Higgs doublet vacuum expectation value ratio, [6]. Existing experimental information is limited, however.
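For orientation, the leptonic branching-fraction formula quoted above can be evaluated numerically. The following sketch uses rough, assumed parameter values (the decay constant, |V_ub| and the lifetime are not taken from this paper); the point is only that the Standard Model expectation comes out at the 10^-4 level, below the limits discussed here.

```python
import math

G_F   = 1.166e-5                 # Fermi constant [GeV^-2]
m_B   = 5.279                    # B+ mass [GeV]
m_tau = 1.777                    # tau mass [GeV]
f_B   = 0.19                     # B decay constant [GeV] (assumed value)
V_ub  = 3.7e-3                   # |V_ub| (assumed value)
tau_B = 1.64e-12 / 6.582e-25     # B+ lifetime converted from seconds to GeV^-1

BF = (G_F**2 * m_B * m_tau**2 / (8 * math.pi)
      * (1 - m_tau**2 / m_B**2)**2
      * f_B**2 * V_ub**2 * tau_B)
print(f"B(B -> tau nu) ~ {BF:.1e}")   # of order 1e-4 with these inputs
```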
A previous search by this collaboration[7] in the system yielded a 90% confidence level upper limit , and three searches[8] in the system have yielded upper limits ranging from down to . Although the system offers powerful kinematical advantages, future measurements will be at the . In this Letter we present results of a new search for using a method which is uniquely adapted to the system. In this method we fully reconstruct the companion in a quasi-inclusive reconstruction technique similar to that developed for earlier measurements[9]. The data used in this analysis were collected with the CLEO II detector at the Cornell Electron Storage Ring (CESR). The data sample consists of taken at the (4S), corresponding to 9.66M pairs, and an additional taken below the threshold, which is used for background studies. CLEO II is a general purpose solenoidal magnet detector, described in detail elsewhere [10]. Cylindrical drift chambers in a 1.5T solenoidal magnetic field measure momentum and specific ionization () of charged particles. Photons are detected using a 7800-crystal CsI(Tl) electromagnetic calorimeter covering 98% of . Two-thirds of the data was taken in the CLEO II.V detector configuration, in which the innermost chamber was replaced by a 3-layer, double-sided silicon vertex detector, and the gas in the main drift chamber was changed from an argon-ethane to a helium-propane mixture. Track quality requirements are imposed on charged tracks, and pions and kaons are identified by their specific ionization, . Pairs of photons with an invariant mass within 2.5 standard deviations of the nominal mass are kinematically fit with a mass constraint. mesons are identified in the decay mode. Electrons are identified based on and the ratio of the track momentum to the associated shower energy in the CsI calorimeter; muons over about 1 GeV/c momentum are identified by their penetration depth in the instrumented steel flux return; below about 1 GeV/c muons are not distinguished from pions. The experiment is fully simulated by a GEANT-based Monte Carlo[11] that includes beam-related debris by overlaying random trigger events on Monte Carlo-generated events. The simulation is used to study backgrounds and optimize selection criteria, but directly enters the analysis only through the calculation of the signal reconstruction efficiency. To search for decays we fully reconstruct each event in the simultaneous decay modes (“signal ”) and (“companion ”). Here and throughout, charge conjugate modes are implied. For the signal we accept any single track which passes track quality requirements. Pion candidates must have momentum greater than 0.7 GeV/c and must neither pass lepton identification criteria nor be candidate daughters. We do not impose particle identification criterea. This approach encompasses the three decay modes and , which together constitute 46.5% of the branching fraction. Reconstruction efficiencies are 64%, 34%, and 84%, respectively, and there is some crossfeed into the “” channel from the tau decay modes , , and . The crossfeed efficiencies are 6%, 20%, and 8% respectively. The total reconstruction efficiency, including branching fractions and crossfeeds, is 32.9%. For the companion , we take advantage of the large (46%) branching fraction and seek to reconstruct , accepting either or and reconstructing the in the following eight modes, , , , , , , , and . 
Based on the reconstructed mass, the mass, and the kaon and pion particle identification information, we compute a quality factor and use it to reject poor candidates. The system may be any of the following: , , , , , or . With each reconstructed in one of the target decay modes, we now require that there be no additional charged tracks in the detector, and that the sum of all energy in the crystal calorimeter not associated with the ionization energy deposition of charged tracks be less than a mode-dependent value . For the clean decay modes of the companion , and , we set GeV, while for all other modes it is tightened to 0.4 GeV. The main source of non-associated calorimeter energy deposition is from hadronic interactions in the calorimeter that cast debris laterally and result in small energy deposits that are not matched with a parent track. Monte Carlo simulation and careful investigation of appropriate data samples indicates that on average such deposits sum to 240 MeV per (signal) event. Additional contributions arise from beam-related debris, averaging 26 MeV per event and concentrated in the far forward and backward portions of the calorimeter; and from real photons from incorrect signal reconstruction, which average 10 MeV per event. In addition to this summed energy requirement, we also test whether any unassigned calorimeter signal can be paired with an already identified photon shower to form an object with invariant mass within 2.5 standard deviations of the mass. If such a pairing can be made, the event is rejected. We suppress background from events by imposing requirements on the value of , the invariant mass squared of the system. For most of the states we demand , but for the case no restriction is needed, and for we permit . Backgrounds arising from events (“continuum”) are distinguished by a jetty topology. To suppress these backgrounds we compute the direction of the thrust axis of the companion candidate and measure the angle to the direction of the lepton or pion of the candidate. For a signal event these directions should be uncorrelated and the distribution uniform, while for continuum background the correlation is high and peaks at 1. We require be less than 0.90 and 0.75 for and candidates, respectively. Continuum background is more severe in the mode and demands the tighter cut. Additional backgrounds from are suppressed by requiring the Fox-Wolfram[13] moments ratio H2/H0 to be less than 0.5, which favors spherical topologies. Contributions from two-photon events () are negligible. The identification of acceptable candidates for the daughter, the , and system, together with the absence of extra tracks or significant extra neutral energy, marks the appearance of a signal candidate. We now characterize these candidates by the kinematic properties of the companion , since there is no additional information in the lone daughter track. In particular we use the total momentum and energy of the companion , computed from momenta and energies of its daughter products. These raw quantities are then recast as the more useful beam-constrained mass and energy difference variables. If more than one candidate is reconstructed in a given event, the one with the highest value of  is selected. Figure 1 shows the distribution of events in the plane for Monte Carlo background, Monte Carlo continuum background, Monte Carlo signal, and for the actual data set. 
The Monte Carlo background samples represent equivalent integrated luminosity of, respectively, three times and two times the actual data sample. The clustering of signal Monte Carlo events inside the  signal region but around GeV is due to reconstructing as . In such cases, the absence of the appropriate soft or from decay lowers the candidate’s total energy. Events in this satellite peak constitute 24% of the total signal yield. We select events whose  falls within 2.5 standard deviations of the true mass, and extract the signal yield by fitting the resulting  distribution. The net signal efficiency including all secondary branching fractions for the analysis is . The signal fit shape is the sum of a narrow ( MeV) Gaussian centered at for the primary signal yield, and a wide Gaussian ( MeV) centered at MeV for the satellite peak. The shapes and the relative normalization of these Gaussians are determined by Monte Carlo. Residual backgrounds are modelled by a linear distribution whose slope is determined by fitting the data lying outside the 2.5 window in . We fit the  distribution by an extended unbinned maximum likelihood method[14] to obtain the total yield of signal and background; the  shape parameters are fixed by the procedure described above and are not varied in the fit. Figure 2a shows the final  distribution of data inside the 2.5 standard deviation signal region of ; six events remain after all selection criteria are applied. Figure 2b shows the fit shape with normalization as resulting from the likelihood fit; the central value of the fitted yield is 0.96 events. The background level is consistent with Monte Carlo expectations given the selection criteria and the size of the data sample. Figure 2c shows a comparison of the  distribution for Monte Carlo events and data. To increase the yield for this plot we have released the restriction on leftover tracks, and here require exactly one extra charged track. These data events thus constitute in a sideband to the signal region. There are 71 such events in data, and 68 predicted by Monte Carlo. As evident in the figure, the Monte Carlo also reproduces the  spectrum of these events very closely. Examination of Monte Carlo background events in the signal region itself shows (a) that the background is composed of approximately equal amounts of and continuum events; (b) that the background in the mode is dominated by continuum while the background in the mode is dominated by ; and (c) about 75% of all background events, whether or continuum, have a present. Were it available, hadronic calorimetry would help suppress some of this remaining background. The branching ratio is related to the signal yield by where is the number of charged mesons in the data sample and is the efficiency as given above. We crosscheck the efficiency by conducting a separate analysis identical to this one in all key respects except that the target signal is replaced by whose branching fraction is large and well-measured. To ensure as much topological similarity to the case as possible, we restrict this ancillary analysis to the low-multiplicity sub-mode, . We find a yield of events in data, and compare this to the Monte Carlo result where the error is primarily due to uncertainties in the branching ratio[2]. The discrepancy between these yields is 1.3. We adopt a conservative course, using the efficiency determined by Monte Carlo, and assigning to it a relative systematic error given by . Figure 2d shows the likelihood function () plotted as versus . 
Also shown is the result of convolving the likelihood function with the systematic uncertainty distribution of the efficiency (assumed to be Gaussian). The systematic error on the efficiency is dominated by the 24.1% discussed in the preceding paragraph, but also includes contributions from reconstruction efficiency uncertainty and uncertainty in the efficiency of the non-associated neutral energy cuts. In total, the relative systematic error on efficiency is 24.4%. We integrate the systematics-convolved likelihood function to obtain a 90% confidence upper limit on from We find: B(B→τν)<8.4×10−4 at 90% confidence level. This approach can be shown[15] to be equivalent to the assumption of a flat Bayesian prior probability for and is known to yield a conservative upper limit. A frequentist approach based on generating Monte Carlo experiments gives at 90% confidence level [16]. We also investigate the decay mode [17]. There is currently no experimental information on this decay mode although limits on the related decays and exist[18]. The search strategy is the same as described above, but we require that the lone track on the signal side fail lepton identification. The expected momentum distribution of the charged track peaks at GeV/c, so we retain the 0.7 GeV/c momentum requirement previously applied to the pion candidate in the mode. The resulting set of three signal candidates is a subset of the six candidates. They are marked by shading in Fig. 2. We perform the same unbinned likelihood fit as above and obtain a central value yield of 0.81 events. The efficiency of the is ; we find at 90% confidence level. The efficiency is calculated using the form factor model of Reference References, but it changes only negligibly if instead we use 3-body phase space and a constant matrix element. We corroborate our result with an independent analysis, which is based only on counting events and yields an upper limit at 90% confidence level. We have reported an analysis of 9.66 million charged meson decays which results in a conservative upper limit on the branching fraction . We also modify the analysis slightly to establish The method used is optimized for conditions available at experiments, and we anticipate useful application of the method to other rare decay modes with large missing energy. We gratefully acknowledge the effort of the CESR staff in providing us with excellent luminosity and running conditions. This work was supported by the National Science Foundation, the U.S. Department of Energy, the Research Corporation, the Natural Sciences and Engineering Research Council of Canada, the A.P. Sloan Foundation, the Swiss National Science Foundation, and Alexander von Humboldt Stiftung. ## References • [1] See A.J. Buras, and R. Fleischer in Heavy Flavours II, World Scientific, (1997); Eds. A. J. Buras and M. Lindner. for definitions and values of QCD parameters and and the Inami-Lim function . • [2] C. Caso et al. (PDG), Eur. Phys. C3, 1, (1998). • [3] T. Draper, Nucl. Phys. Proc. Suppl. 73 (1999) 43. B. H. Behrens, et al., Phys. Rev. D61, (2000) 052001. • [4] L. Wolfenstein, Annu. Rev. Nucl. Part. Sci. 36, 137 (1986) See also Ref. References. • [5] G. Harris and J. Rosner, Phys. Rev. D45, (1992) 946. • [6] W.-S. Hou, Phys. Rev. D Brief Report 48 (1993) 2342 • [7] M. Artuso et al., CLEO Collaboration, Phys. Lett. D47 (1995) 785. • [8] D. Buskulic et al., ALEPH Collaboration, Phys. Lett. B 343 (1995) 444. M. Acciarri et al., L3 Collaboration, Phys. Lett. B 396 (1997) 327. P. 
Abreu et al., DELPHI Collaboration, CERN-EP/99-162 submitted to Phys. Lett. B (1999). • [9] H. Kroha et al, (CLEO Collaboration), Proceedings of the XXVIth Int’l Conf. on High Energy Physics (Dallas, 1992); M.S. Alam et al, (CLEO Collaboration), Phys. Rev. Lett. 74, 2885 (1995); M.S. Alam et al, (CLEO Collaboration), CLEO-CONF 98-17, ICHEP98-1011; T.E. Browder et al, (CLEO Collaboration), Phys. Rev. Lett. 81, 1786 (1998) • [10] Y. Kubota et al. (CLEO Collaboration), Nucl. Instrum. Methods Phys. Res., Sec. A320, 66 (1992); T.S. Hill, Nucl. Instrum. Methods Phys. Res., Sec. A 418, 32 (1998). • [11] R. Brun et al., GEANT 3.15, CERN DD/EE/84-1. • [12] By B. Andersson, G. Gustafson, B. Soderberg, Z.Phys. C20, 317, (1983) • [13] G. Fox and S. Wolfram, Phys. Rev. Lett. 41, 1581 (1978). • [14] L. Lyons, W. Allison and J. Panella Comellas, NIM A245, 530 (1986) R. Barlow, NIM A297, 496 (1990) • [15] G. Feldman and R. Cousins, Phys. Rev. D57 (1998) 3873. • [16] I. Narsky, hep-ex/9904025. • [17] P. Colangelo et al, Phys. Lett. B395 339 (1997). • [18] ALEPH Collaboration ICHEP96 (PA-10-019); DELPHI Collaboration CERN PPE/96-67.
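The upper-limit construction described in the text (integrating the systematics-convolved likelihood, equivalent to a flat Bayesian prior) can be illustrated with a toy likelihood. The sketch below uses NumPy and an invented Gaussian-shaped likelihood truncated at zero; it is not the experiment's likelihood, only a demonstration of the 90% integration step.

```python
import numpy as np

B = np.linspace(0.0, 30e-4, 20001)                    # candidate branching fractions
likelihood = np.exp(-0.5 * ((B - 2e-4) / 3e-4)**2)    # toy L(B), truncated at B >= 0

cdf = np.cumsum(likelihood)
cdf /= cdf[-1]                                        # normalise the integral to 1
upper = B[np.searchsorted(cdf, 0.90)]                 # smallest B with 90% of the area below it
print(f"toy 90% CL upper limit: {upper:.2e}")
```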
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9021857976913452, "perplexity": 1475.8604310213193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057329.74/warc/CC-MAIN-20210922041825-20210922071825-00687.warc.gz"}
http://mathhelpforum.com/math-topics/47369-quick-conversion-question-2.html
# Math Help - Quick Conversion question 3. Originally Posted by qbkr21 so the diameter of a Cu atom is 1.4 Angstroms? maybe it is the way you are inputting the answer. maybe it requires it in standard form or something
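Since the thread is about expressing the measurement in "standard form", a tiny conversion sketch (assuming the 1.4 angstrom figure quoted above, since the earlier posts in the thread were not extracted) would be:

```python
d_angstrom = 1.4                 # quoted Cu-atom diameter [angstrom] (assumed from the thread)
d_m  = d_angstrom * 1e-10        # 1 angstrom = 1e-10 m  -> 1.4e-10 m
d_cm = d_angstrom * 1e-8         # -> 1.4e-8 cm
d_nm = d_angstrom * 0.1          # -> 0.14 nm
print(d_m, d_cm, d_nm)
```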
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9858327507972717, "perplexity": 2483.6151269149855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678680766/warc/CC-MAIN-20140313024440-00064-ip-10-183-142-35.ec2.internal.warc.gz"}
https://link.springer.com/article/10.1007%2Fs41478-019-00167-3
The Journal of Analysis, Volume 27, Issue 4, pp 1163–1177 # $$L^p$$ inequalities for polynomials • B. A. Zargar • Shahista Bashir Original Research Paper ## Abstract In this paper we establish some $$L^p$$ inequalities for polynomials having no zeros in $$|z|<k,$$ where $$k\ge 1,$$ except for t-fold zeros at the origin. Our results not only generalize some known polynomial inequalities, but a variety of interesting results can also be deduced from them by a fairly uniform procedure. ## Keywords Polynomials Zeros $$L^p$$ inequalities Zygmund inequality ## Mathematics Subject Classification 30A10 30C15 26D05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8303858637809753, "perplexity": 3773.544839562513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496666229.84/warc/CC-MAIN-20191113063049-20191113091049-00486.warc.gz"}
http://repub.eur.nl/pub/22805/
Various studies have shown the emergence of cooperative behavior in evolutionary models with spatially distributed agents. We investigate to what extent these findings generalize to evolutionary models of price competition among spatially distributed firms. We consider both one- and two-dimensional models, and we vary the amount of information firms have about competitors in their neighborhood. Our computer simulations show that the emergence of cooperative behavior depends strongly on the amount of information available to firms. Firms tend to behave most cooperatively if they have only a very limited amount of information about their competitors. We provide an intuitive explanation for this phenomenon. Our simulations further indicate that three other factors in our models, namely the accuracy of firms’ information, the probability of experimentation, and the spatial distribution of consumers, have little effect on the emergence of cooperative behavior.
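To make the model class concrete, here is a deliberately simplified toy of imitation-driven price competition on a ring. It is not the authors' model (their demand structure, strategy space and update rule are not specified in this abstract), but it illustrates the two ingredients highlighted above: the neighbourhood radius as the "amount of information" and the probability of experimentation.

```python
import random

N, PRICES, RADIUS, EXPERIMENT_P = 30, [1, 2, 3, 4, 5], 1, 0.05   # all assumed toy parameters

def profits(prices):
    # each firm shares demand with its two ring neighbours; the cheaper firm sells more
    out = []
    for i, p in enumerate(prices):
        rivals = (prices[(i - 1) % N], prices[(i + 1) % N])
        demand = sum(1.0 if p < q else 0.5 if p == q else 0.0 for q in rivals)
        out.append(p * demand)                      # unit cost taken as zero
    return out

prices = [random.choice(PRICES) for _ in range(N)]
for _ in range(500):
    pi = profits(prices)
    new_prices = []
    for i in range(N):
        if random.random() < EXPERIMENT_P:          # occasional experimentation
            new_prices.append(random.choice(PRICES))
        else:                                       # imitate the best observed neighbour
            neighbourhood = [(i + d) % N for d in range(-RADIUS, RADIUS + 1)]
            best = max(neighbourhood, key=lambda j: pi[j])
            new_prices.append(prices[best])
    prices = new_prices

print("average price after imitation dynamics:", sum(prices) / N)
```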
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.803216814994812, "perplexity": 419.1421484698804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929418.92/warc/CC-MAIN-20150521113209-00238-ip-10-180-206-219.ec2.internal.warc.gz"}
https://testbook.com/objective-questions/mcq-on-limiting-error--5eea6a1039140f30f369e99f
# An ammeter of 0-25 A range has a guaranteed accuracy of 1% of full scale reading. The current measured is 5 A. The limiting error is 1. 2% 2. 2.5% 3. 4% 4. 5% Option 4 : 5% ## Limiting Error MCQ Question 1 Detailed Solution Concept: Limiting error: The maximum allowable error in the measurement is specified in terms of true value, is known as limiting error. It will give a range of errors. It is always with respect to the true value, so it is a variable error. Guaranteed accuracy error: The allowable error in measurement is specified in terms of full-scale value is known as guaranteed accuracy error. It is a variable error seen by the instrument since it is with respect to full-scale value. Calculation: Given that, full-scale reading = 25 A Guaranteed accuracy error = 1% of full-scale reading = 1% of 25 = 0.25 A True value = 5 A Limiting error $$= \frac{{0.25}}{5} = 0.05$$ Limitting error = 5% # The following measurements are obtained on a single-phase load: V = 220 V ± 1%, I = 5.0 A ± 1% and W = 55 W ± 2%. If the power factor is calculated using these measurements, the worst-case error in the calculated power factor in percent is _________. (Give answer up to once decimal) ## Limiting Error MCQ Question 2 Detailed Solution Concept: Combination of Quantities with Limiting Error: Two or more quantity each having a limiting error are combined it is advantageous to be able to compute the limiting error of the combination. The limiting error can easily obtain by considering the relative increment of the function. Considered y be the final result which is the function of measured quantity u, v, and w. Product or Quotient of Quantities: Let, $$y = uvw\;or\;y = \frac{u}{{vw}}\;or\;y = \frac{1}{{uvw}}$$ The corresponding limiting error can be written as, $$\frac{{\delta y}}{y} = \pm \left( {\frac{{\delta u}}{u} + \frac{{\delta v}}{v} + \frac{{\delta w}}{w}} \right)$$ Calculation: Given that, V = 220 V ± 1% I = 5 A ± 1% W = 55 W ± 2% Power factor = cos ϕ W = VI cos ϕ $$\Rightarrow \cos \phi = \frac{W}{{VI}}$$ Hence, the total error in the measurement = ± (1 + 1 + 2)% = ± 4% Sum of Quantities: Let, y = u + v The relative increment of the function is given by, $$\frac{{dy}}{y} = \frac{{d\left( {u + v} \right)}}{y} = \frac{{du}}{y} + \frac{{dv}}{y}$$ The expression the result in terms of relative increment of the component quantities $$\frac{{dy}}{y} = \frac{u}{y}\frac{{du}}{u} + \frac{v}{y}\frac{{dv}}{v}$$ The error in the component quantity is represented by ± δu and ± δv. The corresponding limiting error can be written as, $$\frac{{\delta y}}{y} = \pm \left( {\frac{u}{y}\frac{{du}}{u} + \frac{v}{y}\frac{{dv}}{v}} \right)$$ # An ammeter of range 0-25 Amp has an accuracy of 1% of full scale reading. The current measured by the ammeter is 5 Amp. The limiting error in the reading is: 1. 2% 2. 2.5% 3. 4% 4. 5% Option 4 : 5% ## Limiting Error MCQ Question 3 Detailed Solution Concept of Limiting error: Limiting error is defined as the difference between the measured quantity (Am) and the true quantity (At). Limiting error LE = Am - At %Limiting error (%LE): It can be calculated as the ratio of limiting error to the true quantity. %LE = $$\frac{A_m -A_t }{A_t } \times 100$$ Guaranteed accuracy error (GAE): It can be calculated as the ratio of limiting error to the full-scale value. 
% GAE = $$\frac{Limiting\ error}{Full\ scale\ reading}\times 100=\frac{A_m -A_t }{Full\ scale\ reading} \times 100$$ Calculation: Given that, Full-scale reading = 25 A % GAE = 1 % True value of ammeter = At % GAE = $$\frac{L E}{FSR}\times 100=\frac{A_m -A_t }{FSR} \times 100$$ Where, FSR is Full scale reading ⇒ 1 = (Limiting error / 25 ) x 100 ∴ Limiting error = 0.25 A % LE = (Limiting error / True value) x 100 ⇒ % LE = (0.25 / 5) x 100 ∴ % Limiting error = 5 % # A non-ideal Si-based pn junction diode is tested by sweeping the bias applied across its terminals from -5 V to +5 V. The effective thermal voltage, VT, for the diode is measured to be (29 ± 2) mV. The resolution of the voltage source in the measurement range is 1 mV. The percentage uncertainty (rounded off to 2 decimal places) in the measured current at a bias voltage of 0.02 V is______ ## Limiting Error MCQ Question 4 Detailed Solution Concept: The equation of a diode current is given by $${I_D} = {I_0}{e^{\frac{{{V_D}}}{{\eta {V_T}}}}}$$ By applying log on both sides $$\ln {I_D} = \ln {I_0} + \frac{{{V_D}}}{{\eta {V_T}}}$$ By differentiating with the above equation with respect to VT $$\frac{{\partial I}}{I} = 0 + \left( {\frac{{{V_D}}}{\eta }} \right)\left( {1 - \frac{1}{{V_T^2}} \times \partial {V_T}} \right)$$ $$\Rightarrow \frac{{\partial I}}{{\partial {V_T}}} = - \frac{{{V_D}I}}{{\eta V_T^2}}$$ For η = 1, $$\frac{{\partial I}}{{\partial {V_T}}} = - \frac{{{V_D}I}}{{V_T^2}}$$ Now, by differentiating the diode current equation with respect to VD: $$\frac{{\partial I}}{I} = 0 + \frac{1}{{\eta {V_T}}}\partial {V_D}$$ $$\Rightarrow \frac{{\partial I}}{{\partial {V_D}}} = - \frac{I}{{\eta {V_T}}}$$ For η = 1,  $$\frac{{\partial I}}{{\partial {V_D}}} = - \frac{I}{{{V_T}}}$$ % uncertainty $$= \sqrt {{{\left[ {\frac{{\partial I}}{{\partial {V_T}}}} \right]}^2}W_{VT}^2 + {{\left[ {\frac{{\partial I}}{{\partial {V_D}}}} \right]}^2}W_{VD}^2}$$ $$\sqrt {\frac{{V_D^2{I^2}}}{{V_T^4}}W_{VT}^2 + {{\left[ {\frac{I}{{{V_T}}}} \right]}^2}W_{VD}^2}$$ Calculation: VT = (29 ± 2) mV. VT = (0.029 ± 0.002) V, VD = 0.02 V, WVD = 0.001 V, WVT = 0.002 V Therefore, $$\% \frac{{{W_{res}}}}{I} = \pm 11.7\%$$ # Four ammeters M1, M2, M3 and M4 with the following specifications are available. (Full scale, accuracy value as percentage of FS)M1 = 20 ± 0.10M2 = 10 ± 0.20M3 = 5 ± 0.50M4 = 1 ± 1.00 A current 1 A is to be measured to obtain minimum error in the reading one should select the meter 1. M1 2. M2 3. M3 4. M4 Option 4 : M4 ## Limiting Error MCQ Question 5 Detailed Solution M1 = 20 ± 0.100 $$Error = \frac{{20 \times 0.1}}{{100}} = 0.02\;A$$ Limiting error $$= \frac{{0.02}}{1} \times 100 = 2\%$$ M2 = 10 ± 0.20% $$Error = \frac{{10 \times 0.2}}{{100}} = 0.02\;A$$ Limiting error $$= \frac{{0.02}}{1} \times 100 = 2\%$$ M3 = 5 ± 0.50% $$Error = \frac{{5 \times 0.5}}{{100}} = 0.025A$$ Limiting error $$= \frac{{0.025}}{1} \times 100 = 2.5\%$$ M4 = 1 ± 1.00% $$Error = \frac{{1 \times 1}}{{100}} = 0.01\;A$$ Limiting error $$= \frac{{0.01}}{1} \times 100 = 1\%$$ Therefore, ammeter M4 will give minimum error. # A 0 – 10 A ammeter has a guaranteed accuracy of 1 percent of full scale deflection. What will be the limiting error in the reading of 2.5 A? 1. 1 per cent 2. 4 per cent 3. 2 per cent 4. 3 per cent ## Answer (Detailed Solution Below) Option 2 : 4 per cent ## Limiting Error MCQ Question 6 Detailed Solution Limiting error: Limiting error is defined as the difference between the measured quantity (Am) and the true quantity (At). 
Limiting error LE = Am - At %Limiting error (%LE): It can be calculated as the ratio of limiting error to the true quantity. %LE = $$\frac{A_m -A_t }{A_t } \times 100$$ Guaranteed accuracy error (GAE): It can be calculated as the ratio of limiting error to the full-scale value. % GAE = $$\frac{Limiting\ error}{Full\ scale\ reading}\times 100=\frac{A_m -A_t }{Full\ scale\ reading} \times 100$$ Calculation: GIven that, Full-scale reading =10 A % GAE = 1 % True value of ammeter = At % GAE = $$\frac{Limiting\ error}{Full\ scale\ reading}\times 100=\frac{A_m -A_t }{Full\ scale\ reading} \times 100$$ ⇒ 1 = (Limiting error / 10 ) x 100 ∴ Limiting error = 0.1 A % LE = (Limiting error / True value) x 100 ⇒ % LE = (0.1 / 2.5) x 100 ∴ % Limiting error = 4 % # Two resistors with nominal resistance values R1 and R2 have additive uncertainties ΔR1 and ΔR2, respectively. When these resistances are connected in parallel, the standard deviation of the error in the equivalent resistance R is 1. $$\pm \sqrt {{{\left( {\frac{{\partial R}}{{\partial {R_1}}}{\rm{\Delta }}{R_1}} \right)}^2} + {{\left( {\frac{{\partial R}}{{\partial {R_2}}}{\rm{\Delta }}{R_2}} \right)}^2}}$$ 2. $$\pm \sqrt {{{\left( {\frac{{\partial R}}{{\partial {R_2}}}{\rm{\Delta }}{R_1}} \right)}^2} + {{\left( {\frac{{\partial R}}{{\partial {R_1}}}{\rm{\Delta }}{R_2}} \right)}^2}}$$ 3. $$\pm \sqrt {{{\left( {\frac{{\partial R}}{{\partial {R_1}}}} \right)}^2}{\rm{\Delta }}{R_2} + {{\left( {\frac{{\partial R}}{{\partial {R_2}}}} \right)}^2}{\rm{\Delta }}{R_1}}$$ 4. $$\pm \sqrt {{{\left( {\frac{{\partial R}}{{\partial {R_1}}}} \right)}^2}{\rm{\Delta }}{R_1} + {{\left( {\frac{{\partial R}}{{\partial {R_2}}}} \right)}^2}{\rm{\Delta }}{R_2}}$$ ## Answer (Detailed Solution Below) Option 1 : $$\pm \sqrt {{{\left( {\frac{{\partial R}}{{\partial {R_1}}}{\rm{\Delta }}{R_1}} \right)}^2} + {{\left( {\frac{{\partial R}}{{\partial {R_2}}}{\rm{\Delta }}{R_2}} \right)}^2}}$$ ## Limiting Error MCQ Question 7 Detailed Solution Nominal resistance values = R1 and R2 Additive uncertainties = ΔR1 and ΔR2 Standard deviation of the error in the equivalent resistance R is $$\sigma = \pm \sqrt {{{\left( {\frac{{\partial R}}{{\partial {R_1}}}} \right)}^2}\sigma _1^2 + {{\left( {\frac{{\partial R}}{{\partial {R_2}}}} \right)}^2}\sigma _2^2}$$ $$= \pm \sqrt {{{\left( {\frac{{\partial R}}{{\partial {R_1}}}} \right)}^2}{{\left( {{\rm{\Delta }}{R_1}} \right)}^2} + {{\left( {\frac{{\partial R}}{{\partial {R_2}}}} \right)}^2}{{\left( {{\rm{\Delta }}{R_2}} \right)}^2}}$$ # Resistance is determined by the voltmeter ammeter method. The voltmeter reads 100 V with a probable error of ±12 V and ammeter reads 10 A with a probable error of ±2 A. The probable error in the computed value of resistance will be nearly 1. 2.33Ω 2. 
3.33Ω Option 3 : 2.33Ω ## Limiting Error MCQ Question 8 Detailed Solution We have resistance: $$R = \frac{V}{I} = V{I^{ - 1}}$$ Weighted probable error in the resistance due to voltage is: $${r_{RV}} = \frac{{\partial R}}{{\partial V}}{r_V}$$ $$= {I^{ - 1}}{r_V} = \frac{{{r_V}}}{I}$$ $$= \frac{{ \pm 12}}{{10}} = \pm 1.2\;{\text{Ω}}$$ Weighted probable error in the resistance due to current is: $${r_{RI}}\frac{{\partial R}}{{\partial V}}{r_V} = \frac{V}{{{I^2}}}{r_I}$$ $$= \frac{{100}}{{{{\left( {10} \right)}^2}}} \times \left( { \pm 2} \right) = \pm 2\;{\text{Ω }}$$ Probable error in computed resistance is: $${r_R} = \;\sqrt {{{\left( {{r_{RV}}} \right)}^2} + {{\left( {{r_{RI}}} \right)}^2}}$$ $$\sqrt {{{\left( {1.2} \right)}^2} + {{\left( 2 \right)}^2}} = 2.33\;{\text{Ω}}$$ # In a DC motor, the input is measured as 2 kW ± 4% and the losses are 200 ± 5 W. The percentage limiting error of the output power is 1. 9% 2. 6.5% 3. 1.5% 4. 4.7% Option 4 : 4.7% ## Limiting Error MCQ Question 9 Detailed Solution Input = 2 kW ± 4% Error in the input $$= \frac{4}{{100}} \times 2 \times {10^3} = 80\;W$$ Losses = 200 ± 5 W Output = Input – Loses = (2000 ± 80) – (200 ± 5) = 1800 ± 85 W Limiting error in the measurement of output Losses $$= \frac{{85}}{{1800}} \times 100 = 4.72\%$$ # A 200 milli-Ampere meter has accuracy of ± 0.5 percent. Its accuracy while reading 100 milli-Ampere will be 1. ± 2.5 percent 2. ± 5 percent 3. ± 7.5 percent 4. ± 1 percent ## Answer (Detailed Solution Below) Option 4 : ± 1 percent ## Limiting Error MCQ Question 10 Detailed Solution Concept: Limiting Error: The maximum allowable error in the measurement is specified in terms of true value, which is known as limiting error. It will give a range of errors. It is always with respect to the true value, so it is a variable error. 1) If two quantities are getting added, then their limiting errors also gets added. 2) If two quantities are getting multiplied or divided, then their percentage limiting errors get added. Calculation: The ammeter reads 0A to 200 mA GAE of Ammeter = ± 0.5 % Error in voltmeter measurement on FSD = 0.005 × 200 = 1 A Percentage error at 100 mA Limiting error $$= \frac{{1}}{100} \times 100 = 1.0\%$$ Therefore the percentage limiting error at 100 mA is ± 1 percent Note: Absolute Error: The deviation of the measured value from the true value (or) actual value is called error. Absolute error (E) = Am – At Am = Measured value At = True value Relative Static Error: The ratio of absolute error to the true value is called relative static error. $$R.S.E = \frac{{\left| {{A_m} - {A_t}} \right|}}{{{A_t}}} \times 100$$ Guaranteed Accuracy Error: The allowable error in measurement is specified in terms of full-scale value is known as a guaranteed accuracy error. It is a constant error seen by the instrument since it is with respect to full-scale value. # The unknown resistance R4 measured in a Wheatstone bridge by the formula $${R_4} = {\frac{{{R_2}{R_3}}}{{{R_1}}}_{}}$$  withR1 = 100 ± 0.5% Ω,R2 = 1000 ± 0.5% Ω,R3 = 842 ± 0.5% Ωresulting in R4 1. 8420 ± 0.5% Ω 2. 8420 ± 1.0% Ω 3. 8420 ± 1.5% Ω 4. 
8420 ± 0.125% Ω ## Answer (Detailed Solution Below) Option 3 : 8420 ± 1.5% Ω ## Limiting Error MCQ Question 11 Detailed Solution $${R_4} = {\frac{{{R_2}{R_3}}}{{{R_1}}}_{}}$$ R1 = 100 ± 0.5% Ω, R2 = 1000 ± 0.5% Ω, R3 = 842 ± 0.5% Ω Value of R4 $${R_4} = {\frac{{{R_2}{R_3}}}{{{R_1}}}_{}}$$ $${R_4} = {\frac{{{1000}\times{842}}}{{{100}}}_{}}$$ R4 = 8420 Ω Percentage limiting error: $$\frac{{\partial {R_4}}}{{{R_4}}} =± \left( {\frac{{\partial {R_2}}}{{{R_2}}} + \frac{{\partial {R_3}}}{{{R_3}}} + \frac{{\partial {R_1}}}{{{R_1}}}} \right)$$ $$\frac{{\partial {R_4}}}{{{R_4}}} =±(0.5+0.5+0.5)$$ = ± 1.5% R4 = 8420 ± 1.5% Ω # In a parallel circuit having two branches, the current in one branch is I1 = 100 ± 2 A and in the other is I2 = 200 ± 5 A. Considering errors in both I1 and I2 as limiting errors, the total current will be 1. 300 ± 5 A 2. 300 ± 6 A 3. 300 ± 7 A 4. 300 ± 8 A ## Answer (Detailed Solution Below) Option 3 : 300 ± 7 A ## Limiting Error MCQ Question 12 Detailed Solution Concept: Limiting error: • The maximum allowable error in the measurement is specified in terms of true value, is known as limiting error. • It will give a range of errors. It is always with respect to the true value, so it is a variable error. • If two quantities are getting added, then their limiting errors also gets added. • If two quantities are getting multiplied or divided, then their percentage limiting errors gets added. Calculation: Given that, I1 = 100 ± 2 A, I2 = 200 ± 5 A As the above currents are passes through two parallel branches, the total current will be I = I1 + I2 The total current, I = 100 + 200 = 300 A As both the quantities are getting added, their limiting errors are getting added. Now, the limiting error of total current is = ± 7 A Therefore, total current I = 300 ± 7 A # An 820 Ω resistance with an accuracy of ± 10% carries a current of 10 mA. The current was measured by an analog meter of 25 mA range with an accuracy of ± 2% of full scale. The accuracy in the measurement of power dissipated in the resistor is 1. 5% 2. 20% 3. 10% 4. 15% Option 2 : 20% ## Limiting Error MCQ Question 13 Detailed Solution Given that, R = 820 ± 10% Ω I = 10 mA Full scale reading of ammeter = 25 mA ± 2% error $$= \pm \frac{2}{{100}} \times 25 = \pm 0.5mA$$ Power dissipated in the resistor, P = I2R By applying logarithmic on both sides, ⇒ ln P = ln I2R ⇒ ln P = 2 ln I + ln R Differentiate w.r.t. ‘P’, $$\frac{1}{P} = \frac{2}{I}\frac{{dI}}{{dP}} + \frac{1}{R}\frac{{dR}}{{dP}}$$ $$\frac{{{\rm{\Delta }}P}}{P} \times 100 = \frac{{2{\rm{\Delta }}I}}{I} \times 100 + \frac{{{\rm{\Delta }}R}}{R} \times 100$$ = 2 × (± 5%) + 10% = 20% # A liquid flows through a pipe of 100 mm diameter at a velocity of 1 m/s. If the diameter is guaranteed within ±1% and the velocity is known to be within ±3% of measured value, the limiting error for the rate of flow is 1. ±1% 2. ±2% 3. ±3% 4. ±5% Option 4 : ±5% ## Limiting Error MCQ Question 14 Detailed Solution Concept: Limiting error: • The maximum allowable error in the measurement is specified in terms of true value, is known as limiting error. • It will give a range of errors. It is always with respect to the true value, so it is a variable error. • If two quantities are getting added, then their limiting errors also gets added. • If two quantities are getting multiplied or divided, then their percentage limiting errors gets added. 
Calculation: Rate of flow of a liquid = Volume/time = Area × velocity $$Q = \frac{{\pi {d^2}}}{4}V$$ Where d is the diameter and V is the velocity Given that, limiting error of diameter on full scale = ±1% The limiting error of velocity = ±3% As d and v are getting multiplied, their limiting errors will get added. Since there is a d2 term, we need to consider the error for d two times. Now, the limiting error in the measurement of flow = 2(±1) + (±3) = ±5%
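The worst-case (limiting) error rule used throughout these solutions — percentage errors add for products, quotients and powers, each weighted by the magnitude of its exponent — is easy to script. The short Python sketch below is my own illustration, not part of the original solutions; it reproduces the ±4% power-factor result of Question 2 and the ±5% flow-rate result of Question 14.

```python
# Worst-case limiting error for a quantity of the form y = x1^e1 * x2^e2 * ...
# Percentage errors add, each weighted by the magnitude of its exponent.

def limiting_error_percent(terms):
    """terms: list of (percent_error, exponent) pairs for each measured quantity."""
    return sum(abs(e) * pct for pct, e in terms)

# Question 2: cos(phi) = W / (V * I)  ->  1*2% + 1*1% + 1*1% = 4%
print(limiting_error_percent([(2, 1), (1, -1), (1, -1)]))   # 4

# Question 14: Q = (pi/4) * d^2 * v  ->  2*1% + 1*3% = 5%
print(limiting_error_percent([(1, 2), (3, 1)]))             # 5
```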
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8366797566413879, "perplexity": 3417.507544037281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585424.97/warc/CC-MAIN-20211021133500-20211021163500-00010.warc.gz"}
https://calculus.subwiki.org/wiki/Differentiation_rule_for_power_functions
# Differentiation rule for power functions

## Statement

We have the following differentiation rule: $\! \frac{d}{dx}(x^r) = rx^{r-1}$ where $r$ is a constant. Some notes on the validity:

| Case on $r$ | Values of real $x$ for which this makes sense |
| --- | --- |
| $r = 0$ | All nonzero $x$. Also makes sense at $x = 0$ if we interpret $x^0$ as the constant function 1 (its limit at 0), so that the left side is 0, and interpret the right side as 0. |
| $r$ a rational number with odd denominator and greater than or equal to 1 | All $x$ |
| $r$ a real number greater than 1 that is not rational with odd denominator | All $x > 0$. One-sided derivative makes sense at 0. |
| $r$ a rational number with odd denominator and between 0 and 1 | All $x \ne 0$. At 0, we have a vertical tangent or vertical cusp depending on the numerator of the rational number. |
| $r$ a real number between 0 and 1 that is not rational with odd denominator | All $x > 0$. One-sided vertical tangent at 0. |
| $r$ a rational number with odd denominator and less than 0 | All $x \ne 0$. At 0, we have a vertical asymptote. |
| $r$ a real number less than 0 that is not rational with odd denominator | All $x > 0$. One-sided vertical asymptote at 0. |
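The rule is easy to sanity-check numerically. The following Python sketch (an illustration, not part of the original article) compares a central finite-difference estimate of the derivative of $x^r$ against $rx^{r-1}$ for a few exponents at points $x > 0$, where the formula is valid for every real $r$.

```python
# Numerical sanity check of d/dx (x^r) = r * x^(r-1) for x > 0.
# Central differences converge at O(h^2), so the two columns should agree closely.

def power_rule(x, r):
    """Right-hand side of the differentiation rule."""
    return r * x ** (r - 1)

def central_difference(f, x, h=1e-6):
    """Symmetric finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

for r in [0, 1, 2, 0.5, -1, 1 / 3, 3.7]:
    for x in [0.5, 1.0, 2.0]:
        numeric = central_difference(lambda t: t ** r, x)
        exact = power_rule(x, r)
        print(f"r={r}, x={x}: finite diff = {numeric:.6f}, r*x^(r-1) = {exact:.6f}")
```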
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9781307578086853, "perplexity": 231.86609667874467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039490226.78/warc/CC-MAIN-20210420183658-20210420213658-00356.warc.gz"}
https://dictionnaire.sensagent.leparisien.fr/Natural%20number/en-en/
# Definition - natural number

natural number (n.) 1. the number 1 and any other number obtained by adding 1 to it repeatedly

# Synonyms - natural number

natural number (n.) counting number

# Natural number

Natural numbers can be used for counting (one apple, two apples, three apples, ...). In mathematics, the natural numbers are the ordinary whole numbers used for counting ("there are 6 coins on the table") and ordering ("this is the 3rd largest city in the country"). These purposes are related to the linguistic notions of cardinal and ordinal numbers, respectively (see English numerals). A later notion is that of a nominal number, which is used only for naming. Properties of the natural numbers related to divisibility, such as the distribution of prime numbers, are studied in number theory. Problems concerning counting and ordering, such as partition enumeration, are studied in combinatorics. There is no universal agreement about whether to include zero in the set of natural numbers: some define the natural numbers to be the positive integers {1, 2, 3, ...}, while for others the term designates the non-negative integers {0, 1, 2, 3, ...}. The former definition is the traditional one, with the latter definition first appearing in the 19th century. Some authors use the term "natural number" to exclude zero and "whole number" to include it; others use "whole number" in a way that excludes zero, or in a way that includes both zero and the negative integers.

## History of natural numbers and the status of zero

The natural numbers had their origins in the words used to count things, beginning with the number 1. The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all the powers of 10 up to over one million. A stone carving from Karnak, dating from around 1500 BC and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones; and similarly for the number 4,622. The Babylonians had a place-value system based essentially on the numerals for 1 and 10. A much later advance was the development of the idea that zero can be considered as a number, with its own numeral. The use of a zero digit in place-value notation (within other numbers) dates back as early as 700 BC by the Babylonians, but they omitted such a digit when it would have been the last symbol in the number.[1] The Olmec and Maya civilizations used zero as a separate number as early as the 1st century BC, but this usage did not spread beyond Mesoamerica. The use of a numeral zero in modern times originated with the Indian mathematician Brahmagupta in 628.
However, zero had been used as a number in the medieval computus (the calculation of the date of Easter), beginning with Dionysius Exiguus in 525, without being denoted by a numeral (standard Roman numerals do not have a symbol for zero); instead nulla or nullae, genitive of nullus, the Latin word for "none", was employed to denote a zero value.[2] The first systematic study of numbers as abstractions (that is, as abstract entities) is usually credited to the Greek philosophers Pythagoras and Archimedes. Note that many Greek mathematicians did not consider 1 to be "a number", so to them 2 was the smallest number.[3] Independent studies also occurred at around the same time in India, China, and Mesoamerica.[citation needed] Several set-theoretical definitions of natural numbers were developed in the 19th century. With these definitions it was convenient to include 0 (corresponding to the empty set) as a natural number. Including 0 is now the common convention among set theorists, logicians, and computer scientists. Many other mathematicians also include 0, although some have kept the older tradition and take 1 to be the first natural number.[4] Sometimes the set of natural numbers with 0 included is called the set of whole numbers or counting numbers. On the other hand, integer being Latin for whole, the integers usually stand for the negative and positive whole numbers (and zero) altogether. ## Notation Mathematicians use N or $\mathbb{N}$ (an N in blackboard bold, displayed as in Unicode) to refer to the set of all natural numbers. This set is countably infinite: it is infinite but countable by definition. This is also expressed by saying that the cardinal number of the set is aleph-null $(\aleph_0)$. Typically, if a mathematician uses $\mathbb{N}$ for the set $\{ 1, 2, 3, \ldots \}$ and he needs in the same scientific context this set including $0$, then he mostly writes $\mathbb{N}_0$ for the latter. On the other hand, if he uses $\mathbb{N}$ for the set $\{ 0, 1, 2, \ldots \}$ and he needs in the same scientific context this set excluding $0$, then he mostly writes $\mathbb{N}^+$ or $\mathbb{N}^*$ for the latter. To be unambiguous about whether zero is included or not, sometimes an index (or superscript) "0" is added in the former case, and a superscript "$*$" or subscript "$1$" is added in the latter case: $\mathbb{N}^0 = \mathbb{N}_0 = \{ 0, 1, 2, \ldots \}$ $\mathbb{N}^* = \mathbb{N}^+ = \mathbb{N}_1 = \mathbb{N}_{>0}= \{ 1, 2, \ldots \}.$ Some authors who exclude zero from the naturals use the terms natural numbers with zero, whole numbers, or counting numbers, denoted W, for the set of nonnegative integers. Others use the notation P for the positive integers if there is no danger of confusing this with the prime numbers. In that case, a popular notation is to use a script P for positive integers (which extends to using script N for negative integers, and script Z for zero). Set theorists often denote the set of all natural numbers including zero by a lower-case Greek letter omega: ω. This stems from the identification of an ordinal number with the set of ordinals that are smaller. One may observe that adopting the von Neumann definition of ordinals and defining cardinal numbers as minimal ordinals among those with same cardinality, one gets $\,\mathbb N_0=\aleph_0=\omega$. Lowercase omega ω is also similar to W. 
## Algebraic properties

The addition (+) and multiplication (×) operations on natural numbers have several algebraic properties: • Closure under addition and multiplication: for all natural numbers a and b, both a + b and a × b are natural numbers. • Associativity: for all natural numbers a, b, and c, a + (b + c) = (a + b) + c and a × (b × c) = (a × b) × c. • Commutativity: for all natural numbers a and b, a + b = b + a and a × b = b × a. • Existence of identity elements: for every natural number a, a + 0 = a and a × 1 = a. • Distributivity of multiplication over addition: for all natural numbers a, b, and c, a × (b + c) = (a × b) + (a × c) • No zero divisors: if a and b are natural numbers such that a × b = 0, then a = 0 or b = 0

## Properties

One can recursively define an addition on the natural numbers by setting a + 0 = a and a + S(b) = S(a + b) for all a, b. Here S should be read as "successor". This turns the natural numbers (N, +) into a commutative monoid with identity element 0, the so-called free monoid with one generator. This monoid satisfies the cancellation property and can be embedded in a group. The smallest group containing the natural numbers is the integers. If we define 1 := S(0), then b + 1 = b + S(0) = S(b + 0) = S(b). That is, b + 1 is simply the successor of b. Analogously, given that addition has been defined, a multiplication × can be defined via a × 0 = 0 and a × S(b) = (a × b) + a. This turns (N*, ×) into a free commutative monoid with identity element 1; a generator set for this monoid is the set of prime numbers. Addition and multiplication are compatible, which is expressed in the distributive law: a × (b + c) = (a × b) + (a × c). These properties of addition and multiplication make the natural numbers an instance of a commutative semiring. Semirings are an algebraic generalization of the natural numbers where multiplication is not necessarily commutative. The lack of additive inverses, which is equivalent to the fact that N is not closed under subtraction, means that N is not a ring; instead it is a semiring (also known as a rig). If we interpret the natural numbers as "excluding 0", and "starting at 1", the definitions of + and × are as above, except that we start with a + 1 = S(a) and a × 1 = a. For the remainder of the article, we write ab to indicate the product a × b, and we also assume the standard order of operations. Furthermore, one defines a total order on the natural numbers by writing a ≤ b if and only if there exists another natural number c with a + c = b. This order is compatible with the arithmetical operations in the following sense: if a, b and c are natural numbers and a ≤ b, then a + c ≤ b + c and ac ≤ bc. An important property of the natural numbers is that they are well-ordered: every non-empty set of natural numbers has a least element. The rank among well-ordered sets is expressed by an ordinal number; for the natural numbers this is expressed as "ω". While it is in general not possible to divide one natural number by another and get a natural number as result, the procedure of division with remainder is available as a substitute: for any two natural numbers a and b with b ≠ 0 we can find natural numbers q and r such that a = bq + r and r < b. The number q is called the quotient and r is called the remainder of division of a by b. The numbers q and r are uniquely determined by a and b.
This, the Division algorithm, is key to several other properties (divisibility), algorithms (such as the Euclidean algorithm), and ideas in number theory.

## Generalizations

Two generalizations of natural numbers arise from the two uses: • A natural number can be used to express the size of a finite set; more generally a cardinal number is a measure for the size of a set also suitable for infinite sets; this refers to a concept of "size" such that if there is a bijection between two sets they have the same size. The set of natural numbers itself and any other countably infinite set has cardinality aleph-null ($\aleph_0$). • Linguistic ordinal numbers "first", "second", "third" can be assigned to the elements of a totally ordered finite set, and also to the elements of well-ordered countably infinite sets like the set of natural numbers itself. This can be generalized to ordinal numbers which describe the position of an element in a well-ordered set in general. An ordinal number is also used to describe the "size" of a well-ordered set, in a sense different from cardinality: if there is an order isomorphism between two well-ordered sets they have the same ordinal number. The first ordinal number that is not a natural number is expressed as $\omega$; this is also the ordinal number of the set of natural numbers itself. Many well-ordered sets with cardinal number $\aleph_0$ have an ordinal number greater than ω (the latter is the lowest possible). The least ordinal of cardinality $\aleph_0$ (i.e., the initial ordinal) is $\omega$. For finite well-ordered sets, there is one-to-one correspondence between ordinal and cardinal numbers; therefore they can both be expressed by the same natural number, the number of elements of the set. This number can also be used to describe the position of an element in a larger finite, or an infinite, sequence. Hypernatural numbers are part of a non-standard model of arithmetic due to Skolem. Other generalizations are discussed in the article on numbers.

## Formal definitions

Historically, the precise mathematical definition of the natural numbers developed with some difficulty. The Peano axioms state conditions that any successful definition must satisfy. Certain constructions show that, given set theory, models of the Peano postulates must exist.

### Peano axioms

The Peano axioms give a formal theory of the natural numbers. The axioms are: • There is a natural number 0. • Every natural number a has a natural number successor, denoted by S(a). Intuitively, S(a) is a+1. • There is no natural number whose successor is 0. • S is injective, i.e. distinct natural numbers have distinct successors: if a ≠ b, then S(a) ≠ S(b). • If a property is possessed by 0 and also by the successor of every natural number which possesses it, then it is possessed by all natural numbers. (This postulate ensures that the proof technique of mathematical induction is valid.) It should be noted that the "0" in the above definition need not correspond to what we normally consider to be the number zero. "0" simply means some object that, when combined with an appropriate successor function, satisfies the Peano axioms. All systems that satisfy these axioms are isomorphic; the name "0" is used here for the first element (the term "zeroth element" has been suggested to leave "first element" to "1", "second element" to "2", etc.), which is the only element that is not a successor.
For example, the natural numbers starting with one also satisfy the axioms, if the symbol 0 is interpreted as the natural number 1, the symbol S(0) as the number 2, etc. In fact, in Peano's original formulation, the first natural number was 1.

### Constructions based on set theory

#### A standard construction

A standard construction in set theory, a special case of the von Neumann ordinal construction, is to define the natural numbers as follows: We set 0 := { }, the empty set, and define S(a) = a ∪ {a} for every set a. S(a) is the successor of a, and S is called the successor function. By the axiom of infinity, the set of all natural numbers exists and is the intersection of all sets containing 0 which are closed under this successor function. This then satisfies the Peano axioms. Each natural number is then equal to the set of all natural numbers less than it, so that • 0 = { } • 1 = {0} = {{ }} • 2 = {0, 1} = {0, {0}} = { { }, {{ }} } • 3 = {0, 1, 2} = {0, {0}, {0, {0}}} = { { }, {{ }}, {{ }, {{ }}} } • n = {0, 1, 2, ..., n-2, n-1} = {0, 1, 2, ..., n-2} ∪ {n-1} = {n-1} ∪ (n-1) = S(n-1) and so on. When a natural number is used as a set, this is typically what is meant. Under this definition, there are exactly n elements (in the naïve sense) in the set n, and n ≤ m (in the naïve sense) if and only if n is a subset of m. Also, with this definition, different possible interpretations of notations like R^n (n-tuples versus mappings of n into R) coincide. Even if the axiom of infinity fails and the set of all natural numbers does not exist, it is possible to define what it means to be one of these sets. A set n is a natural number means that it is either 0 (empty) or a successor, and each of its elements is either 0 or the successor of another of its elements.

#### Other constructions

Although the standard construction is useful, it is not the only possible construction. For example: one could define 0 = { } and S(a) = {a}, producing • 0 = { } • 1 = {0} = {{ }} • 2 = {1} = {{{ }}}, etc. Each natural number is then equal to the set of the natural number preceding it. Or we could even define 0 = {{ }} and S(a) = a ∪ {a} producing • 0 = {{ }} • 1 = {{ }, 0} = {{ }, {{ }}} • 2 = {{ }, 0, 1}, etc. The oldest and most "classical" set-theoretic definition of the natural numbers is the definition commonly ascribed to Frege and Russell under which each concrete natural number n is defined as the set of all sets with n elements.[5][6] This may appear circular, but can be made rigorous with care. Define 0 as {{ }} (clearly the set of all sets with 0 elements) and define S(A) (for any set A) as {x ∪ {y} | x ∈ A and y ∉ x} (see set-builder notation). Then 0 will be the set of all sets with 0 elements, 1 = S(0) will be the set of all sets with 1 element, 2 = S(1) will be the set of all sets with 2 elements, and so forth. The set of all natural numbers can be defined as the intersection of all sets containing 0 as an element and closed under S (that is, if the set contains an element n, it also contains S(n)). One could also define "finite" independently of the notion of "natural number", and then define natural numbers as equivalence classes of finite sets under the equivalence relation of equipollence. This definition does not work in the usual systems of axiomatic set theory because the collections involved are too large (it will not work in any set theory with the axiom of separation); but it does work in New Foundations (and in related systems known to be relatively consistent) and in some systems of type theory.
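As an illustration of the two ideas above — the successor-based arithmetic from the Properties section and the von Neumann set construction — here is a small Python sketch. It is not part of the original article, and the function names are mine; it models the first few von Neumann naturals as frozensets and defines addition and multiplication by recursion on the successor.

```python
# Von Neumann naturals: 0 = {} and S(a) = a ∪ {a}, modelled with frozensets.
def successor(a):
    return a | frozenset({a})

ZERO = frozenset()
ONE = successor(ZERO)     # {0}
TWO = successor(ONE)      # {0, 1}
THREE = successor(TWO)    # {0, 1, 2}

# Under this encoding, n has exactly n elements and n <= m iff n is a subset of m.
assert len(THREE) == 3
assert TWO < THREE        # proper-subset comparison on frozensets

# Successor-style arithmetic on ordinary ints, mirroring
# a + 0 = a, a + S(b) = S(a + b) and a × 0 = 0, a × S(b) = (a × b) + a.
def add(a, b):
    return a if b == 0 else add(a, b - 1) + 1

def mul(a, b):
    return 0 if b == 0 else add(mul(a, b - 1), a)

assert add(2, 3) == 5 and mul(2, 3) == 6
```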
## Notes 1. ^ "... a tablet found at Kish ... thought to date from around 700 BC, uses three hooks to denote an empty place in the positional notation. Other tablets dated from around the same time use a single hook for an empty place." 2. ^ Michael L. Gorodetsky (2003-08-25). "Cyclus Decemnovennalis Dionysii - Nineteen year cycle of Dionysius". Hbar.phys.msu.ru. Retrieved 2012-02-13. 3. ^ This convention is used, for example, in Euclid's Elements, see Book VII, definitions 1 and 2. 4. ^ This is common in texts about Real analysis. See, for example, Carothers (2000) p.3 or Thomson, Bruckner and Bruckner (2000), p.2. 5. ^ Die Grundlagen der Arithmetik: eine logisch-mathematische Untersuchung über den Begriff der Zahl (1884). Breslau. 6. ^ Whitehead, Alfred North, and Bertrand Russell. Principia Mathematica, 3 vols, Cambridge University Press, 1910, 1912, and 1913. Second edition, 1925 (Vol. 1), 1927 (Vols 2, 3). Abridged as Principia Mathematica to *56, Cambridge University Press, 1962.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 30, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8700597286224365, "perplexity": 826.1034680903484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571150.88/warc/CC-MAIN-20220810070501-20220810100501-00562.warc.gz"}
http://crypto.stanford.edu/seclab/sem-13-14/boyle.html
On Extractability (aka Differing-Inputs) Obfuscation Elette Boyle Abstract: Program obfuscation serves to "scramble" a computer program, hiding its implementation details while preserving its functionality. Unfortunately, the "dream" notion of security, guaranteeing that obfuscated code does not reveal information beyond black-box access to the original program, has historically run into strong impossibility results, and is known to be impossible to achieve for general programs. Recently, the first plausible candidate of general-purpose obfuscation was presented (Garg et al FOCS 2013) for a relaxed notion of security, known as *indistinguishability* obfuscation, which requires only that obfuscations of functionally equivalent programs are indistinguishable. We initiate the study of the stronger notion of *extractability* or "differing-inputs" obfuscation: An extractability obfuscator for a class of algorithms M guarantees that if an efficient attacker A can distinguish between obfuscations of two algorithms M_1, M_2 in M, then A can efficiently recover (given M_1 and M_2) an input on which M_1 and M_2 provide different outputs (Barak et al JACM 2012). We demonstrate that extractability obfuscation provides several new applications, including obfuscation of Turing machines, indistinguishability-secure functional encryption for an unbounded number of key queries and unbounded message spaces, and a new notion of *functional witness encryption*. We also explore the relation between extractability obfuscation and other cryptographic notions. We show that in special cases, extractability obfuscation is in fact implied by indistinguishability obfuscation. On the other hand, we demonstrate a standoff between certain cryptographic extractability assumptions when security is required against even *fixed distributions* of auxiliary input. In particular, we demonstrate efficiently computable distributions of auxiliary input Z, Z' such that (assuming collision-resistant hash functions), either (1) extractability obfuscation for NC^1 w.r.t. Z does not exist, or (2) Succinct Non-Interactive Arguments of Knowledge (SNARKs) w.r.t. Z' do not exist. Based on joint works with Kai-Min Chung and Rafael Pass. Time and Place Tuesday, February 18, 4:15pm Gates 463A
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8611531257629395, "perplexity": 3693.7776681592836}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246636255.43/warc/CC-MAIN-20150417045716-00301-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/electric-field-to-find-mass.370707/
# Homework Help: Electric Field to find Mass 1. Jan 18, 2010 ### krony23 1. The problem statement, all variables and given/known data In a laboratory, a particle of charge -6C is held stationary because it is placed in an electric field E=(0,0,-15)N/C which suspends it against gravity. What is the mass of the particle? Give answer in kg. 2. Relevant equations I THINK E= F/q F= ma 3. The attempt at a solution Ok, so I haven't actually had a chance to attempt this yet because A) I don't understand the notation of E in this equation, so I don't know if it is acceptable to just use -15 for E, and B) if I even have the right equations to use, I don't know what the acceleration is. Please help 2. Jan 18, 2010 ### rock.freak667 It suspends it against gravity, so what is the relationship between the electric force and the mass? 3. Jan 18, 2010 ### krony23 I don't really get what that means about it though. 4. Jan 19, 2010 ### rock.freak667 The only forces acting on the mass are its weight and the electric force. If it is being suspended, what is the resultant force on it?
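For readers following the hints above: the balance condition the mentor is pointing to is that the electric force qE must cancel the weight mg. The short Python sketch below is my own illustration of that arithmetic, not part of the thread; it assumes g = 9.8 m/s², which the problem never states.

```python
# Suspended charge: the electric force balances gravity, so |q * E_z| = m * g.
q = -6.0          # charge in coulombs
E_z = -15.0       # z-component of the field in N/C
g = 9.8           # assumed gravitational acceleration in m/s^2

F_electric_z = q * E_z          # (-6)(-15) = +90 N, pointing upward
mass = F_electric_z / g         # m = |q E| / g
print(f"Electric force: {F_electric_z} N upward, mass ≈ {mass:.2f} kg")
```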
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9344839453697205, "perplexity": 740.5854387390609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863980.55/warc/CC-MAIN-20180621001211-20180621021211-00600.warc.gz"}
http://mathoverflow.net/questions/150963/how-do-ideal-sheaves-behave-on-the-special-fibers-of-the-projective-line-over-th
# How do ideal sheaves behave on the special fibers of the projective line over the integers? Let $X=\mathbb{P}^1_{\mathbb{Z}}$ and $Y\subset X$ be a local complete intersection of codimension two with Ideal sheaf $I_Y$. (I'm mostly interested in the case where $Y$ is a single point $x$ lying over a prime $p$, i.e. $f(x)=p$, where $f: X \rightarrow Spec(\mathbb{Z})$, or a finite set of points, each lying over a different prime). My question is: how does the restriction of $I_Y$ to a special fiber $\mathbb{P}^1_{\mathbb{F}_q}$ look like (for some prime q)? That is: can we describe the sheaf $I_{Y{|\mathbb{P}^1_{\mathbb{F}_q}}}=j^{*}I_Y$, where $j: \mathbb{P}^1_{\mathbb{F}_q} \rightarrow \mathbb{P}^1_{\mathbb{Z}}$? If $Y$ is a point $x$ lying over $p$, then we have: $0\rightarrow I_Y \rightarrow \mathcal{O}_X \rightarrow k(x) \rightarrow 0$, let us assume $k(x)=\mathbb{F}_p$. We have to look at $0\rightarrow \mathcal{T}or^1_\mathbb{Z}(k(x),\mathbb{F}_q)\rightarrow I_Y\otimes_{\mathbb{Z}}\mathbb{F}_q \rightarrow \mathcal{O}_X\otimes_{\mathbb{Z}}\mathbb{F}_q \rightarrow k(x)\otimes_{\mathbb{Z}}\mathbb{F}_q \rightarrow 0$. Now if $p\neq q$ this should give $I_Y\otimes_{\mathbb{Z}}\mathbb{F}_q \cong \mathcal{O}_X\otimes_{\mathbb{Z}}\mathbb{F}_q=\mathcal{O}_{\mathbb{P}^1_{\mathbb{F}_q}}$. But i'm not able to see what happens for $q=p$. Do we possibly get the ideal sheaf of $x$ in $\mathbb{P}^1_{\mathbb{F}_p}$? Or something more complicated? Background: Assume $E$ is a locally free sheaf of rank $2$ on $X$ sitting in an exact sequence $0\rightarrow \mathcal{O}_X(d)\rightarrow E \rightarrow I_Y \rightarrow 0$. Now i'm trying to understand the splitting types of $E$ on the special fibers $\mathbb{P}^1_{\mathbb{F}_p}$. Or is it easier to compute $0\rightarrow \mathcal{T}or^1_\mathbb{Z}(I_Y,\mathbb{F}_q)\rightarrow \mathcal{O}_X(d)\otimes_{\mathbb{Z}}\mathbb{F}_q \rightarrow E\otimes_{\mathbb{Z}}\mathbb{F}_q \rightarrow I_Y\otimes_{\mathbb{Z}}\mathbb{F}_q \rightarrow 0$? Edit: Using Will's answer we see that we can write $I_Y\otimes \mathbb{F}_q\rightarrow O_{\mathbb{P}^1_{\mathbb{F}_q}}(-1)\rightarrow 0$. This shows that we have: $E\otimes \mathbb{F}_q \rightarrow O_{\mathbb{P}^1_{\mathbb{F}_q}}(-1)\rightarrow 0$. Using that the degree is constant on $Spec(\mathbb{Z})$ we can compute that we have in fact $0\rightarrow O_{\mathbb{P}^1_{\mathbb{F}_q}}(d+1)\rightarrow E\otimes \mathbb{F}_q \rightarrow O_{\mathbb{P}^1_{\mathbb{F}_q}}(-1)\rightarrow 0$. Now we just have to compute some $Ext$ to see for which $d$ this already splits. - If you consider the ideal $I_y=(x,p)$ and tensor with $\mathbb Z/p$, you get the module $(x,p)/( xp, p^2)$. On the other hand, the ideal of the point $x$ on $\mathbb A^1_{\mathbb F_p}$ is $x/(xp)$. The difference betweeen the two modules is that the first one contains $p/p^2$, where the second one doesn't. We can see that with your exact sequence, specialized to $k(x)=\mathbb F_p$ and $\mathbb F_q=\mathbb F_p$: $0 \to \mathcal Tor^1(\mathbb F_p, \mathbb F_p) \to I_Y \otimes_{\mathbb Z} \to \mathcal O_{\mathbb P^1_{\mathbb F_P} } \to \mathbb F_p \otimes_Z \mathbb F_p$ We have $\mathbb F_P \otimes_Z \mathbb F_p = \mathcal Tor^1(\mathbb F_p, \mathbb F_p) = \mathbb F_p$, so we have the exact sequence: $0 \to \mathbb F_p \to I_Y \otimes_{\mathbb Z} \to \mathcal O_{\mathbb P^1_{\mathbb F_P} } \to \mathbb F_p$ The kernel of the third morphism is exactly the ideal sheaf of $x$ in $\mathbb P^1_{\mathbb F_p}$. 
So we see that the tensor product is the extension of this by an additional copy of $\mathbb F_p$. Note that this is not an ideal sheaf. The case of a different ideal is directly analogous. - Fantastic. In the other case we can write $O_Y=\oplus F_{p_i}$ for different primes $p_i$ and the same computation works. So we have an exact sequence $F_p\rightarrow I_Y\otimes F_p \rightarrow O(-1)$. Is it possible to choose $Y$ such that we get an exact sequence: $Tor(O_Y,F_{p_i})\rightarrow I_Y\otimes F_{p_i}\rightarrow O(-n_i)$ for finitely many primes $p_i$ with numbers $n_i$?. – DonD Dec 6 '13 at 18:47 Yes. You just need to choose an ideal of degree $n_i$ in $\mathbb A^1_{\mathbb F_{p_i}}$. Then choose the ideal of elements that are in the $i$th ideal mod $p_i$. By the Chinese remainder theorem, adding conditions at $p_j$ for $j\neq i$ will not change the ideal modulo $p_i$, which was constructed to be $O(-n_i)$. – Will Sawin Dec 7 '13 at 5:02 Very good. I accepted your answer, but unfortunately i don't have enough reputation to give a "+1". But this answer was really helpful. – DonD Dec 7 '13 at 12:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9782295227050781, "perplexity": 98.07530652726587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408828.55/warc/CC-MAIN-20160624155008-00201-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/understanding-time-dilation.820439/
# Understanding time dilation 1. Jun 23, 2015 ### kweagle Hi Everyone, first I want to say I have no formal education or background on these topics, but find them very interesting and research and learn as much as I can on my own. With that in mind, I am hoping some of you will have the patience to explain what I don't seem to understand. What I am curious most about it 'what is time' from a scientific standpoint. I can't seem to find a definitive answer to this, which maybe there is not one. It would however help me in understanding the next part of my question. I am thinking of this in term of GPS positioning satellites. I understand and accept that these satellites experience a time dilation relative to the earths surface due to the speed they travel and the gravitational difference. I understand that they use atomic clocks which are the most accurate way we have of measuring time. What I don't understand is how do we know that speed and gravity are actually affecting 'time itself', and not just affecting the atomic vibrations we are measuring instead? 2. Jun 23, 2015 ### rootone Time dilation was predicted by special relativity, and later proved to be real. As for what time actually is, Einstien said that it's the thing that gets measured by clocks. 3. Jun 23, 2015 ### Bandersnatch Hi, kweagle. Welcome to PF! The usual definition of time used in physics is 'time is what clocks measure'. It is the definition Einstein adopted when developing his special and general theories of relativity. So there's no difference between saying that 'time slows down' and 'all physical processes slow down', since a clock is just some physical process (usually a multitude of those). We could posit a hypothesis that it's not time (all processes) slowing down, but just one specific type of a process (e.g., radioactive decay), but so far all observations agree with the 'all processes' assertion. The hypothesis would need to specify which kind of process is not affected, and then test it. But that would be a bit backwards, since the predictions of time dilation/contraction of SR and GR follow from the definition that treats time as encompassing all processes. 4. Jun 23, 2015 ### kweagle So with the idea that clocks are just a physical process, is correct to say there a difference between 'time' and a 'physical process'? I guess what I don't understand is if clocks are affected by speed and gravity, how can they be a reliable tool to measure time? 5. Jun 23, 2015 ### bcrowell Staff Emeritus To define time as what a clock measures is an example of a philosophy called operationalism: http://plato.stanford.edu/entries/operationalism/ But we don't have a more reliable tool. 6. Jun 23, 2015 ### rootone Because the physical processes change their (apparent) speed, in a way which is consistent with what relativity describes. 7. Jun 23, 2015 ### kweagle Hmmm... I accept GR and all that, I am not trying to dispute it, but can it not be said that time is a constant, and its the tools we use to measure time that change with speed and gravity, and therefore the time dilation we see with a clock is not necessarily a change in time itself, and only in the tools we are using to measure it? 8. Jun 23, 2015 ### bcrowell Staff Emeritus If this approach were to be useful, you would need some way to define and measure this true or undistorted time. Nobody has any way to do that. 9. Jun 23, 2015 ### kweagle Is my way of looking at it somehow flawed though? 
Is it possible that the time dilation we see is a result of the effect on the instruments we use to measure time, and not time itself? This has always been my biggest problem with understanding time dilation. 10. Jun 23, 2015 ### rootone One way of putting it is that there is no universal absolute clock which can be referred to. Relativity makes predictions (such as time dilation) which are useful in some cases (GPS), and more accurate than Newtons (equally amazing earlier) proposals, which make the assumption of 'time' as being a universal absolute constant. 11. Jun 23, 2015 ### kweagle So can the following statements all be said to be true? -Clocks measure time -The definition of time is a measurement of a physical process -All physical processes are affected by speed and gravity 12. Jun 23, 2015 ### rootone 1, Yes, 2,Yes, and 3.I don't know( but it seems to be so). 13. Jun 23, 2015 ### Mentz114 The first one is true but can be qualified to 'clocks measure their own time'. In SR the time elapsed on a clock has a clear mathematical definition and is an invariant. The nearest thing to a deifinition of time is 1. So probably 2 is redundant. Your third point is true as regards gravity. Relative velocities are important physically if two things collide or interact in some way. Otherwise relative velocity causes Doppler shifts and strange length measurements. 14. Jun 23, 2015 ### Staff: Mentor The third one is difficult because it lacks a mechanism or even a theoretical basis. According to the principle of relativity that has been a part of physics since physics was first invented by Galileo, speed is only relevant as measured between two objects. So at any time, any object can have an infinite number of different speeds depending on what frame of reference you choose to measure it in. That makes the clock rate an essentially arbitrary choice and not a single value for a particular clock. Even with gravitational time dilation, there is a difficulty in detecting an actual impact of gravity itself on the processes. That is wholly different from the way gravity affects a pendulum clock, for example. So the idea that gravity/relative speed affects time and not just individual physical processes is theoretically simpler -- the opposite view is highly problematic and has no evidence for it whatsoever. It isn't necessarily wrong, but it is scientifically inferior due to its complexity and lack of support (Occam's razor). The way you say that implies that you think that some other instrument, not yet invented, might be able to measure how time "really" works and not be influenced by gravity/speed. While not completely impossible, that would be a pretty huge coincidence that all current clocks show exactly the same "error" (if accurate enough to measure it), even though we have methods of measuring time that use vastly different operating principles. 15. Jun 24, 2015 ### harrylin All those statements can be said to be true - but the third one is not generally accepted even though it works as a model (the theoretical basis is nowadays often called "LET"). As no absolute speed can be determined (measurements of uniform speed are relative, that's relativity in a nutshell), many people hold that it doesn't exist, and of course physical processes can not be affected by the speed with respect to your freely chosen reference system! 
Debates on this forum on that rather philosophical topic have been ended with a formal interdiction, see the FAQ here, https://www.physicsforums.com/threads/what-is-the-pfs-policy-on-lorentz-ether-theory-and-block-universe.772224/ [Broken] Concerning gravity, indeed according to GR, a "clock goes therefore slower when it is placed in the neighbourhood of ponderable masses" - https://en.wikisource.org/wiki/The_...Perihelion-motion_of_the_paths_of_the_Planets. [edit: slight correction mine] Last edited by a moderator: May 7, 2017 16. Jun 24, 2015 ### Josh S Thompson This is wrong because there are different tools for measuring time and they all yield the same results for time dilation. There's no way, I'm sorry, that it's just our tools and not varying time. You're totally disputing GR; you're saying Einstein was wrong and time is constant. Why would gravity make a clock go slower? I could use a sand clock and gravity would make time go faster. Think of time as changes, and we are in a high-entropy universe so everything is changing. That's the best way to think of time: as a rate of change. 17. Jun 25, 2015 ### Stephanus Hi Kweagle, welcome to PF forum. I'm no physicist either, much less a scientist. But let me try to help you with a layman point of view. Yes. But, Grandfather clocks/pendulums measure time? Yes, but it depends on gravitation Digital clocks measure time? Yes, it doesn't depend on gravity but it depends on the power of its battery Bacteria fermentation measures time? Yes, it does not depend on electric force but it depends if you put it in a refrigerator or not. Atomic clocks measure time? Yes, but a moving atomic clock runs faster than a stationary atomic clock. SR theory. So I think the standard clock is an atomic clock without acceleration force applied to it. It can move, as long as it does not experience acceleration. I'm at a loss here also. What defines a standard clock? The vibration of the caesium atom, or the time it takes for light to travel 1 metre? Or something else? Yes. 1 second is the time it takes for the second hand of an analog clock to rotate 6° clockwise (of course) caveat: If somehow the clock axis has rust dragging the second hand, the time is slow according to the clock. 1 second is the time it takes for the temperature of 1 cc of water to be raised 1° Celsius when heated at 4.2 watts. 1 second is the time it takes for caesium to vibrate 9,192,631,770 times. https://en.wikipedia.org/wiki/Caesium_standard The definition of time is a measurement of a physical process. But first, you have to define the physical process. You just can't put 4.2 joules in 1 cc of any water to increase it 1° Celsius. It has to be pure H2O at a certain pressure and in the liquid state. But the definition of standard time is usually by atomic clock. Gravity perhaps..., But speed? I don't think so. Speed is relative. Of course if you move an object, speed does have an effect, as in the case of kinetic energy. Perhaps someone else can answer...? 18. Jun 25, 2015 ### harrylin Right. The original standard clock was the Earth's rotation in its orbit, giving the solar day as reference (divided into 24 hours -> min -> s). Sundials were replaced by pendulum clocks, and nowadays, as atomic clocks are more regular than the Earth's rotation, they have mostly replaced the solar clock for precise timekeeping. [edit:] The basis for long periods remains the solar day, that's why we need to insert a leap second now and then - the next one will be in a few days from now!
-https://en.wikipedia.org/wiki/Leap_second Those two statements sound contradictory to me (and "faster" should be "slower"). Several clarifications have been given already, mine is in #15 Last edited: Jun 25, 2015 19. Jun 25, 2015 ### Stephanus The moving clock runs SLOWER wrt rest. Sorry, sorry, sorry 20. Jun 25, 2015 ### Staff: Mentor What is "time itself"? That is something pretty hard to pin down, if you don't want to use the operational definition given above. However, let's look at this alternative as a possibility. Suppose that there is a background "time itself" which is not subject to time dilation. If that were the case then we would need some mechanism to explain why clocks based on EM (e.g. atomic clocks) are coincidentally time dilated to exactly the same degree as we would expect from relativity, despite time itself continuing unaffected. OK, so we propose such a mechanism. But some clocks use the weak force as their clock mechanism, and we find that such clocks also are coincidentally time dilated by exactly the same degree as expected from relativity. But since the mechanism is different then we need a separate mechanism for the time dilation. OK, so we propose another mechanism which accomplishes that. But some clocks use the strong force as their clock mechanism, and we find that such clocks are also coincidentally time dilated by the same degree as expected from relativity. I'm sure you see the point. The number of theories that you would have to derive and have to tune exactly correctly for it to all coincidentally turn out to be the same is not something which is taken seriously. It is asking for coincidence upon coincidence upon coincidence, all to arrive at the same place as you get from the postulates of relativity.
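Coming back to the GPS example from the opening post, the sizes of the two effects can be estimated with the standard first-order (weak-field, low-speed) approximations. The Python sketch below is my own illustration, not something quoted from this thread; the orbital numbers are rounded and the ground clock's own rotation is ignored.

```python
import math

# Rough GPS clock-rate estimate relative to a clock on the ground
# (ignoring Earth's rotation and other small corrections).
c = 2.998e8          # speed of light, m/s
G = 6.674e-11        # gravitational constant
M = 5.972e24         # Earth mass, kg
R_earth = 6.371e6    # Earth radius, m
r_gps = 2.6571e7     # GPS orbital radius (~20,200 km altitude), m

v = math.sqrt(G * M / r_gps)                     # circular orbital speed, ~3.9 km/s

# First-order fractional rate differences (satellite clock vs. ground clock):
sr = -0.5 * v**2 / c**2                          # special-relativistic slowing
gr = (G * M / c**2) * (1 / R_earth - 1 / r_gps)  # gravitational speeding-up

seconds_per_day = 86400
print(f"velocity effect : {sr * seconds_per_day * 1e6:+.1f} microseconds/day")
print(f"gravity effect  : {gr * seconds_per_day * 1e6:+.1f} microseconds/day")
print(f"net             : {(sr + gr) * seconds_per_day * 1e6:+.1f} microseconds/day")
```

Run as written, this gives roughly −7 μs/day from the orbital speed and +46 μs/day from the weaker gravity, a net of about +38 μs/day — the size of the correction built into the GPS system.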
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8209503293037415, "perplexity": 954.3585472637768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515313.13/warc/CC-MAIN-20171212095356-20171212115356-00050.warc.gz"}
http://www.physicsforums.com/showthread.php?t=588121
# Air pressure vessel safety valve size by guideonl Tags: pressure, safety, size, valve, vessel P: 17 Hi everyone, I have to determine the size of a safety valve for an air pressure vessel. I found in ASME and BS 5500 a procedure to calculate the size of the safety valve, which relates the mass flow capacity to the pressure, temperature, discharge area, safety valve coefficient, and air parameters. This is helpful if the flow capacity is known, since you can then get the discharge area (valve size). But what if the flow capacity is unknown (such as in the case of manufacturing pressure vessels without compression units)? The next question: is there any relation between the discharge capacity of the safety valve and the compression unit flow capacity? I suppose yes, but is it equal/higher? How much? Thank you, Guideon
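This is not a substitute for the ASME/BS 5500 sizing procedure mentioned above, but a back-of-the-envelope ideal-gas choked-flow estimate gives a feel for the numbers. On the second question, the usual principle is that the relief valve's certified capacity must be at least the maximum flow that can enter the vessel, which for a vessel fed by a compressor is the compressor's full output. All numeric inputs below (relieving pressure, temperature, compressor flow, discharge coefficient) are assumed, illustrative values.

```python
import math

def choked_mass_flow(area_m2, p0_pa, t0_k, k=1.4, molar_mass=0.02897, cd=0.975):
    """Ideal-gas choked (critical) flow through an orifice, in kg/s for a given area.

    Simplified textbook relation, not the ASME/API certified-capacity method.
    """
    r_specific = 8.314 / molar_mass                      # J/(kg K), ~287 for air
    crit = (2.0 / (k + 1.0)) ** ((k + 1.0) / (2.0 * (k - 1.0)))
    return cd * area_m2 * p0_pa * math.sqrt(k / (r_specific * t0_k)) * crit

# Example: area needed to pass a compressor output of 0.5 kg/s of air
# at an assumed relieving pressure of 10 bar(a) and 20 degC.
target_flow = 0.5                                        # kg/s, assumed compressor capacity
flux = choked_mass_flow(1.0, 10e5, 293.15)               # kg/s per m^2 of orifice area
print("required discharge area ~ %.0f mm^2" % (target_flow / flux * 1e6))   # ~220 mm^2
```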
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.822533369064331, "perplexity": 2656.337963645604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/217142-solid-rev-surface-area.html
# Math Help - Solid of rev and Surface Area 1. ## Solid of rev and Surface Area I think I posted it in the wrong forum, being a newbie here, and need some help on my math problem. Thanks for all the help guys. Hi I'm stuck with my homework for solid of revolution and surface area of this figure Trapezoid with slope of -5/L. R2 = 2L and R1 = 3L/2. Any help is appreciated. / ----- |---------\ / -----| ---------\ slope = -5/L /_____|____|__\ ...............3L/2 ...2L a. Find the equation in terms of L (vol and surface area) b. Vol as function of L c. Value of L that minimizes the volume d. Minimum Volume 2. ## Re: Solid of rev and Surface Area What have you been able to do with this? -Dan 3. ## Re: Solid of rev and Surface Area What is the axis of rotation? Isn't part (b) repeating the first half of part (a)? And doesn't the volume decrease as L decreases, so (c) and (d) have no solution, assuming L>0? - Hollywood 4. ## Re: Solid of rev and Surface Area This is what I've got so far. 5. ## Re: Solid of rev and Surface Area The axis of revolution is at the Y-axis. 6. ## Re: Solid of rev and Surface Area Okay, taking the origin at the center of the base, the side is a line with slope -5/L passing through (2L, 0). That has equation y= (-5/L)(x- 2L). When x= 3L/2, y= (-5/L)(3L/2- 2L)= (-5/L)(-L/2)= 5/2. Rotating around the y- axis, you have disks with radius x, thickness dy, so area $\pi x^2 dy$. That's why you need to integrate $\pi\int_0^{5/2} x^2 dy$. If y= (-5/L)(x- 2L), what is x as a function of y? Now, when you say "surface area" do you mean the slant area only or do you want to include the area of the top and bottom? 7. ## Re: Solid of rev and Surface Area Only the slant area, excluding the top and bottom 8. ## Re: Solid of rev and Surface Area Okay, so you only need to do the integral I gave you. 9. ## Re: Solid of rev and Surface Area I thought that formula is for volume? The surface area is 2 pi times all others? 10. ## Re: Solid of rev and Surface Area Oh, blast! That's right. Then you can do this: a line with slope m is of the form y= mx+ b. If we increase x by dx then we increase (decrease for negative m) y by y+ dy= m(x+ dx)+ b; subtracting y from the left side and mx+ b from the right we get dy= m dx (which is just saying, again, that for a straight line the slope is the same as the derivative). A right triangle with legs dx and dy has hypotenuse of length, by the Pythagorean theorem, $\sqrt{dx^2+ dy^2}= \sqrt{1+ \left(\frac{dy}{dx}\right)^2} dx$. Since the circumference of a circle, of radius r, is $2\pi r$, the surface area is the product of those two lengths, $2\pi x\sqrt{1+ \left(\frac{dy}{dx}\right)^2} dx$. That's the formula you had before, right? 11. ## Re: Solid of rev and Surface Area Yep, but using all the equation and applying my limits, I was getting negative results. 2 Pi X ( 1 + (dy/dx)^2)^1/2 dx from 0 to 5/2 for Y = 5X + 10
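For anyone checking the numbers, here is a quick SymPy sketch of the two integrals discussed above, under the assumptions that the axis of rotation is the y-axis, the slanted side runs from x = 3L/2 (top) to x = 2L (base), and (following post 6) the volume uses full disks of radius x:

```python
import sympy as sp

x, y, L = sp.symbols('x y L', positive=True)
y_of_x = (-5/L) * (x - 2*L)            # slanted side: slope -5/L through (2L, 0)
dydx = sp.diff(y_of_x, x)

# Slant (lateral) surface of revolution about the y-axis, x from 3L/2 to 2L
S = sp.integrate(2*sp.pi*x*sp.sqrt(1 + dydx**2), (x, 3*L/2, 2*L))
print(sp.simplify(S))                  # 7*pi*L*sqrt(L**2 + 25)/4 (up to equivalent forms)

# Volume with disks of radius x(y), y from 0 to 5/2 (the height found in post 6)
x_of_y = 2*L - L*y/5                   # invert y = (-5/L)(x - 2L)
V = sp.integrate(sp.pi*x_of_y**2, (y, 0, sp.Rational(5, 2)))
print(sp.simplify(V))                  # 185*pi*L**2/24
```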
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.923499584197998, "perplexity": 2002.2200105354607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990900.28/warc/CC-MAIN-20150728002310-00081-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-2-review-true-false-quiz-page-166/12
Calculus: Early Transcendentals 8th Edition The statement is false. Here's an example to disprove it: $\lim\limits_{x\to0}f(x)=\lim\limits_{x\to0}\frac{x^2+5x+7}{x}=\frac{0^2+5\times0+7}{0}=\frac{7}{0}=\infty$ $\lim\limits_{x\to0}g(x)=\lim\limits_{x\to0}\frac{x^2+5x+6}{x}=\frac{0^2+5\times0+6}{0}=\frac{6}{0}=\infty$ However, we see that $\lim\limits_{x\to0}[f(x)-g(x)]=\lim\limits_{x\to0}\Big(\frac{x^2+5x+7}{x}-\frac{x^2+5x+6}{x}\Big)=\lim\limits_{x\to0}\Big(\frac{1}{x}\Big)=\frac{1}{0}=\infty\ne0$
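A quick SymPy check of this counterexample (taking the limits from the right, so that each individual limit is $+\infty$):

```python
import sympy as sp

x = sp.symbols('x')
f = (x**2 + 5*x + 7) / x
g = (x**2 + 5*x + 6) / x

print(sp.limit(f, x, 0, '+'))       # oo
print(sp.limit(g, x, 0, '+'))       # oo
print(sp.simplify(f - g))           # 1/x
print(sp.limit(f - g, x, 0, '+'))   # oo, so the difference does not tend to 0
```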
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8963956236839294, "perplexity": 188.23711197295577}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511642.47/warc/CC-MAIN-20181018022028-20181018043528-00012.warc.gz"}
https://www.physicsforums.com/threads/primary-ideals-prime-ideals-and-maximal-ideals-d-f-section-15-2.721811/
# Primary Ideals, prime ideals and maximal ideals - D&F Section 15.2 1. Nov 9, 2013 ### Math Amateur I am studying Dummit and Foote Section 15.2. I am trying to understand the proof of Proposition 19 Part (5) on page 682 (see attachment) Proposition 19 Part (5) reads as follows: ---------------------------------------------------------------------------------------------------------------------------- Proposition 19. ... ... (5) Suppose M is a maximal ideal and Q is an ideal with $M^n \subseteq Q \subseteq M$ for some $n \ge 1$. Then Q is a primary ideal, with rad Q = M ---------------------------------------------------------------------------------------------- The proof of (5) above reads as follows: ---------------------------------------------------------------------------------------------- Proof. Suppose $M^n \subseteq Q \subseteq M$ for some $n \ge 1$ where M is a maximal ideal. Then $Q \subseteq M$ so $rad \ Q \subseteq rad \ M = M$. ... ... etc ---------------------------------------------------------------------------------------------- My problem is as follows: Why can we be sure that rad M = M? I know that M is maximal, so the only ideals of R containing M are M and R itself. We also know that $M \subseteq rad \ M$ Thus either rad M = M (the conclusion D&F use) or rad M = R? How do we know that $rad \ M \ne R$? Would appreciate some help. Peter 2. Nov 9, 2013 ### R136a1 Try to use contradiction. Assume that $\textrm{rad}(M) = R$. Then $1\in \textrm{rad}(M)$. Now use the definition of the radical. 3. Nov 10, 2013 ### Math Amateur Thanks R136a1! But just thinking over this ... Is the following thinking along the right track ...? $rad \ M = \{a \in R \ | \ a^k \in M \text{ for some } k \ge 1 \}$ So then $1 \in rad \ M \Longrightarrow 1^k \in M$ for some $k \ge 1$ $\Longrightarrow \ 1 \in M$ $\Longrightarrow \ M = R$ This would mean $M^n = R$ so $Q = R$ also .... This result is not a contradiction but it leads to the collapse of the conditions of the Proposition to triviality .... Can you confirm my reasoning ... or indeed point out errors/inadequacies in my thinking Peter 4. Nov 10, 2013 ### Math Amateur Just thinking further ... maybe in my reasoning in the last post I have indeed achieved a contradiction, since my reasoning (if correct!) establishes that M = R ... where of course M is a maximal ideal by assumption ... but by D&F's definition of a maximal ideal, this is not possible ... so contradiction! Can someone confirm that this is correct? Note: Definition of maximal ideal, Dummit and Foote, page 253: "An ideal M in an arbitrary ring R is called a maximal ideal if $M \ne R$ and the only ideals containing M are M and R." 5. Nov 10, 2013 ### R136a1 Yes, it is a contradiction because $M=R$ is not possible by definition.
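Putting the posts above together, the argument runs in one chain: $M \subseteq rad \ M$ and $rad \ M$ is an ideal, so maximality of $M$ leaves only $rad \ M = M$ or $rad \ M = R$; if $rad \ M = R$ then $1 \in rad \ M$, so $1 = 1^k \in M$ for some $k \ge 1$, hence $M = R$, contradicting $M \ne R$ in the definition of a maximal ideal; therefore $rad \ M = M$, as D&F use.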
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9786171913146973, "perplexity": 869.9640950725943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590329.25/warc/CC-MAIN-20180718193656-20180718213656-00402.warc.gz"}
https://www.physicsforums.com/threads/is-work-done-by-friction-the-same-as-thermal-energy.580431/
# Homework Help: Is work done by friction the same as thermal energy? 1. Feb 22, 2012 ### Supernejihh 1. The problem statement, all variables and given/known data I ask this because my initial assumption was that work done by a non-conservative force (friction in this case) is also equal to thermal energy. However, in my book, it gave an equation with W = Emec + E thermal. They also had an example where they added up the work and the Emec, which in the example was the work done by friction, to get E thermal. This confuses me because I thought work done by friction was equal to E thermal. If they are not, can someone please explain why? Thank you. 2. Relevant equations W = Emec + E thermal 3. The attempt at a solution 2. Feb 22, 2012 ### PhanthomJay This equation is not correct. The work done by non-conservative forces is the negative of the change of other energy besides mechanical energy, where other energy includes thermal, sound, or chemical energy, etc. Total energy of a system is always conserved. This implies that $\Delta U + \Delta K + \Delta E_{other} = 0$, where $\Delta U + \Delta K$ represents the change in mechanical energy of the system. Since by the work-energy theorem $W_c + W_{nc} = \Delta K$, and since $W_c = -\Delta U$, then substituting these 2 equations into the first yields $W_{nc} = -\Delta E_{other}$ If friction is the only non conservative force acting, and if we ignore sound, chemical, light, and all other forms of non-mechanical energy except heat, then $W_{friction} = -\Delta E_{thermal}$ In general, friction mostly causes a change in thermal energy, but there is sound energy as well, and some other forms of energy change. 3. Feb 22, 2012 ### Supernejihh I can kinda understand what you are saying, but the equation is somehow in the book. In the book's example, they found the work done by the force to be 20J. They wanted to find the increase in E thermal. They used W = E mec + E thermal => E thermal = W - E mec. The work was 20J and the E mec was simple the change in KE due to the fact that there was no potential energy. The change in KE turned out to be -2.2 J, which translates to 20-(-2.2) = 22 J. My first try at this was that the change in thermal energy would just be the change in E mec, which is the change in KE, due to no potential energy. This led me to have an answer of -2.2 J; I thought 2.2J was the change in thermal energy, but I was wrong.. 4. Feb 23, 2012 ### PhanthomJay Where does W = 20 J come from? There must be other forces acting besides friction that do work, Please state the problem in its entirety.
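As a purely numerical illustration of the book's bookkeeping described above (assuming, as the mentor's question suggests, that the 20 J is work done by some applied external force other than friction, and that there is no change in potential energy):

```python
# W_applied = delta_E_mech + delta_E_thermal, with delta_E_mech = delta_KE + delta_PE
W_applied = 20.0          # J, work done by the applied external force (as given in the book)
delta_KE  = -2.2          # J, change in kinetic energy
delta_PE  = 0.0           # J, no potential energy change in this example
delta_E_thermal = W_applied - (delta_KE + delta_PE)
print(delta_E_thermal)    # 22.2 J increase in thermal energy (the book rounds to 22 J)
```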
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.952630341053009, "perplexity": 516.0069458453127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583508988.18/warc/CC-MAIN-20181015080248-20181015101748-00425.warc.gz"}
https://www.physicsforums.com/threads/projectile-problem.314587/
# Projectile Problem 1. May 17, 2009 ### Lord Dark 1. The problem statement, all variables and given/known data hi guys ,, got this problem about projectile ,, A ball is thrown from the top of a building with a velocity of 40 m/s at an angle of 53 with the horizontal. After 2 s, it is seen to be at height of 84 m above the ground. a) Find the height of the building b) At which other time will the ball again be at a height of 84 m? c) If the ball hits a wall at a height of 52 m above the ground, what is the distance of the wall from the building? d) Find the magnitude and direction of the velocity of the ball when it reaches the wall 2. Relevant equations 3. The attempt at a solution i tried to get the height of the building by 84+h(b)=40sin(53)*2-9.8*4 ,, and I got h(b)=59.31m ,, anyway ,, the way is right ?? ,, and about (b) i tried to get the second time ,, but i get it negative (59.31+84=40sin(53)x-9.8*x^2 = 9.8x^2-40sin(53)x-143.31 and i get x=5.7 , x=-2.5 and i think it's wrong because none of them equal 2 ,, and one of them should equal 2 ,, any ideas what did I do wrong here ? #### Attached Files: Q3.JPG 2. May 17, 2009 ### Staff: Mentor Your expression for vertical position is not quite right. What's the general formula? 3. May 17, 2009 ### Lord Dark delta(y)=Voyt-0.5*g*t^2 ,, (I forgot half) thanks :D 4. May 17, 2009 ### Lord Dark now i get the height 72m ( which i think it's too high ) and i still cant get the second time, 5. May 17, 2009 ### Staff: Mentor Show the final equation that you solved, with the numbers that you used. 6. May 17, 2009 ### Lord Dark 123.71=40sin(53)t-0.5*9.8*t^2 ,, now i don't even get an answer (they are imaginary numbers) 7. May 17, 2009 ### Staff: Mentor Show me the equation that you used to solve for the height of the building. 8. May 17, 2009 ### Lord Dark 123.71=40sin(53)t-0.5*9.8*t^2 ,, now i don't even get an answer (they are imaginary numbers) 9. May 17, 2009 ### Staff: Mentor Show me the equation that you used to solve for the height of the building. (There should be some variable representing the height in there somewhere.) 10. May 17, 2009 ### Lord Dark 84+h(b)=40sin(53)*2-0.5*9.8*4 (Y-Yo=Voy*t-0.5*g*t^2) 11. May 17, 2009 ### Staff: Mentor Compare the first equation to the one in parentheses. (Careful with signs.) 12. May 17, 2009 ### Lord Dark ....... Did it ,, (84-39.71)=40sin(53)t-0.5*9.8*t^2 and i get t1=1.9 = 2s t2=4.5 s ,, about (c) = first I got t=6.1s then (X=Vox*t) =147.1m (d) Vx=24 m/s (never change) Vy=-27.8 then get lVl and get the angle which is -49 thanks :D
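For completeness, here is a short numerical check of parts (a)-(d). It is not part of the thread; it assumes g = 9.8 m/s², a launch angle of 53° above the horizontal, and that the ball reaches the wall on its way down:

```python
import math

g, v0 = 9.8, 40.0
ang = math.radians(53.0)
v0x, v0y = v0 * math.cos(ang), v0 * math.sin(ang)       # ~24.1 m/s, ~31.9 m/s

# (a) height of building: at t = 2 s the ball is 84 m above the ground
y_rel = v0y * 2.0 - 0.5 * g * 2.0**2                    # height above launch point, ~44.3 m
h_building = 84.0 - y_rel                               # ~39.7 m
# (b) other time at 84 m: solve v0y*t - g*t^2/2 = y_rel, take the later root
t2 = (v0y + math.sqrt(v0y**2 - 2.0 * g * y_rel)) / g    # ~4.5 s
# (c) wall at 52 m above ground, i.e. 52 - h_building above the launch point
y_wall = 52.0 - h_building
t3 = (v0y + math.sqrt(v0y**2 - 2.0 * g * y_wall)) / g   # descending branch, ~6.1 s
x_wall = v0x * t3                                       # ~147 m from the building
# (d) velocity at the wall
vy = v0y - g * t3                                       # ~-27.9 m/s
speed = math.hypot(v0x, vy)                             # ~36.9 m/s
angle = math.degrees(math.atan2(vy, v0x))               # ~-49 deg (below horizontal)
print(round(h_building, 1), round(t2, 2), round(x_wall, 1), round(speed, 1), round(angle, 1))
```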
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9202226996421814, "perplexity": 2276.347861969365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807650.44/warc/CC-MAIN-20171124104142-20171124124142-00110.warc.gz"}
https://indico.cern.ch/event/278032/contributions/1623587/
# Astroparticle Physics - A Joint TeVPA/ IDM Conference 23-28 June 2014 Amsterdam Europe/Zurich timezone ## Invited Talk: Still life: the Standard Model Higgs boson and beyond 26 Jun 2014, 10:00 30m Room 1 (Tuschinski Theatre) Presentation ### Speaker Christophe Grojean (ICREA - Institucio catalana de recerca estudis avancats (ES)) ### Description With the discovery of the long sought-after Higgs boson at CERN in July 2012, a new state of matter and a new dynamical principle have been revealed as essential building blocks of the fundamental laws of physics. The Brout-Englert-Higgs mechanism also provides a solution to the half-century-old mass conundrum, i.e. the apparent incompatibility between the mass spectrum of the elementary particles and their fundamental interactions. Moreover, the Higgs boson is an open door to the uncharted New Physics territory soon-to-be explored during the upcoming operation of the LHC and later at various possible future colliders. Slides
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9505854845046997, "perplexity": 3514.0941436022194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657138718.61/warc/CC-MAIN-20200712113546-20200712143546-00465.warc.gz"}
https://www.groundai.com/project/brits-bidirectional-recurrent-imputation-for-time-series/
# BRITS: Bidirectional Recurrent Imputation for Time Series Wei Cao, Dong Wang, Jian Li, Hao Zhou, Lei Li, Yitan Li Tsinghua University  Duke University   Toutiao AILab ###### Abstract Time series are widely used as signals in many classification/regression tasks. It is ubiquitous that time series contains many missing values. Given multiple correlated time series data, how to fill in missing values and to predict their class labels? Existing imputation methods often impose strong assumptions of the underlying data generating process, such as linear dynamics in the state space. In this paper, we propose BRITS, a novel method based on recurrent neural networks for missing value imputation in time series data. Our proposed method directly learns the missing values in a bidirectional recurrent dynamical system, without any specific assumption. The imputed values are treated as variables of RNN graph and can be effectively updated during the backpropagation. BRITS has three advantages: (a) it can handle multiple correlated missing values in time series; (b) it generalizes to time series with nonlinear dynamics underlying; (c) it provides a data-driven imputation procedure and applies to general settings with missing data. We evaluate our model on three real-world datasets, including an air quality dataset, a health-care data, and a localization data for human activity. Experiments show that our model outperforms the state-of-the-art methods in both imputation and classification/regression accuracies. Preprint. Work in progress. ## 1 Introduction Multivariate time series data are abundant in many application areas, such as financial marketing  [5, 4], health-care [10, 22], meteorology [31, 26], and traffic engineering [29, 35]. Time series are widely used as signals for classification/regression in various applications of corresponding areas. However, missing values in time series are very common, due to unexpected accidents, such as equipment damage or communication error, and may significantly harm the performance of downstream applications. Much prior work proposed to fix the missing data problem with statistics and machine learning approaches. However most of them require fairly strong assumptions on missing values. We can fill the missing values using classical statistical time series models such as ARMA or ARIMA (e.g., [1]). But these models are essentially linear (after differencing). Kreindler et al. [19] assume that the data are smoothable, i.e., there is no sudden wave in the periods of missing values, hence imputing missing values can be done by smoothing over nearby values. Matrix completion has also been used to address missing value problems (e.g., [30, 34]). But it typically applies to only static data and requires strong assumptions such as low-rankness. We can also predict missing values by fitting a parametric data-generating model with the observations [14, 2], which assumes that data of time series follow the distribution of hypothetical models. These assumptions make corresponding imputation algorithms less general, and the performance less desirable when the assumptions do not hold. In this paper, we propose BRITS, a novel method for filling the missing values for multiple correlated time series.
Internally, BRITS adapts recurrent neural networks (RNN) [16, 11] for imputing missing values, without any specific assumption over the data. Much prior work uses non-linear dynamical systems for time series prediction [9, 24, 3]. In our method, we instantiate the dynamical system as a bidirectional RNN, i.e., imputing missing values with bidirectional recurrent dynamics. In particular, we make the following technical contributions: • We design a bidirectional RNN model for imputing missing values. We directly use RNN for predicting missing values, instead of tuning weights for smoothing as in [10]. Our method does not impose specific assumption, hence works more generally than previous methods. • We regard missing values as variables of the bidirectional RNN graph, which are involved in the backpropagation process. In such case, missing values get delayed gradients in both forward and backward directions with consistency constraints, which makes the estimation of missing values more accurate. • We simultaneously perform missing value imputation and classification/regression of applications jointly in one neural graph. This alleviates the error propagation problem from imputation to classification/regression. Additionally, the supervision of classification/regression makes the estimation of missing values more accurate. • We evaluate our model on three real-world datasets, including an air quality dataset, a health-care dataset and a localization dataset of human activities. Experimental results show that our model outperforms the state-of-the-art models for both imputation and classification/regression accuracies. ## 2 Related Work There is a large body of literature on the imputation of missing values in time series. We only mention a few closely related ones. The interpolate method tries to fit a "smooth curve" to the observations and thus reconstruct the missing values by the local interpolations [19, 14]. Such method discards any relationships between the variables over time. The autoregressive method, including ARIMA, SARIMA etc., eliminates the non-stationary parts in the time series data and fit a parameterized stationary model. The state space model further combines ARIMA and Kalman Filter [14, 15], which provides more accurate results. Multivariate Imputation by Chained Equations (MICE) [2] first initializes the missing values arbitrarily and then estimates each missing variable based on the chain equations. The graphical model [20] introduces a latent variable for each missing value, and finds the latent variables by learning their transition matrix. There are also some data-driven methods for missing value imputation. Yi et al. [32] imputed the missing values in air quality data with geographical features. Wang et al. [30] imputed missing values in recommendation system with collaborative filtering. Yu et al. [34] utilized matrix factorization with temporal regularization to impute the missing values in regularly sampled time series data. Recently, some researchers attempted to impute the missing values with recurrent neural networks [7, 10, 21, 12, 33]. The recurrent components are trained together with the classification/regression component, which significantly boosts the accuracy. Che et al. [10] proposed GRU-D, which imputes missing values in health-care data with a smooth fashion. It assumes that a missing variable can be represented as the combination of its corresponding last observed value and the global mean. GRU-D achieves remarkable performance on health-care data. 
However, it has many limitations on general datasets [10]. A closely related work is M-RNN proposed by Yoon et al. [33]. M-RNN also utilizes bi-directional RNN to impute missing values. Differing from our model, it drops the relationships between missing variables. The imputed values in M-RNN are treated as constants and cannot be sufficiently updated. ## 3 Preliminary We first present the problem formulation and some necessary preliminaries. ###### Definition 1 (Multivariate Time Series) We denote a multivariate time series $X = \{x_1, x_2, \dots, x_T\}$ as a sequence of $T$ observations. The $t$-th observation $x_t$ consists of $D$ features $\{x_t^1, x_t^2, \dots, x_t^D\}$, and was observed at timestamp $s_t$ (the time gap between different timestamps may not be the same). In reality, due to unexpected accidents, such as equipment damage or communication error, $x_t$ may have missing values (e.g., some $x_t^d$ in Fig. 1 is missing). To represent the missing values in $x_t$, we introduce a masking vector $m_t$ where $$m_t^d = \begin{cases} 0 & \text{if } x_t^d \text{ is not observed} \\ 1 & \text{otherwise.} \end{cases}$$ In many cases, some features can be missing for consecutive timestamps (e.g., blue blocks in Fig. 1). We define $\delta_t^d$ to be the time gap from the last observation to the current timestamp $s_t$, i.e., $$\delta_t^d = \begin{cases} s_t - s_{t-1} + \delta_{t-1}^d & \text{if } t > 1,\ m_{t-1}^d = 0 \\ s_t - s_{t-1} & \text{if } t > 1,\ m_{t-1}^d = 1 \\ 0 & \text{if } t = 1. \end{cases}$$ See Fig. 1 for an illustration. In this paper, we study a general setting for time series classification/regression problems with missing values. We use $y$ to denote the label of the corresponding classification/regression task. In general, $y$ can be either a scalar or a vector. Our goal is to predict $y$ based on the given time series $X$. In the meantime, we impute the missing values in $X$ as accurately as possible. In other words, we aim to design an effective multi-task learning algorithm for both classification/regression and imputation. ## 4 BRITS In this section, we describe BRITS. Differing from the prior work which uses RNN to impute missing values in a smooth fashion [10], we learn the missing values directly in a recurrent dynamical system [25, 28] based on the observed data. The missing values are thus imputed according to the recurrent dynamics, which significantly boosts both the imputation accuracy and the final classification/regression accuracy. We start the description with the simplest case: the variables observed at the same time are mutually uncorrelated. For such a case, we propose algorithms for imputation with unidirectional recurrent dynamics and bidirectional recurrent dynamics, respectively. We further propose an algorithm for correlated multivariate time series in Section 4.3. ### 4.1 Unidirectional Uncorrelated Recurrent Imputation For the simplest case, we assume that for the $t$-th step, $x_t^i$ and $x_t^j$ are uncorrelated if $i \ne j$ (but $x_t^i$ may be correlated with some observations at other steps). We first propose an imputation algorithm by unidirectional recurrent dynamics, denoted as RITS-I. In a unidirectional recurrent dynamical system, each value in the time series can be derived by its predecessors with a fixed arbitrary function [9, 24, 3]. Thus, we iteratively impute all the variables in the time series according to the recurrent dynamics. For the $t$-th step, if $x_t$ is actually observed, we use it to validate our imputation and pass $x_t$ to the next recurrent steps. Otherwise, since the future observations are correlated with the current value, we replace $x_t$ with the obtained imputation, and validate it by the future observations. To be more concrete, let us consider an example. ###### Example 1 Suppose we are given a time series $x_1, x_2, \dots, x_{10}$, where $x_5$, $x_6$ and $x_7$ are missing.
According to the recurrent dynamics, at each time step $t$, we can obtain an estimation $\hat{x}_t$ based on the previous steps. In the first steps, the estimation error can be obtained immediately by calculating the estimation loss function $\mathcal{L}_e(x_t, \hat{x}_t)$ wherever $x_t$ is observed. However, when $t = 5, 6, 7$, we cannot calculate the error immediately since the true values are missing. Nevertheless, note that at the 8-th step, $\hat{x}_8$ depends on $\hat{x}_5$ to $\hat{x}_7$. We thus obtain a "delayed" error for $\hat{x}_t$ ($t = 5, 6, 7$) at the 8-th step. #### 4.1.1 Algorithm We introduce a recurrent component and a regression component for imputation. The recurrent component is achieved by a recurrent neural network and the regression component is achieved by a fully-connected network. A standard recurrent network [17] can be represented as $$h_t = \sigma(W_h h_{t-1} + U_h x_t + b_h),$$ where $\sigma$ is the sigmoid function, $W_h$, $U_h$ and $b_h$ are parameters, and $h_{t-1}$ is the hidden state of previous time steps. In our case, since $x_t$ may have missing values, we cannot use $x_t$ as the input directly as in the above equation. Instead, we use a "complement" input derived by our algorithm when $x_t$ is missing. Formally, we initialize the initial hidden state $h_0$ as an all-zero vector and then update the model by: $$\hat{x}_t = W_x h_{t-1} + b_x, \qquad (1)$$ $$x_t^c = m_t \odot x_t + (1 - m_t) \odot \hat{x}_t, \qquad (2)$$ $$\gamma_t = \exp\{-\max(0, W_\gamma \delta_t + b_\gamma)\}, \qquad (3)$$ $$h_t = \sigma(W_h[h_{t-1} \odot \gamma_t] + U_h[x_t^c \circ m_t] + b_h), \qquad (4)$$ $$\ell_t = \langle m_t, \mathcal{L}_e(x_t, \hat{x}_t)\rangle. \qquad (5)$$ Eq. (1) is the regression component which transfers the hidden state $h_{t-1}$ to the estimated vector $\hat{x}_t$. In Eq. (2), we replace missing values in $x_t$ with corresponding values in $\hat{x}_t$, and obtain the complement vector $x_t^c$. Besides, since the time series may be irregularly sampled, in Eq. (3), we further introduce a temporal decay factor $\gamma_t$ to decay the hidden vector $h_{t-1}$. Intuitively, if $\delta_t$ is large (i.e., the values are missing for a long time), we expect a small $\gamma_t$ to decay the hidden state. Such a factor also represents the missing patterns in the time series, which is critical to imputation [10]. In Eq. (4), based on the decayed hidden state, we predict the next state $h_t$. Here, $\circ$ indicates the concatenation operation. In the meantime, we calculate the estimation error by the estimation loss function $\mathcal{L}_e$ in Eq. (5). In our experiment, we use the mean absolute error for $\mathcal{L}_e$. Finally, we predict the task label as $$\hat{y} = f_{out}\Big(\sum_{i=1}^{T} \alpha_i h_i\Big),$$ where $f_{out}$ can be either a fully-connected layer or a softmax layer depending on the specific task, and $\alpha_i$ is the weight for the different hidden states, which can be derived by the attention mechanism (the design of the attention mechanism is out of this paper's scope) or the mean pooling mechanism, i.e., $\alpha_i = \tfrac{1}{T}$. We calculate the output loss from $y$ and $\hat{y}$, and our model is then updated by minimizing the accumulated estimation and output losses. #### 4.1.2 Practical Issues In practice, we use LSTM as the recurrent component in Eq. (4) to prevent the gradient vanishing/exploding problems in vanilla RNN [17]. Standard RNN models including LSTM treat $\hat{x}_t$ as a constant. During backpropagation, gradients are cut at $\hat{x}_t$. This makes the estimation errors backpropagate insufficiently. For example, in Example 1, the estimation errors of $\hat{x}_5$ to $\hat{x}_7$ are obtained at the 8-th step as the delayed errors. However, if we treat $\hat{x}_5$ to $\hat{x}_7$ as constants, such delayed errors cannot be fully backpropagated. To overcome such an issue, we treat $\hat{x}_t$ as a variable of the RNN graph. We let the estimation error pass through $\hat{x}_t$ during the backpropagation. Fig. 2 shows how the RITS-I method works in Example 1. In this example, the gradients are backpropagated through the opposite direction of solid lines. Thus, the delayed error is passed to steps 5, 6 and 7. In the experiment, we find that our models are unstable during training if we treat $\hat{x}_t$ as a constant. See Appendix C for details.
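To make the update in Eqs. (1)-(5) concrete, here is a minimal PyTorch sketch of one RITS-I step. It is an illustration written for this summary, not the authors' released code; the class and attribute names are ours, and the loss normalization is one reasonable choice among several.

```python
import torch
import torch.nn as nn

class RITSICell(nn.Module):
    """One unidirectional imputation step, following Eqs. (1)-(5) in spirit."""
    def __init__(self, n_features, hidden_size):
        super().__init__()
        self.hist_reg = nn.Linear(hidden_size, n_features)    # Eq. (1): x_hat from h_{t-1}
        self.temp_decay = nn.Linear(n_features, hidden_size)  # Eq. (3): gamma from delta_t
        self.rnn_cell = nn.LSTMCell(2 * n_features, hidden_size)

    def forward(self, x, m, delta, h, c):
        x_hat = self.hist_reg(h)                                # Eq. (1)
        x_comp = m * x + (1 - m) * x_hat                        # Eq. (2): complement input
        gamma = torch.exp(-torch.relu(self.temp_decay(delta)))  # Eq. (3): temporal decay
        h = h * gamma                                           # decay the hidden state
        inputs = torch.cat([x_comp, m], dim=1)                  # Eq. (4): values concat mask
        h, c = self.rnn_cell(inputs, (h, c))
        loss = torch.sum(torch.abs(x - x_hat) * m) / (m.sum() + 1e-5)  # Eq. (5): masked MAE
        return x_comp, h, c, loss
```

Note that `x_hat` is an ordinary tensor in the computation graph, so the delayed errors discussed in Section 4.1.2 can flow back through it during backpropagation.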
### 4.2 Bidirectional Uncorrelated Recurrent Imputation In the RITS-I, errors of estimated missing values are delayed until the presence of the next observation. For example, in Example 1, the error of is delayed until is observed. Such error delay makes the model converge slowly and in turn leads to inefficiency in training. In the meantime, it also leads to the bias exploding problem [6], i.e., the mistakes made early in the sequential prediction are fed as input to the model and may be quickly amplified. In this section, we propose an improved version called BRITS-I. The algorithm alleviates the above-mentioned issues by utilizing the bidirectional recurrent dynamics on the given time series, i.e., besides the forward direction, each value in time series can be also derived from the backward direction by another fixed arbitrary function. To illustrate the intuition of BRITS-I  algorithm, again, we consider Example 1. Consider the backward direction of the time series. In bidirectional recurrent dynamics, the estimation reversely depends on to . Thus, the error in the -th step is back-propagated from not only the -th step in the forward direction (which is far from the current position), but also the -th step in the backward direction (which is closer). Formally, the BRITS-I  algorithm performs the RITS-I  as shown in Eq. (1) to Eq. (5) in forward and backward directions, respectively. In the forward direction, we obtain the estimation sequence and the loss sequence . Similarly, in the backward direction, we obtain another estimation sequence and another loss sequence . We enforce the prediction in each step to be consistent in both directions by introducing the “consistency loss”: ℓconst=Discrepancy(^xt,^xt′) (6) where we use the mean squared error as the discrepancy in our experiment. The final estimation loss is obtained by accumulating the forward loss , the backward loss and the consistency loss . The final estimation in the -th step is the mean of and . ### 4.3 Correlated Recurrent Imputation The previously proposed algorithms RITS-I  and BRITS-I  assume that features observed at the same time are mutually uncorrelated. This may be not true in some applications. For example, in the air quality data [32], each feature represents one measurement in a monitoring station. Obviously, the observed measurements are spatially correlated. In general, the measurement in one monitoring station is close to those values observed in its neighboring stations. In this case, we can estimate a missing measurement according to both its historical data, and the measurements in its neighbors. In this section, we propose an algorithm, which utilizes the feature correlations in the unidirectional recurrent dynamical system. We refer to such algorithm as RITS. The feature correlated algorithm for bidirectional case (i.e., BRITS) can be derived in the same way. Note that in Section 4.1, the estimation is only correlated with the historical observations, but irrelevant with the the current observation. We refer to as a “history-based estimation". In this section, we derive another “feature-based estimation" for each , based on the other features at time . Specifically, at the -th step, we first obtain the complement observation by Eq. (1) and Eq. (2). Then, we define our feature-based estimation as where ^zt=Wzxct+bz, (7) , are corresponding parameters. We restrict the diagonal of parameter matrix to be all zeros. Thus, the -th element in is exactly the estimation of , based on the other features. 
We further combine the history-based estimation $\hat{x}_t$ and the feature-based estimation $\hat{z}_t$. We denote the combined vector as $\hat{c}_t$, and we have that $$\beta_t = \sigma(W_\beta[\gamma_t \circ m_t] + b_\beta), \qquad (8)$$ $$\hat{c}_t = \beta_t \odot \hat{z}_t + (1-\beta_t) \odot \hat{x}_t. \qquad (9)$$ Here we use $\beta_t$ as the weight for combining the history-based estimation $\hat{x}_t$ and the feature-based estimation $\hat{z}_t$. Note that $\hat{z}_t$ is derived from $x_t^c$ by Eq. (7). The elements of $x_t^c$ can be history-based estimations or truly observed values, depending on the presence of the observations. Thus, we learn the weight $\beta_t$ by considering both the temporal decay $\gamma_t$ and the masking vector $m_t$, as shown in Eq. (8). The remaining parts are similar to the feature-uncorrelated case. We first replace missing values in $x_t$ with the corresponding values in $\hat{c}_t$. The obtained vector $c_t^c$ is then fed to the next recurrent step to predict the memory $h_t$: $$c_t^c = m_t \odot x_t + (1-m_t) \odot \hat{c}_t, \qquad (10)$$ $$h_t = \sigma(W_h[h_{t-1} \odot \gamma_t] + U_h[c_t^c \circ m_t] + b_h). \qquad (11)$$ However, the estimation loss is slightly different from the feature-uncorrelated case. We find that simply using $\mathcal{L}_e(x_t, \hat{c}_t)$ leads to a very slow convergence speed. Instead, we accumulate all the estimation errors of $\hat{x}_t$, $\hat{z}_t$ and $\hat{c}_t$: $$\ell_t = \mathcal{L}_e(x_t, \hat{x}_t) + \mathcal{L}_e(x_t, \hat{z}_t) + \mathcal{L}_e(x_t, \hat{c}_t).$$ ## 5 Experiment Our proposed methods are applicable to a wide variety of applications. We evaluate our methods on three different real-world datasets. The download links of the datasets, as well as the implementation codes, can be found in the GitHub page. ### 5.1 Dataset Description #### 5.1.1 Air Quality Data We evaluate our models on the air quality dataset, which consists of PM2.5 measurements from monitoring stations in Beijing. The measurements are hourly collected from 2014/05/01 to 2015/04/30. Overall, there are values missing. For this dataset, we do a pure imputation task. We use the exactly same train/test setting as in prior work [32], i.e., we use the , , and months as the test data and the other months as the training data. See Appendix D for details. To train our model, we randomly select every consecutive steps as one time series. #### 5.1.2 Health-care Data We evaluate our models on health-care data in PhysioNet Challenge 2012 [27], which consists of multivariate clinical time series from intensive care unit (ICU). Each time series contains measurements such as Albumin, heart-rate etc., which are irregularly sampled at the first hours after the patient's admission to ICU. We stress that this dataset is extremely sparse. There are up to missing values in total. For this dataset, we do both the imputation task and the classification task. To evaluate the imputation performance, we randomly eliminate of observed measurements from data and use them as the ground-truth. At the same time, we predict the in-hospital death of each patient as a binary classification task. Note that the eliminated measurements are only used for validating the imputation, and they are never visible to the model. #### 5.1.3 Localization for Human Activity Data The UCI localization data for human activity [18] contains records of five people performing different activities such as walking, falling, sitting down etc (there are activities). Each person wore four sensors on her/his left/right ankle, chest, and belt. Each sensor recorded a 3-dimensional coordinates for about to millisecond. We randomly select consecutive steps as one time series, and there are time series in total. For this dataset, we do both imputation and classification tasks. Similarly, we randomly eliminate observed data as the imputation ground-truth. We further predict the corresponding activity of observed time series (i.e., walking, sitting, etc).
### 5.2 Experiment Setting #### 5.2.1 Model Implementations We fix the dimension of hidden state in RNN to be . We train our model by an Adam optimizer with learning rate and batch size . For all the tasks, we normalize the numerical values to have zero mean and unit variance for stable training. We use different early stopping strategies for pure imputation tasks and classification tasks. For the imputation tasks, we randomly select of non-missing values as the validation data. The early stopping is thus performed based on the validation error. For the classification tasks, we first pre-train the model as an imputation task. Then we use -fold cross validation to further optimize both the imputation and classification losses simultaneously. We evaluate the imputation performance in terms of mean absolute error (MAE) and mean relative error (MRE). Suppose that $\mathrm{label}_i$ is the ground-truth of the $i$-th item, $\mathrm{pred}_i$ is the output of the $i$-th item, and there are $N$ items in total. Then, MAE and MRE are defined as $$\mathrm{MAE} = \frac{\sum_i |\mathrm{pred}_i - \mathrm{label}_i|}{N}, \qquad \mathrm{MRE} = \frac{\sum_i |\mathrm{pred}_i - \mathrm{label}_i|}{\sum_i |\mathrm{label}_i|}.$$ For air quality data, the evaluation is performed on its original data. For health-care data and activity data, since the numerical values are not in the same scale, we evaluate the performances on their normalized data. To further evaluate the classification performances, we use the area under the ROC curve (AUC) [8] for health-care data, since such data is highly unbalanced (there are patients who died in hospital). We then use standard accuracy for the activity data, since different activities are relatively balanced. #### 5.2.2 Baseline Methods We compare our model with both RNN-based methods and non-RNN based methods. The non-RNN based imputation methods include: • Mean: We simply replace the missing values with the corresponding global mean. • KNN: We use -nearest neighbor [13] to find the similar samples, and impute the missing values with a weighted average of its neighbors. • Matrix Factorization (MF): We factorize the data matrix into two low-rank matrices, and fill the missing values by matrix completion [13]. • MICE: We use Multiple Imputation by Chained Equations (MICE), a widely used imputation method, to fill the missing values. MICE creates multiple imputations with chained equations [2]. • ImputeTS: We use the ImputeTS package in R to impute the missing values. ImputeTS is a widely used package for missing value imputation, which utilizes the state space model and Kalman smoothing [23]. • STMVL: Specifically, we use STMVL for the air quality data imputation. STMVL is the state-of-the-art method for air quality data imputation. It further utilizes the geo-locations when imputing missing values [32]. We implement KNN, MF and MICE based on the python package fancyimpute. In recent studies, RNN-based models in missing value imputations achieve remarkable performances [10, 21, 12, 33]. We also compare our model with existing RNN-based imputation methods, including: • GRU-D: GRU-D is proposed for handling missing values in health-care data. It imputes each missing value by the weighted combination of its last observation, and the global mean, together with a recurrent component [10]. • M-RNN: M-RNN also uses bi-directional RNN. It imputes the missing values according to hidden states in both directions in RNN. M-RNN treats the imputed values as constants. It does not consider the correlations among different missing values [33]. We compare the baseline methods with our four models: RITS-I (Section 4.1), BRITS-I (Section 4.2), RITS (Section 4.3) and BRITS (Section 4.3).
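As a small illustration of the evaluation protocol described in Section 5.2.1, the MAE/MRE metrics restricted to the artificially eliminated entries can be computed as below; the array-plus-mask convention is an assumption of this sketch, not something specified by the paper.

```python
import numpy as np

def mae_mre(pred, label, eval_mask):
    """MAE and MRE over the held-out (artificially eliminated) entries only.

    pred, label, eval_mask: arrays of identical shape; eval_mask is 1 where a
    ground-truth value was hidden from the model for evaluation, 0 elsewhere.
    """
    abs_err = np.abs(pred - label) * eval_mask
    mae = abs_err.sum() / eval_mask.sum()
    mre = abs_err.sum() / np.abs(label * eval_mask).sum()
    return mae, mre
```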
We implement all the RNN-based models with PyTorch, a widely used package for deep learning. All models are trained with GPU GTX 1080. ### 5.3 Experiment Results Table 1 shows the imputation results. As we can see, simply applying naıve mean imputation is very inaccurate. Alternatively, KNN, MF, and MICE perform much better than mean imputation. ImputeTS achieves the best performance among all the non-RNN methods, especially for the heath-care data (which is smooth and contains few sudden waves). STMVL performs well on the air quality data. However, it is specifically designed for geographical data, and cannot be applied in the other datasets. Most of RNN-based methods, except GRU-D, demonstrates significantly better performances in imputation tasks. We stress that GRU-D does not impute missing value explicitly. It actually performs well on classification tasks (See Appendix A for details). M-RNN uses explicitly imputation procedure, and achieves remarkable imputation results. Our model BRITS  outperforms all the baseline models significantly. According to the performances of our four models, we also find that both bidirectional recurrent dynamics, and the feature correlations are helpful to enhance the model performance. We also compare the accuracies of all RNN-based models in classification tasks. GRU-D achieves an AUC of on health-care data , and accuracy of on human activity; M-RNN is slightly worse. It achieves and on two tasks. Our model BRITS  outperforms the baseline methods. The accuracies of BRITS  are and respectively. See Appendix A for more classification results. Due to the space limitation, we provide more experimental results in Appendix A and B for better understanding our model. ## 6 Conclusion In this paper, we proposed BRITS, a novel method to use recurrent dynamics to effectively impute the missing values in multivariate time series. Instead of imposing assumptions over the data-generating process, our model directly learns the missing values in a bidirectional recurrent dynamical system, without any specific assumption. Our model treats missing values as variables of the bidirectional RNN graph. Thus, we get the delayed gradients for missing values in both forward and backward directions, which makes the imputation of missing values more accurate. We performed the missing value imputation and classification/regression simultaneously within a joint neural network. Experiment results show that our model demonstrates more accurate results for both imputation and classification/regression than state-of-the-art methods. ## References • [1] C. F. Ansley and R. Kohn. On the estimation of arima models with missing values. In Time series analysis of irregularly observed data, pages 9–37. Springer, 1984. • [2] M. J. Azur, E. A. Stuart, C. Frangakis, and P. J. Leaf. Multiple imputation by chained equations: what is it and how does it work? International journal of methods in psychiatric research, 20(1):40–49, 2011. • [3] A. Basharat and M. Shah. Time series prediction by chaotic modeling of nonlinear dynamical systems. In Computer Vision, 2009 IEEE 12th International Conference on, pages 1941–1948. IEEE, 2009. • [4] B. Batres-Estrada. Deep learning for multivariate financial time series, 2015. • [5] S. Bauer, B. Schölkopf, and J. Peters. The arrow of time in multivariate time series. In International Conference on Machine Learning, pages 2043–2051, 2016. • [6] S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer. 
Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1171–1179, 2015. • [7] M. Berglund, T. Raiko, M. Honkala, L. Kärkkäinen, A. Vetek, and J. T. Karhunen. Bidirectional recurrent neural networks as generative models. In Advances in Neural Information Processing Systems, pages 856–864, 2015. • [8] A. P. Bradley. The use of the area under the roc curve in the evaluation of machine learning algorithms. Pattern recognition, 30(7):1145–1159, 1997. • [9] P. Brakel, D. Stroobandt, and B. Schrauwen. Training energy-based models for time-series imputation. The Journal of Machine Learning Research, 14(1):2771–2797, 2013. • [10] Z. Che, S. Purushotham, K. Cho, D. Sontag, and Y. Liu. Recurrent neural networks for multivariate time series with missing values. Scientific reports, 8(1):6085, 2018. • [11] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014. • [12] E. Choi, M. T. Bahadori, A. Schuetz, W. F. Stewart, and J. Sun. Doctor ai: Predicting clinical events via recurrent neural networks. In Machine Learning for Healthcare Conference, pages 301–318, 2016. • [13] J. Friedman, T. Hastie, and R. Tibshirani. The elements of statistical learning, volume 1. Springer series in statistics Springer, Berlin, 2001. • [14] D. S. Fung. Methods for the estimation of missing values in time series. 2006. • [15] A. C. Harvey. Forecasting, structural time series models and the Kalman filter. Cambridge university press, 1990. • [16] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735, 1997. • [17] G. Ian, B. Yoshua, and C. Aaron. Deep learning. Book in preparation for MIT Press, 2016. • [18] B. Kaluža, V. Mirchevska, E. Dovgan, M. Luštrek, and M. Gams. An agent-based approach to care in independent living. In International joint conference on ambient intelligence, pages 177–186. Springer, 2010. • [19] D. M. Kreindler and C. J. Lumsden. The effects of the irregular sample and missing data in time series analysis. Nonlinear Dynamical Systems Analysis for the Behavioral Sciences Using Real Data, page 135, 2012. • [20] L. Li, J. McCann, N. S. Pollard, and C. Faloutsos. Dynammo: Mining and summarization of coevolving sequences with missing values. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 507–516. ACM, 2009. • [21] Z. C. Lipton, D. Kale, and R. Wetzel. Directly modeling missing data in sequences with rnns: Improved classification of clinical time series. In Machine Learning for Healthcare Conference, pages 253–270, 2016. • [22] Z. Liu and M. Hauskrecht. Learning linear dynamical systems from multivariate time series: A matrix factorization based framework. In Proceedings of the 2016 SIAM International Conference on Data Mining, pages 810–818. SIAM, 2016. • [23] S. Moritz and T. Bartz-Beielstein. imputeTS: Time Series Missing Value Imputation in R. The R Journal, 9(1):207–218, 2017. • [24] T. Ozaki. 2 non-linear time series models and dynamical systems. Handbook of statistics, 5:25–83, 1985. • [25] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pages 1310–1318, 2013. • [26] S. Rani and G. Sikka. Recent techniques of clustering of time series data: a survey. 
International Journal of Computer Applications, 52(15), 2012. • [27] I. Silva, G. Moody, D. J. Scott, L. A. Celi, and R. G. Mark. Predicting in-hospital mortality of icu patients: The physionet/computing in cardiology challenge 2012. In Computing in Cardiology (CinC), 2012, pages 245–248. IEEE, 2012. • [28] D. Sussillo and O. Barak. Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. Neural computation, 25(3):626–649, 2013. • [29] D. Wang, W. Cao, J. Li, and J. Ye. Deepsd: supply-demand prediction for online car-hailing services using deep neural networks. In Data Engineering (ICDE), 2017 IEEE 33rd International Conference on, pages 243–254. IEEE, 2017. • [30] J. Wang, A. P. De Vries, and M. J. Reinders. Unifying user-based and item-based collaborative filtering approaches by similarity fusion. In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 501–508. ACM, 2006. • [31] S. Xingjian, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-c. Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. In Advances in neural information processing systems, pages 802–810, 2015. • [32] X. Yi, Y. Zheng, J. Zhang, and T. Li. St-mvl: filling missing values in geo-sensory time series data. 2016. • [33] J. Yoon, W. R. Zame, and M. van der Schaar. Multi-directional recurrent neural networks: A novel method for estimating missing data. 2017. • [34] H.-F. Yu, N. Rao, and I. S. Dhillon. Temporal regularized matrix factorization for high-dimensional time series prediction. In Advances in neural information processing systems, pages 847–855, 2016. • [35] J. Zhang, Y. Zheng, and D. Qi. Deep spatio-temporal residual networks for citywide crowd flows prediction. AAAI, November 2017. ## Appendix A Performance Comparison for Classification Tasks Table 2 shows the performances of different RNN-based models on the classification tasks. As we can see, our model BRITS  outperforms other baseline methods for classifications. Comparing with Table 1, GRU-D performs well for classification tasks. Furthermore, we find that it is very important for GRU-D to carefully apply the dropout techniques in order to prevent the overfitting (we use dropout layer on the top classification layer). However, our models further utilize the imputation errors as the supervised signals. It seems that dropout is not necessary for our models during training. ## Appendix B Performance Comparison for Univariate Synthetic Data To better understand our model, we generate a set of univariate synthetic time series. Speficically, we randomly generate a time series with length , using the state-space representation [15]: xt = μt+θt+ϵt, μt = μt−1+λt−1+ξt, λt = λt−1+ζt, θt = s−1∑j=1−θt−j+ωt, where is the -th value in time series. The residual terms , , and are randomly drawn from a normal distribution . We eliminate about values from the generated series, and compare our model BRITS-I (note the data is univariate) and ImputeTS. We show three examples in Fig. 3. The first row corresponds to the imputations of ImputeTS and the second row corresponds to our model BRITS-I. As we can see, our model demonstrates better performance than ImputeTS. Especially, ImputeTS fails when the start part of time series is missing. However, for our model, the imputation errors are backpropagated to previous time steps. 
Thus, our model can adjust the imputations in the start part with the delayed gradient, which leads to much more accurate results.

## Appendix C Performance for Non-differentiable $\hat{x}_t$

As we claimed in Section 4.1, we regard the missing values as variables of the RNN graph. During the backpropagation, the imputed values can be further updated sufficiently. In the experiment, we find that if we cut the gradient of $\hat{x}_t$ (i.e., regard it as a constant), the models are unstable and easy to overfit during the training. We refer to the model with non-differentiable $\hat{x}_t$ as BRITS-cut. Fig. 4 shows the validation errors of BRITS and BRITS-cut during training for the health-care data imputation. In the first iterations, the validation error of BRITS-cut decreases fast. However, it soon fails due to overfitting.

## Appendix D Test Data of Air Quality Imputation

We use the same method as in prior work [32] to select the test data for air quality imputation. Specifically, we use the , , and  months as the test set and the rest of the data as the training set. To evaluate our model, we select the test timeslots by the following rule: if a measurement is observed at a timeslot in one of these four months (e.g., 8 o'clock 2015/03/07), we check the corresponding position in its previous month (e.g., 8 o'clock 2015/02/07). If it is absent in the previous month, we eliminate this value and use it as the imputation ground truth of the test data. Recall that to train our model, we randomly select consecutive time steps as one time series. Thus, a test timeslot may be contained in multiple time series. We use the mean of the imputations in different time series as the final result.
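As a companion to the synthetic-data experiment in Appendix B, the sketch below shows one way such a structural time series with missing values could be generated. It is not taken from the paper; the length, seasonal period, noise scale, and missing rate are assumed values, since the original numbers did not survive extraction.

```python
import numpy as np

# Structural (state-space) model from Appendix B:
#   x_t     = mu_t + theta_t + eps_t                     (observation)
#   mu_t    = mu_{t-1} + lam_{t-1} + xi_t                (local level)
#   lam_t   = lam_{t-1} + zeta_t                         (local trend)
#   theta_t = -sum_{j=1}^{s-1} theta_{t-j} + omega_t     (seasonal component)
def generate_series(length=100, s=12, sigma=0.1, missing_rate=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mu, lam = 0.0, 0.0
    theta = np.zeros(length)
    x = np.zeros(length)
    for t in range(length):
        seasonal_sum = theta[max(0, t - (s - 1)):t].sum()
        theta[t] = -seasonal_sum + rng.normal(0.0, sigma)
        mu = mu + lam + rng.normal(0.0, sigma)   # uses lam from the previous step
        lam = lam + rng.normal(0.0, sigma)
        x[t] = mu + theta[t] + rng.normal(0.0, sigma)
    # Randomly drop a fraction of the observations to create the imputation task.
    mask = rng.random(length) >= missing_rate    # True where the value is observed
    x_missing = np.where(mask, x, np.nan)
    return x, x_missing, mask

full, observed, mask = generate_series()
print(f"{(~mask).sum()} of {mask.size} values removed")
```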
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8138144016265869, "perplexity": 1457.5723315512462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107869933.16/warc/CC-MAIN-20201020050920-20201020080920-00349.warc.gz"}
http://math.stackexchange.com/questions/189990/solving-for-a-coefficent-term-of-factored-polynomial
# solving for a coefficient term of factored polynomial.

Given: the coefficient of $x^2$ in the expansion $(1+2x+ax^2)(2-x)^6$ is 48, find the value of the constant a.

I expanded it and got $64-64\,x-144\,{x}^{2}+320\,{x}^{3}-260\,{x}^{4}+108\,{x}^{5}-23\,{x}^{6}+2\,{x}^{7}+64\,a{x}^{2}-192\,a{x}^{3}+240\,a{x}^{4}-160\,a{x}^{5}+60\,a{x}^{6}-12\,a{x}^{7}+a{x}^{8}$; because of the given info, $48x^2=64\,ax^2-144x^2$, and solving for a gives $a=3$. Correct?

ps. is there an easier method other than expanding the terms? I have tried using the binomial expansion; however, one still needs to multiply the terms out and expand $(2-x)^6$, which is not very fast. - You can use the binomial theorem to expand $(2-x)^6$ –  Mhenni Benghorbal Sep 2 '12 at 15:17
After this $(1+2x+ax^2)(2-x)^6 = (64-192x+240x^2+\ldots) + (128x-384x^2+\ldots)+ (64 a x^2+\ldots) = 64 -64 x + (-144+64a)x^2+\ldots$ -
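If one just wants to check the arithmetic behind these answers, a short computer-algebra snippet (not part of the original thread; sympy is used here purely as an illustrative choice) confirms the coefficient and the value of a:

```python
from sympy import symbols, expand, solve, Eq

x, a = symbols('x a')
poly = expand((1 + 2*x + a*x**2) * (2 - x)**6)
coeff_x2 = poly.coeff(x, 2)          # coefficient of x**2, i.e. 64*a - 144
print(coeff_x2)
print(solve(Eq(coeff_x2, 48), a))    # -> [3]
```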
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9679792523384094, "perplexity": 398.4838465093365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657110730.89/warc/CC-MAIN-20140914011150-00092-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://openstax.org/books/physics/pages/7-extended-response
# Extended Response

#### 7.1 Kepler's Laws of Planetary Motion

35. The orbit of Halley's Comet has an eccentricity of 0.967 and stretches to the edge of the solar system. Part A. Describe the shape of the comet's orbit. Part B. Compare the distance traveled per day when it is near the sun to the distance traveled per day when it is at the edge of the solar system. Part C. Describe variations in the comet's speed as it completes an orbit. Explain the variations in terms of Kepler's second law of planetary motion. 1. Part A. The orbit is circular, with the sun at the center. Part B. The comet travels much farther when it is near the sun than when it is at the edge of the solar system. Part C. The comet decelerates as it approaches the sun and accelerates as it leaves the sun. 2. Part A. The orbit is circular, with the sun at the center. Part B. The comet travels much farther when it is near the sun than when it is at the edge of the solar system. Part C. The comet accelerates as it approaches the sun and decelerates as it leaves the sun. 3. Part A. The orbit is very elongated, with the sun near one end. Part B. The comet travels much farther when it is near the sun than when it is at the edge of the solar system. Part C. The comet decelerates as it approaches the sun and accelerates as it moves away from the sun.

36. For convenience, astronomers often use astronomical units (AU) to measure distances within the solar system. One AU equals the average distance from Earth to the sun. Halley's Comet returns once every 75.3 years. What is the average radius of the orbit of Halley's Comet in AU? 1. 0.002 AU 2. 0.056 AU 3. 17.8 AU 4. 653 AU

#### 7.2 Newton's Law of Universal Gravitation and Einstein's Theory of General Relativity

37. It took scientists a long time to arrive at the understanding of gravity as explained by Galileo and Newton. They were hindered by two ideas that seemed like common sense but were serious misconceptions. First was the belief that heavier things fall faster than light things. Second, it was believed impossible that forces could act at a distance. Explain why these ideas persisted and why they prevented advances. 1. Heavier things fall faster than light things if they have less surface area and greater mass density. In the Renaissance and before, forces that acted at a distance were considered impossible, so people were skeptical about scientific theories that invoked such forces. 2. Heavier things fall faster than light things because they have greater surface area and less mass density. In the Renaissance and before, forces that act at a distance were considered impossible, so people were skeptical about scientific theories that invoked such forces. 3. Heavier things fall faster than light things because they have less surface area and greater mass density. In the Renaissance and before, forces that act at a distance were considered impossible, so people were quick to accept scientific theories that invoked such forces. 4. Heavier things fall faster than light things because they have larger surface area and less mass density. In the Renaissance and before, forces that act at a distance were considered impossible because of people's faith in scientific theories.

38. The masses of Earth and the moon are 5.97 × 10^24 kg and 7.35 × 10^22 kg, respectively. The distance from Earth to the moon is 3.80 × 10^5 km. At what point between the Earth and the moon are the opposing gravitational forces equal?
(Use subscripts e and m to represent Earth and moon.) 1. 3.42 × 10^5 km from the center of Earth 2. 3.80 × 10^5 km from the center of Earth 3. 3.42 × 10^6 km from the center of Earth 4. 3.10 × 10^7 km from the center of Earth
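The arithmetic behind question 36 follows from Kepler's third law in solar units, and question 38 reduces to finding the distance at which the two attractions balance. Here is a small sketch of those standard calculations (an editorial illustration using the numbers given in the questions, not part of the OpenStax text):

```python
import math

# Question 36: Kepler's third law with a in AU and T in years gives T**2 = a**3.
T = 75.3                      # orbital period of Halley's Comet, years
a = T ** (2.0 / 3.0)
print(f"semi-major axis ~ {a:.1f} AU")                      # ~17.8 AU

# Question 38: the pulls balance where G*M_e/d**2 == G*M_m/(D - d)**2,
# which rearranges to d = D / (1 + sqrt(M_m / M_e)).
M_e = 5.97e24                 # mass of Earth, kg
M_m = 7.35e22                 # mass of the moon, kg
D = 3.80e5                    # Earth-moon distance, km
d = D / (1.0 + math.sqrt(M_m / M_e))
print(f"balance point ~ {d:.3g} km from Earth's center")    # ~3.42e5 km
```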
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8602244257926941, "perplexity": 861.2094120589296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301488.71/warc/CC-MAIN-20220119185232-20220119215232-00624.warc.gz"}
https://ai.stackexchange.com/questions/26473/why-is-openais-ppo2-implementation-differentiable
# Why is Openai's PPO2 implementation differentiable? I'm trying to understand the concept behind the implementation of the OpenAI PPO2 algorithm. The loss function that is minimized is as follows: loss = pg_loss - entropy * ent_coef + vf_loss * vf_coef. First question: The computation of pg_loss requires to use operations like tf.reduce_mean and tf.maximum. Are these two functions differentiable? Apparently, they are, otherwise, it would not work. Can someone explain why so I can understand the implementation? Second question: During training, an action is sampled by using the Gumbel Distribution: Noise from such a distribution is added to the logits and then tf.argmax is applied. This index is then used to calculate the negative log-likelihood. However, the tf.argmax should also not be differentiable, so how can this work? • Please, edit your post to include a link to the implementation you're referring to. – nbro Feb 23 at 10:27
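For readers unfamiliar with the sampling step the second question describes, the Gumbel-max trick can be sketched in a few lines. This is an illustration of the general technique with made-up logits, not a quote of the OpenAI code:

```python
import numpy as np

def gumbel_max_sample(logits, rng):
    """Sample a categorical index by adding Gumbel noise to the logits and taking argmax."""
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1)
    u = rng.uniform(low=1e-12, high=1.0, size=logits.shape)
    gumbel_noise = -np.log(-np.log(u))
    return int(np.argmax(logits + gumbel_noise))

rng = np.random.default_rng(0)
logits = np.array([2.0, 0.5, -1.0])
samples = [gumbel_max_sample(logits, rng) for _ in range(10000)]
# Empirical frequencies approach softmax(logits); the argmax itself is only a
# sampling device, and the loss is then built from the log-likelihood of the
# chosen index, as the question describes.
print(np.bincount(samples, minlength=3) / len(samples))
```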
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8932577967643738, "perplexity": 1078.4876481339918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989115.2/warc/CC-MAIN-20210510064318-20210510094318-00249.warc.gz"}
https://arxiv-export-lb.library.cornell.edu/abs/2205.07588v1
cs.IT (what is this?) # Title: Characterization of the Gray-Wyner Rate Region for Multivariate Gaussian Sources: Optimality of Gaussian Auxiliary RV Abstract: Examined in this paper, is the Gray and Wyner achievable lossy rate region for a tuple of correlated multivariate Gaussian random variables (RVs) $X_1 : \Omega \rightarrow {\mathbb R}^{p_1}$ and $X_2 : \Omega \rightarrow {\mathbb R}^{p_2}$ with respect to square-error distortions at the two decoders. It is shown that among all joint distributions induced by a triple of RVs $(X_1,X_2, W)$, such that $W : \Omega \rightarrow {\mathbb W}$ is the auxiliary RV taking continuous, countable, or finite values, the Gray and Wyner achievable rate region is characterized by jointly Gaussian RVs $(X_1,X_2, W)$ such that $W$ is an $n$-dimensional Gaussian RV. It then follows that the achievable rate region is parametrized by the three conditional covariances $Q_{X_1,X_2|W}, Q_{X_1|W}, Q_{X_2|W}$ of the jointly Gaussian RVs. Furthermore, if the RV $W$ makes $X_1$ and $X_2$ conditionally independent, then the corresponding subset of the achievable rate region, is simpler, and parametrized by only the two conditional covariances $Q_{X_1|W}, Q_{X_2|W}$. The paper also includes the characterization of the Pangloss plane of the Gray-Wyner rate region along with the characterizations of the corresponding rate distortion functions, their test-channel distributions, and structural properties of the realizations which induce these distributions. Subjects: Information Theory (cs.IT) Cite as: arXiv:2205.07588 [cs.IT] (or arXiv:2205.07588v1 [cs.IT] for this version) ## Submission history From: Evagoras Stylianou [view email] [v1] Mon, 16 May 2022 11:47:29 GMT (986kb,D) Link back to: arXiv, form interface, contact.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8318781852722168, "perplexity": 1150.603585861777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00450.warc.gz"}
http://www.zazzle.com.au/lord+aprons
Showing All Results 3,540 results Page 1 of 59 Related Searches: jesus, god, christian
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8207886219024658, "perplexity": 4430.296148106386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011237144/warc/CC-MAIN-20140305092037-00071-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/is-it-possible-to-calculate-earths-massiveness.393043/
# Is it possible to calculate earth's massiveness • #1 18 0 ## Main Question or Discussion Point Hi everybody , This is the first time i am posting on this forum. A couple of days back i was having a conversation with a religious person who went on telling that the position of earth and gravity is so perfect that it was at right distance from sun and earth is massive enough that oxygen is present as gas in the atmosphere ... so i wondered how massive the earth should be that the atmosphere, under the influence of gravity compresses itself under its own weight such that oxygen is present as a liquid .... by calculating this we would know how much variation in earths mass is required for this to happen and hence determine whether this is divine plan or not. Regards Pankaj D #### Attachments • 9.1 KB Views: 362 Related Classical Physics News on Phys.org • #2 53 0 Ummm. Your post is confusing. You should proofread what you write before posting. At least I am having a very hard time understanding what you are saying. First, of course the earth is massive enough to have oxygen as a gas in the atmosphere, its what you are breathing. I think you were asking how massive a planet must be to create liquid oxygen on the surface right? Also i don't see what this has to do with God. • #3 18 0 Ummm. Your post is confusing. You should proofread what you write before posting. At least I am having a very hard time understanding what you are saying. First, of course the earth is massive enough to have oxygen as a gas in the atmosphere, its what you are breathing. I think you were asking how massive a planet must be to create liquid oxygen on the surface right? Also i don't see what this has to do with God. Sorry for my style of delivery ... yes it was even confusing to me ... i guess what i was asking is how can we calculate .... the massiveness (the mass) of a planet so that ... it creates liquid oxygen on the surface. It is related to a question that a religious person posed infront of me, he said that the gravity of earth is so right that we have gaseous oxygen in the atmosphere. other wise if it were little less earth would be incapable of holding the atmosphere and if it were to much, there would be no gaseous oxygen ... in his view it is gods divine plan ... so i thought about calculating how much variation in earths mass would create liquid oxygen ... to know how precise is the gods plan anyways. I hope u understand. • #4 Staff Emeritus 2019 Award 23,843 6,290 Whether oxygen liquifies is not so much a function of a planet's mass as its temperature. • #5 18 0 So liquifiability of a gas is more relevant to temperature than pressure ... ? If that is so do we have to consider distance of earth from sun to calculate the temperature variation that would be needed to account for oxygen's availability as a atmospheric gas ..? • #6 18 0 Whether oxygen liquifies is not so much a function of a planet's mass as its temperature. So liquifiability of a gas is more relevant to temperature than pressure ... ? If that is so do we have to consider distance of earth from sun to calculate the temperature variation that would be needed to account for oxygen's availability as a atmospheric gas ..? • #7 53 0 Ah ok I understand now. Thank you for clarifying. This is a very interesting problem. I've been trying to find a phase diagram that shows what pressure would be required to liquify water at room temperature,but to no avail. I don't understand the way different phases work above the critical point enough. 
However, this I think is unnecessary. First, the boiling point of oxygen is 90 K at 1 atmosphere of pressure. Seeing how we live at around 300 K, we can assume that the pressure would have to be a great deal higher to liquefy oxygen at these temperatures. I have found something that gives a more quantitative argument of this. If you look at the diagram on the first page of this document, http://iweb.tntech.edu/albu/teaching/CHEM3520-S04/HW06S.pdf you can see that there is a crude phase space diagram made for oxygen. The point on the liquid-gas line is the 90 K boiling point. If we look at around 150 K (around the critical point), the pressure required to keep the oxygen a liquid is approximately e^10 torr, or about 30 atmospheres. This means at about -130 degrees Celsius there would have to be 30 times as much pressure as normal to liquefy oxygen. At higher temperatures I would imagine that this would become exponentially greater. As I said before, though, my knowledge of materials above the critical point is minimal. I still feel my argument is valid to say that quite a bit of extra mass on the earth would definitely not liquefy oxygen. As for the calculation of the mass required to do this, we would first need to find the pressure at room temperature where oxygen liquefies. The lower pressure of a smaller earth not being able to hold oxygen is a less extreme assumption than the liquid oxygen thing, but I think it is definitely safe to say that the earth would have to lose quite a bit of mass before life was not possible. However, if there were less oxygen, then life would have just evolved to deal with it (to a certain extreme of course). Last edited by a moderator: • #8 18 0 Your deduction is very thoughtful and interesting. From what I understand, you are trying to say that at about 90 K almost 30 times the pressure would be required to keep oxygen as a liquid. What I see from this link here http://hypertextbook.com/facts/2005/JudyTang.shtml is that the average temperature of earth is 15 degrees Celsius. At thermodynamic equilibrium it would be -18 Celsius, so would it be safe to assume that even if earth were 30 times as massive, it would still be very tough to produce liquid oxygen on the surface until the distance of earth from the sun is so far (at least 10 times farther) that the surface temperature of earth drops to around -130 Celsius … only then can we expect liquid oxygen on the surface ….? • #9 To convert oxygen into liquid at an atmospheric temperature of 300 K, the pressure required is 5 GPa or 5000 MPa. See link http://en.citizendium.org/images/thumb/b/bc/Oxygen_phase_diagram.png/250px-Oxygen_phase_diagram.png Atmospheric pressure is 0.1 MPa. So, 50,000 times more pressure is required to convert oxygen into liquid, which means the weight of the vertical air column must increase by 50,000 times. To increase the weight of the air column, the acceleration of gravity must increase by 50,000 times, i.e. 9.81 * 50,000. The gravitational acceleration is g = G * M/R^2, where G is the universal gravitational constant and M and R are the mass and radius of the earth. Here we assume that as the mass of the earth increases there is no increase in radius, only an increase in the density of the earth. So the mass of the earth required is 50,000 times more than exists now. Even then, only the lowest layer of air undergoes this much pressure, and so only there is oxygen converted into liquid. Above this layer the weight of the air column decreases, so the air pressure also decreases, and the oxygen in the layers above is not converted into liquid.
Here we take only atmospheric pressure is equal to oxygen pressure but it is required to take partial pressure of oxygen into consideration. Oxygen present into air is 21% say fifth part of all gases. So partial pressure of oxygen is fifth part of 0.1 MPa. So, to convert it into liquid five times more pressure & so five time more gravity acceleration & so mass required. So, mass of earth required 50,000 * 5=2,50,000 times more. If, we take earth of only mass of 50,000 times more, than also if liquid oxygen put in open atmosphere, it is not converted into gas immediately, because only top layer of liquid oxygen undergo partial pressure of oxygen (Fifth time of atmosphere) & so start converted into gas. But the layer below this undergo pressure of oxygen is equal to atmosphere pressure because it is surrounded by liquid oxygen, so it remain liquid. • #10 18 0 To convert oxygen into water at atmospheric temperature 300 K, pressure required is 5 GPa or 5000 MPa. See link http://en.citizendium.org/images/thumb/b/bc/Oxygen_phase_diagram.png/250px-Oxygen_phase_diagram.png Atmospheric pressure is 0.1 MPa. Here Oxygen conversion takes place at 300K which is roughly 30 degree Celsius or room temperature. The answer is complete, except i have one doubt that we dont take temperature into consideration. The earth is able to hold heat in the atmosphere due to green house effect .... what would happen if we were so far away from sun that the average temperature drops . Now lets assume that if earth were not 50000*5 times massive and let alone temperature play the equation would it be possible to achieve liquid oxygen on the surface by only decreasing the temperature and not changing the pressure at all, because as you observed that partial pressure of liquid oxygen below the top layer in liquid oxygen is equal to atmospheric pressure or 5 times partial pressure. And If so .. how far fro sun do we have to be observe this phenomenon. Last edited: • #11 22 0 It's a combination of different factors, gravity/mass, distance from sun/temperature, atmosphere, external gravitational forces etc. Your friend has probably half understood some version of the 'goldilocks' theory where hundreds of coincidences make our existance possible, everything is 'just right' exactly. An argument against that is that there are countless billions of solar systems, and the one we are part of is by definition the one able to support life. There is a deeper argument on the nature of quantum physics, on the nature of consciousness, on the nature of the big bang, and on the magic of the symetry of equations. Your friend means well, he is trying to bring you closer to heaven, but his science is poor. • #12 24 0 I'm gonna feed this debate, and maybe provide a stronger argument to counter your religious acquaintance. Consider the moon of Saturn, Enceladus (don't know it-- Wikipedia it. It was in a recent Scientific American). It is clearly very far from the sun. It is much smaller than the earth. And it has water. How? It's made out of the right material, and a tidal force caused by the eccentricity of its orbit makes it generate heat. There are clearly many circumstances in which water can arise. Granted, Enceladus' situation is rather unique--one of Saturn's other moons is "pulling it along" in their synchronized orbits. Still, my point stands, that there are many situations in which we can have liquid water, and so to think that the only variables involved are mass and distance from a star, is narrow-minded. 
Plaster is absolutely right: there are many other variables involved, and the earth, while albeit it is still unique to our knowledge, our discoveries are making it seem less and less unique every day. --Jake • #13 DaveC426913 Gold Member 18,599 2,055 If Earth were too distant and cold for God to put people on it, he'd have just put them on Venus instead, where it's warmer. Just like if Mars were too distant and cold for God to put people on it, he'd have just put them on Earth instead... oh wait, he did. • #14 Here Oxygen conversion takes place at 300K which is roughly 30 degree Celsius or room temperature. The answer is complete, except i have one doubt that we dont take temperature into consideration. The earth is able to hold heat in the atmosphere due to green house effect .... what would happen if we were so far away from sun that the average temperature drops . Now lets assume that if earth were not 50000*5 times massive and let alone temperature play the equation would it be possible to achieve liquid oxygen on the surface by only decreasing the temperature and not changing the pressure at all, because as you observed that partial pressure of liquid oxygen below the top layer in liquid oxygen is equal to atmospheric pressure or 5 times partial pressure. And If so .. how far fro sun do we have to be observe this phenomenon. if we were so far away from sun that the average earth temperature drops & as it attain 90 K, oxygen start to liquefy, as 90 K is boiling temp. of oxygen at atm. Pressure. To calculate distance so many assumptions are required. Take only sun as a source of energy for earth, no other source. Take earth has constant specific heat. Average temperature of empty space is 3 K. May be average temperature of empty space in the galaxy may by different & higher than 3 K. So If sun is not exist the temp. of earth comes down to 3 K But for simplicity take it as 0 K. Average temp. of earth is 13 C, 286 K. Sun gives some energy Q & increase temp. of earth 286 K. To increase temp. of earth to temp. 90 K only (90/286) * Q energy is required. It is required energy from sun to earth decreases to (286/90)=3.1777th part. If distance increased to double, radiation energy decreased to fourth part (Inverse square law). So, If we want to decrease sun energy to 3.1777th part, the distance required to increase by sqrt. of 3.1777=1.7824. So, minimum earth distance required from sun to attain temp. below 90 K is 149 million km * 1.7824= 265 million km • #15 18 0 if we were so far away from sun that the average earth temperature drop……. sqrt. of 3.1777=1.7824. So, minimum earth distance required from sun to attain temp. below 90 K is 149 million km * 1.7824= 265 million km Thanks Suhag u are amazing ….. • #16 18 0 I'm gonna feed this debate, and maybe provide a stronger argument to counter your religious acquaintance. Consider the moon of Saturn, Enceladus (don't know it-- Wikipedia it. It was in a recent Scientific American). It is clearly very far from the sun. It is much smaller than the earth. And it has water. How? It's made out of the right material, and a tidal force caused by the eccentricity of its orbit makes it generate heat. There are clearly many circumstances in which water can arise. Granted, Enceladus' situation is rather unique--one of Saturn's other moons is "pulling it along" in their synchronized orbits. 
Still, my point stands, that there are many situations in which we can have liquid water, and so to think that the only variables involved are mass and distance from a star, is narrow-minded. Plaster is absolutely right: there are many other variables involved, and the earth, while albeit it is still unique to our knowledge, our discoveries are making it seem less and less unique every day. --Jake Thanks Mazerkham got some more material to counter the religious fanatic ….
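Post #14's distance estimate assumes the surface temperature scales linearly with the absorbed solar power; a more common back-of-the-envelope model is radiative equilibrium, where the equilibrium temperature scales as the inverse square root of the distance (greenhouse warming and albedo ignored in both). A short sketch, purely illustrative and using the thread's round numbers, compares the two:

```python
import math

T_now = 286.0       # rough mean surface temperature used in post #14, K
T_target = 90.0     # boiling point of oxygen at 1 atm, K
d_now_mkm = 149.0   # Earth-sun distance, millions of km

# Post #14's assumption: T proportional to absorbed power, power ~ 1/d**2,
# so d scales with sqrt(T_now / T_target).
d_linear = d_now_mkm * math.sqrt(T_now / T_target)

# Radiative-equilibrium assumption: T ~ d**(-1/2), so d scales with (T_now / T_target)**2.
d_radiative = d_now_mkm * (T_now / T_target) ** 2

print(f"linear-in-power model:       ~{d_linear:.0f} million km")    # ~265, as in the thread
print(f"radiative-equilibrium model: ~{d_radiative:.0f} million km")  # ~1500, roughly Saturn's orbit
```

The second figure is consistent with post #8's guess of "at least 10 times farther".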
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8488359451293945, "perplexity": 897.6813675169424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145260.40/warc/CC-MAIN-20200220162309-20200220192309-00402.warc.gz"}
https://socratic.org/questions/how-do-you-write-769-5-in-scientific-notation
Algebra Topics # How do you write 769.5 in scientific notation? You keep the number of significant figures as the original number. Therefore, it is $7.695 \times {10}^{2}$. We do not know any more decimal places than that. It was not $769.50$.
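If you want to check a conversion like this mechanically, most languages can print a number in scientific notation directly; the one-liner below is just an illustration:

```python
value = 769.5
# Four significant figures -> three digits after the decimal point in the mantissa.
print(f"{value:.3e}")   # 7.695e+02, i.e. 7.695 x 10^2
```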
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8776696920394897, "perplexity": 284.2239738602507}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00439.warc.gz"}
https://socratic.org/questions/how-do-you-solve-for-p-in-s-b-1-2-pl
Algebra Topics # How do you solve for P in S=B+(1/2)Pl? Mar 24, 2017 See the entire solution process below: #### Explanation: First, subtract $\textcolor{red}{B}$ from each side of the equation to isolate the term containing $P$ while keeping the equation balanced: $S - \textcolor{red}{B} = - \textcolor{red}{B} + B + \left(\frac{1}{2}\right) P l$ $S - B = 0 + \left(\frac{1}{2}\right) P l$ $S - B = \left(\frac{1}{2}\right) P l$ Next, multiply each side of the equation by $\textcolor{red}{2}$ to eliminate the fraction while keeping the equation balanced: $\textcolor{red}{2} \left(S - B\right) = \textcolor{red}{2} \times \left(\frac{1}{2}\right) P l$ $2 \left(S - B\right) = \cancel{\textcolor{red}{2}} \times \left(\frac{1}{\textcolor{red}{\cancel{\textcolor{b l a c k}{2}}}}\right) P l$ $2 \left(S - B\right) = P l$ Now, divide each side of the equation by $\textcolor{red}{l}$ to solve for $P$ while keeping the equation balanced: $\frac{2 \left(S - B\right)}{\textcolor{red}{l}} = \frac{P l}{\textcolor{red}{l}}$ $\frac{2 \left(S - B\right)}{l} = \frac{P \textcolor{red}{\cancel{\textcolor{b l a c k}{l}}}}{\cancel{\textcolor{red}{l}}}$ $\frac{2 \left(S - B\right)}{l} = P$ $P = \frac{2 \left(S - B\right)}{l}$ ##### Impact of this question 3057 views around the world You can reuse this answer
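The same rearrangement can be checked with a computer algebra system; this snippet is only a verification aid, not part of the original answer:

```python
from sympy import symbols, Eq, solve, Rational

S, B, P, l = symbols('S B P l')
equation = Eq(S, B + Rational(1, 2) * P * l)
# Returns a single solution equivalent to 2*(S - B)/l.
print(solve(equation, P))
```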
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 15, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9044500589370728, "perplexity": 429.7785836666708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655900335.76/warc/CC-MAIN-20200709131554-20200709161554-00097.warc.gz"}
https://www.ias.ac.in/describe/article/pram/037/03/0281-0292
• Decoupling potentials for the three-body final state Schrödinger equation of knockout reactions • # Fulltext https://www.ias.ac.in/article/fulltext/pram/037/03/0281-0292 • # Keywords Three-body decoupled equation; distorted wave impulse approximation • # Abstract In the conventional distorted wave impulse approximation (DWIA) approach the three-body final state of a knockout reaction is decoupled by assuming a plane wave form for the coupling term. The influence of this decoupling approximation on the analyses of cluster knockout reactions has been investigated for a test case where the exact solution is obtainable. A proper treatment of the coupling term causes large oscillations in the effective distorting optical potentials for the decoupled Schrödinger equation. These decoupling potentials depend strongly not only on the partial wave angular momentum, l, but also on their azimuthal projection, m. • # Author Affiliations 1. Nuclear Physics Division, Bhabha Atomic Research Centre, Bombay - 400 085, India • # Pramana – Journal of Physics
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.906542956829071, "perplexity": 4730.727408601329}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572516.46/warc/CC-MAIN-20190916080044-20190916102044-00328.warc.gz"}
http://cab.unime.it/mus/348/
# Non-Hamiltonian Commutators in Quantum Mechanics

Sergi, Alessandro (2005) Non-Hamiltonian Commutators in Quantum Mechanics. Physical Review E. (Submitted)

## Abstract

The symplectic structure of quantum commutators is first unveiled and then exploited to introduce generalized non-Hamiltonian brackets in quantum mechanics. It is easily recognized that quantum-classical systems are described by a particular realization of such a bracket. In light of previous work, this introduces a unified approach to classical and quantum-classical non-Hamiltonian dynamics. In order to illustrate the use of non-Hamiltonian commutators, it is shown how to define thermodynamic constraints in quantum-classical systems. In particular, quantum-classical Nos\'e-Hoover equations of motion and the associated stationary density matrix are derived. The non-Hamiltonian commutators for both Nos\'e-Hoover chains and Nos\'e-Andersen (constant-pressure constant temperature) dynamics are also given. Perspectives of the formalism are discussed.

Submitted to Physical Review E on August 8, 2005.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8307048082351685, "perplexity": 3674.7790860601635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00183-ip-10-171-10-70.ec2.internal.warc.gz"}
https://electronics.stackexchange.com/questions/437308/relation-between-settling-time-and-natural-frequency-for-a-critically-damped-sys
# Relation between settling time and natural frequency for a critically damped system In an article by Wie et al. (1989), for the purpose of calculating the gains of a controller, the equations of motion are approximated by the expression for a damped harmonic oscillation: $$\ddot{\theta} + d \dot{\theta} + \frac{k}{2} \theta = 0$$ where, of course: $$d = 2 \zeta \omega _n \quad ; \quad \frac{k}{2} = \omega^2 _n$$ In the design example presented in the article, a settling time of $$T_s = 50 \text{ sec}$$ is assumed. Now, the article states that for a critically damped response, $$\omega _n = 0.158 \text{ rad/s.}$$ Indeed, using this natural frequency to calculate the controller gains yields responses that settle after about 50 seconds: However, when using the relation between the settling time and the natural frequency for a critically damped system as presented in this answer, one obtaines: $$\omega_n \approx \frac{5.8335}{T_s} = \frac{5.8335}{50} = 0.117 \text{ rad/s.}$$ So the question is, how was the natural frequency calculated in the article? You need to be careful when using settle time approximations that relate $$\\omega_n\$$ and $$\\zeta\$$ to a $$\T_s\$$. First, $$\T_s\$$ is defined as the time where the signal remains within $$\\pm x\$$% of the final value. Typically the settle limits are $$\\pm 2\$$% or $$\\pm 5\$$%, and these limits impact all second order system settle time approximations. Yes, every 2nd order settle time equation is an approximation, and they all differ as a function of $$\\zeta\$$. Next, $$\omega_n \approx \frac{5.8335}{T_s},$$ is only a numerical approximation to the normalized unit step response for a critically damped system (i.e., $$\\zeta=1\$$), which is $$\theta(t)= \left(1-(1+\omega_n t)e^{-\omega_n t}\right)u(t),$$ where $$\u(t)\$$ is a unit step (aka Heaviside step). The settle time approximation for this is eloquently derived in your "this answer" link using a $$\\pm 2\$$% window to numerically arrive at $$\\omega_n \approx \frac{5.8335}{T_s}\$$. When I look at the graphs of the data you shared, it looks like the 2% settle time for the critically damped trace is closer to 35. This results in $$\\omega_n\approx\frac{5.8335}{35}= 0.167\$$ rad/sec. A result is closer to the actual $$\\omega_n\$$ in the article. Better graphs, and clarification on which trace is actually the critically damped one, will improve this estimate. Sorry, but I do not have free access to your article, so cannot answer your question directly, but most likely the authors knew $$\\omega_n\$$ directly. • Thanks @uRog, I am aware that the relation mentioned is merely an approximation. In the article, the authors first define the settling time to be 50 seconds. Next, they calculate $\omega _n$, assuming that $\zeta = 1$. Yet how remains a mystery. – woeterb May 7 '19 at 13:36 • Yes, without seeing the paper, it is hard to say. Since you know $\omega_n$ and $\zeta$, you can use Python + controls package or MATLAB + control toolbox to simulate an initial condition response to replicate Q1. Another option is to plot $\theta(t)= \left(0.58(1+\omega_n t)e^{-\omega_n t}\right)u(t),$ vs $t$, assuming the i.c. was 0.58, and see what $T_s$ really is. This will help you understand what is happening at $t=50$ – uRog May 7 '19 at 15:57 • Already did that, settling times turned out to be much lower than 50. Wondering then why the authors would so explicitly state 50. Regarding access to scientific papers, have you ever heard of Sci-Hub? sci-hub.se – woeterb May 7 '19 at 16:22
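The numerical relationship discussed in the answer, namely that for a critically damped second-order system the 2% settle time satisfies roughly $\omega_n T_s \approx 5.83$, can be reproduced by simply scanning the unit step response. The snippet below is an illustrative check, not taken from the cited article:

```python
import numpy as np

def settle_time(omega_n, tol=0.02, t_max=200.0, n=200_001):
    """2% settle time of the critically damped unit step response
    theta(t) = 1 - (1 + w_n t) * exp(-w_n t)."""
    t = np.linspace(0.0, t_max, n)
    theta = 1.0 - (1.0 + omega_n * t) * np.exp(-omega_n * t)
    outside = np.abs(theta - 1.0) > tol
    # Last instant the response is still outside the +/- 2% band.
    return t[outside][-1] if outside.any() else 0.0

for w in (0.117, 0.158):
    Ts = settle_time(w)
    print(f"omega_n = {w:.3f} rad/s -> Ts ~ {Ts:.1f} s, omega_n*Ts ~ {w * Ts:.2f}")
# omega_n = 0.117 gives Ts ~ 50 s; omega_n = 0.158 gives Ts ~ 37 s;
# in both cases omega_n*Ts ~ 5.83, matching the approximation in the question.
```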
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 22, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.920802652835846, "perplexity": 458.4879509548892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737206.16/warc/CC-MAIN-20200807172851-20200807202851-00430.warc.gz"}
https://en.wikibooks.org/wiki/Engineering_Acoustics/Electro-acoustic_analogies
# Engineering Acoustics/Electro-acoustic analogies Edit this template Part 1: Lumped Acoustical Systems – 1.1 – 1.2 – 1.3 – 1.4 – 1.5 – 1.6 – 1.7 – 1.8 – 1.9 – 1.10 – 1.11 Part 2: One-Dimensional Wave Motion – 2.1 – 2.2 – 2.3 Part 3: Applications – 3.1 – 3.2 – 3.3 – 3.4 – 3.5 – 3.6 – 3.7 – 3.8 – 3.9 – 3.10 – 3.11 – 3.12 – 3.13 – 3.14 – 3.15 – 3.16 – 3.17 – 3.18 – 3.19 – 3.20 – 3.21 – 3.22 – 3.23 – 3.24 ## Electro-acoustical Analogies ### Acoustical Mass Consider a rigid tube-piston system as following figure. Piston is moving back and forth sinusoidally with frequency of f. Assuming $f << \frac{c}{l\ or\ \sqrt{S}}$ (where c is sound velocity $c=\sqrt{\gamma R T_0}$), volume of fluid in tube is, $\Pi_v=S\ l$ Then mass (mechanical mass) of fluid in tube is given as, $M_M= \Pi_v \rho_0 = \rho_0 S\ l$ For sinusoidal motion of piston, fluid move as rigid body at same velocity as piston. Namely, every point in tube moves with the same velocity. Applying the Newton's second law to the following free body diagram, $SP'=(\rho_0Sl)\frac{du}{dt}$ $\hat{P}=\rho_0l(j\omega)\hat{u}=j\omega(\frac{\rho_0l}{S})\hat{U}$ Where, plug flow assumption is used. "Plug flow" assumption: Frequently in acoustics, the velocity distribution along the normal surface of fluid flow is assumed uniform. Under this assumption, the acoustic volume velocity U is simply product of velocity and entire surface. $U=Su$ #### Acoustical Impedance Recalling mechanical impedance, $\hat{Z}_M=\frac{\hat{F}}{\hat{u}}=j\omega(\rho_0Sl)$ acoustical impedance (often termed an acoustic ohm) is defined as, $\hat{Z}_A=\frac{\hat{P}}{\hat{U}}=\frac{Z_M}{S^2}=j\omega(\frac{\rho_0l}{S})\quad \left[\frac{N s}{m^5}\right]$ where, acoustical mass is defined. $M_A=\frac{\rho_0l}{S}$ #### Acoustical Mobility Acoustical mobility is defined as, $\hat{\xi}_A=\frac{1}{\hat{Z}_A}=\frac{\hat{U}}{\hat{P}}$ #### Acoustical Resistance Acoustical resistance models loss due to viscous effects (friction) and flow resistance (represented by a screen). File:Ra analogs.png rA is the reciprocal of RA and is referred to as responsiveness. ### Acoustical Generators The acoustical generator components are pressure, P and volume velocity, U, which are analogus to force, F and velocity, u of electro-mechanical analogy respectively. Namely, for impedance analog, pressure is analogus to voltage and volume velocity is analogus to current, and vice versa for mobility analog. These are arranged in the following table. Impedance and Mobility analogs for acoustical generators of constant pressure and constant volume velocity are as follows: File:Acoustic gen.png ### Acoustical Compliance Consider a piston in an enclosure. File:Enclosed Piston.png When the piston moves, it displaces the fluid inside the enclosure. Acoustic compliance is the measurement of how "easy" it is to displace the fluid. Here the volume of the enclosure should be assumed to be small enough that the fluid pressure remains uniform. Assume no heat exchange 1.adiabatic 2.gas compressed uniformly , p prime in cavity everywhere the same. from thermo equitation File:Equ1.jpg it is easy to get the relation between disturbing pressure and displacement of the piston File:Equ3.gif where U is volume rate, P is pressure according to the definition of the impendance and mobility, we can getFile:Equ4.gif Mobility Analog VS Impedance Analog File:Comp.gif ### Examples of Electro-Acoustical Analogies Example 1: Helmholtz Resonator Assumptions - (1) Completely sealed cavity with no leaks. 
(2) Cavity acts like a rigid body inducing no vibrations. Solution: - Impedance Analog - File:Example2holm1sol.JPG Example 2: Combination of Side-Branch Cavities File:Exam2prob.JPG Solution: - Impedance Analog - File:Exam2sol.JPG Back to Main page
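As a numerical companion to the elements defined above, the acoustical mass of a neck and the acoustical compliance of a cavity can be combined to estimate a Helmholtz resonance. The dimensions below are made-up example values, and the cavity compliance is taken as the standard lumped result $C_A = V/(\rho_0 c^2)$ implied by the compliance section; end corrections on the neck are ignored in this sketch.

```python
import math

rho0 = 1.21      # air density, kg/m^3
c = 343.0        # speed of sound, m/s

# Example (assumed) neck and cavity dimensions
S = 5.0e-4       # neck cross-sectional area, m^2
l = 0.05         # neck length, m
V = 1.0e-3       # cavity volume, m^3

M_A = rho0 * l / S            # acoustical mass of the neck, kg/m^4
C_A = V / (rho0 * c * c)      # acoustical compliance of the cavity, m^5/N

f0 = 1.0 / (2.0 * math.pi * math.sqrt(M_A * C_A))   # series LC resonance of the analog circuit
print(f"M_A = {M_A:.1f} kg/m^4, C_A = {C_A:.3e} m^5/N, f0 ~ {f0:.0f} Hz")   # ~173 Hz
```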
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8645761013031006, "perplexity": 2082.877200821047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195036749.19/warc/CC-MAIN-20150601214356-00082-ip-10-180-206-219.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/27874-identity.html
# Math Help - identity

1. ## identity

How do you prove the following: Given the direction cosines of two lines $\cos \alpha, \ \cos \beta, \ \cos \gamma$ and $\cos \alpha', \cos \beta', \cos \gamma'$, then the cosine of the angle $\theta$ between the two lines is $\cos \theta = \cos \alpha \cos \alpha ' + \cos \beta \cos \beta ' + \cos \gamma \cos \gamma '$?

2. Hello, heathrowjohnny! Given the direction cosines of two lines: . $[\cos\alpha,\cos\beta,\cos\gamma]$ and $[\cos\alpha',\cos\beta',\cos\gamma']$, then the cosine of the angle $\theta$ between the two lines is: . . $\cos\theta \;= \;\cos\alpha\cos\alpha' + \cos\beta\cos\beta' + \cos\gamma\cos\gamma'$ Line $L_1$ has vector: . $\vec{u} \:=\:\langle\cos\alpha,\,\cos\beta,\,\cos\gamma\rangle$ . . . and: . $|\vec{u}| \:=\:1$ Line $L_2$ has vector: . $\vec{v} \:=\:\langle\cos\alpha',\,\cos\beta',\,\cos\gamma'\rangle$ . . . and: . $|\vec{v}| \:=\:1$ . . That is, a set of direction cosines always forms a unit vector. The angle $\theta$ between two vectors $\vec{u}\text{ and }\vec{v}$ is given by: . $\cos\theta \;=\;\frac{\vec{u}\cdot\vec{v}}{|\vec{u}||\vec{v}|}$ So we have: . $\cos\theta \;=\;\frac{\langle\cos\alpha,\cos\beta,\cos\gamma\rangle\cdot\langle\cos\alpha',\cos\beta',\cos\gamma'\rangle}{(1)(1)}$ Therefore: . $\cos\theta \;=\;\cos\alpha\cos\alpha' + \cos\beta\cos\beta' + \cos\gamma\cos\gamma'$
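A quick numerical sanity check of the identity (not part of the original thread): take two arbitrary vectors, normalise them so their components are direction cosines, and compare both sides.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=3); u /= np.linalg.norm(u)   # direction cosines of line 1
v = rng.normal(size=3); v /= np.linalg.norm(v)   # direction cosines of line 2

theta = np.arctan2(np.linalg.norm(np.cross(u, v)), np.dot(u, v))  # angle between the lines
lhs = np.cos(theta)
rhs = u[0]*v[0] + u[1]*v[1] + u[2]*v[2]          # cos(a)cos(a') + cos(b)cos(b') + cos(c)cos(c')
print(np.isclose(lhs, rhs))                      # True
```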
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9941872358322144, "perplexity": 400.1930861651726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257831771.10/warc/CC-MAIN-20160723071031-00025-ip-10-185-27-174.ec2.internal.warc.gz"}
https://electronics.stackexchange.com/questions/364137/ideal-op-amp-why-isnt-output-voltage-zero
# Ideal OP-AMP - why isn't output voltage zero?

Here is what I know about the Ideal Op-Amp.

• The open loop voltage gain is infinite
• The output voltage is given by the following $v_o = A(v_+ - v_-)$
• Only with a negative feedback loop is $v_+ = v_-$

My query is with regard to the negative feedback loop as shown below:

My queries are as follows:

1. Clearly, $v_o = A(v_+ - v_-)$ should still apply and since $v_+ = v_-$, shouldn't the output voltage $v_o = 0$ always?

2. Since $v_o = A(v_+ - v_-)$ should still apply, is $A$ still the open-loop voltage gain, which for an ideal op-amp is infinity? Thus, would the output voltage always be infinity?

• google offsets in opamps – Transistor Mar 23 '18 at 2:04
• v- will actually be very slightly lower than v+. In fact how much lower it is depends on the inverse of the op-amp's gain. As the gain goes to infinity the difference will go to zero. – immibis Mar 23 '18 at 2:47
• $A$ is the open loop gain and it is infinity as you said and $\lim_{x\rightarrow\infty}x\times 0$ can be anything – Curd Mar 23 '18 at 11:39

## 3 Answers

When analyzing an op-amp circuit, we don't assume a priori that the differential input voltage is 0. We assume that the gain of the amplifier is very large, and the input impedances are very large. If we then set up a negative feedback circuit, we find that in the limit as the op-amp gain goes to infinity, the differential input voltage will go to zero. The differential input voltage would not go to zero if the output voltage were always zero. (And of course in a real op-amp the gain is not actually infinite and therefore the input voltage is not actually zero.) Main point: The input voltage goes to zero as a result of the gain being very high and the output voltage going to some non-zero value, not the other way around.

• Ah. That makes much more sense. One more thing. Why is it only when we have a negative feedback circuit that the differential input voltage goes to zero? I understand why that's true, but why isn't it true for just an open-loop op-amp, since the gain is still infinite there. – AlfroJang80 Mar 23 '18 at 3:17
• With open loop, the output would just be the gain times the input voltage (limited by the power supply voltage). With a closed negative feedback loop, a deviation of the input from zero drives the output in such a way that it drives the input back towards zero. – The Photon Mar 23 '18 at 4:43
• Ah. I think I understand now. One last thing, so with an open loop op-amp circuit, the output voltage would be extremely high, correct, and would be driven to the supply voltages? The addition of the feedback controls this. Would that be correct? – AlfroJang80 Mar 23 '18 at 16:41
• @Alfro, yes, that's basically correct. – The Photon Mar 23 '18 at 16:51
• Hmm. I just came across input offset voltage defined as being a DC voltage that needs to be applied to the input to force the output to zero. That doesn't make sense to me. Why would we want to force the output to zero? Is this talking about the open-loop case? – AlfroJang80 Mar 23 '18 at 17:28

Zero times infinity is indeterminate. It may be zero or infinity or something in between. You should calculate the output voltage for a large open-loop gain and look at what happens to the output voltage as the gain approaches infinity.
You should find that it approaches -Vin*R2/R1.

And you should find that the differential input voltage at the op-amp approaches zero, but for every (finite) value of gain, no matter how large, there will be a small, but non-zero, value for the differential input voltage.

• The open loop voltage gain is infinite
• The output voltage is given by the following $v_o = A(v_+ - v_-)$

Then A must be infinity, and for a finite output voltage the two inputs have to be equal, i.e. infinity × zero is finite.
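To make the last two answers concrete, here is a small numerical sketch (added here, not part of the original thread) for an inverting amplifier with finite open-loop gain $A$: it solves the two node equations directly and shows the output converging to $-V_{in}R_2/R_1$ while the differential input voltage shrinks toward zero as $A$ grows. The component values are arbitrary illustrative choices.

```python
import numpy as np

# Hypothetical inverting-amplifier values (illustrative only)
Vin, R1, R2 = 1.0, 1e3, 10e3   # 1 V input, R1 = 1k, R2 = 10k

for A in [10, 1e2, 1e4, 1e6, 1e8]:
    # Unknowns: x = [v_minus, v_out]; the non-inverting input is grounded.
    # KCL at the inverting node: (Vin - v_minus)/R1 = (v_minus - v_out)/R2
    # Op-amp relation:           v_out = A * (0 - v_minus)
    M = np.array([[1/R1 + 1/R2, -1/R2],
                  [A,            1.0 ]])
    rhs = np.array([Vin/R1, 0.0])
    v_minus, v_out = np.linalg.solve(M, rhs)
    print(f"A={A:8.0e}  v_out={v_out:+.6f} V  v_minus={v_minus:+.3e} V")

# As A -> infinity, v_out -> -Vin*R2/R1 = -10 V while v_minus -> 0,
# i.e. "gain -> infinity" times "differential input -> 0" stays finite.
```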
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8845633864402771, "perplexity": 531.8680394486057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997801.20/warc/CC-MAIN-20190616062650-20190616084650-00439.warc.gz"}
http://blogs.ethz.ch/kowalski/category/exercise/
# E. Kowalski's blog

## Archive for Exercise

27.10.2014

### 0.00023814967230605090687395214144185337601

Filed under: Exercise,Mathematics,Science @ 19:38

Yesterday my younger son was playing dice; the game involved throwing 6 dice simultaneously, and he threw a complete set 1, 2, 3, 4, 5, 6, twice in a row! Is that a millennial-style coincidence worth cosmic pronouncements? Actually, not that much: since the dice are indistinguishable, the probability of a single throw of this type is

$\frac{6!}{6^6}\simeq 0.015432098765432098765432098765432098765,$

so about one and a half percent. And for two, assuming independence, we get a probability

$\frac{(6!)^2}{6^{12}}\simeq 0.00023814967230605090687395214144185337601,$

or a bit more than one chance in five thousand. This is small, but not extraordinarily so. (The dice are thrown from a cup, so the independence assumption is quite reliable here.)

11.06.2014

### Leo’s first theorem

Filed under: Exercise,Mathematics @ 11:46

I learnt the following from my son Léo: the teacher asks to compute $9+9$; that’s easy

$9\ +\ 9\ =\ 18.$

But no! The actual question is to compute $9$ times $9$! We must correct this! But it’s just as easy without starting from scratch: we turn the “plus” cross a quarter turn on the left-hand side:

$9\ \times\ 9$

and then switch the digits on the right-hand side:

$9\ \times\ 9\ =\ 81.$

This is a fun little random fact about integers and decimal expansions, certainly. But there’s a bit more to it than that: it is in fact independent of the choice of base $10$, in the sense that if we pick any other integer $b\geq 2$, and consider base $b$ expansions, then we also have

$(b-1)\ +\ (b-1)\ =\ 2b-2\ =\ b+(b-2)= \underline{1}\,\underline{b-2}$

as well as

$(b-1)\ \times\ (b-1)\ =\ (b-1)^2=b(b-2)+1= \underline{b-2}\,\underline{1},$

(where we underline individual digits in base $b$ expansion.) At this point it is natural to ask if there are any other Léo-pairs $(x,y)$ to base $b$, i.e., pairs of digits in base $b$ such that the base $b$ expansions of the sum and the product of $x$ and $y$ are related by switching the two digits (where we always get two digits in the result by viewing a one-digit result $z$ as $\underline{0}\, \underline{z}$). It turns out that, whatever the base $b$, the only such pairs are $(b-1,b-1)$ and the “degenerate” case $(0,0)$.

To see this, there are two cases: either the addition $x+y$ leads to a carry, or not. If it does, this means that $y=b-z$ where $x>z$. The sum is then

$x+y=b+(x-z)=\underline{1}\,\underline{x-z}.$

So this is a Léo-pair if and only if

$xy=\underline{x-z}\,\underline{1}.$

This equation, in terms of $x$ and $z$, becomes

$x(b-z)=b(x-z)+1,$

which holds if and only if $z(b-x)=1$. Since the factors are integers and non-negative, this is only possible if $z=b-x=1$, which means $x=y=b-1$, the solution found by Léo.

Now suppose there is no carry. This means that we have $0\leq x,y\leq b-1$ and $x+y\leq b-1$. Then

$x+y=\underline{0}\,\underline{x+y},$

and we have a Léo-pair if and only if

$xy=\underline{x+y}\,\underline{0},$

i.e., if and only if $xy=b(x+y)$. This is not an uninteresting little equation! For a fixed $b$ (which could now be any non-zero rational), this defines a simple quadratic curve. Without the restrictions on the size of the solution $(x,y)$, there is always a point on this curve, namely

$(x_0,y_0)=(2b,2b).$

This does not fit our conditions, of course. But we can use it to find all other integral solutions, as usual for quadratic curves.
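Before carrying out that computation, the claim that $(0,0)$ and $(b-1,b-1)$ are the only Léo-pairs is easy to confirm by brute force; the small Python sketch below (my own, not part of the post) scans all digit pairs for bases up to 20.

```python
def leo_pairs(b):
    """All digit pairs (x, y) in base b such that the two-digit base-b
    expansions of x+y and x*y are digit-swaps of each other."""
    pairs = []
    for x in range(b):
        for y in range(b):
            s, p = x + y, x * y            # both fit in two base-b digits
            s_digits = (s // b, s % b)
            p_digits = (p // b, p % b)
            if s_digits == (p_digits[1], p_digits[0]):
                pairs.append((x, y))
    return pairs

for b in range(2, 21):
    assert set(leo_pairs(b)) == {(0, 0), (b - 1, b - 1)}
print("Only (0,0) and (b-1,b-1) for every base b from 2 to 20.")
```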
First, any line through $(x_0,y_0)$ intersects the curve in a second point, which has rational coordinates if the line is also defined by rational coefficients, and conversely. Doing this, some re-arranging and checking leads to the parameterization

$\begin{cases} x=b+k\\ y=b+\frac{b^2}{k}\end{cases}$

of the rational solutions to $xy=b(x+y)$, where $k$ is an arbitrary non-zero rational number. In this case, this can also be found more easily by simply writing the equation in the form

$0=xy-bx-by=(x-b)(y-b)-b^2\ldots$

Now assume that $b\geq 2$ is an integer, and we want $(x,y)$ to be integers. This holds if and only if $k$ is an integer such that $k\mid b^2$. Such solutions certainly exist, but do they satisfy the digit condition? The answer is yes if and only if $k=-b$, which means $x=y=0$, giving the expected degenerate pair. Indeed, to have $x<b$, the parameter $k$ must be a negative divisor of $b^2$. We write $k=-d$ with $d\mid b^2$ positive. Then to have non-negative digits, we must have

$\begin{cases} x=b-d\geq 0\\ y=b-\frac{b^2}{d}\geq 0\end{cases},$

the first one of these inequalities means $b\geq d$, while the second means that $b\leq d$, so that $b=d$, and hence $x=y=0$: the degenerate pair is the only remaining solution.

11.05.2014

### The discrete spectrum is discrete

Filed under: Exercise,Mathematics @ 19:06

No, this post is not an exercise in tautological reasoning: the point is that the word “discrete” is relatively overloaded. In the theory of automorphic forms, “discrete spectrum” (or “spectre discret”) is the same as “cuspidal spectrum”, and refers to those automorphic representations (of a given group $G$ over a given global field $F$) which are realized as closed subspaces of the relevant representation space on $L^2(G(F)\backslash G(\mathbf{A}_F))$, by opposition with the “continuous spectrum”, whose components may fail to be actual subrepresentations. However, we can also think of automorphic representations as points in the unitary dual of the relevant adélic group $G(\mathbf{A}_F)$, and this has a natural topology (the Fell topology), for which it then makes sense to ask whether the set of all cuspidal automorphic representations is discrete or not.

The two notions might well be unrelated: for instance, if we take the group $\mathrm{SL}_2(\mathbf{R})$ and the direct sum over $t\in\mathbf{Q}$ of the principal series with parameter $t$, we obtain a representation containing only “discrete” spectrum, but parameterized by a “non-discrete” subset of the unitary dual.

It is nevertheless true that the discrete spectrum is discrete in the unitary dual, in the automorphic case. I am sure this is well-known. (Unless the topology on the unitary dual is much weirder than I expect; my expertise in this respect is quite limited, but I remember asking A. Venkatesh who told me that the pathologies of the unitary dual topology are basically irrelevant as far as the automorphic case is concerned). I think this must be well-known, but I don’t remember seeing it mentioned anywhere (hence this post…)

Here is the argument, at least over number fields, and for $\mathrm{GL}_n$. Let $\pi$ be a given cuspidal representation. The unitary dual topology on the adélic group is, if I understand right, the restricted direct product topology with respect to the unramified spectrum. So a neighborhood of $\pi$ is determined by a finite set $S$ of places and corresponding neighborhoods $U_s$ of $\pi_s$ for $s\in S$. We want to find such a neighborhood in which $\pi$ is the unique automorphic cuspidal representation.
First, we can fix a neighborhood $U_{\infty}$ of the archimedean (Langlands) parameters that is relatively compact. Next, we note (or claim…) that for a finite place $v$, the exponent of the conductor is locally constant, so we get neighborhoods $U_v$ of $\pi_v$, with $U_v$ the unramified spectrum when $\pi$ is unramified at $v$, such that all automorphic representations in $U=U_{\infty}\times \prod_{v} U_v$ have arithmetic conductor equal to that of $\pi$. With the archimedean condition, it follows that the Iwaniec-Sarnak analytic conductor of cuspidal representations in $U$ is bounded. However, a basic “height-like” property is that there are only finitely many cuspidal representations with bounded analytic conductor (this is proved by Michel-Venkatesh in Section 2.6.5 of this paper, and a different proof based on spherical codes, as suggested by Venkatesh, is due to Brumley). Thus $U$ almost isolates $\pi$, in the sense that only finitely many other cuspidal representations can lie in $U$. Denote by $X$ this set of representations. Now, the unitary dual is (or should be…) at least minimally separated so that, for any two cuspidal representations $\pi_1$ and $\pi_2$, there is an open set which contains $\pi_1$ and not $\pi_2$, say $V_{\pi_1,\pi_2}$. Then

$U'=U\cap \bigcap_{\rho\in X-\{\pi\}} V_{\pi,\rho}$

is an open neighborhood of $\pi$ which only contains the cuspidal representation $\pi$.

03.05.2014

### More conjugation shenanigans

Filed under: Exercise,Mathematics @ 16:46

After I wrote my last post on the condition $\xi H\xi =H$ in a group, I had a sudden doubt concerning the case in which this arose: there we assume that we have a coset $T=\xi H$ such that $g^2\in H$ for all $g\in T$. I claimed that this implies $\xi H \xi =H$, but really the argument I wrote just means that $\xi H \xi \subset H$: for all $g\in H$, we get $\xi g \xi g \in H$, hence $\xi g \xi \in H$. But what about the other inclusion?

I had in mind a case where the groups involved are finite, or close enough, so that the reverse inclusion is indeed obviously true. But it is amusing to see that in fact what I wrote is correct in all cases: if $H$ is a subgroup of an arbitrary group $G$ and $\xi\in G$ satisfies $\xi H \xi\subset H$, then in fact $\xi H \xi = H$: taking the inverse of the inclusion gives

$\xi^{-1} H\xi^{-1}=\xi^{-1}H^{-1}\xi^{-1}\subset H^{-1}=H,$

and multiplying this new inclusion on the left and on the right by $\xi$ gives $H=\xi(\xi^{-1}H\xi^{-1})\xi\subset \xi H\xi$, hence the equality.

I find this interesting because, when it comes to the normalizer, the analogue of this fact is not true: the condition $\xi H\xi^{-1}\subset H$ is not, in general, equivalent with $\xi H\xi^{-1}=H$. (I found this explained in an exercise in Bourbaki, Algèbre, Chapitre I, p. 134, Exercice 27, or page 146 of the English edition; here is a simple case of the counterexample from that exercise: consider the group $G$ of permutations of $\mathbf{Z}$; consider the subgroup $H$ which is the pointwise stabilizer of $\mathbf{N}=\{0,1,2,\ldots\}$, and the element $\xi\in G$ which is just $\xi(x)=x-1.$ Then we have $\xi H\xi^{-1}\subset H$, because the left-hand side is the pointwise stabilizer of $\xi(\mathbf{N})=\{-1,0,1,\ldots\}$, which contains $\mathbf{N}$. But $\xi H\xi^{-1}$ is not equal to $H$, because there are elements in $G$ which fix $\mathbf{N}$ pointwise but do not fix $-1$.)

It seems natural to ask: what exactly is the set $X$ of all $(a,b)\in\mathbf{Z}^2$ such that $\xi^a H\xi^b\subset H$ is equivalent to $\xi^a H \xi^b=H$?
This set $X$ contains the line $(n,n)$ for $n\in\mathbf{Z}$, and also the coordinate axes $(n,0)$ and $(0,n)$ (since we then deal with cosets of $H$). In fact, one can determine $X$: it is the complement of the line $(n,-n)$, for $n\not=0$ (which corresponds to conjugation). Indeed, suppose for instance that $a+b\geq 1$. Let $H$ and $\xi$ be such that $\xi^a H \xi^b\subset H$. We can write

$-a=(a+b-1)a-a(a+b),\quad\quad -b=(a+b-1)b-b(a+b),$

and

$\xi^{-a}H\xi^{-b}=\xi^{(a+b-1)a} (\xi^{a+b})^{-a}H (\xi^{a+b})^{-b} \xi^{(a+b-1)b}.$

Since $\xi^{a+b}\in H$, and since $a+b-1\geq 0$ is the common exponent of $\xi^a$ and $\xi^b$ at the two extremities of the right-hand side, it follows that this right-hand side is contained in $H$. A similar argument works for $a+b\leq -1$, using

$-a=(-a-b-1)a+a(a+b),\quad\quad -b=(-a-b-1)b+b(a+b),$

where again it is crucial that the coefficient $(-a-b-1)$ appears on both sides, and that it is $\geq 0$. Since we already know that $(1,-1)\notin X$, and in fact that $(-n,n)\notin X$ for $n\not=0$ (the same setting as the counterexample above works, because the $\xi$ we used is an $n$-th power for every $n\not=0$), we have therefore determined the set $X$

01.05.2014

### Normalizers everywhere

Filed under: Exercise,Mathematics @ 13:05

In working on a paper, I found myself in the amusing but unusual situation of having a group $G$, a subgroup $H$ and an element $\xi\in G$ such that

$\xi H\xi =H.$

This certainly can happen: the two obvious cases are when $H=G$, or when $\xi$ is an involution that happens to be in the normalizer $N(H)$ of $H$.

In fact the general case is just a tweak of this last case: we have $\xi H\xi=H$ if and only if $\xi^2\in H$ and $\xi\in N(H)$, or in other words, if $\xi$ belongs to the normalizer, and is an involution modulo $H$. This is of course easy to check.

I then asked myself: what about the possibility that $\xi^a H\xi^b = H$, where $a$ and $b$ are arbitrary integers? Can one classify when this happens? The answer is another simple exercise (that I will probably use when I teach Algebra I next semester): this is the case if and only if $\xi^{a+b}\in H$ and $\xi^{(a,b)}\in N(H)$, where $(a,b)$ is the gcd of $a$ and $b$. In particular, for all pairs $(a,b)$ where $a$ and $b$ are coprime, the condition above implies that $\xi$ belongs to the normalizer of $H$.

Here is the brief argument: having fixed $\xi$, let

$M=\{(\alpha,\beta)\in\mathbf{Z}^2\,\mid\, \xi^{\alpha}H\xi^{\beta} =H\}.$

This set is easily seen to be a subgroup of $\mathbf{Z}^2$. Furthermore, note that $(\alpha,\beta)\in M$ implies that $\xi^{\alpha+\beta}\in H$, which in turn means that $(\alpha+\beta,0)\in M$ and $(0,\alpha+\beta)\in M$. Hence if $(a,b)\in M$, we get

$(a,-a)=(a,b)-(0,a+b)\in M,\quad\quad (b,-b)=(a+b,0)-(a,b)\in M$,

so that $M\cap (1,-1)\mathbf{Z}$ contains $(a,-a)\mathbf{Z}+(b,-b)\mathbf{Z}$, which is just

$(d,-d)\mathbf{Z},$

where $d=(a,b)$. But $(d,-d)\in M$ means exactly that $\xi^{d}\in N(H)$. Thus we have got the first implication. Conversely, the conclusion means exactly that

$(a+b,0)\in M,\quad (d,-d)\in M.$

But then

$(a,b)=(a+b,0)-(b,-b)=(a+b,0)-b/d (d,-d)\in M$

shows that $\xi^a H\xi^b=H$.

To finish, how did I get to this situation? This can arise quite naturally as follows: one has a collection $X$ of representations $\rho$ of a fixed group $\Gamma$, and an action of a group $G$ on these representations $X$ (action up to isomorphism really).
For a given representation $\rho_0$, we can then define a group

$H=\{g\in G\,\mid\, g\cdot \rho_0\simeq \rho_0\}$

and also a subset

$T=\{g\in G\,\mid\, g\cdot \rho_0\simeq D(\rho_0)\},$

where $D(\cdot)$ denotes the contragredient representation. It can be that $T$ is empty, but let us assume it is not. Then $T$ has two properties: (1) it is a coset of $H$ (because $H$ acts on $T$ simply transitively); (2) we have $g^2\in H$ for all $g\in T$ (because the contragredient of the contragredient is the representation itself). This means that, for some $\xi\in G$, we have $T=\xi H$, and furthermore $\xi H\xi=H$ since $\xi g\xi g\in H$ for all $g\in H$.

By the previous discussion, we therefore get a good handle on the structure of $T$: either it is empty, or it is of the form $\xi H$ for some $\xi\in G$ such that $\xi^2\in H$ and $\xi$ normalizes $H$ in $G$. In particular, if $H$ is trivial (which happens often), either $T$ is empty, or it consists of a single element $\xi$ which is an involution of $G$.
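Since these posts are filed under "Exercise", a brute-force verification in a small finite group makes a fitting companion. The Python sketch below is my own addition (not part of the blog); it checks, inside the symmetric group on 4 letters, that $\xi H\xi=H$ holds exactly when $\xi^2\in H$ and $\xi\in N(H)$.

```python
from itertools import permutations

def compose(p, q):
    """(p*q)(x) = p(q(x)) for permutations represented as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(4)))                 # the symmetric group S_4

# H: the pointwise stabilizer of {2, 3}, a copy of S_2 inside S_4
H = {g for g in G if g[2] == 2 and g[3] == 3}

def sandwich(a, S, b):
    return {compose(compose(a, s), b) for s in S}

N_H = {g for g in G if sandwich(g, H, inverse(g)) == H}   # normalizer of H

for xi in G:
    lhs = (sandwich(xi, H, xi) == H)                       # xi H xi = H ?
    rhs = (compose(xi, xi) in H) and (xi in N_H)           # xi^2 in H and xi in N(H) ?
    assert lhs == rhs

print("xi H xi = H  <=>  xi^2 in H and xi in N(H)   (checked in S_4)")
```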
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 244, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9597756862640381, "perplexity": 1637.6914523863782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900544.33/warc/CC-MAIN-20141030025820-00215-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-5-section-5-3-addition-and-subtraction-of-polynomials-exercise-page-221/56
## Elementary Technical Mathematics $$9x^4+2x^3+11x-6.$$ We subtract the $x^4$-terms as follows $$7x^4+2x^4=9x^4$$ then subtract the $x^3$-terms as follows $$3x^3-x^3=2x^3$$ then subtract the $x$-terms as follows $$5x+6x=11x$$ then subtract the constant terms as follows $$0-6=-6$$ Hence, the result is $$9x^4+2x^3+11x-6.$$
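A quick symbolic check with Python/SymPy (added here; the original operands are not printed in this excerpt, so the two polynomials below are a plausible reconstruction inferred from the four term-by-term steps) confirms that those steps assemble into the stated result:

```python
from sympy import symbols, expand

x = symbols('x')

# Operands inferred from the steps 7x^4 - (-2x^4), 3x^3 - x^3,
# 5x - (-6x), 0 - 6; treat them as an assumed reconstruction.
p = 7*x**4 + 3*x**3 + 5*x
q = -2*x**4 + x**3 - 6*x + 6

result = expand(p - q)
print(result)                                    # 9*x**4 + 2*x**3 + 11*x - 6
assert result == 9*x**4 + 2*x**3 + 11*x - 6
```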
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9990708231925964, "perplexity": 512.3302288847456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946445.46/warc/CC-MAIN-20230326173112-20230326203112-00545.warc.gz"}
https://stats.stackexchange.com/questions/410534/how-can-i-generate-a-time-series-with-autocorrelation-at-lags-other-than-1/410542
# How can I generate a time series with autocorrelation at lags other than 1?

How can I generate a time series that has autocorrelation at a certain lag, but only that lag and nothing else?

If you don't require stationarity for your process then generating it is as simple as starting with any arbitrary random variable and then using the recursive equations of the desired process to generate the rest of the series. However, assuming that you want to generate a stationary process, this requires you to find the stationary distribution. By way of example, I will show you how to generate a stationary $$\text{AR}(2)$$ process with missing coefficient on its first-order lag.

Deriving the stationary distribution: The $$\text{AR}(2)$$ process of interest here is:

$$X_t = \phi X_{t-2} + \varepsilon_t \quad \quad \quad \quad \quad \varepsilon_t \sim \text{IID N}(0, \sigma).$$

(Note that I have only included an auto-regressive term on the last lag, as was specified in your question.) Stationarity of the process requires $$|\phi| < 1$$. With a bit of algebra, the Wold representation for the process can be written as:

$$X_t = \sum_{k=0}^\infty \psi_k \cdot \varepsilon_{t-k} \quad \quad \quad \quad \quad \psi_k = \begin{cases} \phi^{k/2} & \text{for even } k, \\[6pt] 0 & \text{for odd } k. \\ \end{cases}$$

Using the Wold representation we get the variance:

\begin{aligned} \mathbb{V}(X_t) &= \sum_{k=0}^\infty \psi_k^2 \cdot \mathbb{V}(\varepsilon_{t-k}) \\[6pt] &= \sum_{k=0}^\infty \phi^{2k} \cdot \mathbb{V}(\varepsilon_{t-2k}) \\[6pt] &= \sigma^2 \sum_{k=0}^\infty \phi^{2k} \\[6pt] &= \frac{\sigma^2}{1-\phi^2}. \\[6pt] \end{aligned}

(We also have the covariance $$\mathbb{C}(X_t,X_{t-1}) =0$$.) Thus, the stationary distribution of the process is:

$$X_\infty \sim \text{N} \bigg( 0, \frac{\sigma^2}{1-\phi^2} \bigg).$$

Generating the process: Now that we have the stationary distribution of the process, we can generate a vector of values from the process by generating two consecutive values from the stationary distribution (noting that they are uncorrelated), and then generating the remaining values from the recursive equations. Here is some R code that generates a series of values from the above stationary model.

#Function to generate vector from the AR(2) process set out above
PROCESS <- function(T, phi = 0, sigma = 1) {
  if (abs(phi) >= 1) { stop("Error: Process is non-stationary"); }
  if (sigma < 0)     { stop("Error: Process standard error is negative"); }
  VALUES <- numeric(T);
  ERROR  <- rnorm(T, mean = 0, sd = sigma);
  VALUES[1:2] <- ERROR[1:2]/sqrt(1-phi^2);
  if (T > 2) {
    for (t in 3:T) { VALUES[t] <- phi*VALUES[t-2] + ERROR[t] }
  }
  VALUES;
}

We can use this function to generate a vector of length $$T$$ from the above $$\text{AR}(2)$$ process, for any specification of $$\phi$$ and $$\sigma$$ (we restrict the former to require stationarity of the process).

#Set values for time-series
T <- 100;
phi <- 0.4;
sigma <- 1;

#Generate and plot the series
set.seed(1);
TTT <- PROCESS(T, phi, sigma);
plot(TTT, type = 'l', main = 'AR(2) time-series', xlab = 'Time', ylab = 'Value');
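For readers working in Python rather than R, here is a rough equivalent (my own sketch, not part of the original answer): it simulates the same lag-2 process started from its stationary distribution and prints the sample autocorrelations, which should be close to $\phi^{k/2}$ at even lags $k$ and close to zero at odd lags.

```python
import numpy as np

def simulate_lag2_ar(T, phi=0.4, sigma=1.0, seed=1):
    """Simulate X_t = phi * X_{t-2} + eps_t, started from the stationary law."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, size=T)
    x = np.empty(T)
    # First two values drawn from the stationary distribution N(0, sigma^2/(1-phi^2))
    x[:2] = eps[:2] / np.sqrt(1.0 - phi**2)
    for t in range(2, T):
        x[t] = phi * x[t - 2] + eps[t]
    return x

x = simulate_lag2_ar(100_000, phi=0.4)
x = x - x.mean()
var = np.dot(x, x) / len(x)
for lag in range(1, 7):
    acf = np.dot(x[:-lag], x[lag:]) / (len(x) * var)
    print(f"lag {lag}: sample ACF = {acf:+.3f}")
# Expected roughly 0 at odd lags and 0.4, 0.16, 0.064 at lags 2, 4, 6.
```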
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9608168005943298, "perplexity": 1048.9783258126101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526337.45/warc/CC-MAIN-20190719182214-20190719204214-00396.warc.gz"}
https://www.physicsforums.com/threads/t-2-vs-l-and-t-2-vs-m-graphs.546652/
# T^2 vs. L and T^2 vs M' Graphs

• #1

## Homework Statement

What is the expected slope of the line? What was the actual slope of the line of best fit? Calculate the gravity constant g from the slope of your graph

## Homework Equations

k = (Mg) / (y_0 - y)

4pi^2/g x L = T^2

## The Attempt at a Solution

I understand how to acquire gravity using the second equation for T^2 vs L. But I don't have a clue what my expected slope should be for either graph. My calculated slope for T^2 vs L is 3.223 and for T^2 vs. M' 3.616

• #2 Delphi51, Homework Helper

If you graph y vs x and the formula is y = mx, then the slope is expected to be m.
If you graph T² vs L and the formula is T² = (4pi²/g) x L then the expected slope is (4pi²/g).
The calc for g would then be g = 4pi²/slope
I don't see a formula relating T² and M'. What is M'? It looks like you might be doing a pendulum experiment?

• #3

So that means my expected slope would be ~4? This was a pendulum experiment for T^2 vs L and an oscillating spring for T^2 vs M'. M' is the mass of our hook + spring + added weight, while M is just the mass of our weight added to the hook and spring.

Through my notes I found the equation T^2 = (4pi^2m)/k. So if I replace y=T^2 and x=m does that mean my slope is (4pi^2)/k?

• #4 Delphi51, Homework Helper

Yes, that is the idea. Good luck.
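Tying the numbers in this thread together, a short Python snippet (added here, not from the thread) applies the relation slope = 4π²/g both ways: it gives the expected slope for g ≈ 9.81 m/s², converts the reported pendulum slope back into a value of g, and does the analogous conversion of the spring slope into a spring constant.

```python
import math

g_true = 9.81                        # m/s^2
expected_slope = 4 * math.pi**2 / g_true
print(f"expected slope of T^2 vs L: {expected_slope:.2f} s^2/m")   # about 4.0

measured_slope = 3.223               # pendulum slope reported in the thread [s^2/m]
g_from_fit = 4 * math.pi**2 / measured_slope
print(f"g from the fitted slope: {g_from_fit:.2f} m/s^2")

# For the spring data, T^2 = (4*pi^2/k) * m, so the slope of T^2 vs M'
# estimates 4*pi^2/k, i.e. k = 4*pi^2/slope.
measured_slope_spring = 3.616        # spring slope reported in the thread [s^2/kg]
k_from_fit = 4 * math.pi**2 / measured_slope_spring
print(f"spring constant from the fitted slope: {k_from_fit:.2f} N/m")
```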
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8937842845916748, "perplexity": 3684.4193097471407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662658761.95/warc/CC-MAIN-20220527142854-20220527172854-00291.warc.gz"}
https://latex.org/forum/viewtopic.php?f=45&t=13068&p=106859
## LaTeX forum ⇒ Graphics, Figures & Tables ⇒ eps figure displayed as blank in document

Information and discussion about graphics, figures & tables in LaTeX documents.

asims
Posts: 10
Joined: Tue May 10, 2011 5:24 pm

### eps figure displayed as blank in document

Hi

I have a figure to include in my thesis (bplot.eps) which was created by printing to ps from Minitab. I then opened the image in gsview32 v4.9 successfully and converted to an EPS by defining the desired bounding box. I then tried to include the image in my latex document using the following code:

\begin{figure}%
\includegraphics[width=0.9\textwidth]{04_chapter/images/bplot.eps}%
\caption{caption}%
\label{fig:CPAP_FEA_rgd_040}%
\end{figure}

The document processed without any errors, but displayed blank space instead of my figure in the document. The caption was in the correct location. I tried compiling it in draft mode and the correct box outline and filename were printed. I have included many images in this document without a problem before this.

Thank you for any comments or suggestions to resolve this issue.

Regards
Andrew Sims
UNSW

Stefan Kottwitz
Posts: 9196
Joined: Mon Mar 10, 2008 9:44 pm

Hi Andrew,

welcome to the board!

There are tools which are able to fix eps files or to fix their bounding boxes, such as ps2eps. Perhaps consider converting eps files to pdf (using the epstopdf tool or the LaTeX package with the same name) and use pdfLaTeX. This engine has its advantages also if not using its jpg, png, pdf support.

Stefan

asims
Posts: 10
Joined: Tue May 10, 2011 5:24 pm

Stefan

Thanks for the welcome and reply. I might be a bit far down the line to simply change to using pdflatex. I am using a number of matlab plots along the way that I have used the psfrag utility to match typefaces etc where possible. I understand that the psfrag functionality is not compatible with pdflatex. Will have to think.

As for this problem - I think that I might be getting out of the bounds of the scope of the forum here, as I have not been able to get ps2eps to work. It simply hangs as follows:

C:\users\asims\PhD\tests\cpap_mri>ps2eps boxplot.ps
Input files: -
Processing: -
Calculating Bounding Box...
Terminating on signal SIGINT(2)

This had to be terminated with CTRL-C (I am on Windows XP). I have tried several of the debug options, but no luck. There might be something strange in the postscript files, but am not sure on that.

Thanks
Andrew Sims

afa0011
Posts: 6
Joined: Mon Sep 24, 2018 6:18 pm

I have the same problem now. Were you able to come up with a solution?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9381576180458069, "perplexity": 2421.3759382909425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826686.8/warc/CC-MAIN-20181215014028-20181215040028-00463.warc.gz"}
https://www.physicsforums.com/threads/neutron-star-feynman-diagram.932170/
# I Neutron Star Feynman Diagram 1. Nov 20, 2017 ### doggydan42 Hello, I recently watched a video as an introduction to Feynman diagrams for my own self-interest. The video gave a link to practice problems, and one of them was as follows: In a neutron star gravitational collapse causes valence electrons to combine with protons. Draw a Feynman diagram representing this interaction. $$p+e^{-} \rightarrow n + \nu_e \\ uud+e^{-} \rightarrow udd + \nu_e$$ I understood most of the interaction and that there would be a W+ boson between the up quark and electron, though would the electron or up quark emit the boson, and why? 2. Nov 20, 2017 ### Staff: Mentor Neither. The boson is exchanged between the electron/neutrino and the up quark/down quark, but neither one can be said to be the "emitter". You just have a Feynman diagram (at lowest order) with two interaction vertices. 3. Nov 20, 2017 ### doggydan42 So to the diagram, it does not matter if the curve drawn for the W boson is further left at the vertex with the electron, being that time increases to the right? Or to rephrase the question, does it matter which particle, up quark or electron, decays first? 4. Nov 20, 2017 ### Staff: Mentor The diagram does not tell you where in spacetime the two vertexes are; the fact that it seems to when you draw it is misleading. In the actual math, this diagram corresponds to an infinite number of terms, one for each possible pair of spacetime locations for the vertexes. 5. Nov 20, 2017 ### doggydan42 Okay, thank you very much.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9392330050468445, "perplexity": 956.9178014351901}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742338.13/warc/CC-MAIN-20181115013218-20181115035218-00546.warc.gz"}
https://planetmath.org/dualhomomorphismofthederivative
# dual homomorphism of the derivative

Let $\mathcal{P}_{n}$ denote the vector space of real polynomials of degree $n$ or less, and let $\mathrm{D}_{n}:\mathcal{P}_{n}\rightarrow\mathcal{P}_{n-1}$ denote the ordinary derivative. Linear forms on $\mathcal{P}_{n}$ can be given in terms of evaluations, and so we introduce the following notation. For every scalar $k\in\mathbb{R}$, let $\mathrm{Ev}^{(n)}_{k}\in(\mathcal{P}_{n})^{*}$ denote the evaluation functional

$\mathrm{Ev}^{(n)}_{k}:p\mapsto p(k),\quad p\in\mathcal{P}_{n}.$

Note: the degree superscript matters! For example:

$\mathrm{Ev}^{(1)}_{2}=2\,\mathrm{Ev}^{(1)}_{1}-\mathrm{Ev}^{(1)}_{0},$

whereas $\mathrm{Ev}^{(2)}_{0},\mathrm{Ev}^{(2)}_{1},\mathrm{Ev}^{(2)}_{2}$ are linearly independent.

Let us consider the dual homomorphism $\mathrm{D}_{2}^{*}$, i.e. the adjoint of $\mathrm{D}_{2}$. We have the following relations:

$\begin{array}{rrrr}\mathrm{D}_{2}^{*}\left(\mathrm{Ev}^{(1)}_{0}\right)=&-\frac{3}{2}\,\mathrm{Ev}^{(2)}_{0}&+2\,\mathrm{Ev}^{(2)}_{1}&-\frac{1}{2}\,\mathrm{Ev}^{(2)}_{2},\\ \mathrm{D}_{2}^{*}\left(\mathrm{Ev}^{(1)}_{1}\right)=&-\frac{1}{2}\,\mathrm{Ev}^{(2)}_{0}&&+\frac{1}{2}\,\mathrm{Ev}^{(2)}_{2}.\end{array}$

In other words, taking $\mathrm{Ev}^{(1)}_{0},\mathrm{Ev}^{(1)}_{1}$ as the basis of $(\mathcal{P}_{1})^{*}$ and $\mathrm{Ev}^{(2)}_{0},\mathrm{Ev}^{(2)}_{1},\mathrm{Ev}^{(2)}_{2}$ as the basis of $(\mathcal{P}_{2})^{*}$, the matrix that represents $\mathrm{D}_{2}^{*}$ is just

$\left(\begin{array}{rr}-\frac{3}{2}&-\frac{1}{2}\\ 2&0\\ -\frac{1}{2}&\frac{1}{2}\end{array}\right)$

Note the contravariant relationship between $\mathrm{D}_{2}$ and $\mathrm{D}_{2}^{*}$. The former turns second degree polynomials into first degree polynomials, whereas the latter turns first degree evaluations into second degree evaluations. The matrix of $\mathrm{D}_{2}^{*}$ has 2 columns and 3 rows precisely because $\mathrm{D}_{2}^{*}$ is a homomorphism from a 2-dimensional vector space to a 3-dimensional vector space. By contrast, $\mathrm{D}_{2}$ will be represented by a $2\times 3$ matrix.

The dual basis of $\mathcal{P}_{1}$ is

$-x+1,\quad x$

and the dual basis of $\mathcal{P}_{2}$ is

$\frac{1}{2}(x-1)(x-2),\quad x(2-x),\quad\frac{1}{2}x(x-1).$

Relative to these bases, $\mathrm{D}_{2}$ is represented by the transpose of the matrix for $\mathrm{D}_{2}^{*}$, namely

$\begin{pmatrix}-\frac{3}{2}&2&-\frac{1}{2}\\ -\frac{1}{2}&0&\frac{1}{2}\end{pmatrix}$

This corresponds to the following three relations:

$\begin{array}{lcrr}\mathrm{D}_{2}\left[\tfrac{1}{2}(x-1)(x-2)\right]&=&-\tfrac{3}{2}\,(-x+1)&-\tfrac{1}{2}\,x\\ \mathrm{D}_{2}\left[x(2-x)\right]&=&2\,(-x+1)&+0\,x\\ \mathrm{D}_{2}\left[\tfrac{1}{2}x(x-1)\right]&=&-\tfrac{1}{2}\,(-x+1)&+\tfrac{1}{2}\,x\end{array}$
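The relations above are easy to confirm with a computer algebra system; the following Python/SymPy sketch (added here for illustration, not part of the original entry) checks the last three relations and hence the transpose matrix.

```python
from sympy import symbols, diff, expand, Rational

x = symbols('x')

# Dual basis of P_2 (Lagrange polynomials at the nodes 0, 1, 2)
p2_basis = [Rational(1, 2)*(x - 1)*(x - 2), x*(2 - x), Rational(1, 2)*x*(x - 1)]
# Dual basis of P_1 (Lagrange polynomials at the nodes 0, 1)
p1_basis = [1 - x, x]

# Matrix of D_2 claimed in the text (2 rows, 3 columns)
D2_matrix = [[Rational(-3, 2), 2, Rational(-1, 2)],
             [Rational(-1, 2), 0, Rational( 1, 2)]]

# Column j: the derivative of the j-th P_2 basis polynomial should expand
# in the P_1 basis with the coefficients of column j.
for j, p in enumerate(p2_basis):
    lhs = diff(p, x)
    rhs = sum(D2_matrix[i][j] * p1_basis[i] for i in range(2))
    assert expand(lhs - rhs) == 0, (j, lhs, rhs)

print("D_2 is represented by the transpose of the matrix of D_2*, as claimed.")
```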
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 32, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9954516887664795, "perplexity": 199.18720984662068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510287.30/warc/CC-MAIN-20200403030659-20200403060659-00104.warc.gz"}
https://pillowlab.wordpress.com/tag/size-biased-sample/
# Using size-biased sampling for certain expectations Let $\{\pi_i\}_i$ be a well defined infinite discrete probability distribution (e.g., a draw from Dirichlet process (DP)). We are interested in evaluating the following form of expectations: $E\left[ \sum_i f(\pi_i) \right]$ for some function $f$ (we are especially interested when $f = -\log$, which gives us Shannon’s entropy). Following [1], we can re-write it as $E\left[ \sum_i \frac{f(\pi_i)}{\pi_i} \pi_i \right] = E\left[ E[ \frac{f(X)}{X} | \{\pi_i\}]\right]$ where $X$ is a random variable that takes the value $\pi_i$ with probability $\pi_i$. This random variable $X$ is better known as the first size-biased sample $\tilde{\pi_1}$. It is defined by $\Pr[ \tilde \pi_1 = \pi_i | \{\pi_i\}_i] = \pi_i$. In other words, it takes one of the probabilities $\pi_i$ among $\{\pi_i\}_i$ with probability $\pi_i$. For Pitman-Yor process (PY) with discount parameter $d$ and concentration parameter $\alpha$ (Dirichlet process is a special case where $d = 0$), the size biased samples are naturally obtained by the stick breaking construction. Given a sequence of independent random variables $V_n$ distributed as $Beta(1-d, \alpha+n d)$, if we define $\pi_i = \prod_{k=1}^{i-1} (1 - V_k) V_i$, then the set of $\{\pi_i\}_i$ is invariant to size biased permutation [2], and they form a sequence of size-biased samples. In our case, we only need the first size biased sample which is simply distributed as $V_1$. Using this trick, we can compute the entropy of PY without the complicated simplex integrals. We used this and its extension for computing the PY based entropy estimator. 1. Jim Pitman, Marc Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. The Annals of Probability, Vol. 25, No. 2. (April 1997), pp. 855-900, doi:10.1214/aop/1024404422 2. Mihael Perman, Jim Pitman, Marc Yor. Size-biased sampling of Poisson point processes and excursions. Probability Theory and Related Fields, Vol. 92, No. 1. (21 March 1992), pp. 21-39, doi:10.1007/BF01205234
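As a quick illustration of the trick (my own sketch, not part of the original post), the following Python code compares a truncated stick-breaking estimate of the expected entropy of a Dirichlet process with the size-biased shortcut $E[-\log V_1]$, where $V_1\sim\mathrm{Beta}(1,\alpha)$ for $d=0$; the two Monte Carlo averages should agree up to sampling and truncation error.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n_draws, trunc = 2.0, 5_000, 300   # DP concentration, MC draws, truncation level

# Direct route: sample stick-breaking weights and average the (truncated) entropies.
V = rng.beta(1.0, alpha, size=(n_draws, trunc))
sticks = np.cumprod(np.hstack([np.ones((n_draws, 1)), 1.0 - V[:, :-1]]), axis=1)
pi = sticks * V                                  # pi_i = prod_{k<i}(1 - V_k) * V_i
entropy = -(pi * np.log(pi)).sum(axis=1)         # truncated Shannon entropy

# Size-biased shortcut: E[sum f(pi_i)] = E[f(V_1)/V_1] with f(p) = -p*log(p),
# i.e. simply E[-log V_1] with V_1 ~ Beta(1, alpha).
shortcut = -np.log(rng.beta(1.0, alpha, size=n_draws))

print("stick-breaking average entropy:", entropy.mean())
print("size-biased estimate E[-log V1]:", shortcut.mean())
```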
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 22, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.961162805557251, "perplexity": 349.1708777267978}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122621.35/warc/CC-MAIN-20170423031202-00277-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.dummies.com/article/academics-the-arts/math/pre-algebra/how-to-perform-associative-operations-191283/
##### Basic Math & Pre-Algebra All-in-One For Dummies (+ Chapter Quizzes Online)
Addition and multiplication are both associative operations, which means that you can group them differently without changing the result. This property of addition and multiplication is also called the associative property.

Here's an example of how addition is associative. Suppose you want to add 3 + 6 + 2. You can solve this problem in two ways:

(3 + 6) + 2 = 9 + 2 = 11

3 + (6 + 2) = 3 + 8 = 11

In the first case, you start by adding 3 + 6 and then add 2. In the second case, you start by adding 6 + 2 and then add 3. Either way, the sum is 11.

And here's an example of how multiplication is associative. Suppose you want to multiply 5 × 2 × 4. You can solve this problem in two ways:

(5 × 2) × 4 = 10 × 4 = 40

5 × (2 × 4) = 5 × 8 = 40

In the first case, you start by multiplying 5 × 2 and then multiply by 4. In the second case, you start by multiplying 2 × 4 and then multiply by 5. Either way, the product is 40.

In contrast, subtraction and division are nonassociative operations. This means that grouping them in different ways changes the result.

Don't confuse the commutative property with the associative property. The commutative property tells you that it's okay to switch around two numbers that you're adding or multiplying. The associative property tells you that it's okay to regroup three numbers using parentheses. Taken together, the commutative and associative properties allow you to completely rearrange and regroup a string of numbers that you're adding or multiplying without changing the result.
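A three-line check in Python (added here for illustration) makes the contrast concrete: regrouping leaves sums and products unchanged, but not differences.

```python
# Associative: regrouping does not change the result.
print((3 + 6) + 2 == 3 + (6 + 2))    # True, both are 11
print((5 * 2) * 4 == 5 * (2 * 4))    # True, both are 40

# Non-associative: regrouping a subtraction changes the result.
print((8 - 4) - 2, 8 - (4 - 2))      # 2 versus 6
```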
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9196953177452087, "perplexity": 205.52690485144606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500273.30/warc/CC-MAIN-20230205161658-20230205191658-00875.warc.gz"}
https://www.physicsforums.com/threads/general-regression-questions.637291/
# General Regression Questions 1. Sep 19, 2012 ### JoeTarmet Sorry if this is somewhat elementary but the regression form of the sine function with data provided is y=asin(b(x-c))+d As far as I know, all of the variables except c can be determined mathematically. My question is this, using calculus or any other method, is there a way to determine c without graphing the data? As a follow up, is there a general formula or procedure that applies to all types of Regression (linear, Cubic, Sinusoidal, etc.)? I want to know this because I believe it is possible using statistical concepts like variance and I want to find out how many of these can be done completely by hand without graphing. 2. Sep 19, 2012 ### Number Nine Your question is very confusing. Graphing isn't used to determine regression parameters, though it can be used to select general models for some very simple datasets. Generally, the parameters would be selected by minimizing some cost function (usually through least-squares), which in your case would require using some sort of non-linear optimization method, since your function is non-linear in its parameters. 3. Sep 22, 2012 ### JoeTarmet Please explain this further, I am interested in what you are saying about cost functions and optimization, but I don't understand these terms.
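Following up on the cost-function idea in the second post, here is a short Python sketch (my addition, not from the thread) that fits all four parameters of $y = a\sin(b(x-c))+d$ at once by nonlinear least squares, so the phase $c$ is found without any graphing. The synthetic data and starting guesses are made up for illustration; as with any nonlinear fit, a poor initial guess for the frequency $b$ can lead to a wrong local minimum.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c, d):
    return a * np.sin(b * (x - c)) + d

# Synthetic data with known parameters plus noise (illustrative only)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = model(x, 2.0, 1.3, 0.7, 5.0) + rng.normal(0, 0.1, x.size)

# Rough starting guesses: amplitude and offset from the data range,
# a frequency guess near the truth, phase c starting at zero.
p0 = [(y.max() - y.min()) / 2, 1.2, 0.0, y.mean()]
params, cov = curve_fit(model, x, y, p0=p0)

print("fitted a, b, c, d:", np.round(params, 3))
```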
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8196902275085449, "perplexity": 419.98332203010136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646213.26/warc/CC-MAIN-20180319023123-20180319043123-00007.warc.gz"}
https://iwaponline.com/wst/article/63/7/1373/14181/Application-of-experimental-design-methodology-to
The influence of different variables in the photoelectro-Fenton process for the decolorization of Orange II was investigated using an experimental design methodology. The variables considered in this study include electrical current, Fe3+ concentration, H2O2 concentration and initial pH. Response factors were decolorization efficiencies after 30, 90 and 120 min of reaction time, for an initial dye concentration of 100 mg/L. The positive and negative effects of variables and the interaction between variables on color removal were determined. The response surface methodology models were derived based on the decolorization efficiency results and the response surface plots were developed accordingly.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8078790307044983, "perplexity": 1762.5911617088473}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540487789.39/warc/CC-MAIN-20191206095914-20191206123914-00429.warc.gz"}
https://projecteuclid.org/euclid.die/1356060220
## Differential and Integral Equations

### Positive solutions of an elliptic system depending on two parameters

Flavio Dickstein

#### Abstract

We consider the Dirichlet problem associated to a $2\times 2$ semilinear elliptic system of equations in a bounded domain of $\mathbb R^n$. This system depends on two parameters $(\eta,\mu)$ and the nonlinear terms are of power type. The existence of solutions for such a problem in the plane $(\eta,\mu)$ is discussed. Our results are natural extensions of those obtained for the corresponding nonlinear eigenvalue problem for elliptic scalar equations.

#### Article information

Source: Differential Integral Equations, Volume 18, Number 3 (2005), 287-297.
Dates: First available in Project Euclid: 21 December 2012
Permanent link: https://projecteuclid.org/euclid.die/1356060220
Mathematical Reviews number (MathSciNet): MR2122721
Zentralblatt MATH identifier: 1212.35032
Subjects: Primary: 35J55; Secondary: 35J60: Nonlinear elliptic equations; 35P30: Nonlinear eigenvalue problems, nonlinear spectral theory

#### Citation

Dickstein, Flavio. Positive solutions of an elliptic system depending on two parameters. Differential Integral Equations 18 (2005), no. 3, 287--297. https://projecteuclid.org/euclid.die/1356060220
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8778550624847412, "perplexity": 950.9798329115912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156690.32/warc/CC-MAIN-20180920234305-20180921014705-00278.warc.gz"}
http://math.stackexchange.com/questions/183821/an-argument-to-prove-asymptotic-expansions
# An argument to prove asymptotic expansions

I have a real number $I_h$ depending on a small parameter $h>0$. I want to show that it has an asymptotic expansion in integer powers of $h$, i.e. there exists a sequence $(J_k)_{k}$ such that $$I_h \sim \sum_{k=0}^\infty \ h^{k} \ J_{k} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (*)$$ Assume I am not able to show this directly, but that I can construct for every $\alpha$ say in $[\frac 12,1)$ a double sequence $(J_{k,m}^{(\alpha)})_{k,m}$ such that asymptotically $$I_h \sim \sum_{k,m=0}^\infty \ h^{k + \alpha m} \ J^{(\alpha)}_{k,m}$$ This should imply what I want (I consider for example $\alpha_1 =\frac 12$ and $\alpha_2=\frac{\sqrt 2}{2}$, so only the coefficients of integer powers can be nonzero, otherwise I have a contradiction.)

My question is (given that what I wrote is correct): is this somehow a standard trick in asymptotic analysis? Can you give me examples of situations where this argument is used?

EDIT 1: I changed $\alpha\in[\frac 12,1]$ to $\alpha\in [\frac 12,1)$. If I had also $\alpha=1$ I would be done of course.

EDIT 2: The coefficients $(J_{k,m}^{(\alpha)})_{k,m}$ are very complicated and I would say it is hopeless to write them all down in a closed formula (they arise from a combination of Laplace asymptotics and several other expansions). And even if one manages to write down a formula, there appear quantities derived from a WKB expansion, for which it seems hard to me to get directly much more information than just existence (to show directly $(*)$ I would need to know that some complicated combinations of arbitrarily high derivatives vanish at some point...). In brief: even if there is a direct argument to prove $(*)$, the indirect argument is much shorter and painless.

- I find it hard to believe that this can be useful as stated. My feeling is that the analysis of your proof of such double expansion would uncover the reason for analyticity of $I_h$ in a more direct way. – user31373 Aug 18 '12 at 5:39
- Something escapes me: if $I_h=J^{(\alpha)}_{0,0}+J^{(\alpha)}_{0,1}h^{\alpha}+o(h^{\alpha})$ and $J^{(\alpha)}_{0,1}\ne0$, then $I_h=J_0+J_1h+o(h)$ is impossible, for example. In short, the expansion of $I_h$ in powers of $h$ is unique, no? – Did Aug 18 '12 at 9:23
- @did. Exactly. It is impossible, except when it happens that $J_{0,1}^{(\alpha)} = 0$. I cannot prove the latter property directly, but it is a consequence of the indirect argument. – Hans Aug 18 '12 at 9:36
- @Leonid Kovalev. Yes, I admit it seems strange and I'm not very happy with it since of course it would be nicer to give a direct proof (but...see EDIT 2). This is why I look for consolation given by other similar cases. – Hans Aug 18 '12 at 9:40
To apply this result to your case, write $$I_h=\sum_{k=0}^{+\infty}J_kh^k=\sum_{n=0}^{+\infty}a_nh^{i_n},\qquad I_h=\sum_{k,m=0}^{+\infty}J^{(\alpha)}_{k,m}h^{k + \alpha m}=\sum_{n=0}^{+\infty}b_nh^{j_n},$$ where $(i_n)_n$ enumerates the set of $k$ such that $J_k\ne0$ and $(j_n)_n$ enumerates the set of $j$ such that $$\sum_{k+\alpha m=j}J^{(\alpha)}_{k,m}\ne0.$$ If $\alpha$ is irrational, for example $\alpha=\frac{\sqrt2}2$, this shows that $J^{(\alpha)}_{k,m}=0$ for every $m\ne0$. If $\alpha=\frac12$, this shows that, for every $i\geqslant0$, $$\sum_{k=0}^iJ^{(\alpha)}_{k,2i-2k+1}=0.$$
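To see the uniqueness mechanism at work, one can extract coefficients against the exponents $k+\alpha m$ by successive limits: for a function that is actually smooth in $h$, every non-integer exponent picks up the coefficient $0$. The following sympy sketch illustrates this on a made-up toy $I_h$ (it is only an illustration of the matching argument, not of the OP's actual $I_h$):

```python
import sympy as sp

h = sp.symbols('h', positive=True)
I_h = 2 + 3*h + 5*h**2            # toy I_h, smooth in h

# Candidate exponents k + m/2 (alpha = 1/2), listed in increasing order
exponents = [sp.Rational(k, 2) for k in range(0, 7)]

coeffs = []
remainder = I_h
for e in exponents:
    c = sp.limit(remainder / h**e, h, 0)   # successive-limits extraction
    coeffs.append((e, c))
    remainder -= c * h**e

print(coeffs)
# -> the half-integer exponents 1/2, 3/2, 5/2 all receive coefficient 0;
#    the asymptotic expansion of I_h in powers of h is unique.
```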
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9651232361793518, "perplexity": 150.0552431156813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395992.75/warc/CC-MAIN-20160624154955-00147-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.wiwi.hu-berlin.de/de/professuren/bwl/or/projects/infrastructure
# Humboldt-Universität zu Berlin - Wirtschaftswissenschaftliche Fakultät

Metropolitan Infrastructure

## Network and Mechanism Design for Metropolitan Infrastructure

### Background

Metropolitan infrastructures like public roads, telecommunication networks, the electric grid, and public transport are a key factor for quality of life as well as cultural and economic development. However, their installation and maintenance often require huge efforts, both in terms of financial or personnel investments and in terms of environmental burden. The huge effect of infrastructure design decisions on nature, society, and the economy makes sound infrastructure planning indispensable. A main characteristic of infrastructure systems is that they are used by a large number of economically independent entities that strive to optimize their private goals instead of optimizing the overall network usage. This fact is apparent for publicly available services like public roads or transport, but matters also for electricity and gas networks that are operated and used by independent economic actors. For the last 50 years, such systems of independent decision makers have been analyzed within the theory of noncooperative games. Based on the works of Nash and Wardrop, the central concepts of game theory are Nash equilibria and Wardrop equilibria. Roughly speaking, a system is in equilibrium when none of its users can reduce their personal cost of using the network by unilaterally altering their usage pattern. To optimize the design and maintenance of the infrastructure networks above, it is imperative to understand the conditions under which equilibria emerge, to assess their quality, and to design mechanisms that lead to good equilibria, e.g., in terms of a provable performance guarantee. These are the main goals of this project.

### Network Design

A classic problem in transportation is the design of a traffic network with a good tradeoff between investment costs for installing road capacity and routing cost of the emerging user equilibrium. To formalize the problem, we are given a graph $G=(V,E)$ for which the latency of each edge $e \in E$ depends on the ratio $f_e/z_e$ of the edge flow $f_e$ and the installed capacity $z_e$. The goal is to find an optimal investment in edge capacities $\mathbf{z} =(z_e)_{e\in E}$ that minimizes the sum of routing cost of the induced Wardrop equilibrium and the investment cost, i.e., \begin{align}&\min_{\mathbf{f}, \mathbf{z} \in \mathbb{R}_{\geq 0}^m} \sum_{e \in E} c_e( f_e / z_e) f_e + l_e z_e \tag{1} \\ &\text{ s.t. $\mathbf{f}$ is a Wardrop flow}.  \end{align} where $l_e$ is the cost of installing one unit of capacity on edge $e$. This problem can be reformulated as a bilevel optimization problem where in the upper level the edge capacities are determined and in the lower level the Wardrop equilibrium conditions are expressed as a minimization problem \begin{align} &\min_{\mathbf{f}, \mathbf{z} \in \mathbb{R}_{\geq 0}^m} \sum_{e \in E} c_e(f_e/z_e) f_e + l_e z_e \tag{2} \\ &\text{ s.t.: } \mathbf{f} \in \arg\min\nolimits_{\text{$\mathbf{g}$ is a flow}} \sum_{e \in E} \int_{0}^{g_e} c_e(x/z_e) \,\text{d}x.\end{align} In Gairing, Harks and Klimm (2014, 2017), we show that problem (2) is APX-hard, i.e., there is a constant $\epsilon>0$ such that no polynomial-time algorithm can solve the problem within a factor of $1+\epsilon$ of optimality on all instances. 
In contrast, the natural relaxation of the problem without the equilibrium constraints \begin{align}&\min_{\mathbf{f},\mathbf{z}\in \mathbb{R}^m_{\geq 0}} \sum_{e \in E} c_e(f_e/z_e)f_e + l_e z_e \tag{3} \\ &\text{ s.t. $\mathbf{f}$ is a flow}.\end{align} can be solved straightforwardly by computing one shortest path for each commodity. We analyze the performance of two algorithms that are based on the solution of the relaxation (3). The first algorithm was proposed by Marcotte [Math. Program. 1986] for the special case that the functions $c_e$ are monomials. We generalize this algorithm to arbitrary sets $\mathcal{S}$ of unbounded, non-negative and semi-convex latency functions. We obtain approximation guarantees parametrized by the so-called anarchy value $\mu(\mathcal{S})$. Our approximation guarantee matches the previous bounds by Marcotte for monomials, is equal to 2 for general semi-convex functions and equal to 5/4 for general concave and semi-convex functions. More interestingly, we show that taking the better of the two algorithms above gives a strictly improved performance guarantee which can be parametrized by $\mu(\mathcal{S})$ and another related parameter. For general latency functions, e.g., the improved approximation guarantee is equal to 9/5, and for concave and semi-convex functions it is equal to 49/41 ≈ 1.195.

Figure 1: Schematic solution to a network design problem.

While the network design problem above is concerned with the design of a new transportation network from scratch, road pricing is a popular measure, implemented in cities like Singapore, London, and Stockholm, to improve the equilibrium behavior of existing road networks. The tolls impact the route choices of the commodities such that in the Wardrop equilibrium with tolls all used paths of a commodity are minimal with respect to the sum of the edges' latencies and tolls. To find tolls that minimize the routing cost of the induced user equilibrium, we are interested in solving \begin{align} &\min_{\mathbf{f}, \mathbf{t} \in \mathbb{R}_{\geq 0}^m} \sum_{e \in E} c_e(f_e) f_e \tag{4}\\ &\text{ s.t. $\mathbf{f}$ is a Wardrop flow w.r.t. $c_e(f_e) + t_e$}.\end{align} which is equivalent to \begin{align} &\min_{\mathbf{f}, \mathbf{t} \in \mathbb{R}_{\geq 0}^m} \sum_{e \in E} c_e(f_e) f_e \tag{5}\\ &\text{ s.t. $\mathbf{f} \in \arg\min\nolimits_{\mathbf{g} \text{ is a flow}} \sum_{e \in E} \int_{0}^{g_e}\bigl(c_e(x) + t_e\bigr) \,\text{d}x$}.\end{align} It is well known that (5) can be solved to optimality by charging tolls on every edge of the graph equal to the marginal costs of that edge. This result, however, has little bite for real-world road toll systems where only a subset of the edges of the network are priced. This motivates us in Harks, Kleinert, Klimm, and Möhring (2015) to study the problem of computing tolls on a given subset of network edges so as to minimize the routing costs of the induced Wardrop equilibrium, i.e., given a subset $T \subset E$ of tollable edges, we are interested in solving \begin{align} &\min_{\mathbf{f},\mathbf{t} \in \mathbb{R}_{\geq 0}^m} \sum_{e \in E} c_e(f_e) f_e \tag{6}\\ &\text{ s.t. $\mathbf{f}$ is a Wardrop equilibrium w.r.t. $c_e(f_e) + t_e$}\\ &t_e = 0 \text{ for all $e \notin T$}\end{align} As this problem is NP-hard for general networks, we restrict ourselves to parallel-edge networks. We first devise a pseudo-polynomial algorithm for this case and extend it to a polynomial algorithm given that a certain technical condition on the cost functions is met. 
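To make the tolling problem concrete, here is a small numerical sketch on a two-link parallel network with made-up latency functions and demand. It computes the untolled Wardrop equilibrium (as the minimizer of the Beckmann potential), the system optimum, and the marginal-cost tolls that induce the optimum; it only illustrates the objects appearing in (4)-(6) and is not the algorithm from Harks, Kleinert, Klimm, and Möhring (2015).

```python
# Hypothetical two-link instance: latencies c1(x) = 1 + x and c2(x) = 2 + 0.5*x,
# total demand d = 4 routed from a single origin to a single destination.
from scipy.optimize import minimize_scalar

d = 4.0

def beckmann(x):
    # sum_e int_0^{f_e} c_e(s) ds, with f_1 = x and f_2 = d - x
    return (x + 0.5 * x**2) + (2 * (d - x) + 0.25 * (d - x)**2)

def total_cost(x):
    # sum_e c_e(f_e) * f_e
    return x * (1 + x) + (d - x) * (2 + 0.5 * (d - x))

def argmin(obj):
    return minimize_scalar(obj, bounds=(0.0, d), method='bounded').x

x_eq  = argmin(beckmann)     # Wardrop equilibrium flow on link 1: x_eq  = 2
x_opt = argmin(total_cost)   # system-optimal flow on link 1:      x_opt = 5/3

# Marginal-cost tolls t_e = c_e'(f_e) * f_e evaluated at the system optimum
t1 = 1.0 * x_opt
t2 = 0.5 * (d - x_opt)
print(x_eq, x_opt, t1, t2)
# The Wardrop equilibrium w.r.t. c_e + t_e routes exactly x_opt on link 1.
```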
We then conduct a computational study for this problem on real-world traffic networks for which we tested several algorithms based on gradient descent methods. It turns out that already a small number of tollable edges suffices to decrease the travel time significantly. Further theoretical and empirical results are given for the problem of choosing the set of tollable edges subject to cardinality constraints.

### Non-monetary access control

Due to ethical issues, monetary incentives are not always a feasible way to control the access to congested resources. As an example, consider access to positions, funds, or publication venues. In these situations, a popular way to control the access to the resource is a process of peer review, where the users nominate who they think is eligible for access. It is natural to assume that the main interest of each user is to be granted access, so a selection mechanism must take into account that users may misreport their opinion about who they think is eligible for access whenever this increases their own chances of being granted access. It is a natural question what percentage of the nominations a mechanism must lose in order to be strategyproof, i.e., in order to prevent misreporting of that kind. Formally, a nomination profile is a directed graph $G=(N,E)$ where an edge $(u,v) \in E$ is interpreted as $u$ nominating $v$. A selection mechanism $f$ is a function that takes a graph $G=(N,E)$ as input and returns a probability distribution on $N$. Impartiality requires that \begin{align} f(N,E)_i &= f(N,E')_i & &\Leftarrow & E \setminus (\{i\} \times N) =  E' \setminus (\{i\} \times N), \end{align} i.e., that the probability of selecting $i$ is independent of the nominations that $i$ casts on other agents. We are interested in mechanisms that are impartial but still extract a high percentage of the nominations. To this end, we call a mechanism $f$ that selects at most $k$ users $\alpha$-optimal if \begin{align} \frac{\mathbb{E}_{S \sim f(G)} [ \sum_{i \in S} \text{deg}(i)]}{\max_{S \subseteq N :|S|=k} \bigl\{ \sum_{i \in S} \text{deg}(i) \bigr\}} \geq \alpha \end{align} for all graphs $G$, where $\text{deg}(i)$ denotes the indegree of vertex $i$. Even the most basic case where $k=1$, i.e., where only a single user is to be selected, was considered to be very challenging. It was conjectured by Alon et al. that there is a mechanism that selects in expectation a user that receives half the maximum number of nominations any user receives. In Fischer and Klimm (2015), we answered this conjecture in the positive. We were also able to show that when the most popular user receives many nominations, even a fraction of 3/4 of the maximum number of nominations to a user can be extracted by a mechanism. In Bjelde, Fischer, and Klimm (2015, 2017), we also considered the case where more than one agent is to be selected. It turns out that the achievable approximation guarantee depends on whether the mechanism may select a different number of agents for different inputs. For the problem of always selecting exactly two agents, e.g., we give a 2/3-optimal mechanism. In contrast, a mechanism that selects either one or two agents out of a set of three agents and that is 3/4-optimal is shown on the right. 
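For illustration, here is a sketch of the simple randomized two-partition mechanism for $k=1$. This is not the optimal mechanism from Fischer and Klimm (2015); it is the basic partition construction, which is impartial because an agent's own nominations are never counted towards its own selection, and which extracts a constant fraction of the maximum indegree in expectation.

```python
import random

def partition_mechanism(agents, nominations, rng=random):
    """agents: list of ids; nominations: iterable of directed pairs (u, v), u != v."""
    candidates = {i for i in agents if rng.random() < 0.5}
    voters = set(agents) - candidates
    if not candidates:
        return None
    # Count only nominations cast by voters for candidates: a candidate's own
    # ballot is ignored entirely, so it cannot change its selection probability.
    score = {i: 0 for i in candidates}
    for (u, v) in nominations:
        if u in voters and v in candidates:
            score[v] += 1
    return max(candidates, key=lambda i: score[i])

agents = [1, 2, 3, 4, 5]
nominations = [(1, 2), (3, 2), (4, 2), (5, 4), (2, 4)]   # toy nomination profile
print(partition_mechanism(agents, nominations))
```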
### Project-related publications

#### 2017

• Antje Bjelde, Yann Disser, Jan Hackfeld, Christoph Hansknecht, Maarten Lipmann, Julie Meißner, Kevin Schewior, Miriam Schlöter, Leen Stougie (2017): Tight bounds for online TSP on the line. In Proceedings of the 28th ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 994-1005.
• Antje Bjelde, Felix Fischer, and Max Klimm (2017): Impartial selection and the power of up to two choices. ACM Transactions on Economics and Computation, to appear.
• Antje Bjelde, Max Klimm, and Daniel Schmand (2017): Brief announcement: Approximation algorithms for unsplittable resource allocation problems with diseconomies of scale. In Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), to appear.
• Martin Gairing, Tobias Harks, and Max Klimm (2017): Complexity and approximation of the continuous network design problem. SIAM Journal on Optimization, to appear.

#### 2016

• Yann Disser, Jan Hackfeld, and Max Klimm (2016): Undirected graph exploration with $\Theta(\log \log n)$ pebbles. In Proceedings of the 27th ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 25-39.
• Tobias Harks and Max Klimm (2016): Congestion games with variable demands. Mathematics of Operations Research, Vol. 41, pp. 255-277.
• Jasper de Jong, Max Klimm, and Marc Uetz (2016): Efficiency of equilibria in uniform matroid congestion games. In Proceedings of the 8th International Symposium on Algorithmic Game Theory (SAGT), pp. 105-116.

#### 2015

• Antje Bjelde, Felix Fischer, and Max Klimm (2015): Impartial selection and the power of up to two choices. In Proceedings of the 11th Conference on Web and Internet Economics (WINE), pp. 146-157.
• Tobias Harks, Max Klimm, and Manuel Schneider (2015): Bottleneck routing with elastic demands. In Proceedings of the 11th Conference on Web and Internet Economics (WINE), pp. 384-397.
• Tobias Harks and Max Klimm (2015): Equilibria in a class of aggregative location games. Journal of Mathematical Economics, Vol. 61, pp. 211-220.
• Yann Disser, Andreas Feldmann, Max Klimm, and Matus Mihalak (2015): Improving the $H_k$-bound on the price of stability in undirected Shapley network design games. Theoretical Computer Science, Vol. 562, pp. 557-564.
• Yann Disser, Max Klimm, and Elisabeth Lübbecke (2015): Scheduling bidirectional traffic on a path. In Proceedings of the 42nd International Colloquium on Automata, Languages, and Programming (ICALP), pp. 406-418.
• Felix Fischer and Max Klimm (2015): Optimal impartial selection. SIAM Journal on Computing, Vol. 44, pp. 1263-1285.
• Tobias Harks, Ingo Kleinert, Max Klimm, and Rolf H. Möhring (2015): Computing network tolls with support constraints. Networks, Vol. 65, pp. 262-285.
• Max Klimm (2015): Linear, exponential, but nothing else. In A. Schulz, M. Skutella, S. Stiller, and D. Wagner (editors), Gems of Combinatorial Optimization and Graph Algorithms, pp. 113-123.
• Max Klimm and Daniel Schmand (2015): Sharing non-anonymous costs of multiple resources optimally. In Proceedings of the 9th International Conference on Algorithms and Complexity (CIAC), pp. 274-287.

#### 2014

• Christoph Hansknecht, Max Klimm, and Alexander Skopalik (2014): Approximate pure Nash equilibria in weighted congestion games. In Proceedings of the 17th Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), pp. 242-257.
• Tobias Harks and Max Klimm (2014): Multimarket oligopolies with restricted market access. In Proceedings of the 7th International Symposium on Algorithmic Game Theory (SAGT), pp. 182-193.
• Martin Gairing, Tobias Harks, and Max Klimm (2014): Complexity and approximation of the continuous network design problem. In Proceedings of the 17th Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), pp. 226-241.
• Tobias Harks, Max Klimm, and Britta Peis (2014): Resource competition on integral polymatroids. In Proceedings of the 10th Conference on Web and Internet Economics (WINE), pp. 189-202.
• Max Klimm and Andreas Schütz (2014): Congestion games with higher demand dimensions. In Proceedings of the 10th Conference on Web and Internet Economics (WINE), pp. 453-459.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 8, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9164727330207825, "perplexity": 2085.9561294885266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587854.13/warc/CC-MAIN-20211026072759-20211026102759-00144.warc.gz"}
http://clay6.com/qa/49980/the-length-of-string-of-a-simple-pendulum-as-measured-by-a-meter-scale-was-
# The length of the string of a simple pendulum, as measured by a meter scale, was found to be $98.2\;cm$. The diameter of the bob, measured by a vernier caliper, was found to be $3.62\;cm$. The time for 10 oscillations was found to be $21.2\;s$. Suggest a method to improve the accuracy of the result.

The percentage error introduced by the limited accuracy of the measurement of $l$ is only $0.1\%$, while that due to the measurement of time is $0.94\%$. As $\delta t$ is fixed, equal to the least count of the stop watch, to improve the situation $t$ must be increased. In other words, the time for a larger number of oscillations must be recorded.
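A quick numerical check of the quoted percentages, assuming least counts of $0.1\;cm$ for the meter scale and $0.1\;s$ for the stop watch (these least counts are assumptions, not given in the problem statement):

```python
l, dl = 98.2, 0.1          # string length and its least count, in cm
t, dt = 21.2, 0.1          # time for 10 oscillations and its least count, in s

rel_l = dl / l             # ~0.0010, i.e. about 0.1 %
rel_t = dt / t             # relative error of the measured time

# g = 4*pi**2 * l / T**2, so the fractional error in g is dl/l + 2*dt/t
print(f"length contribution : {100 * rel_l:.2f} %")        # ~0.10 %
print(f"timing contribution : {100 * 2 * rel_t:.2f} %")    # ~0.94 %

# Timing 100 oscillations instead of 10 would cut the timing term tenfold.
```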
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8380067348480225, "perplexity": 286.43419863351716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104172.67/warc/CC-MAIN-20170817225858-20170818005858-00190.warc.gz"}
http://justinbangerter.com/posts/basis_vectors/
# Justin's Blog

## February 18, 2012

### Basis Vector

• A normalized vector that defines a "basis" for a coordinate scheme. Basis vectors are orthogonal to each other, and they are often written with a "hat." (Like this: $\hat{i}$)

### Normalized Vector

• A vector with a length of one. A normalized vector can be applied to a scalar to give it direction.

### Orthogonal Vectors

• When the dot product of two vectors is zero, they are said to be orthogonal. For example, two perpendicular lines are orthogonal.

There are as many basis vectors as there are dimensions in a coordinate scheme. Thus: a 3D rectangular scheme (called $ℝ^{3}$ for Real, 3 dimensions) has 3 basis vectors, all perpendicular to each other. You often see i, j, and k used as basis vectors. These usually refer to $ℝ^{3}$. You can write them out in vector form. $$\hat{i} = (1, 0, 0)$$ $$\hat{j} = (0, 1, 0)$$ $$\hat{k} = (0,0,1)$$ They are useful because you can describe any point in a coordinate system as a sum of the basis vectors, multiplied by a number. $$(3, 4, 5)$$ Means the same as... $$3\hat{i} + 4\hat{j} + 5\hat{k}$$ It can make addition and multiplication conceptually simpler.
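A short numpy check of these statements: each hatted vector has length one, the pairwise dot products vanish, and (3, 4, 5) really is the stated sum.

```python
import numpy as np

i_hat = np.array([1.0, 0.0, 0.0])
j_hat = np.array([0.0, 1.0, 0.0])
k_hat = np.array([0.0, 0.0, 1.0])

# Normalized: each basis vector has length 1
print(np.linalg.norm(i_hat), np.linalg.norm(j_hat), np.linalg.norm(k_hat))

# Orthogonal: pairwise dot products are zero
print(np.dot(i_hat, j_hat), np.dot(j_hat, k_hat), np.dot(i_hat, k_hat))

# (3, 4, 5) is the same point as 3*i + 4*j + 5*k
v = 3 * i_hat + 4 * j_hat + 5 * k_hat
print(v)          # [3. 4. 5.]
```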
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9459361433982849, "perplexity": 515.6039408186546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741016.16/warc/CC-MAIN-20181112172845-20181112194845-00062.warc.gz"}
http://backreaction.blogspot.com/2007/12/phase-diagram-of-nuclear-matter.html?showComment=1196953680000
## Wednesday, December 05, 2007

### The Phase Diagram of Nuclear Matter

Physical systems consisting of many particles can come about in different phases, depending on conditions such as temperature or pressure: Water can be solid, fluid, or gaseous, and matter made up of atoms with magnetic moments can show spontaneous magnetisation. These different properties are represented in phase diagrams. Now, at the heart of every atom, there is an atomic nucleus built up of protons and neutrons, which again consist of quarks kept together by gluons. Thus, it is natural to ask if there is a phase diagram of nuclear matter, and what it may look like. And indeed, there is a phase diagram of nuclear matter. Here it is, in a schematic representation, as it shows up in nearly every talk about quark matter and the quark-gluon plasma:

Source: Compressed Baryonic Matter (CBM) Experiment at the Facility for Antiproton and Ion Research (FAIR), GSI, Darmstadt, Germany.

The horizontal axis shows density (that's different from the phase diagram of water we have seen before) - to be precise, net baryon density, i.e., the density of protons and neutrons (which are both baryons) minus the density of antibaryons. Under usual conditions, there is not much antimatter around, and net baryon density is just the density of protons and neutrons. However, in more extreme conditions, for example if the temperature is sufficiently high, thermal energy may materialise as particle-antiparticle pairs, and then, it becomes important to properly distinguish baryon density and net baryon density. The scale for net baryon density is set by the density of nuclear matter in the ground state: the atomic nuclei of the different atoms in the periodic system all have the same density of about 10^18 kg m^-3, or 10^15 times the density of water - this value corresponds to a net baryon density of 1 on the horizontal axis. The vertical axis gives temperature, as in the phase diagram of water. However, as for density, the scale is vastly different. Via Boltzmann's constant, temperature is equivalent to energy - for example, room temperature corresponds to an energy of 1/40 electron volt (eV). In the phase diagram of nuclear matter, temperature is measured in million electron volt, or MeV - that's an enormous temperature scale: 100 MeV corresponds to a temperature of about 1.2×10^12 K, or 100 000 times the temperature at the centre of the Sun. On that scale, normal nuclear matter is quite cool - it's represented by the black dot in the lower left corner of the phase diagram at density 1. Normal nuclear matter consists of neutrons and protons, which are classified as baryons, and more generally as hadrons. In hadrons, the elementary constituents of quarks and gluons are packed together in well-defined bags, made up of three quarks for baryons, and a quark and an antiquark for mesons (the other class of particles among the hadrons). Quarks and gluons are said to be confined in hadrons. The range of density and temperature where this confined phase prevails is shown in the light shade in the lower left part of the phase diagram. However, if temperature or density, or both, increase, confinement eventually can break down, and quarks and gluons are set free - that's the deconfinement of hadrons to the quark-gluon plasma. At the same time as becoming deconfined, the mass of quarks drops to a few MeV, which is called the chiral transition. In the phase diagram, the quark-gluon plasma occupies the region shaded in orange. 
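As a quick sanity check of the temperature scale used above (room temperature expressed in eV, and 100 MeV converted to kelvin via T = E/k_B), here is a short Python computation; the solar core temperature of roughly 1.5×10^7 K is used as the reference value:

```python
k_B = 1.380649e-23      # Boltzmann constant, J/K
eV  = 1.602177e-19      # one electron volt, J

# Room temperature in electron volt: roughly 1/40 eV
print(k_B * 293.0 / eV)             # ~0.025 eV

# 100 MeV expressed as a temperature
T = 100e6 * eV / k_B
print(T)                            # ~1.2e12 K
print(T / 1.5e7)                    # ~10^5 solar-core temperatures
```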
Deconfinement at high density is believed to happen in the interior of neutron stars, where nuclear matter is compressed under the star's own weight to up to 10 times the normal nuclear density. Deconfinement by heating up nuclear matter is achieved by colliding heavy nuclei at enormous energies, for example at the Relativistic Heavy Ion Collider (RHIC), or at the planned heavy-ion program at the Large Hadron Collider LHC. The red line indicates how nuclear matter is heated up in these collisions and reaches the region of deconfinement. At this point, it should be clear that it is not possible to explore the phase diagram of nuclear matter in the same way as it can be done with, say, water. We cannot take a chunk of nuclear matter, heat it up or compress it in a controlled way and study its properties. Instead, we have to smash together heavy nuclei, and to rely entirely on the analysis of the fragments that emerge from these collisions to reconstruct the evolution of density and temperature during the event. Whether deconfinement has occurred in the collision has to be deduced from circumstantial evidence - there are never free, deconfined quarks measured in the detector. For this reason, the exact details of the phase diagram of nuclear matter are not known yet, and all qualitative features so far are deduced from the fundamental theory of nuclear matter, quantum chromodynamics (QCD). That's why the phase diagram of nuclear matter is usually also called the phase diagram of QCD. Analysing QCD on a space-time lattice using computers, current knowledge suggests that the transition from hadrons to the quark-gluon plasma is of first order at high net baryon density - meaning that there is latent heat, a surface tension, and that the transition occurs via the formation of bubbles - and has a critical point somewhere around a temperature of 150 MeV and a bit above nuclear density. This is completely analogous to the vapour line in the phase diagram of water, which separates fluid from gas and ends in the critical point. The line of the first-order transition is shown in the diagram in yellow. As a curious consequence of the location of the critical point, when in the very early universe quarks and gluons condensed into hadrons for the first time, this transition was very smooth and gentle - it is what is technically called a cross-over. This is because in the hot early universe, a lot of antimatter was still around, and hence, the net baryon density was very close to zero. For some time it had been thought that the hadronisation transition in the early universe may be responsible for the seeds of structure formation in the universe - with the smooth transition of a cross-over, this cannot be the case. Of course, it would be very interesting to check the predictions of QCD for the phase diagram in experiment. For example, one could try to identify signals of the first-order transition, or even better, of the critical point. At a critical point, all kinds of fluctuations grow large, and that may yield a good signal. So far, there are very few, and inconclusive, data. One problem is, for example, that heavy ion collisions at RHIC are too high in energy and explore high temperatures at low net baryon density, i.e. the cross-over region of the phase diagram. 
However, starting in 2012, a new experiment at a collider currently under construction at the GSI in Darmstadt, Germany, will hopefully be able to find answers to this issue: The Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR) will achieve higher net baryon densities at moderate temperatures, and hopefully cross the first-order transition and get close to the critical point. So, in ten years from now, we may know a bit more details about the phase diagram of nuclear matter.

A very general introduction to heavy ion physics and the phase diagram of QCD is given on the pages of CBM and FAIR. For more on QCD in general, see e.g. QCD Made Simple by Frank Wilczek, Physics Today 53, August 2000, page 22. For heavy ion physics at RHIC, check out What Have We Learned From the Relativistic Heavy Ion Collider? by Thomas Ludlam and Larry McLerran, Physics Today 56, October 2003, page 48, and The First Few Microseconds by Michael Riordan and Bill Zajc, Scientific American, May 2006. The status of the phase diagram as seen by Lattice QCD is described, e.g., in Exploring the QCD phase diagram by Owe Philipsen, arXiv:0710.1217v1.

This post is part of our 2007 advent calendar A Plottl A Day.

Uncle Al said... The foamy visible mass distribution of the cosmos arising from hadron critical point opalescence has a certain elegance. Computable theory, too. http://en.wikipedia.org/wiki/Critical_opalescence http://tanzanite.chem.psu.edu/demos.html middle http://www.doitpoms.ac.uk/tlplib/solid-solutions/printall.php middle http://www.tau.ac.il/~phchlab/experiments/Binary_Solutions/critopal.html http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0104-66322006000300009&lng=ene&nrm=iso&tlng=ene multiple reactive component example

Plato said... For me the lay person, it has always been important to me, that the experimental process be married with current dynamics going on within the formation of our universe. It had to make sense. Why I ask what value of gravity at the heart of collision processes. There is "that point" where they are all united. Cosmic ray collisions? Ice Cube? Neutrinos Where are such examples developing that we could say "that here" within this part of the universe such a formation "is the beginning" and becoming? So we have followed this back to the QGP state. Wonderful. So from a microperspective could such examples in microstate blackholes show relevance to the gravitational collapse (heat generation from decreasing spherical size) seen in the blackholes in our universe have motivation for universe expansion, to hold entropic designed products, just as particles do from.... some asymmetry breaking aspect developed from the perfect fluid???

Plato said... While the Standard Model has been very successful in describing most of the phenomena that we can experimentally investigate with the current generation of particle accelerators, it leaves many unanswered questions about the fundamental nature of the universe. The goal of modern theoretical physics has been to find a "unified" description of the universe. Click on paragraph for picture. So our minds have been led to such thoughts, as to the origins of our universe alongside of experimentation? So the "question mark" has been a leading factor.

Plato said... G -> H -> ... -> SU(3) x SU(2) x U(1) -> SU(3) x U(1). Here, each arrow represents a symmetry breaking phase transition where matter changes form and the groups - G, H, SU(3), etc. 
- represent the different types of matter, specifically the symmetries that the matter exhibits, and they are associated with the different fundamental forces of nature. The universe in expression? "Nothing to me would be more poetic; no outcome would be more graceful ... than for us to confirm our theories of the ultramicroscopic makeup of spacetime and matter by turning our giant telescopes skyward and gazing at the stars," Brian Greene Even the String theorists had to turn their views to the heavens. Bring the Heavens down to earth. Now they see the landscape of the universe (gravity) in terms of the Lagrangian?

Anonymous said... Stephan I'm a non-scientist. Here is my question: Can a free quark be crushed down to a smaller diameter than it normally has? Do the laws of physics allow this? Can a single quark be crushed and squeezed into a black hole? Can a quark be torn apart in a black hole or is this unknowable because the laws of physics break down at the singularity? Have a nice day

oxo said... This is because in the hot early universe, a lot of antimatter was still around, and hence, the net baryon density was very close to zero. Where did the antimatter disappear without annihilating the matter in the process?

Bee said... Hi Anonymous: The intersection between the quantum field theories in the standard model and general relativity that one needs to describe black holes is so far not very well understood. What you would need to know to answer your question is how to treat black holes and their formation in quantum field theory, including the strong interaction of particles. Regarding the first part of your question: one typically associates a size to objects depending on the energy at which they can be resolved. If you go to higher energies (collider) you can resolve smaller distances. Such, we were eventually able to find the proton has a substructure of three (valence) quarks. This scale at which you start seeing them isn't something that can be changed. If you were to go to higher energies (smaller distances) you would however 'see' a lot of virtual (sea) quark-antiquark pairs, gluons etc. In a certain sense you might say these are 'smaller'. Roughly speaking I think the problem goes back to us talking about elementary 'particles' that one might imagine like a small ball, while the 'stuff' that we are made of is actually a quantum field. Does that help? Best, B.

Anonymous said... Bee Yep, helps a bit. The idea that point particles at certain energy scales become a meaningless concept (not sure if you're saying this) seems interesting. Thanks

Anonymous said... Feynman's Quantum Electrodynamics (QED) actually requires electrons to be point particles, a notion that has already been wrestled with since shortly after Natural Philosophy professor J.J. Thomson discovered the electron in 1898. Long before one gets down to a Planck length, there should be some deviations from QED -- I think around 10^-15 cm. I turn now to an expert. F. Rohrlich, "The Theory of the Electron", 31st Joseph Henry Lecture, read before the Society 11 May 1962. Rohrlich (the #1 authority on the history of the theory of the electron) gives a replacement for the Dirac equation, namely an integro-differential equation of the 2nd order whose solutions are exactly consistent with an extension of the principle of equivalence to electromagnetic systems. It avoids non-physical "run-away solutions" that I addressed in my paper on Higher Order Terms in Maxwell's Equation, and other problems. 
"This leads to an apparent contradiction with energy conservation.... Here an essentially new feature emerges, a feature which was not expected and does not fit into the concepts of classical physics of which this theory is part: the new equation of motion has a non-local behavior in time, a certain lack of instantaneity which brings with itself a lack of causality over time intervals of the order of tau_0. In particular, energy conservation is no longer satisfied at every instant of time, but is spread out over a time interval of about tau_0... given in terms of the electron's mass and charge as tau_0 = (2/3) (r_0)/c = (2/3) e/mc^3 = 6 x 10^-24 sec Clearly such time intervals are entirely outside the domain of competence of classical physics..." Prof. Jonathan Vos Post Terry said... Nice summary. The feasibility of a search for the critical point at RHIC from AGS-SPS-RHIC energies is underway. The accelerator has already been tested down to a CMS energy of 9 GeV. http://www.bnl.gov/rhic/news/073107/story3.asp If all goes well, the plan is to have this energy scan in 2010. stefan said... Hi Oxo, Where did the antimatter disappear without annihilating the matter in the process? Well, the antimatter did annihilate the matter in the process... At the time of the hadronisation transition, the total number of quarks + antiquarks was enormously bigger than the net number of quarks (quarks - antiquarks) - sorry, I am not sure about actual numbers. The big problem, then, is of course, how comes that there have been some more quarks than antiquarks, so that not all matter has been annihilated. There is no conclusive answer yet to this question, which runs under the name of baryogenesis. For more about this, you may find this article helpful: The Mystery of the Matter Asymmetry by Eric Sather (PDF file). Best, Stefan stefan said... Hi terry, thanks for the update about RHIC! I guess this is a rare case where tuning an accelerator for energies lower than originally scheduled may yield cool physics results ;-) ... and 2010 would be before FAIR... Best, Stefan Terry said... Hi Stefan, It will be an interesting time at RHIC, that's for sure. Hopefully some questions will be answered, such as whether the famous peak/"horn" in the K+/pi+ ratio observed at SPS is actually there. From the experimental side, performing this energy scan in a collider environment with the same detectors will assist in getting a good handle on systematic errors. For example, the corrections for detector acceptance are much more straightforward in a collider than for fixed target experiments. FAIR will be able to study the low energy arena in much more detail. I hope they remain on schedule. Bee said... What about the lower right corner: color super conductivity? What is the status of that? stefan said... Dear Bee, about colour superconductivity - good question! I am not aware of concrete plans how this could be tested in heavy-ion collision experiments. I remember some ideas that the quark pairing may show up in enhanced pentaquark production, but as the pentaquark is more or less dead... Colour superconductivity could also be play a role for neutron stars, but I am not so sure about the status. Maybe one of our readers knows more? Best, Stefan
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8708736300468445, "perplexity": 764.3272023261192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462762.87/warc/CC-MAIN-20150226074102-00150-ip-10-28-5-156.ec2.internal.warc.gz"}
https://www.chemeurope.com/en/encyclopedia/Green-Kubo_relations.html
# Green-Kubo relations

Green-Kubo relations give exact mathematical expressions for transport coefficients in terms of integrals of time correlation functions.

## Thermal and mechanical transport processes

Thermodynamic systems may be prevented from relaxing to equilibrium because of the application of a mechanical field (e.g. electric or magnetic field), or because the boundaries of the system are in relative motion (shear) or maintained at different temperatures, etc. This generates two classes of nonequilibrium system: mechanical nonequilibrium systems and thermal nonequilibrium systems. The standard example of a mechanical transport process would be Ohm's law, which states that, at least for sufficiently small applied voltages, the current I is linearly proportional to the applied voltage V, $I = \sigma V.\,$ As the applied voltage increases we expect to see deviations from linear behaviour. The coefficient of proportionality is the electrical conductivity, which is the reciprocal of the electrical resistance. The standard example of a thermal transport process would be Newton's law of viscosity, which states that the shear stress $S_{xy}$ is linearly proportional to the strain rate. The strain rate $\gamma$ is the rate of change of the streaming velocity in the x-direction with respect to the y-coordinate, $\gamma \ \stackrel{\mathrm{def}}{=}\ \partial u_x /\partial y$. Newton's law of viscosity states $S_{xy} = \eta \gamma.\,$ As the strain rate increases we expect to see deviations from linear behaviour $S_{xy} = \eta (\gamma )\gamma.\,$ Another well known thermal transport process is Fourier's law of heat conduction, which states that the heat flux between two bodies maintained at different temperatures is proportional to the temperature gradient (the temperature difference divided by the spatial separation).

## Linear constitutive relations

So regardless of whether transport processes are stimulated thermally or mechanically, in the small field limit it is expected that a flux will be linearly proportional to an applied field. In such a case the flux and the force are said to be conjugate to each other. The relation between a thermodynamic force and its conjugate thermodynamic flux is called a linear constitutive relation, $J = L(F_e = 0)F_e. \,$ $L(0)$ is called a linear transport coefficient.

## Green-Kubo relations

In the 1950s M. S. Green and R. Kubo proved an exact expression for linear transport coefficients which is valid for systems of arbitrary temperature, T, and density. They proved that linear transport coefficients are exactly related to the time dependence of equilibrium fluctuations in the conjugate flux, $L(F_e = 0) = \beta V\;\int_0^\infty {ds} \left\langle {J(0)J(s)} \right\rangle _{F_e = 0}, \,$ where $\beta = 1/(kT)$ with the Boltzmann constant k and V is the system volume. The integral is over the equilibrium flux autocorrelation function. At zero time the autocorrelation function is positive since it is the mean square value of the flux at equilibrium. [Note at equilibrium the mean value of the flux is zero by definition.] At long times the flux at time t, J(t), is uncorrelated with its value a long time earlier, J(0), and the autocorrelation function decays to zero. This remarkable relation is frequently used in molecular dynamics computer simulation to compute linear transport coefficients—see Evans and Morriss, "Statistical Mechanics of Nonequilibrium Liquids", Academic Press 1990, now available online. 
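As an illustration of how such a relation is used in practice, here is a minimal sketch of a Green-Kubo style calculation: the self-diffusion coefficient obtained as the time integral of the velocity autocorrelation function, D = integral of <v(0)v(t)> dt in one dimension. The exponentially decaying autocorrelation function below is synthetic model data standing in for the output of a molecular dynamics simulation:

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 20.0, dt)

# Model velocity autocorrelation function <v(0)v(t)> = (kT/m) * exp(-t/tau)
tau, kT_over_m = 1.0, 1.0
vacf = kT_over_m * np.exp(-t / tau)

# Green-Kubo integral of the autocorrelation function
D = np.trapz(vacf, dx=dt)
print(D)      # close to kT*tau/m = 1.0 for this model
```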
## Nonlinear response and transient time correlation functions

In 1985 Denis Evans and Morriss derived two exact fluctuation expressions for nonlinear transport coefficients—see Evans and Morriss, Mol. Phys. 54, 629 (1985). They proved that in a thermostatted system that is at equilibrium at t = 0, the nonlinear transport coefficient can be calculated from the so-called transient time correlation function expression: $L(F_e ) = \beta V\;\int_0^\infty {ds} \left\langle {J(0)J(s)} \right\rangle _{F_e }, \,$ where the equilibrium (Fe = 0) flux autocorrelation function is replaced by a thermostatted field dependent transient autocorrelation function. At time zero $\left\langle {J(0)} \right\rangle _{F_e } = 0$ but at later times, since the field is applied, $\left\langle {J(t)} \right\rangle _{F_e } \ne 0$. Another exact fluctuation expression derived by Evans and Morriss is the so-called Kawasaki expression for the nonlinear response: $\left\langle {J(t;F_e )} \right\rangle = \left\langle {J(0)\exp [ - \beta V\int_0^t {J( - s)F_e \;ds]} } \right\rangle _{F_e }. \,$ The ensemble average of the right hand side of the Kawasaki expression is to be evaluated under the application of both the thermostat and the external field. At first sight the transient time correlation function (TTCF) and Kawasaki expression might appear to be of limited use—because of their innate complexity. However, the TTCF is quite useful in computer simulations for calculating transport coefficients. Both expressions can be used to derive new and useful fluctuation expressions for quantities like specific heats in nonequilibrium steady states. Thus they can be used as a kind of partition function for nonequilibrium steady states.

## Derivation of Green-Kubo relations from the fluctuation theorem and the central limit theorem

For a thermostatted steady state, time integrals of the dissipation function are related to the dissipative flux, J, by the equation $\bar \Omega _t = - \beta \overline J _t VF_e.\,$ We note in passing that the long time average of the dissipation function is a product of the thermodynamic force and the average conjugate thermodynamic flux. It is therefore equal to the spontaneous entropy production in the system. The spontaneous entropy production plays a key role in linear irreversible thermodynamics - see de Groot and Mazur, "Non-equilibrium Thermodynamics", Dover. The fluctuation theorem (FT) is valid for arbitrary averaging times, t. Let's apply the FT in the long time limit while simultaneously reducing the field so that the product $F_e^2 t$ is held constant, $\lim_{t \to \infty, F_e \to 0 }\frac{1}{t}\ln \left( {\frac{{p(\beta \overline J _t = A)}}{{p(\beta \overline J _t = - A)}}} \right) = - \lim_{t \to \infty ,F_e \to 0}AVF_e,\quad F_e^2 t = c. \,$ Because of the particular way we take the double limit, the negative of the mean value of the flux remains a fixed number of standard deviations away from the mean as the averaging time increases (narrowing the distribution) and the field decreases. This means that as the averaging time gets longer the distribution near the mean flux and its negative is accurately described by the central limit theorem. 
This means that the distribution is Gaussian near the mean and its negative, so that $\lim_{t \to \infty ,F_e \to 0}\frac{1}{t}\ln \left( {\frac{{p(\overline J _t = A)}}{{p(\overline J _t = - A)}}} \right) = \lim_{t \to \infty ,F_e \to 0}\frac{{2A\left\langle J \right\rangle _{F_e } }}{{t\sigma _{\overline J (t)}^2 }}.$ Combining these two relations yields (after some tedious algebra!) the exact Green-Kubo relation for the linear zero field transport coefficient, namely, $L(0) = \beta V\;\int_0^\infty {dt} \left\langle {J(0)J(t)} \right\rangle _{F_e = 0}. \,$ Details of the proof of Green-Kubo relations from the FT are here.

## Summary

This shows the fundamental importance of the fluctuation theorem in nonequilibrium statistical mechanics. The FT (together with the Axiom of Causality) gives a generalisation of the Second Law of Thermodynamics. It is then easy to prove the second law inequality and the Kawasaki identity. When combined with the central limit theorem, the FT also implies the famous Green-Kubo relations for linear transport coefficients, close to equilibrium. The FT is, however, more general than the Green-Kubo relations because, unlike them, the FT applies to fluctuations far from equilibrium. In spite of this fact, we have not yet been able to derive the equations for nonlinear response theory from the FT. The FT does not imply or require that the distribution of time-averaged dissipation is Gaussian. There are many examples known where the distribution is non-Gaussian and yet the FT (of course) still correctly describes the probability ratios.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9743543863296509, "perplexity": 486.36702891165356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711069.79/warc/CC-MAIN-20221206024911-20221206054911-00561.warc.gz"}
https://physics.stackexchange.com/questions/378253/can-qed-be-formulated-in-terms-of-independent-vector-potential-and-bivector-fiel
# Can QED be formulated in terms of independent vector potential and bivector fields?

We can obtain the classical equations of motion for electromagnetism by considering the vector potential $A^\mu$ and the electric $\mathbf E$ and magnetic $\mathbf B$ fields as independent degrees of freedom. $$\mathcal L = j^\mu A_\mu - A^0 (\nabla \cdot \mathbf E) - \mathbf A \cdot (\dot {\mathbf E} - \nabla \times \mathbf B) - \frac12 (\mathbf E^2 - \mathbf B^2).$$ Varying the action with respect to $A^\mu$ yields the inhomogeneous Maxwell equations in terms of $\mathbf E$ and $\mathbf B$ (up to possible sign errors): $$-\frac{\delta S}{\delta A^0} = \nabla \cdot \mathbf E - j^0$$ $$-\frac{\delta S}{\delta \mathbf A} = \dot {\mathbf E} - \nabla \times \mathbf B + \mathbf j$$ while varying with respect to $\mathbf E$ and $\mathbf B$ yields the standard definitions of those fields in terms of derivatives of $A^\mu$: $$-\frac{\delta S}{\delta \mathbf E} + \mathbf E = \dot {\mathbf A} - \nabla A^0$$ $$\frac{\delta S}{\delta \mathbf B} + \mathbf B = \nabla \times \mathbf A.$$ Under a gauge transformation $A_\mu \to A_\mu + \partial_\mu \lambda$, $\mathcal L$ is not invariant, but $\delta S/\delta \lambda \equiv 0$ when $\partial_{\mu}j^{\mu} = 0$. In this picture, $A^\mu$ acts as an intermediary between the bivector field $(\mathbf E, \mathbf B)$ and the source of the current density $j^\mu$ (e.g. a spinor field). The relations between $(\mathbf E,\mathbf B)$ and the exterior derivative of the vector potential arise as classical equations of motion, rather than by definition.

1. Are there problems with this formulation that invalidate it at the classical level? (e.g. too many degrees of freedom, problems with a Hamiltonian formulation, etc.) It looks like the Hamiltonian wouldn't be bounded from below, but maybe there's a workaround.
2. Does this setup have an analogue in quantum field theory, where we normally consider the gauge field $A^\mu$ as the only fundamental degrees of freedom for electromagnetism?

• This is the first order formulation of electromagnetism; what's the advantage of not writing it covariantly, $\frac{1}{4}F^{\mu\nu}F_{\mu\nu}- F^{\mu\nu}\partial_\mu A_\nu$, for independent $A_\mu$ and $F_{\mu\nu}$, which is more conventional? – Cosmas Zachos Jan 6 '18 at 1:29
• There's no advantage; I just think of $F_{\mu\nu} \equiv \partial_\mu A_\nu - \partial_\nu A_\mu$ as being the stronger association, so it might have been clearer what I was asking if I used $\mathbf E$ and $\mathbf B$. Does it matter whether the derivative acts on $A_\nu$ or $F_{\mu\nu}$? It only determines which of $\mathbf A$ or $\mathbf E$ have conjugate momenta, right? – rossng Jan 6 '18 at 5:59

I) OP's action in covariant notation$^1$ $$S_1[A,F] ~:=~\int \! d^4x~{\cal L}_1, \qquad {\cal L}_1~:=~\frac{1}{4}F^{\mu\nu}F_{\mu\nu}- F^{\mu\nu}\partial_{\mu} A_{\nu} + j^{\mu}A_{\mu},$$ $$E_i~\equiv~F_{i0}, \qquad B_i~\equiv~\frac{1}{2}\epsilon_{ijk}F_{jk}, \tag{A}$$ is the first-order/Palatini formulation of E&M, cf. Ref. 1 & comment by Cosmas Zachos. It is classically well-defined. The EL eqs. for the OP's action (A) read $$F_{\mu\nu}~\approx~\partial_{\mu} A_{\nu}-\partial_{\nu} A_{\mu}, \tag{B}$$ $$d_{\mu} F^{\mu\nu}+j^{\nu}~\approx~0.\tag{C}$$ The quadratic potential term $${\cal V}~=~-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+\ldots~=~\frac{1}{2}({\bf E}^2-{\bf B}^2)+\ldots\tag{D}$$ has a minus sign in front of the independent ${\bf B}$-field, and is hence unbounded from below. 
Therefore OP's action (A) is quantum mechanically ill-defined. II) However, if we integrate out the independent ${\bf B}$-field, we basically get the Hamiltonian formulation of E&M, $$S_H[A,{\bf E}] ~:=~\int \! d^4x~{\cal L}_H, \qquad {\cal L}_H~:=~ -{\bf E}\cdot \dot{\bf A}-{\cal H},$$ $${\cal H}~:=~\frac{1}{2}({\bf E}^2+(\nabla \times {\bf A})^2)-{\bf J}\cdot {\bf A} +A_0{\cal G} \qquad {\cal G}~:=~\nabla \cdot {\bf E}-\rho, \tag{E}$$ cf. Ref. 2, which is quantum mechanically well-defined. (Minus) the independent ${\bf E}$-field plays the role of momentum for the magnetic gauge potential ${\bf A}$. Moreover, $A_0$ becomes the Lagrange multiplier for Gauss' law $${\cal G}~\approx~0.\tag{F}$$ To achieve QED, one should then proceed with quantization, gauge-fixing, etc. References: $^1$ We use signature convention $(−,+,+,+)$ and $c=1$. Disclaimer: In this answer we have ignored some total space-time divergence terms in the action as they don't contribute to EL eqs. • So we can vary $A_\mu$ and $E_i$ independently (with their respective equations of motion holding only in the classical limit), but must assert $\delta S/\delta B_i \equiv 0$ in order to get a sensible quantum theory? With gauge fixing that leaves 6 nominal degrees of freedom, but the conjugate momenta for $E_i$ vanish, leaving only 3 dynamical DoFs. Is that correct? Thanks! – rossng Jan 6 '18 at 19:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9667640328407288, "perplexity": 343.11970773448064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574765.55/warc/CC-MAIN-20190922012344-20190922034344-00466.warc.gz"}