http://sepwww.stanford.edu/data/media/public/docs/sep102/cmora1/paper_html/node3.html
## AVO theory The variation of seismic reflection coefficients with offset can be used as a direct hydrocarbon indicator (Ostrander, 1984; Swan, 1993), a use that is grounded in the theory of AVO analysis. The physical relation between the variation of reflection/transmission coefficients with incident angle (and offset) and rock parameters has been widely investigated. This relation is established in the Zoeppritz equations, which relate the reflection and transmission coefficients of plane waves to the elastic properties of the medium. Because of the nonlinearity of the Zoeppritz equations, several approximations have been derived, such as those presented by Aki and Richards (1997) and Shuey (1985). Equation (1) presents Shuey's simplification, which comprises three terms characterizing the reflection coefficient R(θ) at normal incidence, at intermediate angles, and at the approach to the critical angle: R(θ) = R0 + G sin²θ + F (tan²θ - sin²θ) (1) The simplified versions of the Zoeppritz equations allow AVO inversion to estimate elastic parameters from the observed variation of reflection amplitude with angle. Several attributes can be generated from AVO analysis, such as the AVO intercept, which refers to the normal-incidence reflection coefficient R0, and the AVO gradient, which is the coefficient G of sin²θ in the second term of equation (1). We call the attributes used in this work AVO-related attributes because they are not calculated from CMP gathers in the offset domain but rather from CMP gathers in the offset ray parameter domain. In this domain, the offset ray parameter is related to the aperture angle instead of the incident angle (Prucha et al., 1999). Stanford Exploration Project 10/25/1999
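To make the role of the three terms concrete, the following sketch evaluates a three-term intercept/gradient form of Shuey's approximation numerically. This is an illustration only: the function name and the interface values are made up, and the coefficients follow the widely used linearised rearrangement R(θ) = A + B sin²θ + C (tan²θ - sin²θ) of the Aki-Richards approximation rather than the notation of this paper.

```python
import math

def shuey_r(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Three-term approximation R(theta) = A + B sin^2(theta)
    + C (tan^2(theta) - sin^2(theta)) across an interface between
    an upper medium (1) and a lower medium (2)."""
    th = math.radians(theta_deg)
    # averages and contrasts of the elastic parameters across the interface
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    a = 0.5 * (dvp / vp + drho / rho)   # intercept: normal-incidence term
    b = 0.5 * dvp / vp - 2 * (vs / vp) ** 2 * (drho / rho + 2 * dvs / vs)  # gradient
    c = 0.5 * dvp / vp                  # term governing the approach to critical angle
    s2 = math.sin(th) ** 2
    return a + b * s2 + c * (math.tan(th) ** 2 - s2)

# illustrative shale-over-gas-sand contrast (made-up values)
for ang in (0, 10, 20, 30):
    print(ang, round(shuey_r(3000, 1500, 2.4, 2900, 1800, 2.1, ang), 4))
```

At θ = 0 the second and third terms vanish and the function returns the intercept alone, matching the normal-incidence reflection coefficient.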
https://www.physicsforums.com/threads/the-jumping-motorcycle.312133/
# The jumping motorcycle 1. May 6, 2009 ### SPIAction Hey guys, New to the forum but full of so many questions, which I assume will be a cakewalk for many of you. I'm no physics expert, as you will soon discover. Looking forward to your input and assistance. The first question is: how do I calculate the force associated with a falling mass? For example, if you take a 10 pound brick and drop it 10 feet onto a scale, what would be the maximum reading that I would see on the scale? Wind resistance and such aside. The next question is similar but a little more complicated. If I know the angle and height of a jump, the speed of an approaching motorcycle, and the total weight of the bike with rider, how would I calculate the amount of force that the bike/rider has as it returns to earth? In other words, I assume that the falling mass will hit with a force greater than the total weight of bike and rider, with respect to the curve that it takes as it flies through the air. Let's say I'm just trying to calculate how strong the landing ramp needs to be. Something like that. Putting the matter of the suspension and such aside. Hope that makes sense. Thoughts? 2. May 12, 2009 ### Nerd The downward force on falling objects is always constant (if you put air resistance aside). The downward force would always be the mass of the object times the gravitational acceleration of the earth (roughly 9.8 m/s²). The curve with which the bike travels doesn't make any difference. 3. May 12, 2009 ### gmax137 This question is asked over & over again in many different guises. The question is not "how much force." Nerd is right, as far as he/she goes: mass times gravitational acceleration equals the force. But that just tells you the weight of the object (in your case, mcycle + rider). It doesn't tell you anything about the momentum (which depends on both mass and speed). And when you hit the ramp, it is the change in momentum that you are concerned with. 
What you want to know is the "impact load" on your landing ramp. That question is probably better asked in the engineering section of the forum. The answer you're going to get there is, "well, that depends..." Because, well, it depends on how long it takes for you and your bike to decelerate from your landing speed to the final speed. This time is a few fractions of a second, and the shorter it is, the more impact load your ramp has to withstand. The time it takes depends a lot on the suspension - since a long, springy suspension will stretch out the time and make for a softer landing. That's why your bike has springs to begin with. Eventually you will find that the only way to really know is to do some experiments. There may also be some empirical correlations (that's a fancy name for basing your estimate on other people's experiments, or rules of thumb). Good luck. 4. May 12, 2009 ### SPIAction Ah...good stuff and thanks for the info. So...just to make sure that I have this correct, for as elementary as it may seem, if we have a ball that is 10 kilograms in mass and we drop it any distance (assuming it can reach its maximum velocity), the force of the impact is 98 kgf? 5. May 12, 2009 ### SPIAction Thanks! So what if we simplify this by saying that we are rolling a bowling ball at a given speed, at a ramp that is 2 feet long and one foot high? Can we calculate how much force the ball will hit the ground (landing) with? And then what happens if the ramp is curved instead of being flat? 6. May 13, 2009 ### ImAnEngineer The unit for force is the Newton (N). But the 98 N is just the weight of the ball (which is always there, whether it's bouncing, flying or rolling, etc.). If a ball is lying still on a surface, the weight gets canceled out by the force that the surface exerts on the object. 
But if it bounces, the force exerted by the surface is bigger than the weight (that force is the reason it is accelerated upwards again), or equal to the weight if we assume it doesn't bounce and there is no suspension etc. (so this statement doesn't make a lot of sense in the first place). It makes no sense to simplify it down any further and to work with forces here. I'd recommend reading gmax's post again. That's pretty much all there is to it. You might want to read up on momentum and Newton's laws of motion. 7. May 13, 2009 ### Bob S The motorcycle + rider mass and weight are m and mg respectively. If the motorcycle + driver's downward velocity times mass is mv, and it hits a horizontal ramp, then the impulse (force times time) is integral [F*dt] = mv. Using 200 kg and 2 meters/sec for m and v, and dt = 0.1 seconds, then Fmax = about 4000 Newtons, or about 2 g's. 8. May 14, 2009 ### gmax137 OK, and using 200 kg and 2 meters/sec for m and v, and dt = 0.2 seconds, then Fmax = about 2000 Newtons; using the same and 0.05 seconds you get 8000 N, etc. That's why I said above "it depends." Unless you quantify the time to decelerate to zero, you don't have a number you can use to design your ramp.
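The impulse estimate from posts 7 and 8 is easy to tabulate. A minimal Python sketch (the numbers are the ones used in the thread; `impact_force` is just a made-up helper name):

```python
def impact_force(mass_kg, landing_speed_ms, stop_time_s):
    """Average impact force from the impulse-momentum theorem:
    F * dt = m * v, so F = m * v / dt."""
    return mass_kg * landing_speed_ms / stop_time_s

# m = 200 kg, downward landing speed v = 2 m/s, various stopping times
for dt in (0.05, 0.1, 0.2):
    f = impact_force(200, 2.0, dt)
    print(f"dt = {dt:.2f} s -> average force ~ {f:.0f} N "
          f"(~{f / (200 * 9.8):.1f} times the weight)")
```

Halving the stopping time doubles the load, which is exactly the "it depends" point: the suspension's job is to make dt as long as possible.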
https://almostoriginality.wordpress.com/tag/inequalities/
Affine Restriction estimates imply Affine Isoperimetric inequalities One thing I absolutely love about harmonic analysis is that it really has something interesting to say about nearly every other field of Analysis. Today’s example is exactly of this kind: I will show how a Fourier Restriction estimate can say something about Affine Geometry. This was first noted by Carbery and Ziesler (see below for references). 1. Affine Isoperimetric Inequality Recall the Affine Invariant Surface Measure that we have defined in a previous post. Given a hypersurface $\Sigma \subset \mathbb{R}^d$ sufficiently smooth to have a well-defined Gaussian curvature $\kappa_{\Sigma}(\xi)$ (where $\xi$ ranges over $\Sigma$) and with surface measure denoted by $d\sigma_{\Sigma}$, we can define the Affine Invariant Surface measure as the weighted surface measure $\displaystyle d\Omega_{\Sigma}(\xi) := |\kappa_{\Sigma}(\xi)|^{1/(d+1)} \, d\sigma_{\Sigma}(\xi);$ this measure has the property of being invariant under the action of $SL(\mathbb{R}^d)$ – hence the name. Here invariant means that if $\varphi$ is an equi-affine map (thus volume preserving) then $\displaystyle \Omega_{\varphi(\Sigma)}(\varphi(E)) = \Omega_{\Sigma}(E)$ for any measurable $E \subseteq \Sigma$. The Affine Invariant Surface measure can be used to formulate a very interesting result in Affine Differential Geometry – an inequality of isoperimetric type. Let $K \subset \mathbb{R}^d$ be a convex body – say, centred in the origin and symmetric with respect to it, i.e. $K = - K$. We denote by $\partial K$ the boundary of the convex body $K$ and we can assume for the sake of the argument that $\partial K$ is sufficiently smooth – for example, piecewise $C^2$-regular, so that the Gaussian curvature is defined at every point except maybe a $\mathcal{H}^{d-1}$-null set. Then the Affine Isoperimetric Inequality says that (with $\Omega = \Omega_{\partial K}$) $\displaystyle \boxed{ \Omega(\partial K)^{d+1} \lesssim |K|^{d-1}. 
} \ \ \ \ \ \ \ (\dagger)$ Notice that the inequality is indeed invariant with respect to the action of $SL(\mathbb{R}^d)$ – thanks to the fact that $d\Omega$ is. Observe also the curious fact that this inequality goes in the opposite direction with respect to the better known Isoperimetric Inequality of Geometric Measure Theory! Indeed, the latter says (let's say in the usual $\mathbb{R}^d$) that (a power of) the volume of a measurable set is controlled by (a power of) the perimeter of the set; more precisely, for any measurable $E \subset \mathbb{R}^d$ $\displaystyle |E|^{d-1} \lesssim P(E)^d,$ where $P(E)$ denotes the perimeter of $E$ – in case $E = K$ a symmetric convex body as above we would have $P(K) = \sigma(\partial K)$. But in the affine context the "affine perimeter" is $\Omega(\partial K)$ and it is controlled by the volume instead of vice versa. This makes perfect sense: if $K$ is taken to be a cube $Q$ then $\kappa_{\partial Q} = 0$ and so the "affine perimeter" cannot control anything. Notice also that the power of the perimeter is $d$ for the standard isoperimetric inequality and it is instead $d+1$ for the affine isoperimetric inequality. Informally speaking, this is related to the fact that the affine perimeter is measuring curvature too instead of just area. So, the inequality should actually be called something like "Affine anti-Isoperimetric inequality" to better reflect this, but I don't get to choose the names. The inequality above is formulated for convex bodies since those are the most relevant objects for Affine Geometry. However, below we will see that Harmonic Analysis provides a sweeping generalisation of the inequality to arbitrary hypersurfaces that are not necessarily boundaries of convex bodies. Before showing this generalisation, we need to introduce Affine Fourier restriction estimates, which we do in the next section. 
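As a quick sanity check, the $SL(\mathbb{R}^d)$-invariance of $d\Omega$ can be verified numerically in dimension $d = 2$, where the exponent $1/(d+1)$ is $1/3$. The sketch below (assuming numpy; the function name is ours) integrates $|\kappa|^{1/3}\,d\sigma$ over equi-affine images of the unit disc, i.e. ellipses of area $\pi$, and always finds the same affine perimeter $2\pi$:

```python
import numpy as np

def affine_perimeter_ellipse(a, n=200000):
    # boundary of the ellipse x = a cos t, y = (1/a) sin t (area pi, like the disc)
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    dx, dy = -a * np.sin(t), (1 / a) * np.cos(t)
    ddx, ddy = -a * np.cos(t), -(1 / a) * np.sin(t)
    speed = np.hypot(dx, dy)                            # d sigma = speed dt
    kappa = np.abs(dx * ddy - dy * ddx) / speed ** 3    # curvature of the boundary
    # d Omega = |kappa|^{1/(d+1)} d sigma with d = 2
    return np.sum(kappa ** (1 / 3) * speed) * (2 * np.pi / n)

for a in (1.0, 2.0, 5.0):
    print(a, affine_perimeter_ellipse(a))   # ~ 2*pi for every a
```

For this family the integrand collapses to the constant $1$ (since $\kappa = \mathrm{speed}^{-3}$), which is exactly why the affine perimeter cannot see the eccentricity of the ellipse.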
The Chang-Wilson-Wolff inequality using a lemma of Tao-Wright Today I would like to introduce an important inequality from the theory of martingales that will be the subject of a few more posts. This inequality will further provide the opportunity to introduce a very interesting and powerful result of Tao and Wright – a sort of square-function characterisation for the Orlicz space $L(\log L)^{1/2}$. 1. The Chang-Wilson-Wolff inequality Consider the collection $\mathcal{D}$ of standard dyadic intervals that are contained in $[0,1]$. We let $\mathcal{D}_j$ for each $j \in \mathbb{N}$ denote the subcollection of intervals $I \in \mathcal{D}$ such that $|I|= 2^{-j}$. Notice that these subcollections generate a filtration of $\mathcal{D}$, that is $(\sigma(\mathcal{D}_j))_{j \in \mathbb{N}}$, where $\sigma(\mathcal{D}_j)$ denotes the sigma-algebra generated by the collection $\mathcal{D}_j$. We can associate to this filtration the conditional expectation operators $\displaystyle \mathbf{E}_j f := \mathbf{E}[f \,|\, \sigma(\mathcal{D}_j)],$ and therefore define the martingale differences $\displaystyle \mathbf{D}_j f:= \mathbf{E}_{j+1} f - \mathbf{E}_{j}f.$ With this notation, we have the formal telescopic identity $\displaystyle f = \mathbf{E}_0 f + \sum_{j \in \mathbb{N}} \mathbf{D}_j f.$ Demystification: the expectation $\mathbf{E}_j f(x)$ is simply $\frac{1}{|I|} \int_I f(y) \,dy$, where $I$ is the unique dyadic interval in $\mathcal{D}_j$ such that $x \in I$. Letting $f_j := \mathbf{E}_j f$ for brevity, the sequence of functions $(f_j)_{j \in \mathbb{N}}$ is called a martingale (hence the name “martingale differences” above) because it satisfies the martingale property that the conditional expectation of “future values” at the present time is the present value, that is $\displaystyle \mathbf{E}_{j} f_{j+1} = f_j.$ In the following we will only be interested in functions with zero average, that is functions such that $\mathbf{E}_0 f = 0$. 
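The operators $\mathbf{E}_j$ and $\mathbf{D}_j$ are easy to realise numerically on a dyadic grid, and the telescoping identity can be checked directly. A small sketch (assuming numpy; $f$ is sampled at $2^J$ points of $[0,1]$ and $\mathbf{E}_j f$ is implemented as the block average described in the demystification above):

```python
import numpy as np

def cond_exp(f, j):
    """E_j f: average f over each dyadic interval of length 2^-j.
    f is sampled on a uniform grid of 2^J points of [0,1]."""
    n = f.size
    block = n >> j                       # sample points per interval of D_j
    means = f.reshape(-1, block).mean(axis=1)
    return np.repeat(means, block)

rng = np.random.default_rng(0)
J = 8
f = rng.standard_normal(2 ** J)
f -= f.mean()                            # enforce E_0 f = 0

diffs = [cond_exp(f, j + 1) - cond_exp(f, j) for j in range(J)]
recon = cond_exp(f, 0) + sum(diffs)      # telescoping identity
print(np.allclose(recon, f))             # True: E_J f = f on this grid

square_fn = np.sqrt(sum(d ** 2 for d in diffs))  # the martingale square function
```

The martingale property $\mathbf{E}_j f_{j+1} = f_j$ holds here as well, since averaging over an interval of $\mathcal{D}_j$ absorbs the finer averages over its two children.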
Given such a function $f : [0,1] \to \mathbb{R}$ then, we can define its martingale square function $S_{\mathcal{D}}f$ to be $\displaystyle S_{\mathcal{D}} f := \Big(\sum_{j \in \mathbb{N}} |\mathbf{D}_j f|^2 \Big)^{1/2}.$ With these definitions in place we can state the Chang-Wilson-Wolff inequality as follows. C-W-W inequality: Let ${f : [0,1] \to \mathbb{R}}$ be such that $\mathbf{E}_0 f = 0$. For any ${2\leq p < \infty}$ it holds that $\displaystyle \boxed{\|f\|_{L^p([0,1])} \lesssim p^{1/2}\, \|S_{\mathcal{D}}f\|_{L^p([0,1])}.} \ \ \ \ \ \ (\text{CWW}_1)$ An important point about the above inequality is the behaviour of the constant ${p^{1/2}}$ in the Lebesgue exponent ${p}$, which is sharp. This can be seen by taking a "lacunary" function ${f}$ (essentially one where $\mathbf{D}_jf = a_j \in \mathbb{C}$, a constant) and randomising the signs using Khintchine's inequality (indeed, ${p^{1/2}}$ is precisely the asymptotic behaviour of the constant in Khintchine's inequality; see Exercise 5 in the 2nd post on Littlewood-Paley theory). It should be remarked that the inequality extends very naturally and with no additional effort to higher dimensions, in which $[0,1]$ is replaced by the unit cube $[0,1]^d$ and the dyadic intervals are replaced by the dyadic cubes. We will only be interested in the one-dimensional case here though. Oscillatory integrals II: several-variables phases This is the second part of a two-part post on the theory of oscillatory integrals. In the first part we studied the theory of oscillatory integrals whose phases are functions of a single variable. In this post we will instead study the case in which the phase is a function of several variables (and we integrate in all of them). Here the theory becomes weaker because these objects can indeed have a worse behaviour. We will proceed by analogy, following the same footsteps as in the single-variable case. Part I 
Oscillatory integrals in several variables In the previous section we have analysed the situation for single variable phases, that is for integrals over (intervals of) ${\mathbb{R}}$. In this section, we will instead study the higher dimensional situation, when the phase is a function of several variables. Things are unfortunately generally not as nice as in the single variable case, as you will see. In order to avoid having to worry about connected open sets of ${\mathbb{R}^d}$ (see Exercise 18 for the sort of issues that arise in trying to deal with general open sets of ${\mathbb{R}^d}$), in this section we will study mainly objects of the form $\displaystyle \mathcal{I}_{\psi}(\lambda) := \int_{\mathbb{R}^d} e^{i \lambda u(x)} \psi(x) dx,$ where ${\psi}$ has compact support. We have switched to ${u}$ for the phase to remind the reader of the fact that it is a function of several variables now. 3.1. Principle of non-stationary phase – several variables The principle of non-stationary phase we saw in Section 2 of part I continues to hold in the several variables case. Given a phase ${u}$, we say that ${x_0}$ is a critical point of ${u}$ if $\displaystyle \nabla u(x_0) = (0,\ldots,0).$ Proposition 8 (Principle of non-stationary phase – several variables) Let ${\psi \in C^\infty_c(\mathbb{R}^d)}$ (that is, smooth and compactly supported) and let the phase ${u\in C^\infty}$ be such that ${u}$ does not have critical points in the support of ${\psi}$. Then for any ${N >0}$ we have $\displaystyle |\mathcal{I}_\psi(\lambda)|\lesssim_{N,\psi,u} |\lambda|^{-N}.$ Marcinkiewicz-type multiplier theorem for q-variation (q > 1) Not long ago we discussed one of the main direct applications of the Littlewood-Paley theory, namely the Marcinkiewicz multiplier theorem. Recall that the single-variable version of this theorem can be formulated as follows: Theorem 1 [Marcinkiewicz multiplier theorem]: Let ${m}$ be a function on $\mathbb{R}$ such that 1. $m \in L^\infty$ 2. 
for every Littlewood-Paley dyadic interval $L := [2^k, 2^{k+1}] \cup [-2^{k+1},-2^k]$ with $k \in \mathbb{Z}$ $\displaystyle \|m\|_{V(L)} \leq C,$ where $\|m\|_{V(L)}$ denotes the total variation of ${m}$ over the interval $L$. Then for any ${1 < p < \infty}$ the multiplier ${T_m}$ defined by $\widehat{T_m f} = m \widehat{f}$ for functions $f \in L^2(\mathbb{R})$ extends to an $L^p \to L^p$ bounded operator, $\displaystyle \|Tf\|_{L^p} \lesssim_p (\|m\|_{L^\infty} + C) \|f\|_{L^p}.$ You should also recall that the total variation $V(I)$ above is defined as $\displaystyle \sup_{N}\sup_{\substack{t_0, \ldots, t_N \in I : \\ t_0 < \ldots < t_N}} \sum_{j=1}^{N} |m(t_j) - m(t_{j-1})|,$ and if ${m}$ is absolutely continuous then ${m'}$ exists as a measurable function and the total variation over interval $I$ is given equivalently by $\int_{I} |m'(\xi)|d\xi$. We have seen that the "dyadic total variation condition" 2.) above is to be seen as a generalisation of the pointwise condition $|m'(\xi)|\lesssim |\xi|^{-1}$, which in dimension 1 happens to coincide with the classical differential Hörmander condition (in higher dimensions the pointwise Marcinkiewicz conditions are of product type, while the pointwise Hörmander(-Mikhlin) conditions are of radial type; see the relevant post). Thus the Marcinkiewicz multiplier theorem in dimension 1 can deal with multipliers whose symbol is somewhat rougher than being differentiable. It is interesting to wonder how much rougher the symbols can get while still preserving their $L^p$ mapping properties (or maybe giving up some range – recall though that the range of boundedness for multipliers must be symmetric around 2 because multipliers are self-adjoint). Coifman, Rubio de Francia and Semmes came up with a very interesting answer to this question. They generalise the Marcinkiewicz multiplier theorem (in dimension 1) to multipliers that have bounded ${q}$-variation with ${q}$ > 1. 
Let us define this quantity rigorously. Definition: Let $q \geq 1$ and let $I$ be an interval. Given a function $f : \mathbb{R} \to \mathbb{R}$, its ${q}$-variation over the interval ${I}$ is $\displaystyle \|f\|_{V_q(I)} := \sup_{N} \sup_{\substack{t_0, \ldots t_N \in I : \\ t_0 < \ldots < t_N}} \Big(\sum_{j=1}^{N} |f(t_j) - f(t_{j-1})|^q\Big)^{1/q}$ Notice that, with respect to the notation above, we have $\|m\|_{V(I)} = \|m\|_{V_1(I)}$. From the fact that $\|\cdot\|_{\ell^q} \leq \|\cdot \|_{\ell^p}$ when $p \leq q$ we see that we have always $\|f\|_{V_q (I)} \leq \|f\|_{V_p(I)}$, and therefore the higher the ${q}$ the less stringent the condition of having bounded ${q}$-variation becomes (this is linked to the Hölder regularity of the function getting worse). In particular, if we wanted to weaken hypothesis 2.) in the Marcinkiewicz multiplier theorem above, we could simply replace it with the condition that for any Littlewood-Paley dyadic interval $L$ we have instead $\|m\|_{V_q(L)} \leq C$. This is indeed what Coifman, Rubio de Francia and Semmes do, and they were able to show the following: Theorem 2 [Coifman-Rubio de Francia-Semmes, ’88]: Let $q\geq 1$ and let ${m}$ be a function on $\mathbb{R}$ such that 1. $m \in L^\infty$ 2. for every Littlewood-Paley dyadic interval $L := [2^k, 2^{k+1}] \cup [-2^{k+1},-2^k]$ with $k \in \mathbb{Z}$ $\displaystyle \|m\|_{V_q(L)} \leq C.$ Then for any ${1 < p < \infty}$ such that ${\Big|\frac{1}{2} - \frac{1}{p}\Big| < \frac{1}{q} }$ the multiplier ${T_m}$ defined by $\widehat{T_m f} = m \widehat{f}$ extends to an $L^p \to L^p$ bounded operator, $\displaystyle \|Tf\|_{L^p} \lesssim_p (\|m\|_{L^\infty} + C) \|f\|_{L^p}.$ The statement is essentially the same as before, except that now we are imposing control of the ${q}$-variation instead and as a consequence we have the restriction that our Lebesgue exponent ${p}$ satisfy ${\Big|\frac{1}{2} - \frac{1}{p}\Big| < \frac{1}{q} }$. 
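The $q$-variation of a function known at finitely many points can be computed exactly by dynamic programming; note that for $q > 1$ the supremum over partitions is a genuine optimisation, since a coarser partition can beat a finer one. A sketch (plain Python; the function name is ours) that also illustrates the monotonicity $\|f\|_{V_q(I)} \leq \|f\|_{V_p(I)}$ for $p \leq q$ noted above:

```python
import random

def q_variation(samples, q):
    """q-variation of a function known at finitely many points: sup over
    all increasing subsequences t_0 < ... < t_N of the sample points of
    (sum_j |f(t_j) - f(t_{j-1})|^q)^(1/q), computed by dynamic programming."""
    n = len(samples)
    # best[i] = largest achievable sum of q-th powers with i as the last point
    best = [0.0] * n
    for i in range(1, n):
        best[i] = max(best[j] + abs(samples[i] - samples[j]) ** q
                      for j in range(i))
    return max(best) ** (1 / q)

random.seed(1)
f = [random.uniform(-1, 1) for _ in range(60)]
for q in (1, 2, 4):
    print(q, round(q_variation(f, q), 3))    # non-increasing in q
```

For $q = 1$ the finest partition is always optimal (triangle inequality), recovering the classical total variation; for $q > 1$ the optimum typically picks out the extreme oscillations only.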
Taking a closer look at this condition, we see that when the variation parameter is $1 \leq q \leq 2$ the condition is empty, that is, there is no restriction on the range of boundedness of $T_m$: it is still the full range ${1}$ < ${p}$ < $\infty$; and as ${q}$ grows larger and larger the range of boundedness shrinks around the exponent $p=2$ (for which the multiplier is always necessarily bounded, by Plancherel). This is a very interesting behaviour, which points to the fact that there is a certain dichotomy between variation in the range below 2 and the range above 2, with $2$-variation being the critical case. This is not an isolated case: for example, the Variation Norm Carleson theorem is false for ${q}$-variation with ${q \leq 2}$; similarly, the Lépingle inequality is false for 2-variation and below (and this is related to the properties of Brownian motion). Kovač's solution of the maximal Fourier restriction problem About 2 years ago, Müller, Ricci and Wright published a paper that opened a new line of investigation in the field of Fourier restriction: that is, the study of the pointwise meaning of the Fourier restriction operators. Here is an account of a recent contribution to this problem that largely sorts it out. 1. Maximal Fourier Restriction Recall that, given a smooth submanifold $\Sigma$ of $\mathbb{R}^d$ with surface measure $d\sigma$, the restriction operator ${R}$ is defined (initially) for Schwartz functions as $\displaystyle f \mapsto Rf:= \widehat{f}\Big|_{\Sigma};$ it is only after having proven an a-priori estimate such as $\|Rf\|_{L^q(\Sigma,d\sigma)} \lesssim \|f\|_{L^p(\mathbb{R}^d)}$ that we can extend ${R}$ to an operator over the whole of $L^p(\mathbb{R}^d)$, by density of the Schwartz functions. However, it is no longer clear what the relationship is between this new operator that has been operator-theoretically extended and the original operator that had a clear pointwise definition. 
In particular, a non-trivial question to ask is whether for $d\sigma$-a.e. point $\xi \in \Sigma$ we have $\displaystyle \lim_{r \to 0} \frac{1}{|B(0,r)|} \int_{\eta \in B(0,r)} |\widehat{f}(\xi - \eta)| d\eta = \widehat{f}(\xi), \ \ \ \ \ (1)$ where $B(0,r)$ is the ball of radius ${r}$ and center ${0}$. Observe that the Lebesgue differentiation theorem already tells us that for a.e. element of $\mathbb{R}^d$ in the Lebesgue sense the above holds; but the submanifold $\Sigma$ has Lebesgue measure zero, and therefore the differentiation theorem cannot give us any information. In this sense, the question above is about the structure of the set of the Lebesgue points of $\widehat{f}$ and can be reformulated as: Q: can the complement of the set of Lebesgue points of $\widehat{f}$ contain a copy of the manifold $\Sigma$? Basic Littlewood-Paley theory II: square functions This is the second part of the series on basic Littlewood-Paley theory, which has been extracted from some lecture notes I wrote for a masterclass. In this part we will prove the Littlewood-Paley inequalities, namely that for any ${1 < p < \infty}$ it holds that $\displaystyle \|f\|_{L^p (\mathbb{R})} \sim_p \Big\|\Big(\sum_{j \in \mathbb{Z}} |\Delta_j f|^2 \Big)^{1/2}\Big\|_{L^p (\mathbb{R})}. \ \ \ \ \ (\dagger)$ This time there are also plenty more exercises, some of which I think are fairly interesting (one of them is a theorem of Rudin in disguise). Part I: frequency projections. 4. Smooth square function In this subsection we will consider a variant of the square function appearing at the right-hand side of ($\dagger$) where we replace the frequency projections ${\Delta_j}$ by better behaved ones. Let ${\psi}$ denote a smooth function with the properties that ${\psi}$ is compactly supported in the intervals ${[-4,-1/2] \cup [1/2, 4]}$ and is identically equal to ${1}$ on the intervals ${[-2,-1] \cup [1,2]}$. 
We define the smooth frequency projections ${\widetilde{\Delta}_j}$ by stipulating $\displaystyle \widehat{\widetilde{\Delta}_j f}(\xi) := \psi(2^{-j} \xi) \widehat{f}(\xi);$ notice that the function ${\psi(2^{-j} \xi)}$ is supported in ${[-2^{j+2},-2^{j-1}] \cup [2^{j-1}, 2^{j+2}]}$ and identically ${1}$ in ${[-2^{j+1},-2^{j}] \cup [2^{j}, 2^{j+1}]}$. The reason why such projections are better behaved resides in the fact that the functions ${\psi(2^{-j}\xi)}$ are now smooth, unlike the characteristic functions ${\mathbf{1}_{[2^j,2^{j+1}]}}$. Indeed, they are actually Schwartz functions and you can see by Fourier inversion formula that ${\widetilde{\Delta}_j f = f \ast (2^{j} \widehat{\psi}(2^{j}\cdot))}$; the convolution kernel ${2^{j} \widehat{\psi}(2^{j}\cdot)}$ is uniformly in ${L^1}$ and therefore the operator is trivially ${L^p \rightarrow L^p}$ bounded for any ${1 \leq p \leq \infty}$ by Young’s inequality, without having to resort to the boundedness of the Hilbert transform. We will show that the following smooth analogue of (one half of) ($\dagger$) is true (you can study the other half in Exercise 6). Proposition 3 Let ${\widetilde{S}}$ denote the square function $\displaystyle \widetilde{S}f := \Big(\sum_{j \in \mathbb{Z}} \big|\widetilde{\Delta}_j f \big|^2\Big)^{1/2}.$ Then for any ${1 < p < \infty}$ we have that the inequality $\displaystyle \big\|\widetilde{S}f\big\|_{L^p(\mathbb{R})} \lesssim_p \|f\|_{L^p(\mathbb{R})} \ \ \ \ \ (1)$ holds for any ${f \in L^p(\mathbb{R})}$. We will give two proofs of this fact, to illustrate different techniques. We remark that the boundedness will depend on the smoothness and the support properties of ${\psi}$ only, and as such extends to a larger class of square functions. Basic Littlewood-Paley theory I: frequency projections I have written some notes on Littlewood-Paley theory for a masterclass, which I thought I would share here as well. 
This is the first part, covering some motivation, the case of a single frequency projection and its vector-valued generalisation. References I have used in preparing these notes include Stein's "Singular integrals and differentiability properties of functions", Duoandikoetxea's "Fourier Analysis", Grafakos' "Classical Fourier Analysis" and as usual some material by Tao, both from his blog and the notes for his courses. Prerequisites are some basic Fourier transform theory, Calderón-Zygmund theory of euclidean singular integrals and its vector-valued generalisation (to Hilbert spaces; we won't need Banach spaces). 0. Introduction Harmonic analysis makes fundamental use of divide-et-impera approaches. A particularly fruitful one is the decomposition of a function in terms of the frequencies that compose it, which is prominently incarnated in the theory of the Fourier transform and Fourier series. In many applications, however, it is not necessary or even useful to resolve the function ${f}$ at the level of single frequencies: it suffices instead to consider how wildly different frequency components behave. One example of this is the (formal) decomposition of functions of ${\mathbb{R}}$ given by $\displaystyle f = \sum_{j \in \mathbb{Z}} \Delta_j f,$ where ${\Delta_j f}$ denotes the operator $\displaystyle \Delta_j f (x) := \int_{\{\xi \in \mathbb{R} : 2^j \leq |\xi| < 2^{j+1}\}} \widehat{f}(\xi) e^{2\pi i \xi \cdot x} d\xi,$ commonly referred to as a (dyadic) frequency projection. Thus ${\Delta_j f}$ represents the portion of ${f}$ with frequencies of magnitude ${\sim 2^j}$. The Fourier inversion formula can be used to justify the above decomposition if, for example, ${f \in L^2(\mathbb{R})}$. 
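For periodic, discretely sampled signals the projections $\Delta_j$ can be simulated with the FFT, and the decomposition $f = \sum_j \Delta_j f$ checked directly. A sketch (assuming numpy; the mean is removed first so that the $\xi = 0$ mode plays no role):

```python
import numpy as np

def dyadic_projection(f, j):
    """Delta_j f for a periodic signal sampled at integer frequencies:
    keep the Fourier modes with 2^j <= |xi| < 2^(j+1)."""
    n = f.size
    xi = np.fft.fftfreq(n, d=1.0 / n)          # integer frequencies
    mask = (2 ** j <= np.abs(xi)) & (np.abs(xi) < 2 ** (j + 1))
    return np.fft.ifft(np.fft.fft(f) * mask).real

rng = np.random.default_rng(0)
f = rng.standard_normal(256)
f -= f.mean()                                   # drop the xi = 0 mode

pieces = [dyadic_projection(f, j) for j in range(8)]   # bands [1,2), ..., [128,256)
print(np.allclose(sum(pieces), f))              # True: f = sum_j Delta_j f
square_fn = np.sqrt(sum(p ** 2 for p in pieces))  # the square function
```

Since the eight bands tile the nonzero frequencies of a 256-point signal, the projections sum back to $f$ exactly (up to rounding), which is the discrete face of the Fourier inversion argument above.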
Heuristically, since any two ${\Delta_j f, \Delta_{k} f}$ oscillate at significantly different frequencies when ${|j-k|}$ is large, we would expect that for most ${x}$'s the different contributions to the sum cancel out more or less randomly; a probabilistic argument typical of random walks (see Exercise 1) leads to the conjecture that ${|f|}$ should behave "most of the time" like ${\Big(\sum_{j \in \mathbb{Z}} |\Delta_j f|^2 \Big)^{1/2}}$ (the last expression is an example of a square function). While this is not true in a pointwise sense, we will see in these notes that the two are indeed interchangeable from the point of view of ${L^p}$-norms: more precisely, we will show that for any ${1 < p < \infty}$ it holds that $\displaystyle \boxed{ \|f\|_{L^p (\mathbb{R})} \sim_p \Big\|\Big(\sum_{j \in \mathbb{Z}} |\Delta_j f|^2 \Big)^{1/2}\Big\|_{L^p (\mathbb{R})}. }\ \ \ \ \ (\dagger)$ This is a result historically due to Littlewood and Paley, which explains the name given to the related theory. The ${p=2}$ case is immediate thanks to Plancherel's theorem, to which the statement is essentially equivalent. Therefore one could interpret the above as a substitute for Plancherel's theorem in generic ${L^p}$ spaces when ${p\neq 2}$. In developing a framework that allows us to prove ($\dagger$) we will encounter some variants of the square function above, including ones with smoother frequency projections that are useful in a variety of contexts. We will moreover show some applications of the above fact and its variants. One of these applications will be a proof of the boundedness of the spherical maximal function ${\mathscr{M}_{\mathbb{S}^{d-1}}}$ (almost verbatim the one on Tao's blog). Notation: We will use ${A \lesssim B}$ to denote the estimate ${A \leq C B}$ where ${C>0}$ is some absolute constant, and ${A\sim B}$ to denote the fact that ${A \lesssim B \lesssim A}$. 
If the constant ${C}$ depends on a list of parameters ${L}$ we will write ${A \lesssim_L B}$. Christ’s result on near-equality in the Riesz-Sobolev inequality It’s finally time to address one of Christ’s papers I talked about in the previous two blogposts. As mentioned there, I’ve chosen to read the one about near-equality in the Riesz-Sobolev inequality because it seems the more approachable, while still containing one very interesting idea: exploiting the additive structure lurking behind the inequality via Freiman’s theorem. 1. Elaborate an attack strategy Everything is in dimension ${d=1}$, and some details of the proof are specific to this dimension and don’t extend to higher dimensions. I’ll stick to Christ’s notation. Recall that the Riesz-Sobolev inequality is $\displaystyle \boxed{\left\langle \chi_{A} \ast \chi_{B}, \chi_{C}\right\rangle \leq \left\langle \chi_{A^\ast} \ast \chi_{B^\ast}, \chi_{C^\ast}\right\rangle} \ \ \ \ \ (1)$ and its extremizers – which exist under the hypothesis that the sizes are all comparable – are intervals, i.e. the intervals are the only sets that realize equality in (1). See the previous post for further details. The aim of paper [ChRS] is to prove that whenever ${\left\langle \chi_{A} \ast \chi_{B}, \chi_{C}\right\rangle}$ is suitably close to ${\left\langle \chi_{A^\ast} \ast \chi_{B^\ast}, \chi_{C^\ast}\right\rangle}$ (i.e. we nearly have equality) then the sets ${A,B,C}$ are nearly intervals themselves. Freiman’s theorem and compact subsets of the real line with additive structure In the following, I shall use ${|A|}$ to denote both the Lebesgue measure of ${A}$, when ${A}$ is a subset of ${\mathbb{R}}$, and the cardinality of ${A}$, when it is a finite set. This shouldn’t cause any confusion, and helps highlight the parallel with the continuous case. 
For the sake of completeness, we remind the reader that the Minkowski sum of two sets ${A,B}$ is defined as $\displaystyle A+B:=\{a+b \,:\, a\in A, b\in B\}.$ I’ve been shamefully sketchy in the previous post about Christ’s work on near-extremizers, and in particular I haven’t properly addressed one of the most important ideas in his work: exploiting the hidden additive structure of the inequalities. I plan to do that in this post and a following one, in which I’ll sketch his proof of the sharpened Riesz-Sobolev inequality. In that paper, one is interested in proving that triplets of sets ${A,B,C \subset \mathbb{R}^d}$ that nearly realize equality in the Riesz-Sobolev inequality $\displaystyle \left\langle \chi_{A} \ast \chi_{B}, \chi_{C}\right\rangle \leq \left\langle \chi_{A^\ast} \ast \chi_{B^\ast}, \chi_{C^\ast}\right\rangle$ must be close to the extremizers of the inequality, which are ellipsoids (check this previous post for details and notation). In the case ${d=1}$, ellipsoids are just intervals, and one wants to prove there exist intervals ${I,J,K}$ s.t. ${A \Delta I, B\Delta J, C\Delta K}$ are very small. Christ devised a tool that can be used to prove that a set on the line must nearly coincide with an interval. It’s the following Proposition 1 (Christ, [ChRS2]; continuum Freiman’s theorem) Let ${A\subset \mathbb{R}}$ be a measurable set with finite measure ${>0}$. If $\displaystyle |A+A|< 3|A|,$ then there exists an interval ${I}$ s.t. ${A\subset I}$ and $\displaystyle |I| \leq |A+A|-|A|.$ Thus if one can exploit the near equality to spot some additive structure, one has a chance to prove the sets must nearly coincide with intervals. 
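The doubling hypothesis ${|A+A| < 3|A|}$ in Proposition 1 is easy to experiment with in the discrete setting (a quick sketch of mine, not from the paper; sets of integers stand in for subsets of the line):

```python
# Discrete experiment with the doubling condition |A+A| < 3|A|.

def sumset(A, B):
    """Minkowski sum {a+b : a in A, b in B}."""
    return {a + b for a in A for b in B}

# An arithmetic progression has the smallest possible sumset:
# |A+A| = 2|A| - 1, comfortably below the 3|A| threshold.
A = set(range(0, 30, 3))          # 10-term progression of step 3
assert len(sumset(A, A)) == 2 * len(A) - 1

# Adding one far-away point inflates the sumset past the threshold,
# and indeed no short interval/progression contains the new set.
B = set(range(9)) | {1000}        # 9-term progression plus an outlier
assert len(sumset(B, B)) >= 3 * len(B) - 3
```

The second example is exactly the kind of set the hypothesis is designed to exclude: its sumset is large because the set is far from being an interval.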
It turns out that there actually is additive structure concealed in the Riesz-Sobolev inequality: consider the superlevel sets $\displaystyle S_{A,B}(t):=\{x \in \mathbb{R} \,:\, \chi_A \ast \chi_B (x) > t\};$ then one can prove that $\displaystyle S_{A,B} (t) - S_{A,B} (t') \subset S_{A,-A}(t+t' - |B|).$ If one can control the measure of the set on the right by ${|S_{A,B} (t)|}$ for some specific value of ${t=t'}$, then Proposition 1 can be applied, and ${S_{A,B}(t)}$ will nearly coincide with an interval. Then one has to prove that this fact extends to ${A,B,C}$, but that’s what the proof in [ChRS] is about, and I will address it in the following post, as said. Anyway, the result in Prop. 1, despite being stated in a continuum setting, is purely combinatorial. It follows – by a limiting argument – from a big result in additive combinatorics: Freiman’s theorem. The aim of this post is to show how Prop. 1 follows from Freiman’s theorem, and to prove Freiman’s theorem with additive-combinatorial techniques. It isn’t necessary at all in order to appreciate the results in [ChRS], but I thought it was nice anyway. I haven’t stated the theorem yet though, so here it is: Theorem 2 (Freiman’s ${3k-3}$ theorem) Let ${A\subset \mathbb{Z}}$ be finite and such that $\displaystyle |A+A| < 3|A|-3.$ Then there exists an arithmetic progression ${P}$ s.t. ${A\subseteq P}$, of length ${|P|\leq |A+A|-|A|+1}$. The proof isn’t extremely hard, but neither is it trivial. It relies on a few lemmas, and is fully contained in Section 2. Section 1 contains instead the limiting procedure mentioned above that allows one to deduce Proposition 1 from Freiman’s theorem. Remark 1 Notice that Proposition 1 is essentially a result for the near-extremizers of the Brunn-Minkowski inequality in ${\mathbb{R}^1}$, which states that ${|A+A|\geq |A|+|A|}$. Indeed the extremizers for B-M are convex sets, which in dimension 1 means the intervals. 
Thus Prop. 1 is saying that if ${|A+A|}$ isn’t much larger than ${2|A|}$, then ${A}$ is close to being an extremizer, i.e. an interval. One can actually prove that for two sets ${A,B}$, if one has $\displaystyle |A+B| \leq |A|+|B|+\min(|A|,|B|)$ then ${\mathrm{diam}(A) \leq |A+B|-|B|}$. A proof can be found in [ChRS]. It is in this sense that the result in [ChBM] for Brunn-Minkowski was used to prove the result in [ChRS] for Riesz-Sobolev, which was then used for Young’s and thus for Hausdorff-Young, as mentioned in the previous post. Fine structure of some classical affine-invariant inequalities and near-extremizers (account of a talk by Michael Christ) I’m currently in Bonn, as mentioned in the previous post, participating in the Trimester Program organized by the Hausdorff Institute of Mathematics – although my time here is almost over. It has been a very pleasant experience: Bonn is lovely, the studio flat they got me is incredibly nice, Germany won the World Cup (nice game btw) and the talks were interesting. The second week has been pretty busy, since there were all the main talks and some more unexpected talks in number theory which I attended. The week before that had been more relaxed, but I followed a couple of talks then as well. Here I want to report on Christ’s talk on his work in the last few years, because I found it very interesting and because I had the opportunity to follow a second talk, which was more specific to the Hausdorff-Young inequality and helped me clarify some details I was confused about. If you get a chance, go to his talks: they’re really good. What follows is an account of Christ’s talks – there are probably countless out there, but here’s another one. This is by no means original work; it’s very close to the talks themselves and I’m doing it only as a way to understand better. I’ll stick to Christ’s notation too. 
Also, I’m afraid the bibliography won’t be very complete, but I have included his papers; you can make your way to the other ones from there. 1. Four classical inequalities and their extremizers Prof. Christ introduced four famous, apparently unrelated inequalities. These are • the Hausdorff-Young inequality: for all functions ${f \in L^p (\mathbb{R}^d)}$, with ${1\leq p \leq 2}$, $\displaystyle \boxed{\|\widehat{f}\|_{L^{p'}}\leq \|f\|_{L^p};} \ \ \ \ \ \ \ \ \ \ \text{(H-Y)}$ • the Young inequality for convolution: if ${1+\frac{1}{q_3}=\frac{1}{q_1}+\frac{1}{q_2}}$ then $\displaystyle \|f \ast g\|_{L^{q_3}} \leq \|f\|_{L^{q_1}}\|g\|_{L^{q_2}};$ for convenience, he put it in trilinear form $\displaystyle \boxed{ |\left\langle f\ast g, h \right\rangle|\leq \|f\|_{L^{p_1}}\|g\|_{L^{p_2}}\|h\|_{L^{p_3}}; } \ \ \ \ \ \ \ \ \ \ \text{(Y)}$ notice the exponents satisfy ${\frac{1}{p_1}+\frac{1}{p_2}+\frac{1}{p_3}=2}$ (indeed ${q_1=p_1}$, and similarly for index 2, but ${p_3 = q'_3}$); • the Brunn-Minkowski inequality: for any two measurable sets ${A,B \subset \mathbb{R}^d}$ of finite measure it holds that $\displaystyle \boxed{ |A+B|^{1/d} \geq |A|^{1/d} + |B|^{1/d}; } \ \ \ \ \ \ \ \ \ \ \text{(B-M)}$ • the Riesz-Sobolev inequality: this is a rearrangement inequality, of the form $\displaystyle \boxed{ \left\langle \chi_A \ast \chi_B, \chi_C \right\rangle \leq\left\langle \chi_{A^\ast} \ast \chi_{B^\ast}, \chi_{C^\ast} \right\rangle,} \ \ \ \ \ \ \ \ \ \ \text{(R-S)}$ where ${A,B,C}$ are measurable sets and, given a set ${E}$, the notation ${E^\ast}$ stands for the symmetrized set: the ball ${B(0, c_d |E|^{1/d})}$, where ${c_d}$ is a constant s.t. ${|E|=|E^\ast|}$; it’s a ball with the same volume as ${E}$. These inequalities share a large group of symmetries; indeed, they are all invariant w.r.t. the group of invertible affine transformations (which includes dilations and translations) – an uncommon feature. 
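An inequality like (Y) also holds on the integers with counting measure, which makes for an easy numerical sanity check (a sketch of mine, not from the talk; sequences replace functions and ${\ell^p}$ norms replace ${L^p}$ norms, and the random test data is arbitrary):

```python
import numpy as np

# Sanity check of Young's inequality on Z (discrete analogue of (Y)):
# ||a * b||_r <= ||a||_p ||b||_q whenever 1 + 1/r = 1/p + 1/q.

def lp_norm(a, p):
    return float(np.sum(np.abs(a) ** p) ** (1.0 / p))

rng = np.random.default_rng(0)
a = rng.standard_normal(50)
b = rng.standard_normal(80)

p = q = 1.5
r = 1.0 / (1.0 / p + 1.0 / q - 1.0)   # gives r close to 3 up to rounding
assert lp_norm(np.convolve(a, b), r) <= lp_norm(a, p) * lp_norm(b, q)
```

Of course a numerical check says nothing about extremizers; on ${\mathbb{R}^d}$ the extremizers of (Y) are Gaussians, which have no discrete counterpart here.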
Moreover, for all of them the extremizers exist and have been characterized in the past. A natural question then arises: Is it true that if ${f}$ (or ${E}$, or ${\chi_E}$ where appropriate) is close to realizing the equality, then ${f}$ must also be close (in an appropriate sense) to an extremizer of the inequality? Another way to put it is to think of these questions as concerning the stability of the extremizers, and that’s why they are referred to as the fine structure of the inequalities. If proving the inequality is the first level of understanding it, answering the above question is the second level. As an example, answering the above question for (H-Y) led to a sharpened inequality. Christ’s work was motivated by the fact that nobody seemed to have addressed the question before in the literature, despite it being a very natural one to ask.
http://cstheory.stackexchange.com/questions/3826/np-hardness-of-a-special-case-of-orthogonal-packing-problem/3827
# NP-Hardness of a special case of orthogonal packing problem

Let $V$ be a set of $D$-dimensional rectangular shapes. For $d \in \{1,...,D\}$ and $v \in V$, $w_d(v) \in \mathbb{Q}^{+}$ describes the length of $v$ in dimension $d$. The same notation is used for the container $C$. The $D$-dimensional orthogonal packing problem (OPP-$D$) is to decide whether $V$ fits into the container $C$ without overlapping. Formally speaking, the problem is to find out whether for each $d \in \{1,...,D\}$ there exists a function $f_d:V\rightarrow \mathbb{Q}^{+}$ such that $\forall d \in \{1,...,D\}$ and $\forall v \in V$, $f_d(v)+w_d(v) \leq w_d(C)$, and such that for all $v_1,v_2 \in V$ with $v_1 \neq v_2$ there is some $d \in \{1,...,D\}$ with $[f_d(v_1),f_d(v_1)+w_d(v_1)) \cap [f_d(v_2),f_d(v_2)+w_d(v_2)) = \emptyset$. The problem is NP-complete (see Fekete SP, Schepers J. "On higher-dimensional packing I: Modeling". Technical Report 97–288, Universität zu Köln, 1997), even for $D=2$. I am wondering whether the orthogonal packing problem for a bounded number of types (i.e. sizes in each dimension) of items is still NP-complete or not. So far I have found a result in a paper about the NP-completeness of packing squares into a square (see Joseph Y-T. Leung, Tommy W. Tam, and C. S. Wong, "Packing squares into a square", Journal of Parallel and Distributed Computing, Volume 10, Issue 3, Nov. 1990), which is already a restriction, but I still don't know what happens when the number of types of items is bounded. - can you state the original problem? –  Suresh Venkat Dec 16 '10 at 15:22 What is the orthogonal packing problem? –  Tsuyoshi Ito Dec 16 '10 at 15:23 (1) I am not an expert on the subject, but isn’t that problem description too sketchy to analyze its complexity? (2) Please try to use other people’s comments to improve your question by editing it, rather than adding more comments. Most people do not want to follow the discussions in comments just to understand the question. 
–  Tsuyoshi Ito Dec 16 '10 at 15:46 Maybe try to define rigorously what the problem is by editing your question (click the edit button above), and add some references you've found. That will help the community understand what you already know, and what you want to know. Help us to help you! –  Hsien-Chih Chang 張顯之 Dec 16 '10 at 15:48 (My comment and Hsien-Chih’s comment referred to the asker’s earlier comment sketching what the orthogonal packing problem is, which has been deleted since.) –  Tsuyoshi Ito Dec 16 '10 at 19:27 I think that the paper by Klaus Jansen and Roberto Solis-Oba "An OPT + 1 Algorithm for the Cutting Stock Problem with Constant Number of Object Lengths" has a partial answer to your question. They consider a special case of your problem, known as the Cutting Stock problem, in which the number of different object types is constant; it is defined as follows: In the cutting stock problem we are given a set $T = \{T_1,T_2,\dots,T_d\}$ of object types, where objects of type $T_i$ have positive integer length $p_i$. Given an infinite set of bins, each of integer capacity $\beta$, the problem is to pack a set $\mathcal{O}$ of $n$ objects into the minimum possible number of bins in such a way that the capacity of the bins is not exceeded; in set $\mathcal{O}$ there are $n_i$ objects of type $T_i$, for all $i =1,\dots,d$. The authors note that it is not known whether the cutting stock problem can be solved in polynomial time for every fixed value of $d$, and they propose a polynomial-time $OPT+1$ approximation algorithm for fixed $d$. Since this special case is not even known to be in $P$, this at least suggests that your problem is hard. Addendum: it is known that the case with two object types ($d=2$) is polynomially solvable, but for $d=3$ only an $OPT+1$ approximation is known. - Thank you for your answer. So it was not proved to be in $P$, but not NP-hard either, right? 
Anyway, as you said, it gives me a partial answer and makes me think that this question for OPP-2 was most probably not studied. –  Petru Dec 17 '10 at 8:44 I think you're probably right that your problem was not studied. As you said, "It was not proved to be in P, but neither NP-hard", and I understand it this way too. –  Oleksandr Bondarenko Dec 17 '10 at 12:56 Maybe this problem can be added to the list of problems not known to be in P or to be NP-complete. –  Hsien-Chih Chang 張顯之 Dec 17 '10 at 16:23
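As a side note to the definition discussed above: given candidate offsets $f_d(v)$, verifying that they form a valid packing is easy — each box must fit inside the container, and two boxes do not overlap exactly when they are separated along at least one axis. A small sketch (the function name and dict-based encoding are illustrative, not from any of the cited papers):

```python
# Feasibility check for a candidate orthogonal packing (a sketch):
# every item must fit in the container, and every pair of items must
# be separated along at least one dimension.

def fits(container, items, placement):
    D = len(container)
    # each item lies inside the container in every dimension
    for v, w in items.items():
        for d in range(D):
            if placement[v][d] < 0 or placement[v][d] + w[d] > container[d]:
                return False
    # pairwise non-overlap: separation along some axis
    names = list(items)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            v1, v2 = names[i], names[j]
            if not any(
                placement[v1][d] + items[v1][d] <= placement[v2][d]
                or placement[v2][d] + items[v2][d] <= placement[v1][d]
                for d in range(D)
            ):
                return False
    return True

container = (2.0, 1.0)
items = {"a": (1.0, 1.0), "b": (1.0, 1.0)}
assert fits(container, items, {"a": (0.0, 0.0), "b": (1.0, 0.0)})      # side by side
assert not fits(container, items, {"a": (0.0, 0.0), "b": (0.5, 0.0)})  # overlap
```

The hardness, of course, lies in deciding whether any such placement exists, not in checking one — which is exactly why OPP-$D$ sits in NP.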
https://labs.tib.eu/arxiv/?author=Y.%20Panebrattsev
• ### Azimuthal dependence of pion source radii in Pb+Au collisions at 158 A GeV (0805.2484) May 16, 2008 nucl-ex We present results of a two-pion correlation analysis performed with the Pb+Au collision data collected by the upgraded CERES experiment in the fall of 2000. The analysis was done in bins of the reaction centrality and the pion azimuthal emission angle with respect to the reaction plane. The pion source, deduced from the data, is slightly elongated in the direction perpendicular to the reaction plane, similar to what was observed at the AGS and at RHIC.
https://sciencenotes.org/what-is-a-solute-solute-definition-and-examples/
# What Is a Solute? Solute Definition and Examples

In chemistry, a solute is the substance dissolved in a solvent, or the part of a chemical solution present in the smaller amount. Mixing a solute and a solvent results in the solute dissolving, a process also known as solvation. Concentration describes the amount of solute in a solvent or solution. For example, the concentration 0.1 M HCl describes a solution consisting of 0.1 moles of hydrochloric acid per liter of solution. The term “solute” comes from the Latin word solvere, which means “to loosen.” The words “solvent” and “dissolve” also derive from the same word. While solutes are most often discussed in liquid solutions, they also occur in gases and solids.

### Solute Properties

A solute displays characteristic properties in a solution:

• The distribution of a dissolved solute is homogeneous. That is, the number of solute particles per volume is the same, no matter where the solution is sampled.
• Solute particles are not visible to the eye.
• The solute in a solution does not scatter light.
• Solute particles do not settle out of solution and cannot be separated from a solution by filtration.
• In a solution, the solute is in the same phase as the solvent.

### How to Tell Which Is Solute and Which Is Solvent

The solute is the part of a solution present in a lower amount than the solvent. If you know the composition of the solution, it’s easy to identify the solvent, since it accounts for the largest fraction. The remainder of the solution consists of the solutes; there may be more than one solute in a solution. In a chemical reaction, the solute is present in a smaller mole fraction and dissolves in the solvent. Any time you see the symbol (aq) following a chemical species, you know it is a solute in aqueous solution (water).

### Solute Examples

Here are examples of solutes: salt in seawater, sugar in sweetened tea, carbon dioxide in carbonated water, and oxygen in air (where the more abundant nitrogen acts as the solvent).

### Predicting Whether a Solute Will Dissolve

Solubility is a measure of how much of a solute will dissolve in a solvent. 
Generally, polar solvents dissolve polar solutes, while nonpolar solvents dissolve nonpolar solutes. For example, salt (polar) dissolves in water (polar), but not in oil (nonpolar). Solubility depends on several factors, including temperature, pressure, and the presence of other substances. For example, more salt will dissolve in boiling water than in ice water.
http://mathhelpforum.com/pre-calculus/54990-im-struggling-print.html
# I'm struggling..

• Oct 21st 2008, 05:49 PM Kayria
I'm struggling..
The height s of a ball in feet thrown with an initial velocity of 80 feet per second from an initial height of 6 feet is given as a function of the time t in seconds by s(t)=16t^2+80t+6
(a) Graph
(b) Determine the time at which the height is maximum
(c) What is the maximum height?
Would I put this equation into my TI-86 and interpret the graph? Or am I totally wrong?
• Oct 21st 2008, 05:52 PM skeeter
Quote: Originally Posted by Kayria
The height s of a ball in feet thrown with an initial velocity of 80 feet per second from an initial height of 6 feet is given as a function of the time t in seconds by s(t)=16t^2+80t+6
(a) Graph
(b) Determine the time at which the height is maximum
(c) What is the maximum height?
Would I put this equation into my TI-86 and interpret the graph? Or am I totally wrong?
that would be a good starting point.
• Oct 21st 2008, 05:56 PM Kayria
Alright, so I did that. And my graph is a messed-up line at (5,0). Then it goes up slightly to the left and down slightly to the right. Since it only intersects the graph at that point, would that be the max? My teacher gave us this assignment without teaching it to us..
• Oct 21st 2008, 06:08 PM BCHurricane89
I'm assuming this is a calculus class, correct? If so, no real need to use a calculator, except to check your work.
part B: take the first derivative of your function, set it equal to 0 and solve for t
part C: put the answer you got in part B into your original equation to find the max height
• Oct 21st 2008, 06:14 PM skeeter
btw ... your position function should be $s(t)=-16t^2+80t+6$. The graph should be an inverted parabola.
• Oct 21st 2008, 06:15 PM Dubulus
what math class is this for? you can find this solution various ways. as for the graphing calc, it is a very useful tool. graphing it on your calculator and copying it onto your homework is one way to complete A. this graph will also show you when the ball is at its max height. 
just be careful in case you can't use calculators on your test!
• Oct 21st 2008, 06:45 PM Kayria
Quote: Originally Posted by Dubulus
what math class is this for? you can find this solution various ways. as for the graphing calc, it is a very useful tool. graphing it on your calculator and copying it onto your homework is one way to complete A. this graph will also show you when the ball is at its max height. just be careful in case you can't use calculators on your test!
It's for Pre-Calc. And we are urged to and can use our calcs on our tests. In the back of the book the answer is 2.5 seconds and 106 feet.. I'm just not getting the parabola. Any reasons you can think why?
• Oct 21st 2008, 06:51 PM Dubulus
I plugged it into my TI-84 and I got the correct result. What are you entering in?
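For reference, the numbers quoted from the back of the book can be checked without a calculator: the vertex of the corrected position function $s(t)=-16t^2+80t+6$ is at $t=-b/(2a)$. A quick sketch:

```python
# Vertex of s(t) = -16 t^2 + 80 t + 6: the maximum of a downward
# parabola occurs at t = -b / (2a).
a, b, c = -16.0, 80.0, 6.0

t_max = -b / (2 * a)
s_max = a * t_max ** 2 + b * t_max + c

assert t_max == 2.5      # seconds
assert s_max == 106.0    # feet, matching the back of the book
```

This also explains why the graph must open downward: the leading coefficient is negative, which is why entering +16t^2 does not give a parabola with a maximum.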
https://brilliant.org/discussions/thread/classical-mechanics-problem-3/?sort=new
# Classical mechanics problem

A ball of very small size is dropped vertically onto a frictionless inclined plane which makes an angle theta with the horizontal. Initially the distance between the plane and the ball is d and the speed of the ball is zero. The trajectory of the ball consists of parabolic arcs. What will be the locus of the foci of all those parabolic arcs? Assume that the collisions are elastic and that there is no air resistance.

Note by Azimuddin Sheikh 7 months, 3 weeks ago

Here is a plot of a ball dropped onto a 30 degree ramp from a height of 1 meter. The key to the simulation is that upon every bounce, the component of the velocity normal to the ramp is reversed, and the component of the velocity tangential to the ramp is preserved. The second graph is of a ball dropped onto a 15 degree ramp from a height of 5 meters - 7 months, 2 weeks ago

Sir, is it possible to join the foci of all these parabolas in the figure? Also, sir, can you make a horizontal straight line passing through the initial position? From this property (that the directrix is the same horizontal line for all the parabolas), isn't it possible to easily get the locus in a mathematical way, sir? Or is it still complicated? @Steven Chase sir, if yes then I will make another problem from this one. My friends told me that the foci have a nice locus. They may be wrong though. - 7 months, 2 weeks ago

It may be possible, but my way of solving isn't useful for deriving an analytical expression. Anyway, I'm moving on from this one, and I have posted the ramp problem in the CM section. - 7 months, 2 weeks ago

Yeah OK sir, I will post a new problem then. - 7 months, 2 weeks ago

I'll do the bead problem next - 7 months, 2 weeks ago

Yeah thx sir. Was waiting for that problem to be solved. - 7 months, 2 weeks ago

@Aaghaz Mahajan can you help? - 7 months, 3 weeks ago

okay...........lemme try - 7 months, 3 weeks ago

Thx for replying, pls share your ideas when you have tried it, @Aaghaz Mahajan sir. BTW @Steven Chase sir, can you also share your ideas? - 7 months, 3 weeks ago

@Aaghaz Mahajan sir, did you get the locus? - 7 months, 3 weeks ago

Hey don't call me sir.....I am only 16 yrs old....XD....!! 
Btw, I tried your question......it is pretty weird in my opinion......Are you sure the answer has a nice form??? Because, well, it seems pretty complicated to me.......I tried but am not able to find a solution..........will surely post if anything else comes up..... - 7 months, 3 weeks ago

Yeah it will be complicated, bro. It should have a nice locus, I think. Can we prove that the directrix of all the parabolas is the same horizontal straight line, passing through the initial position of the ball? Also, is it true that all the parabolic arcs touch the line which is parallel to the inclined plane and passes through the initial position of the ball? @Steven Chase sir, @Aaghaz Mahajan bro? - 7 months, 3 weeks ago

It is easy to derive numerical results and plot the trajectory. I can't guarantee any kind of closed form solution though. That's why I haven't done this one yet. - 7 months, 3 weeks ago

Yeah I agree, Sir.......but numerical results are always easy (comparatively!) and they don't really give us the true essence of solving something......So, that is why I was trying to do this manually, but in vain....:( - 7 months, 2 weeks ago

So how about we turn this into a question that can actually be solved? - 7 months, 2 weeks ago

@Aaghaz Mahajan bro, @Steven Chase sir, isn't it interesting (that the directrix of all the parabolas is the same horizontal straight line, passing through the initial position of the ball)? The problem was made to get interesting results, wasn't it? I may be wrong about the calculation part, which can be hard. Can anyone just show by animation what the locus seems like? If it's not nice I will change this problem, for sure. - 7 months, 2 weeks ago

I can make a plot of the trajectory - 7 months, 2 weeks ago

Yeah pls sir, show it - 7 months, 2 weeks ago

Ok, it'll be some time in the next day. 
It's almost bedtime here - 7 months, 2 weeks ago Sir can't we be able to find closed form by looking at the focus of some of the parabolic arcs ? Is the locus not coming out to be a nice curve by plotting the trajectory and joining all focuses? - 7 months, 2 weeks ago
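For anyone who wants to reproduce the plots, here is a minimal sketch of the simulation described above (at each bounce, reverse the velocity component normal to the ramp and keep the tangential component). The time step and bounce count are arbitrary choices, and the sketch also checks the conjecture from this thread that every arc's directrix is the horizontal line through the drop point:

```python
import math

g = 9.81
theta = math.radians(30)                     # ramp angle, as in the first plot
n = (math.sin(theta), math.cos(theta))       # unit normal of the ramp y = -tan(theta) * x

x, y = 0.0, 1.0                              # dropped from 1 m above the ramp's top edge
vx, vy = 0.0, 0.0
dt = 1e-4

directrix_heights = []
while len(directrix_heights) < 5:
    vy -= g * dt                             # symplectic Euler step
    x += vx * dt
    y += vy * dt
    if y < -math.tan(theta) * x:             # crossed the ramp surface
        dot = vx * n[0] + vy * n[1]
        if dot < 0:                          # moving into the ramp: bounce
            vx -= 2 * dot * n[0]             # reverse the normal component,
            vy -= 2 * dot * n[1]             # keep the tangential component
            # directrix of the next parabolic arc sits at height y + |v|^2 / (2g)
            directrix_heights.append(y + (vx**2 + vy**2) / (2 * g))

print(directrix_heights)   # each value is ~1.0, the initial drop height
```

Because the bounce preserves speed and gravity conserves energy in flight, the quantity y + v²/(2g), which is exactly the directrix height of each parabolic arc, stays pinned at the initial drop height.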
http://www.physicsforums.com/showpost.php?p=3806540&postcount=9
View Single Post P: 604 taylor, the angle of incidence is the same as the angle of reflection; that is one of the laws of geometric optics. So can you see which one is the angle of incidence and which one is the angle of reflection at point Q? Edit: http://en.wikipedia.org/wiki/Angle_of_incidence
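As a quick numerical companion to this answer (not from the original post), the law can be stated in vector form: a ray with direction d reflects off a surface with unit normal n as d - 2(d·n)n, and the incident and reflected rays then make equal angles with the normal:

```python
import math

def reflect(d, n):
    """Reflect direction d across a surface with unit normal n: d - 2 (d.n) n."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

n = (0.0, 1.0)                                                   # horizontal mirror
d = (math.cos(math.radians(-40)), math.sin(math.radians(-40)))   # incoming unit ray
r = reflect(d, n)

# both rays make the same angle with the normal (here 50 degrees)
angle_in = math.degrees(math.acos(abs(d[0] * n[0] + d[1] * n[1])))
angle_out = math.degrees(math.acos(abs(r[0] * n[0] + r[1] * n[1])))
print(round(angle_in, 6), round(angle_out, 6))   # 50.0 50.0
```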
http://mathoverflow.net/questions/87137/is-there-a-kan-thurston-theorem-for-fibrations
# Is there a Kan-Thurston theorem for fibrations ? Given a fibration $F \to X \to B$ with all spaces path-connected. Is there a (discrete) group $G$ with normal subgroup $H$ such that $$H^\ast(BG;\mathcal{A}) = H^\ast(X;\mathcal{A})$$ $$H^\ast(BH;\mathcal{A}) = H^\ast(F;\mathcal{A})$$ $$H^\ast(B(G/H);\mathcal{A}) = H^\ast(B;\mathcal{A})$$ for every local coefficient system $\mathcal{A}$ on $X$ ? - To be more precise, I think you want there to be homomorphisms from these groups to the fundamental groups, therefore maps of cohomology groups (from that of BG to that of X, for example), and the question is whether these data can be chosen such that those cohomology maps are always isomorphisms. Also, for the third equation you want a coeff system on B, not on X. –  Tom Goodwillie Jan 31 '12 at 14:43 Yes, that's right. Actually I wonder if it's always possible to replace a Serre spectral seq. of a fibration by the spectral seq. of a group extension (therefore my missing caution for the coefficients on B :) –  Ralph Jan 31 '12 at 15:01
https://mathshistory.st-andrews.ac.uk/HistTopics/Kinematic_planetary_motion/
# Planetary motion tackled kinematically Orbital motion expressed in terms of the auxiliary angle Introduction Nowadays astronomers accept that planetary motion has to be treated dynamically, as a many-body problem, for which there is bound to be no exact solution. This situation is currently approximated in textbooks by the two-body problem, which itself became amenable to analysis only after the work of Newton (1642-1727) [1]. His epoch-making publication was the source of recognisably modern formulations of mass, and force, and acceleration (in fact even the concept of resultant - tangential - velocity seems to have been new). It initiated the science of dynamics as we now know it. Before that breakthrough, planetary motion involved merely a path (a curve), together with a measure of time, represented geometrically: that is, a strictly kinematical treatment. This by definition involves the dimensions of length and time alone, while excluding altogether the dimension of mass. Thus the situation is simplified to one in which a planet (regarded as a point) moves in a plane about a fixed source of motion. (The theory was developed first in terms of circles based on the heliocentric configuration invented by Copernicus (1473-1543) [2].) In this case, the motion of each individual planet occurs in isolation, entirely unaffected by any other member of the system. This will be specifically referred to as the 'one-body problem'. It is this situation we shall now examine, adjusting our terminology accordingly (in particular, replacing any mention of 'velocity' with the more appropriate term 'motion' throughout). The astronomical solution to the one-body problem consists of the two laws: 1. Law I (the Ellipse Law) - the curve or path of a planet is an ellipse whose radius vector is measured from the Sun which is fixed at one focus. 2. 
Law II (the Area Law) - the time taken by a planet to reach a particular position is measured by the area swept out by the radius vector drawn from the fixed Sun. This composite solution represents what is in fact the earliest instance of a planetary orbit: it will be succinctly referred to in what follows as 'the Sun-focused ellipse'. We shall now prove that, subject to its obvious external limitations, this unique solution is of universal applicability as a self-contained piece of mathematics. Moreover the topic is of great historical significance - since the discovery of the two laws stated above actually took place during the period 1600-1630 [3], under the kinematical circumstances described above: see Kepler's Planetary Laws. Therefore it is of interest to assess the validity of the techniques actually employed at the time, applying rigorous modern methods as a standard of comparison. Unexpectedly, this analysis is carried out in terms of the auxiliary angle of the ellipse, rather than the polar angle (at the Sun) that is invariably used nowadays: this came about for historical reasons - because, until the adoption of the heliocentric view, the position of the Sun did not play an explicit part in planetary theory. Moreover, the kinematical solution is qualitatively different from any later, dynamical one in that it possesses exact geometrical representation - while the adoption of the auxiliary angle as variable ensures that the treatment turns out to be the simplest possible. In what follows, we establish the properties of an ellipse, both as a path, in Part I, and as an orbit, in Part II; while in Part III we will derive Law III, the relationship that synthesizes the planetary system. Part I. 
Geometrical properties of the ellipse with focus at the origin (i) Determination of the radius vector The figure shows an ellipse with its major auxiliary circle diameter $CD$, centre $B$, whose given measures will be denoted by $BC = BD = a$, the major semiaxis of the ellipse, and $BF = b$, its minor semiaxis. The focus $A$ is constructed geometrically by drawing $FM$ parallel to $CD$ to cut the circle at $M$, and dropping a perpendicular from $M$ to cut $CD$ at $A$ (thus making $AM = BF$). Then we set $AB = BE = ae$, where $ae$ is derived from the relationship that connects the three determining constants of an ellipse (it may be referred to as 'the focus-fixing property'): $a^{2}e^{2} = a^{2} - b^{2}$.             (1) It is essential to appreciate here that $e$ not only denotes the focal eccentricity (the 'ellipticity') but the polar eccentricity as well, since $A$ is both the focus and the origin or pole of coordinates (which here coincides with the position of the Sun). Otherwise, the present treatment will be ineffective. By considering the (evidently) congruent right-angled triangles $ABF$ and $ABM$, we find $AF = BM = a$. This length $AF$ is subsequently recognized as 'the mean distance', which is of great significance in Part III below. Our derivation will be carried out exclusively in terms of the auxiliary angle $\angle QBC = \beta$. This will be unfamiliar to modern readers since the standard treatment is nowadays invariably based on the polar angle $\angle PAC = \theta$. We start from what was almost certainly the earliest definition of an ellipse (because it can be derived from the plane section of a cone in three easy steps, as set out in [4]). It enables the ellipse to be regarded as a 'compressed circle', by a relation known nowadays as 'the ratio-property of the ordinates': $\large\frac{PH}{QH}\normalsize = \large\frac{b}{a}\normalsize$.             (2) Now from $\Delta QHB$, $QH = a\sin \beta$, so from (2), $PH = \large\frac{b}{a}\normalsize . 
QH = b \sin \beta$ . We will first find the radius vector $AP = r$ in terms of $\beta$ (though it will be convenient to introduce the polar angle $\theta$ temporarily, in a subsidiary capacity). Then two geometrical equivalences can be derived from $\Delta APH$, again shown in the figure: $PH = r \sin \theta = b \sin \beta$             (3) $AH = r \cos \theta = a(\cos \beta + e)$.             (4) Applying Pythagoras' theorem to $\Delta APH$, we derive: $r^{2} = AP^{2} = PH^{2} + AH^{2}$ and thus, $r^{2} = b^{2}\sin^{2} \beta + a^{2}(\cos \beta + e)^{2}$. Using (1), $r^{2} =a^{2}(1 - e^{2})\sin^{2} \beta + a^{2}(\cos^{2} \beta + 2e \cos \beta + e^{2})$ $= a^{2}(\sin^{2} \beta - e^{2}\sin^{2} \beta + \cos^{2} \beta + 2e \cos \beta + e^{2})$ $= a^{2}(1 + 2e \cos \beta + e^{2}\cos^{2}\beta).$ Hence, $r = AP = a(1 + e \cos \beta).$            (5) This is Law I: the equation of the elliptic path with respect to the origin at one focus: see Kepler's Planetary Laws: Section 6. (ii) Proof that the ellipse is 'simpler than any circle (except one)' We consider the equation of a circle with origin at some eccentric point: as an illustration we may take the circle $CQD$, centre $B$, shown in the figure, where $A$ is to be regarded as the origin or pole; just for our present purpose, we set $AB = a e$ to represent the 'polar distance' alone (since the focal distance for a circle is zero). 
Then we use the information from (i) above to calculate the radius vector $AQ$ of the circle: $AQ^{2} = AH^{2} + QH^{2}$ $= a^{2}(\cos \beta + e)^{2} + a^{2}\sin^{2} \beta$ $= a^{2}(1 + 2e \cos \beta + e^{2}).$ So, $AQ = a(1 + 2e \cos \beta + e^{2})^{1/2}$. Hence, $AQ = a(1 + e \cos \beta + \large\frac{1}{2}\normalsize e^{2}\sin^{2} \beta + ...).$ Therefore it is clear that this expression for the radius vector of a circle with its origin at an eccentric point is much less simple than that for the radius vector of the ellipse with the same origin when that point is its focus, as set out in (5) just above. Further, this argument could be generalized by carrying out a similar brief calculation to find the radius vector of any ellipse belonging to the system of conics whose origin is at the Sun (again setting polar distance $AB = ae$) that has $CQD$ as its auxiliary circle and its typical point lying on $QH$ (still defined by auxiliary angle $\beta$). However, because each such ellipse possesses its own individual eccentricity, this would introduce a separate constant (say $\epsilon$) to represent the focal eccentricity of that particular ellipse, and thus produce a still more complicated expression. Since both the focal distance and the polar distance are measured from the centre $B$ of the ellipse, it is only when these two distances coincide ($a\epsilon = ae$), uniquely, that we obtain the simplest possible equation -- as expressed in (5). (And mathematicians will not need convincing that the simplest of all circles, having its origin at the centre $B$, is no more than a special case of that system of conics, with $e = \epsilon = 0.$) (iii) Evaluation of the transradial arc $r d\theta$ From the equivalences for $PH$ set out in (3), we obtain: $\large\frac{\sin\theta}{\sin \beta}\normalsize = \large\frac{b}{r}\normalsize$.            
(6) So, applying the formula for the radius vector from (5), we have: $\sin \theta = \large\frac{b}{a}\normalsize \large\frac{\sin\beta}{1 + e \cos \beta}\normalsize$ . Differentiating with respect to $\beta$, we derive: $\large\frac{d\theta}{d\beta} = \frac{b}{a}.\large\frac{1}{\cos \theta}.\large\frac{\cos \beta (1 + e \cos\beta) + e\sin^{2}\beta}{(1 + e \cos\beta)^{2}}$ and using (4), $\large\frac{d\theta}{d\beta}\normalsize = ab.\large\frac{1}{\cos \theta}.\large\frac{\cos \beta + e (\cos^{2}\beta + \sin^{2}\beta)}{r^{2}}$ . Hence, $\large\frac{d\theta}{d\beta}\normalsize = \large\frac{b}{r}\normalsize$.            (7) This identity acts as the bridging relation (inverse or direct) between the modern treatment by polar angle and the present treatment by auxiliary angle (which will only work effectively in the case of the unique Sun-focused ellipse alone). Moreover, this purely geometrical relationship is unexpectedly of enormous significance in connection with one kinematical component of the orbit, as we shall see in Part II(i) below. Meanwhile we point out that the transradial arc is constant with respect to the auxiliary angle: $r d\theta = b d\beta$.            (8) Part II. Kinematical properties of the ellipse with focus at the origin (i) The transradial component of motion $r\,d\theta/dt$ (This motion is known to some mathematicians as the transverse component of motion). Whatever we call it, this motion is defined to take place round the Sun instantaneously in a circle. The characteristic property of orbital motion in its most general form is generally stated dynamically, but it was in fact first proved as a kinematical relation in Book I, Prop.1 of Newton's work, already cited [1] (at that early stage, the concept of mass had not yet been introduced). This property can be formulated in various ways, all equivalent to the statement that equal areas correspond to equal times. 
The constant of proportionality involved ($\large\frac{1}{2}\normalsize h$ is standard usage) is expressed mathematically by the following relationship, in which $r$ represents the radius vector measured from the source of motion at the Sun, still taken as the origin of coordinates, again with reference to the figure: $r^{2} \large\frac{d\theta}{dt}\normalsize = h$. This is the modern mathematical expression of the kinematical area-time law. We now apply this to the special case of the Sun-focused ellipse, whose total area is $\pi ab$ and periodic time $T$, in order to evaluate its particular constant. For one complete circuit, the area-time law gives: $2\pi \large\frac{ab}{T}\normalsize = h$.            (9) So in this case, $r^{2} \large\frac{d\theta}{dt}\normalsize = 2\pi \large\frac{ab}{T}\normalsize$, and hence, $r \large\frac{d\theta}{dt}\normalsize = 2\pi \large\frac{ab}{T}\normalsize \times \large\frac{1}{r}\normalsize$.            (10) Now from the evaluation of the transradial arc in (8) above, we have: $r \large\frac{d\theta}{dt}\normalsize = b \large\frac{d\beta}{dt}\normalsize$.            (11) Thus for the Sun-focused ellipse alone, we deduce from (10) and (11): $\large\frac{d\beta}{dt}\normalsize = 2\pi \large\frac{a}{T}\normalsize \times \large\frac{1}{r}\normalsize$.            (12) We digress to consider the inverse form: $\large\frac{dt}{d\beta}\normalsize = \large\frac{T}{2\pi}\normalsize \times \large\frac{r}{a}\normalsize = \large\frac{T}{2\pi}\normalsize (1 + e \cos \beta)$    from (5). Hence by integration, $t \propto \beta + e \sin \beta$. This is Law II: the time expressed in angular measure. Then, by introducing the dimensional constant $\large\frac{1}{2}\normalsize ab$, for the Sun-focused ellipse alone, we can easily deduce that time is proportional to area. See Kepler's Planetary Laws: Section 7. 
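As a quick check of Law II (a sketch, not part of the article): numerically integrating $dt/d\beta = \frac{T}{2\pi}(1 + e\cos\beta)$ with the trapezoidal rule reproduces the closed form $t = \frac{T}{2\pi}(\beta + e\sin\beta)$:

```python
import math

e, T = 0.3, 1.0                         # arbitrary test orbit
f = lambda beta: (T / (2 * math.pi)) * (1 + e * math.cos(beta))

# trapezoidal rule from beta = 0 to beta = 2.0
beta_end, n = 2.0, 100_000
h = beta_end / n
t = sum(0.5 * h * (f(i * h) + f((i + 1) * h)) for i in range(n))

t_closed = (T / (2 * math.pi)) * (beta_end + e * math.sin(beta_end))
print(abs(t - t_closed) < 1e-9)   # True
```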
[For a less precise version of equation (10) - simply that the transradial motion is proportional (inverse-linearly) to the distance - see Kepler's Planetary Laws: Section 10.] (ii) The radial component of motion $\large\frac{dr}{dt}\normalsize$ This motion takes place linearly in the direction of the radius vector -- towards or away from the Sun. $r = a(1 + e \cos \beta).$ Hence, $\large\frac{dr}{d\beta}\normalsize = -ae \sin \beta$.            (13) [This is the radial variation of the distance with respect to $\beta$: see Kepler's Planetary Laws: Section 11.] Continuing our modern treatment, we carry out a change of variable, using (12) and (13): $\large\frac{dr}{dt} = \large\frac{dr}{d\beta}.\large\frac{d\beta}{dt}= -\frac{2\pi a}{T}.\frac{ae\sin \beta}{r}$, and thus, $\large\frac{dr}{dt} = -\frac{2\pi a}{T}.\frac{e\sin \beta}{1 + e\cos{\beta}}$.            (14) It can easily be checked that calculating the resultant of these two components (10) and (14) will produce the modern value of the 'velocity' in orbit -- but there is no reason to do so since the present treatment by components is entirely adequate -- and much simpler -- for a kinematical approach. On the other hand, for the removal of doubt, we should confirm that this treatment is compatible with the modern dynamical approach, by determining the acceleration that corresponds to this motion (as has been said, this concept was an anachronism in Kepler's day). There are several ways of carrying this out, which unfortunately involve either sophisticated calculus or fairly heavy algebra. We start from the formula analogous to that found in textbooks of dynamics: Radial acceleration $= r\left(\large\frac{d\theta}{dt}\normalsize\right)^{2} - \large\frac{d^{2} r}{dt^{2}}\normalsize$ directed towards the Sun.            
(15) Then we apply result (11) above to the first term, and, as one possibility for the second term, introduce a formula for change of variable which is found in some calculus textbooks: $\large\frac{d^{2} r}{dt^{2}}= \left[\frac{dt}{d\beta}.\frac{d^{2}r}{d\beta^{2}}- \frac{dr}{d\beta}.\frac{d^{2}t}{d\beta^{2}}\right]\left( \frac{d\beta}{dt}\right)^{3}$.            (16) This can be expressed in terms of $r$ as required by differentiating (12) and (13), and also using (14), and then simplified by applying (5) and (1). Lastly by using (12), we obtain: Radial acceleration = $(2\pi)^{2} \large\frac{a^{3}}{T^{2}}\normalsize \times \large\frac{1}{r^{2}}$ towards the Sun. Now introducing, provisionally, the quantity $\mu_{0}$ to represent $(2\pi)^{2} \large\frac{a^{3}}{T^{2}}$, we express the radial acceleration in the more familiar form: Acceleration = $\large\frac{\mu_{0}}{r^{2}}\normalsize$ towards the Sun. This quantity $\mu_{0}$ is evidently determined by the particular orbit, and thus appears to be a (kinematical) constant associated with the individual planet. It will be interpreted further in Part III below. We conclude that this theory is rigorously exact in kinematical terms for an individual planet, in accordance with present-day standards. Moreover, subject to precise determination of the values of all the constants involved, Kepler's own treatment was entirely satisfactory, up to the level of first order differentiation. Part III. Corollary: the derivation of Law III for a system of planets A geometrical lemma to Part I(i) above will enable us to evaluate $AL = l$, the semilatus rectum, shown in the figure (where $L$ is the point of the ellipse lying on $AM$). By the original construction, $AM = BF = b$. Accordingly, applying the ratio-property of the ordinates, we obtain: $\large\frac{AL}{AM}\normalsize = \large\frac{l}{b}\normalsize = \large\frac{b}{a}\normalsize$. Hence, $b^{2} = al$.            
(17) Now we return to equation (9), which stated the area-time law (in kinematical terms) for one complete circuit: $h = 2\pi \large\frac{ab}{T}\normalsize ,$ and so, $h^{2} = (2\pi)^{2}a^{2}b^{2}/T^{2}$. Using (17), we obtain: $h^{2} = (2\pi)^{2}a^{3}l/T^{2}$. Rearranging, $\large\frac{a^{3}}{T^{2}}\normalsize = \large\frac{1}{(2\pi)^{2}} . \large\frac{h^{2}}{l}$. Since $h$ and $l$ are constants determined by the particular orbit, we will follow Cohen [5] (who presumably chose the notation to commemorate the discoverer of this relationship) to write the result: $\large\frac{a^{3}}{T^{2}}\normalsize = K$.            (18) Hence we have uncovered the existence of a kinematical relationship between the square of the periodic time and the cube of the mean distance for each of the (six) planets independently, each apparently possessing its own individual value of $K$. In the event the value of $K$ was compared empirically for all the planets in pairs, and was found to be constant (within observational limits) for every pair tested; $K$ was then assumed to have a common value for the whole planetary system - and the relationship is known as Kepler's Third Law [6]. (It was proved, in a dynamical context, in Book I, Prop.15 of Newton's work, already cited.) However, it is possible to formulate a rational basis for the above deduction, founded on geometry - and so to produce a theoretical proof of Law III which would have been not so far beyond the conceptual understanding of a pre-Newtonian mathematician [7]. (The proof involves the extension of a method actually used by Kepler in his proof of the area law.) Accordingly, it is evident that the quantity $\mu_{0}$ provisionally defined in Part II(iii) - there associated with an individual planet -- may now be identified as a kinematical constant that will operate to synthesize the planetary system. 
(It is explained in elementary textbooks of modern astronomy that the corresponding value $\mu$ in the dynamical system depends on the relative masses as well as the actual constant of gravitation.) So we will name $\mu_{0}$ 'the coefficient of planetary cohesion', and in correlation, we have: $\mu_{0} = (2\pi)^{2} \large\frac{a^{3}}{T^{2}}\normalsize = (2\pi)^{2}K$. ### References (show) 1. Isaac Newton: The Mathematical Principles of Natural Philosophy, London 1687. 2. Nicolaus Copernicus: On the Revolutions of the Heavenly Spheres, Nürnberg 1543. 3. The laws appeared in Johannes Kepler (1571-1630): New Astronomy, Heidelberg 1609. They were validated in his later work: Epitome of Copernican Astronomy, Book V, Frankfurt 1621. 4. A E L Davis: 'Some plane geometry from a cone ... 'Mathematical Gazette, forthcoming, July 2007. 5. I. Bernard Cohen: The Birth of a New Physics, Norton 1985 (up-dated), p.166. 6. Johannes Kepler: The Harmony of the World, Linz 1619. Stated Book V, Ch.3, Prop.8. 7. A E L Davis: 'Kepler's potential proof of his Third Law' in Miscellanea Kepleriana, ed. Boockman, Di Liscia, Kothmann, Augsburg 2005. Written by A E L Davis, University College, London. Last Update October 2006
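The article's main kinematical results lend themselves to a numerical check (a sketch, independent of the text above): invert Law II for $\beta(t)$ by Newton iteration, verify Law I for the radius vector, and confirm by finite differences that the acceleration has magnitude $\mu_{0}/r^{2}$ and is directed at the Sun:

```python
import math

a, e, T = 1.0, 0.2, 1.0                    # orbit constants (arbitrary test values)
b = a * math.sqrt(1 - e**2)                # from (1)
mu0 = (2 * math.pi)**2 * a**3 / T**2       # the 'coefficient of planetary cohesion'

def beta_of_t(t):
    """Invert Law II, t = (T/2pi)(beta + e sin beta), by Newton iteration."""
    M = 2 * math.pi * t / T
    beta = M
    for _ in range(60):
        beta -= (beta + e * math.sin(beta) - M) / (1 + e * math.cos(beta))
    return beta

def position(t):
    """Planet's coordinates measured from the Sun (focus A); beta = 0 is aphelion."""
    beta = beta_of_t(t)
    return a * (math.cos(beta) + e), b * math.sin(beta)

# Law I: the radius vector equals a(1 + e cos beta) -- equation (5)
t0 = 0.13
beta0 = beta_of_t(t0)
x0, y0 = position(t0)
assert abs(math.hypot(x0, y0) - a * (1 + e * math.cos(beta0))) < 1e-12

# acceleration by central differences: magnitude mu0 / r^2, pointing at the Sun
dt = 1e-5
xm, ym = position(t0 - dt)
xp, yp = position(t0 + dt)
ax, ay = (xp - 2 * x0 + xm) / dt**2, (yp - 2 * y0 + ym) / dt**2
r = math.hypot(x0, y0)
assert abs(math.hypot(ax, ay) - mu0 / r**2) < 1e-3   # inverse-square magnitude
assert abs(ax * y0 - ay * x0) < 1e-3                 # parallel to the radius vector
print("kinematic checks passed")
```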
http://math.stackexchange.com/questions/206364/find-the-eclipse-focal-point?answertab=votes
# Find the ellipse focal point A conic with equation $$a x^2 + b y^2 = c$$ has two focus points, where $a=4$, $b=24$ and $c=65$. One of those focus points has a positive x-coordinate. To two decimal places, what is the value of that positive x-coordinate? I got the answer as 14.83, seems a bit too big, is that right? The above answer is according to this in my textbook - title:"Just convert this to radians?" the question seems to be asking for a coordinate, not an angle... –  Navin Oct 3 '12 at 4:38 @Navin Tks for pointing that out my old question was in my browsers cache, changed the title –  JackyBoi Oct 3 '12 at 4:43 Put the ellipse in standard form. Start from $4x^2+24y^2=65$. Divide through by $65$. We get $\frac{4}{65}x^2+\frac{24}{65}y^2=1$. This is $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$, where $a^2=\frac{65}{4}$ and $b^2=\frac{65}{24}$. Finally, you want $\sqrt{a^2-b^2}$. Your intuition is right: this is substantially smaller than your answer. To two decimal places, I get $3.68$. I found the $x$-coordinate of the focal point with positive $x$-coordinate, as the question asks. The eccentricity is not $a^2/b^2$, it is $\frac{\sqrt{a^2-b^2}}{a}$. If your book/notes do not have the relevant formulas, I am sure you can find them in the wikipedia article on the ellipse, there must be one! –  André Nicolas Oct 3 '12 at 5:00 That is equivalent to what I wrote. But I guess if you follow the book, you will first find $e$, which is $\sqrt{1-\frac{b^2}{a^2}}$ (mine was different but equivalent) and then you will multiply by $a$. You will get $\sqrt{a^2-b^2}$. Anyway, same answer. Eccentricity is about $0.91287$. –  André Nicolas Oct 3 '12 at 5:16
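A short numerical check of the accepted answer (for $\frac{x^2}{A^2}+\frac{y^2}{B^2}=1$ the focus with positive x-coordinate lies at $x=\sqrt{A^2-B^2}$):

```python
import math

a_coef, b_coef, c = 4, 24, 65        # the conic a x^2 + b y^2 = c

A2 = c / a_coef                      # semi-major axis squared, 65/4
B2 = c / b_coef                      # semi-minor axis squared, 65/24

focus_x = math.sqrt(A2 - B2)
eccentricity = focus_x / math.sqrt(A2)

print(round(focus_x, 2), round(eccentricity, 5))   # 3.68 0.91287
```

Both values match the thread: the focus sits at x ≈ 3.68, far below the 14.83 first guessed.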
https://homework.zookal.com/questions-and-answers/a-crate-slides-down-the-ramp-shown-at-the-instant-187017212
# Question: a crate slides down the ramp shown at the instant... ###### Question details A crate slides down the ramp shown. At the instant where the speed and position are pictured, determine the crate's total acceleration if its speed is increasing at 6 ft/s^2.
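Since the figure is not reproduced here, the speed and radius of curvature below are placeholder values (assumptions, not from the problem); the method is the standard tangential/normal decomposition, with total acceleration $\sqrt{a_t^{2} + (v^{2}/\rho)^{2}}$:

```python
import math

a_t = 6.0      # given: speed increasing at 6 ft/s^2 (tangential component)
v = 10.0       # speed at the pictured instant, ft/s (assumed -- read off the figure)
rho = 20.0     # radius of curvature at that point, ft (assumed -- read off the figure)

a_n = v**2 / rho                    # normal (centripetal) component
a_total = math.hypot(a_t, a_n)      # magnitude of the total acceleration

print(round(a_n, 2), round(a_total, 2))   # 5.0 7.81
```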
http://mathoverflow.net/questions/85121/degree-0-vector-bundles
# degree 0 vector bundles If $X$ is a smooth projective curve over a field $k$, $V$ is a subbundle of some finite direct sum of $O_X$, then is it true that $V$ is of degree 0? - No. Counterexample: On $\mathbf P^1$, take a (two-element) basis of $\Gamma(\mathbf P^1, \mathcal O(1))$, so that the corresponding homomorphism $\mathcal O^2\to\mathcal O(1)$ is surjective. Its kernel is a subbundle of degree $-1$. [Really a comment sed hac marginis ... .] a-fortiori's answer disposes of the question completely, but it is possible to go a little further: if $V$ is a degree $0$ sub-bundle of a free bundle on a curve, then $V$ is free (and is a direct summand). Proof: Write $W=V^\vee$, the dual bundle. Say it is of rank $n$. There is then a surjection $\pi:F=\mathcal O_X^{\oplus r}\to W$. Note that $r\ge n$. Choose general sections $s_1,\ldots,s_n$ in $H^0(X,F)$; then $\pi(s_1),\ldots,\pi(s_n)$ will generate a rank $n$ subsheaf $G$ of $W$ [because $\pi(s_1)$ is a nowhere vanishing section of $W$, and then argue inductively on the sheaves $F/(s_1)$ and $W/(\pi(s_1))$]. Then $G$ is generated by $n$ sections and is a rank $n$ subsheaf of $W$. Since $\deg W=0$ it follows that $G=W$ and the homomorphism $\mathcal O_X^{\oplus n}\to W$ generated by the $s_i$ is an isomorphism. Dualizing gives the result. Yes, what you said is right. But it is easier to see it using the Tannakian formalism. If $V$ is of degree 0 and a sub of a free bundle, then it is an essentially finite vector bundle (see Nori's article for Nori's fundamental group). But the category of essentially finite vector bundles is tensor equivalent to representations of some group scheme over $k$. Since a free bundle corresponds to the trivial representation, $V$ as a subobject also corresponds to the trivial rep. The splitting assertion is even clearer. 
– Lei Jan 11 '12 at 16:44 More generally, if $F$ is a finite direct sum of simple objects $S_i$ in some abelian category, any subobject of $F$ is a direct summand and isomorphic to a direct sum of some of the $S_i$. Apply this to the abelian category of semistable degree 0 bundles. – user2035 Jan 29 '12 at 13:42
http://www.maplesoft.com/support/help/Maple/view.aspx?path=Task/ContentAndPrimPartOfPolynomial
Content and Primitive Part of a Polynomial - Maple Programming Help

Content and Primitive Part of a Polynomial

Description: Calculate the content and primitive part of a polynomial.

Enter a polynomial.

> 4*x^2*y - 2*x*y^2 + 6*x*y;
$$4x^2y - 2xy^2 + 6xy \qquad (1)$$

Specify the variable, and then calculate the content of the polynomial.

> content((1), x);
$$2y \qquad (2)$$

Specify the variable, and then calculate the primitive part of the polynomial.

> primpart((1), x);
$$-xy + 2x^2 + 3x \qquad (3)$$
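The same computation can be sketched in Python with sympy (my own translation, not part of the Maple help page): the content of a polynomial with respect to $x$ is the gcd of its coefficients viewed as polynomials in the remaining variables, and the primitive part is the polynomial divided by its content.

```python
from functools import reduce
import sympy as sp

x, y = sp.symbols('x y')
p = 4*x**2*y - 2*x*y**2 + 6*x*y

# Content with respect to x: gcd of the nonzero coefficients of p,
# viewed as polynomials in y.
coeffs = [c for c in sp.Poly(p, x).all_coeffs() if c != 0]   # [4*y, -2*y**2 + 6*y]
content = reduce(sp.gcd, coeffs)

# Primitive part: the polynomial divided by its content.
primpart = sp.expand(sp.cancel(p / content))

print(content)    # 2*y
print(primpart)
```

Dividing out the content $2y$ gives $2x^2 - xy + 3x$, matching the Maple output (3).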
http://mathhelpforum.com/calculus/222891-how-would-i-go-about-problem.html
Let F(x)=f(f(x)) and G(x)=F(x)^2. You also know that f(8)=2, f(2)=3, f'(2)=11, f'(8)=3. Find F'(8) and G'(8). I have no idea of where to even start on this.

Use the chain rule. If $c(x) = a(b(x))$ for some differentiable functions $a,b$, then $c'(x) = a'(b(x))b'(x)$ by the chain rule. Do something similar to find $F'(x)$ and $G'(x)$.

OK, so how do I set up the function to find F'(x)? Is it something along the lines of F(f(x)), so I would use F(f(2))=h(x)?

Originally Posted by rhcprule3
Let F(x)=f(f(x)) and G(x)=F(x)^2. You also know that f(8)=2, f(2)=3, f'(2)=11, f'(8)=3. Find F'(8) and G'(8).

Originally Posted by rhcprule3
OK, so how do I set up the function to find F'(x)? Is it something along the lines of F(f(x)), so I would use F(f(2))=h(x)?

$F'(x)=f'(f(x))f'(x)~\&~G'(x)=2F(x)F'(x)$ Now you are given all the necessary information to complete the substitutions.

OK, but what do I do with the given functions? Like for f(8)=2, do I just plug in f(8) for f(x)? Like 2(f(8)8(f(3)? And I have no clue how to get G'(x). I'd rather be walked through the problem so I can learn step by step how to do this problem...

Originally Posted by rhcprule3
Like 2(f(8)8(f(3)? And I have no clue how to get G'(x). I'd rather be walked through the problem so I can learn step by step how to do this problem...

Well, it is evident to me that your misunderstandings are so profound that the help you need is beyond what we can give. You need to sit down with a live instructor and discuss what you can do to strengthen your understanding of the background material necessary for being successful in this course.

A function is like a machine. You give it input, it does something, and it gives you output. So, the notation $f(x)$ means if you give the function the input $x$, you get the output $f(x)$. So, the value of $f(x)$ when $x=8$ is just $f(8)$. If you have $F'(x) = f'(f(x))f'(x)$ and you are asked to find $F'(8)$, you just put 8 wherever you see an $x$ in $f'(f(x))f'(x)$.

Now, how do you evaluate the result $f'(f(8))f'(8)$? You use the information you are given. You are told f(8)=2, f(2)=3, f'(2)=11, f'(8)=3. When you have an expression like $f(8)=2$, it means that if you give the machine $f(x)$ the input 8, you get back the output 2. So, $f(x)$ is a placeholder for a number: when you put a number into $x$, you get back a number, and it is no longer a placeholder. $f(8)$ is a number. One of the rules of functions is that each time you give it the same input, you get the same output. So, you will never find that $f(8)=2$ the first time you plug in 8 and then get a different number the next time you plug in 8. Once you know $f(8)=2$, you can consider $f(8)$ and $2$ to be the same number. So, suppose you have $f(a) = b$ and you are asked to evaluate $f(f(a))$. That would be $f(f(a)) = f(b)$.
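For what it's worth, the substitutions can be encoded in a few lines of Python (my own sketch; the dictionaries simply record the four given values):

```python
# Encode the given values as lookup tables and apply the chain rule.
f = {8: 2, 2: 3}        # f(8)=2, f(2)=3
fp = {8: 3, 2: 11}      # f'(8)=3, f'(2)=11

def F(x):               # F(x) = f(f(x))
    return f[f[x]]

def Fp(x):              # F'(x) = f'(f(x)) * f'(x)
    return fp[f[x]] * fp[x]

def Gp(x):              # G'(x) = 2 * F(x) * F'(x)
    return 2 * F(x) * Fp(x)

print(F(8))   # f(f(8)) = f(2) = 3
print(Fp(8))  # f'(2) * f'(8) = 11 * 3 = 33
print(Gp(8))  # 2 * 3 * 33 = 198
```

So $F'(8)=f'(f(8))\,f'(8)=f'(2)\cdot f'(8)=11\cdot 3=33$ and $G'(8)=2F(8)F'(8)=2\cdot 3\cdot 33=198$.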
https://pressbooks.bccampus.ca/introductorygeneralphysics1phys1107/chapter/9-5-simple-machines/
Chapter 9 Statics and Torque

# 9.5 Simple Machines

### Summary

• Describe different simple machines.

Simple machines are devices that can be used to multiply or augment a force that we apply – often at the expense of a distance through which we apply the force. The word for “machine” comes from the Greek word meaning “to help make things easier.” Levers, gears, pulleys, wedges, and screws are some examples of machines. Energy is still conserved for these devices because a machine cannot do more work than the energy put into it. However, machines can reduce the input force that is needed to perform the job. The ratio of output to input force magnitudes for any simple machine is called its mechanical advantage (MA).

$\boldsymbol{\textbf{MA}\:=}$$\boldsymbol{\frac{F_{\textbf{o}}}{F_{\textbf{i}}}}$

One of the simplest machines is the lever, which is a rigid bar pivoted at a fixed place called the fulcrum. Torques are involved in levers, since there is rotation about a pivot point. Distances from the physical pivot of the lever are crucial, and we can obtain a useful expression for the MA in terms of these distances. Figure 1 shows a lever type that is used as a nail puller. Crowbars, seesaws, and other such levers are all analogous to this one. $\boldsymbol{\vec{\textbf{F}}_{\textbf{i}}}$ is the input force and $\boldsymbol{\vec{\textbf{F}}_{\textbf{o}}}$ is the output force. There are three vertical forces acting on the nail puller (the system of interest) – these are $\boldsymbol{\vec{\textbf{F}}_{\textbf{i}}},$ $\boldsymbol{\vec{\textbf{F}}_{\textbf{n}}}$ and $\vec{\textbf{N}}.$ $\boldsymbol{\vec{\textbf{F}}_{\textbf{n}}}$ is the reaction force back on the system, equal and opposite to $\boldsymbol{\vec{\textbf{F}}_{\textbf{o}}}.$ (Note that $\boldsymbol{\vec{\textbf{F}}_{\textbf{o}}}$ is not a force on the system.) $\vec{\textbf{N}}$ is the normal force upon the lever, and its torque is zero since it is exerted at the pivot.
The torques due to $\boldsymbol{\vec{\textbf{F}}_{\textbf{i}}}$ and $\boldsymbol{\vec{\textbf{F}}_{\textbf{n}}}$ must be equal to each other if the nail is not moving, to satisfy the second condition for equilibrium (net τ = 0). (In order for the nail to actually move, the torque due to $\boldsymbol{\vec{\textbf{F}}_{\textbf{i}}}$ must be ever-so-slightly greater than torque due to $\boldsymbol{\vec{\textbf{F}}_{\textbf{n}}}$.) Hence, $\boldsymbol{l_{\textbf{i}}F_{\textbf{i}}=l_{\textbf{o}}F_{\textbf{o}}}$ where li and lo are the distances from where the input and output forces are applied to the pivot, as shown in the figure. Rearranging the last equation gives $\boldsymbol{\frac{F_{\textbf{o}}}{F_{\textbf{i}}}}$$\boldsymbol{=}$$\boldsymbol{\frac{l_{\textbf{i}}}{l_{\textbf{o}}}}.$ What interests us most here is that the magnitude of the force exerted by the nail puller, Fo, is much greater than the magnitude of the input force applied to the puller at the other end, Fi. For the nail puller, $\boldsymbol{\textbf{MA}\:=}$$\boldsymbol{\frac{F_{\textbf{o}}}{F_{\textbf{i}}}}$$\boldsymbol{=}$$\boldsymbol{\frac{l_{\textbf{i}}}{l_{\textbf{o}}}}.$ This equation is true for levers in general. For the nail puller, the MA is certainly greater than one. The longer the handle on the nail puller, the greater the force you can exert with it. Two other types of levers that differ slightly from the nail puller are a wheelbarrow and a shovel, shown in Figure 2. All these lever types are similar in that only three forces are involved – the input force, the output force, and the force on the pivot – and thus their MAs are given by $\boldsymbol{\textbf{MA}=\frac{F_{\textbf{o}}}{F_{\textbf{i}}}}$ and $\boldsymbol{\textbf{MA}=\frac{d_1}{d_2}},$ with distances being measured relative to the physical pivot. The wheelbarrow and shovel differ from the nail puller because both the input and output forces are on the same side of the pivot. 
In the case of the wheelbarrow, the output force or load is between the pivot (the wheel’s axle) and the input or applied force. In the case of the shovel, the input force is between the pivot (at the end of the handle) and the load, but the input lever arm is shorter than the output lever arm. In this case, the MA is less than one.

### Example 1: What is the Advantage for the Wheelbarrow?

In the wheelbarrow of Figure 2, the load has a perpendicular lever arm of 7.50 cm, while the hands have a perpendicular lever arm of 1.02 m. (a) What upward force must you exert to support the wheelbarrow and its load if their combined mass is 45.0 kg? (b) What force does the wheelbarrow exert on the ground?

Strategy

Here, we use the concept of mechanical advantage.

Solution

(a) In this case, $\boldsymbol{\frac{F_{\textbf{o}}}{F_{\textbf{i}}}=\frac{l_{\textbf{i}}}{l_{\textbf{o}}}}$ becomes $\boldsymbol{F_{\textbf{i}}=F_{\textbf{o}}}$$\boldsymbol{\frac{l_{\textbf{o}}}{l_{\textbf{i}}}}.$ Entering values into this equation yields $\boldsymbol{F_{\textbf{i}}=(45.0\textbf{ kg})(9.80\textbf{ m/s}^2)}$$\boldsymbol{\frac{0.075\textbf{ m}}{1.02\textbf{ m}}}$$\boldsymbol{=\:32.4\textbf{ N}}.$

(b) The free-body diagram (see Figure 2) gives the following normal force: Fi + N = W. Therefore, N = (45.0 kg)(9.80 m/s²) − 32.4 N = 409 N. N is the normal force acting on the wheel; by Newton’s third law, the force the wheel exerts on the ground is 409 N.

Discussion

An even longer handle would reduce the force needed to lift the load. The MA here is MA = 1.02/0.0750 = 13.6.

Another very simple machine is the inclined plane. Pushing a cart up a plane is easier than lifting the same cart straight up to the top using a ladder, because the applied force is less. However, the work done in both cases (assuming the work done by friction is negligible) is the same. Inclined planes or ramps were probably used during the construction of the Egyptian pyramids to move large blocks of stone to the top.
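The arithmetic in Example 1 can be checked with a short script (a sketch of my own; the variable names are not from the text):

```python
g = 9.80           # m/s^2
m = 45.0           # kg, combined mass of wheelbarrow and load
l_o = 0.075        # m, perpendicular lever arm of the load
l_i = 1.02         # m, perpendicular lever arm of the hands

W = m * g                  # weight of wheelbarrow plus load (this is F_o)
F_i = W * (l_o / l_i)      # input force: F_i = F_o * l_o / l_i
N = W - F_i                # normal force on the wheel (free body: F_i + N = W)
MA = l_i / l_o             # mechanical advantage

print(round(F_i, 1))  # 32.4 (N)
print(round(N))       # 409 (N)
print(round(MA, 1))   # 13.6
```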
A crank is a lever that can be rotated 360° about its pivot, as shown in Figure 3. Such a machine may not look like a lever, but the physics of its actions remains the same. The MA for a crank is simply the ratio of the radii $r_{\textbf{i}}/r_{\textbf{o}}$. Wheels and gears have this simple expression for their MAs too. The MA can be greater than 1, as it is for the crank, or less than 1, as it is for the simplified car axle driving the wheels, as shown. If the axle’s radius is 2.0 cm and the wheel’s radius is 24.0 cm, then MA = 2.0/24.0 = 0.083 and the axle would have to exert a force of 12,000 N on the wheel to enable it to exert a force of 1000 N on the ground.

An ordinary pulley has an MA of 1; it only changes the direction of the force and not its magnitude. Combinations of pulleys, such as those illustrated in Figure 4, are used to multiply force. If the pulleys are friction-free, then the force output is approximately an integral multiple of the tension in the cable. The number of cables pulling directly upward on the system of interest, as illustrated in the figures given below, is approximately the MA of the pulley system. Since each attachment applies an external force in approximately the same direction as the others, they add, producing a total force that is nearly an integral multiple of the input force T.

# Section Summary

• Simple machines are devices that can be used to multiply or augment a force that we apply – often at the expense of a distance through which we have to apply the force.
• The ratio of output to input forces for any simple machine is called its mechanical advantage.
• A few simple machines are the lever, nail puller, wheelbarrow, crank, etc.

### Conceptual Questions

1: Scissors are like a double-lever system. Which of the simple machines in Figure 1 and Figure 2 is most analogous to scissors?

2: Suppose you pull a nail at a constant rate using a nail puller as shown in Figure 1. Is the nail puller in equilibrium?
What if you pull the nail with some acceleration – is the nail puller in equilibrium then? In which case is the force applied to the nail puller larger and why?

3: Why are the forces exerted on the outside world by the limbs of our bodies usually much smaller than the forces exerted by muscles inside the body?

4: Explain why the forces in our joints are several times larger than the forces we exert on the outside world with our limbs. Can these forces be even greater than muscle forces (see previous Question)?

### Problems & Exercises

1: What is the mechanical advantage of a nail puller—similar to the one shown in Figure 1—where you exert a force 45 cm from the pivot and the nail is 1.8 cm on the other side? What minimum force must you exert to apply a force of 1250 N to the nail?

2: Suppose you needed to raise a 250-kg mower a distance of 6.0 cm above the ground to change a tire. If you had a 2.0-m long lever, where would you place the fulcrum if your force was limited to 300 N?

3: (a) What is the mechanical advantage of a wheelbarrow, such as the one in Figure 2, if the center of gravity of the wheelbarrow and its load has a perpendicular lever arm of 5.50 cm, while the hands have a perpendicular lever arm of 1.02 m? (b) What upward force should you exert to support the wheelbarrow and its load if their combined mass is 55.0 kg? (c) What force does the wheel exert on the ground?

4: A typical car has an axle with 1.10 cm radius driving a tire with a radius of 27.5 cm. What is its mechanical advantage assuming the very simplified model in Figure 3(b)?

5: What force does the nail puller in Exercise 1 exert on the supporting surface? The nail puller has a mass of 2.10 kg.

6: If you used an ideal pulley of the type shown in Figure 4(a) to support a car engine of mass 115 kg, (a) what would be the tension in the rope? (b) What force must the ceiling supply, assuming you pull straight down on the rope? Neglect the pulley system’s mass.
7: Repeat Exercise 6 for the pulley shown in Figure 4(c), assuming you pull straight up on the rope. The pulley system’s mass is 7.00 kg.

## Glossary

mechanical advantage
the ratio of output to input forces for any simple machine

### Solutions to Problems & Exercises

1: $MA = 25$; $F_{\textbf{i}} = 50\textbf{ N}$

3: (a) $MA=18.5$ (b) $F_{\textbf{i}}=29.1\textbf{ N}$ (c) $510\textbf{ N}$ downward

5: $1.3\times10^3\textbf{ N}$

7: (a) $T=299\textbf{ N}$ (b) $897\textbf{ N}$ upward
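The answer key above can be cross-checked numerically (my own sketch, with g = 9.80 m/s²; the treatment of Exercise 5, adding the output force, input force, and the puller's weight, is my reading of the problem):

```python
g = 9.80  # m/s^2

# Exercise 1: nail puller with input arm 45 cm and output arm 1.8 cm.
MA1 = 45 / 1.8                 # mechanical advantage
F_in = 1250 / MA1              # input force needed to deliver 1250 N at the nail

# Exercise 3: wheelbarrow with arms 5.50 cm and 1.02 m, combined mass 55.0 kg.
MA3 = 1.02 / 0.055             # mechanical advantage
F_i3 = 55.0 * g / MA3          # upward force at the hands
N3 = 55.0 * g - F_i3           # force the wheel exerts on the ground

# Exercise 5: force on the supporting surface = output force + input force
# + weight of the 2.10-kg puller.
F5 = 1250 + F_in + 2.10 * g

print(MA1, round(F_in))                           # 25.0 50
print(round(MA3, 1), round(F_i3, 1), round(N3))   # 18.5 29.1 510
print(round(F5))                                  # ~1.3e3
```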
https://www.aimsciences.org/journal/1078-0947/2006/16/3
# American Institute of Mathematical Sciences

ISSN: 1078-0947 eISSN: 1553-5231

## Discrete & Continuous Dynamical Systems - A

September 2006, Volume 16, Issue 3

2006, 16(3): 513-523 doi: 10.3934/dcds.2006.16.513

Abstract: We study the effect of a zero order term on existence and optimal summability of solutions to the elliptic problem $-\text{div}( M(x)\nabla u)- a\frac{u}{|x|^2}=f \text{ in } \Omega, \qquad u=0 \text{ on } \partial \Omega$, with respect to the summability of $f$ and the value of the parameter $a$. Here $\Omega$ is a bounded domain in $\mathbb{R}^N$ containing the origin.

2006, 16(3): 525-539 doi: 10.3934/dcds.2006.16.525

Abstract: We describe as $\varepsilon \to 0$ radially symmetric sign-changing solutions to the problem $-\Delta u =|u|^{\frac 4{N-2} -\varepsilon} u \quad \text{in } B$ where $B$ is the unit ball in $\mathbb{R}^N$, $N\ge 3$, under zero Dirichlet boundary conditions. We construct radial solutions with $k$ nodal regions which resemble a superposition of "bubbles'' of different signs and blow-up orders, concentrating around the origin. A dual phenomenon is described for the slightly supercritical problem $-\Delta u =|u|^{\frac 4{N-2} +\varepsilon} u \quad \text{in } \mathbb{R}^N \setminus B$ under Dirichlet and fast vanishing-at-infinity conditions.

2006, 16(3): 541-561 doi: 10.3934/dcds.2006.16.541

Abstract: We study the Cauchy problem with bounded continuous initial-value functions for the differential-difference equation $$\frac{\partial u}{\partial t}= \sum_{k,j,m=1}^{n} a_{kjm}\frac{\partial^2u}{\partial x_k\partial x_j} (x_1,\ldots,x_{m-1},x_m+h_{kjm},x_{m+1},\ldots,x_n,t),$$ assuming that the operator on the right-hand side of the equation is strongly elliptic and the coefficients $a_{kjm}$ and $h_{kjm}$ are real.
We prove that this Cauchy problem has a unique solution (in the sense of distributions) and this solution is classical in ${\mathbb R}^n \times (0,+\infty)$, find its integral representation, and construct a differential parabolic equation with constant coefficients such that the difference between its classical bounded solution satisfying the same initial-value function and the investigated solution of the differential-difference equation tends to zero as $t\to\infty$.

2006, 16(3): 563-586 doi: 10.3934/dcds.2006.16.563

Abstract: We prove a universal bound, independent of the initial data, for all global nonnegative solutions of the Dirichlet problem of the quasilinear parabolic equation with convection $u_t = \Delta u^m +a\cdot \nabla u^q+ u^p$ in $\Omega\times (0,\infty)$, where $\Omega$ is a smoothly bounded domain in $\mathbf{R}^N$, $a \in \mathbf{R}^N$, $1 \le m < p < m+2/(N+1)$ and $(m+1)/2 \le q < (m+p)/2$ (or $q = (m+p)/2$ and $|a|$ is small enough). The universal bound can be obtained by showing that any solution $u$ in $\Omega\times(0,T)$ satisfies the estimate $\|u(t)\|_{L^{\infty}(\Omega)} \le C(p,m,q,|a|, \Omega,\alpha,T)\,t^{-\alpha}$ for $0 < t \le T/2$ and $\alpha > (N+1)/[(m-1)(N+1)+2]$, which describes the initial blow-up rates of solutions.

2006, 16(3): 587-614 doi: 10.3934/dcds.2006.16.587

Abstract: We present the necessary and sufficient conditions and a new method to study the existence of pullback attractors of nonautonomous infinite dimensional dynamical systems. For illustrating our method, we apply it to nonautonomous 2D Navier-Stokes systems. We also show that the parametrically inflated pullback attractors and uniform attractors are robust with respect to the perturbations of both cocycle mappings and driving systems. As an example, we consider the nonautonomous 2D Navier-Stokes system with rapidly oscillating external force.
2006, 16(3): 615-634 doi: 10.3934/dcds.2006.16.615

Abstract: It is proved that for a prescribed potential $V$ there are many quasi-periodic solutions of nonlinear wave equations $u_{t t}-u_{x x}+V(x)u\pm u^3+O(|u|^5)=0$ subject to Dirichlet boundary conditions.

2006, 16(3): 635-655 doi: 10.3934/dcds.2006.16.635

Abstract: In this paper we prove the persistence of elliptic lower dimensional invariant tori for nearly integrable Gevrey-smooth Hamiltonian systems under Rüssmann's non-degeneracy condition by an improved KAM iteration. The persisting invariant tori are Gevrey smooth with respect to parameters in the sense of Whitney, with a Gevrey index depending on the Gevrey class of the Hamiltonian systems and on the exponent in the Diophantine condition. Moreover, the Gevrey index should be optimal for the Diophantine condition in the proof of our theorem.

2006, 16(3): 657-688 doi: 10.3934/dcds.2006.16.657

Abstract: We consider the following singularly perturbed Schrödinger-Maxwell system with Dirichlet boundary condition: $-\varepsilon^2\Delta v+v+\omega\phi v- \varepsilon^{\frac{p-1}{2}} v^p=0 \quad \text{in}\ B_1,$ $-\Delta \phi=4\pi \omega v^2 \quad \text{in}\ B_1$, $v,\ \phi>0 \ \text{in}\ B_1 \quad \text{and}\quad v=\phi=0 \quad \text{on}\ \partial B_1$, where $B_1$ is the unit ball in $\mathbb{R}^3$, $\omega>0$ and $\frac{7}{3} < p\leq 5$ are constants, and $\varepsilon>0$ is a small parameter. Using the localized energy method, we prove that for every sufficiently large integer $N$, the system has a family of radial solutions $(v_\varepsilon, \phi_\varepsilon)$ such that $v_\varepsilon$ has $N$ sharp spheres concentrating on a sphere $\{|x|=r_N\}$ as $\varepsilon\to 0$.
2006, 16(3): 689-703 doi: 10.3934/dcds.2006.16.689

Abstract: The seemingly straightforward stability issue in three-dimensional homogeneous continuous piecewise linear systems with two linear zones is considered. The only equilibrium at the origin, being in the separation plane of the linear zones, has two linearization matrices. For the important case where both matrices have complex eigenvalues with the whole spectrum in the left half plane, the possible counter-intuitive instability of the origin is proved. Some sufficient conditions for the global asymptotic stability of such systems are also shown.

2006, 16(3): 705-720 doi: 10.3934/dcds.2006.16.705

Abstract: In this paper we study an eigenvalue boundary value problem which arises when seeking radial convex solutions of the Monge-Ampère equations. We shall establish several criteria for the existence, multiplicity and nonexistence of strictly convex solutions for the boundary value problem with or without an eigenvalue parameter.

2019 Impact Factor: 1.338
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-7-review-page-468/45
## Intermediate Algebra (6th Edition)

$\frac{1}{a^{\frac{9}{2}}}$

We are given the expression $(a^{\frac{1}{2}}a^{-2})^{3}$. First, we can use the product rule to simplify, which holds that $a^{m}\times a^{n}=a^{m+n}$ (where $a$ is a positive real number, and $m$ and $n$ are rational numbers). $(a^{\frac{1}{2}+(-2)})^{3}=(a^{-\frac{3}{2}})^{3}$ Next, we can use the power rule, which holds that $(a^{m})^{n}=a^{m\times n}$ (where $a$ is a positive real number, and $m$ and $n$ are rational numbers). $(a^{-\frac{3}{2}})^{3}=a^{-\frac{3}{2}\times3}=a^{-\frac{9}{2}}=\frac{1}{a^{\frac{9}{2}}}$
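A quick symbolic check of this simplification (a sketch using sympy; the `positive=True` assumption on the base is what makes the rational-exponent rules valid):

```python
import sympy as sp

a = sp.symbols('a', positive=True)  # positive base so exponent rules apply

expr = (a**sp.Rational(1, 2) * a**(-2))**3
simplified = sp.powsimp(expr)

# Product rule: a^(1/2) * a^(-2) = a^(-3/2); power rule: (a^(-3/2))^3 = a^(-9/2).
print(simplified)  # a**(-9/2)
```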
https://stacks.math.columbia.edu/tag/01F3
Remark 20.13.5. The Leray spectral sequence, the way we proved it in Lemma 20.13.4 is a spectral sequence of $\Gamma (Y, \mathcal{O}_ Y)$-modules. However, it is quite easy to see that it is in fact a spectral sequence of $\Gamma (X, \mathcal{O}_ X)$-modules. For example $f$ gives rise to a morphism of ringed spaces $f' : (X, \mathcal{O}_ X) \to (Y, f_*\mathcal{O}_ X)$. By Lemma 20.13.3 the terms $E_ r^{p, q}$ of the Leray spectral sequence for an $\mathcal{O}_ X$-module $\mathcal{F}$ and $f$ are identical with those for $\mathcal{F}$ and $f'$ at least for $r \geq 2$. Namely, they both agree with the terms of the Leray spectral sequence for $\mathcal{F}$ as an abelian sheaf. And since $(f_*\mathcal{O}_ X)(Y) = \mathcal{O}_ X(X)$ we see the result. It is often the case that the Leray spectral sequence carries additional structure.
https://physics.stackexchange.com/questions/377687/do-we-get-rid-of-irrelevant-operators-in-as-we-consider-low-energy-effective-t
# Do we “get rid of” irrelevant operators as we consider a low-energy effective theory?

On reading section 12.1 of Peskin and Schroeder's (P&S) QFT, on "Wilson's Approach to Renormalization Theory", I gathered the impression that in the Wilsonian approach, one starts by analyzing a Lagrangian containing all possible terms compatible with some given symmetry and having mass dimension $\leq 4$. For example, P&S consider the $\phi^4$-theory and discard all other terms such as $\phi^6$, $\phi^2(\partial_\mu\phi)^2$, etc., in the starting Lagrangian. However, according to this answer by AccidentalFourierTransform (AFT) and the following comments, P&S's treatment is, in general, a pedagogical simplification. The answer says that one starts with a Lagrangian containing all terms compatible with some given symmetry, and not just those with mass dimension $\leq 4$. In this scenario, on integrating out high-momentum modes, the irrelevant operators contribute insignificantly and effectively disappear from the low-energy effective Lagrangian. But as I understand it, P&S show the opposite: as we integrate out high-momentum modes, we generate irrelevant terms. P&S's result is also compatible with the fact that Fermi theory is a low-energy theory containing an irrelevant operator. Therefore, we do not "get rid of" irrelevant operators in the low-energy theory; in some sense, we generate them in the low-energy version of the Standard Model. How do I reconcile the result of Peskin and Schroeder with the answer of AFT? Following P&S, if we say non-renormalizable operators are "generated" (were not there to start with) in the low-energy theory, aren't we saying that non-renormalizable operators become important in the low-energy approximation? But that's wrong. The answer here by ACuriousMind also says something similar to P&S.
This answer also says that non-renormalizable operators are thought to be absent in the original Lagrangian, and that they appear when the cut-off is lowered.

• 'Irrelevant' operators do not cease to exist as the energy is lowered. They are 'irrelevant' in the sense that they do not affect the scaling dimension of operators in the ground state. The coarse-graining procedure will generate all possible operators that are consistent with the symmetries of the initial (or 'bare') action (assuming that the coarse graining does not break any of the symmetries of the bare action). – vik May 19 at 20:11

There are three parts to the process of obtaining an effective Lagrangian which are often conflated in discussion. Hopefully, seeing them separately will help:

1) When considering processes at energy scale $E_{exp}$, we can 'integrate out' modes with $p > E_{exp}$, since such modes will never appear on external legs (by design of the experiment), and so we never need them in the Lagrangian (this discussion applies in momentum space).

2) However, virtual processes with $p > E_{exp}$ can and will still happen 'in loops', and this effect is captured by the extra terms introduced in the Lagrangian when integrating out.

3) Nevertheless, if we are only interested in processes at $\tilde E_{exp} \ll E_{exp}$ (i.e., well below the cutoff used for integrating out), then the 'virtual' and 'irrelevant' operators (mass dimension > 4) have vanishing contribution and it is safe to 'drop' (ignore) them.

A very useful toy model for understanding this is in section 2.5 of David Skinner's lecture notes, where he considers a '$0$-dimensional QFT' $$S(\phi,\chi) = \frac{m^2}{2} \phi^2 + \frac{M^2}{2} \chi^2 + \frac{\lambda}{4} \phi^2 \chi^2$$ and integrates out $\chi$.

I explained all of that in my answer to Wilsonian definition of renormalizability, so I will not repeat myself here. The Wilsonian RG is a dynamical system.
The issue has to do with what question one is asking about this dynamical system: is one given a starting point and trying to figure out where one goes from there (statistical mechanics), or is one aiming for a final destination and trying to figure out the starting point needed to get there (continuum QFT)?
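To make point 3 and the "generated operators" concrete, Skinner's zero-dimensional toy integral can be done both exactly and in the large-$M$ expansion. The sketch below (the coupling values are my own illustrative choices, not from the answer) integrates out the heavy field $\chi$ numerically and compares the result against the expanded effective action, in which a small "irrelevant" $\phi^4$ term has been generated, suppressed by $1/M^2$.

```python
import numpy as np
from scipy.integrate import quad

m2, M2, lam = 1.0, 100.0, 0.5   # illustrative couplings; chi is heavy (M^2 >> m^2)

def S(phi, chi):
    # Skinner's 0-dimensional action
    return 0.5 * m2 * phi**2 + 0.5 * M2 * chi**2 + 0.25 * lam * phi**2 * chi**2

def S_eff_exact(phi):
    # integrate out chi exactly: S_eff(phi) = -log ∫ dchi exp(-S(phi, chi))
    Z, _ = quad(lambda chi: np.exp(-S(phi, chi)), -np.inf, np.inf)
    return -np.log(Z)

def S_eff_expanded(phi):
    # the Gaussian chi-integral gives (1/2) log(M^2 + lam*phi^2/2) + const;
    # expanding the log shifts the phi^2 coupling and generates a small,
    # 1/M^2-suppressed ("irrelevant") phi^4 term
    return (0.5 * np.log(M2 / (2 * np.pi))
            + 0.5 * (m2 + lam / (2 * M2)) * phi**2
            - (lam**2 / (16 * M2**2)) * phi**4)

diff = abs(S_eff_exact(1.0) - S_eff_expanded(1.0))
```

The generated $\phi^4$ coefficient, $\lambda^2/(16M^4)$, is tiny for large $M$, which is exactly the sense in which the irrelevant operator is safely dropped at energies far below the heavy scale.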
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.920486330986023, "perplexity": 359.45026371520373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529276.65/warc/CC-MAIN-20190723105707-20190723131707-00512.warc.gz"}
###### Abstract

In a previous work, we studied a static and anisotropic matter content in $f(T)$ theory and obtained new spherically symmetric solutions, considering a constant torsion and some particular conditions for the pressure. In this paper, still in the framework of $f(T)$ theory, new spherically symmetric solutions are obtained, first considering the general case of an isotropic fluid, and later the anisotropic case, in which generalized conditions for the matter content are considered such that the energy density and the radial and tangential pressures depend on the algebraic function $f(T)$ and its derivative $f_T(T)$. Moreover, we obtain the algebraic function $f(T)$ through the reconstruction method for two cases, and we also study a polytropic model for the stellar structure.

UFES 2011

Static Anisotropic Solutions in $f(T)$ Theory

M. Hamani Daouda, Manuel E. Rodrigues and M. J. S. Houndjo

(a)  Universidade Federal do Espírito Santo, Centro de Ciências Exatas - Departamento de Física, Av. Fernando Ferrari s/n - Campus de Goiabeiras, CEP29075-910 - Vitória/ES, Brazil

(b)  Instituto de Física, Universidade Federal da Bahia, 40210-340, Salvador, BA, Brazil

Pacs numbers: 04.50.Kd, 04.70.Bw, 04.20.Jb

## 1 Introduction

Through Einstein's proposal for constructing a new version of General Relativity (GR) [1], the Teleparallel Theory (TT) took shape, but it was later abandoned for many years. However, following the considerations made by Møller, the proposal of an analogy between TT and GR was taken up and developed once again [2]. GR is a theory that describes the gravitational interaction through the curvature of spacetime. Since a Cartan manifold may possess both curvature and torsion, we can define a torsion-free connection, called the Levi-Civita connection, for which the interaction between gravitation and matter is described by GR.
With the progress of TT, it can be observed that GR, in other words the interaction of torsion-free curvature with matter, can analogously be viewed as a theory that possesses only torsion and is free of curvature, whose Riemann tensor is identically null. TT is then described as a geometrical theory in which the Weitzenbock connection [3] generates a torsion, which in turn defines the action, and the action remains invariant under local Lorentz transformations [4]. From the global formulation of TT for gravitation, one can compare it with experiments on quantum effects in the gravitational interaction of a neutron interferometer [5, 6]. With the progress of measurements of the evolution of the universe, such as the expansion and the acceleration, dark matter and dark energy, various proposals for modifying GR are being tested. As in unification theories at low-energy scales, the effective actions contain, besides the Ricci scalar $R$, higher-order curvature terms; the proposal of modified gravity that agrees with the cosmological and astrophysical data is $f(R)$ theory [7, 8]. The main problem one faces with this theory is that the equation of motion is of fourth order, making any analysis more complicated than in GR. Since GR has TT as an analogue, the so-called $f(T)$ theory, with $T$ the torsion scalar, has been put forward as the analogue of the generalized GR, the $f(R)$ theory. The $f(T)$ theory is the generalization of TT, as we shall see later. Note also that $f(T)$ theory is free of curvature, i.e., it is defined from the Weitzenbock connection. However, it has been shown recently that this theory breaks the invariance under local Lorentz transformations [10]. Another recent problem is that $f(T)$ theory appears to depend on the frame used, i.e., it is not covariant [10, 11]. In cosmology, $f(T)$ theory was originally used as the source driving inflation [12].
Subsequently, it was used as an alternative proposal for the acceleration of the universe, without requiring the introduction of dark energy [13, 14, 15]. Recently, the cosmological perturbations of this theory have been analysed [16]. In gravitation, work on this theory started with obtaining BTZ black hole solutions [17]. Note also that spherically symmetric solutions have recently been obtained for stars [18, 19]. In this paper, in analogy with the simplification methods used in GR for the equations of motion to obtain solutions with anisotropic symmetry [20], we propose to analyse the possibility of obtaining new gravitational solutions in $f(T)$ theory. Fixing the spherical symmetry and staticity of the metric, for a matter content described by the energy-momentum tensor of an anisotropic fluid, we can impose some conditions on the functions which define the metric, or on the radial pressure. Thus, one can obtain equations for the torsion scalar, and this results in the complete determination of the Weitzenbock geometry through the differential equation that defines the torsion scalar.

This paper is organized as follows. In Section 2, we present a brief review of the fundamental concepts of Weitzenbock geometry, the action of $f(T)$ theory and the equations of motion. In Section 3, we fix the symmetries of the geometry and present the equations for the energy density and the radial and tangential pressures. Section 4 is devoted to obtaining new solutions of $f(T)$ theory with constant torsion, with the matter content depending on the function $f(T)$ and its derivative $f_T(T)$. In Section 5, we present a summary of the reconstruction method for the static case of $f(T)$ theory, showing its effectiveness with two illustrative examples. In Section 6, we study the stellar structure in our framework for $f(T)$ theory. The conclusion and perspectives are presented in Section 7.
## 2 The field equations from f(T) theory

As $f(T)$ theory has been extensively studied in recent months, several works have established with great consistency its mathematical basis and the basic physical concepts needed for its understanding [4, 21]. Let us fix the notation: Latin indices refer to the tetrad fields, and Greek ones to the spacetime coordinates. For a general spacetime metric, we can define the line element as

$$dS^2 = g_{\mu\nu}\,dx^{\mu}dx^{\nu}. \qquad (1)$$

One can describe the projection of this line element onto the tangent space of the spacetime through the matrix called the tetrad, as follows:

$$dS^2 = g_{\mu\nu}\,dx^{\mu}dx^{\nu} = \eta_{ij}\,\theta^{i}\theta^{j}, \qquad (2)$$

$$dx^{\mu} = e_{i}^{\;\;\mu}\,\theta^{i}, \qquad \theta^{i} = e^{i}_{\;\;\mu}\,dx^{\mu}, \qquad (3)$$

where $\eta_{ij}$ is the Minkowski metric and $e_{i}^{\;\;\mu}e^{i}_{\;\;\nu}=\delta^{\mu}_{\nu}$ (equivalently $e_{i}^{\;\;\mu}e^{j}_{\;\;\mu}=\delta^{j}_{i}$). The square root of the metric determinant is given by $\sqrt{-g}=\det[e^{i}_{\;\;\mu}]=e$. To describe the spacetime in terms of the tetrad matrix, we choose the connection such that the Riemann tensor vanishes identically; this Weitzenbock connection is given by

$$\Gamma^{\alpha}_{\mu\nu} = e_{i}^{\;\;\alpha}\partial_{\nu}e^{i}_{\;\;\mu} = -e^{i}_{\;\;\mu}\partial_{\nu}e_{i}^{\;\;\alpha}. \qquad (4)$$

For this spacetime the curvature is always null, while the torsion does not vanish. Since the antisymmetric part of the connection does not vanish, we can directly define the components of the torsion and contorsion tensors,

$$T^{\alpha}_{\;\;\mu\nu} = \Gamma^{\alpha}_{\;\;\nu\mu}-\Gamma^{\alpha}_{\;\;\mu\nu}, \qquad (5)$$

$$K^{\mu\nu}_{\;\;\;\;\alpha} = -\frac{1}{2}\left(T^{\mu\nu}_{\;\;\;\;\alpha}-T^{\nu\mu}_{\;\;\;\;\alpha}-T_{\alpha}^{\;\;\mu\nu}\right), \qquad (6)$$

which allow us to define the components of the superpotential tensor

$$S_{\alpha}^{\;\;\mu\nu}=\frac{1}{2}\left(K^{\mu\nu}_{\;\;\;\;\alpha}+\delta^{\mu}_{\alpha}T^{\beta\nu}_{\;\;\;\;\beta}-\delta^{\nu}_{\alpha}T^{\beta\mu}_{\;\;\;\;\beta}\right). \qquad (7)$$

We are now able to define the scalar that makes up the action of $f(T)$ theory, the torsion scalar. Through (5)-(7), we define it as

$$T=T^{\alpha}_{\;\;\mu\nu}S_{\alpha}^{\;\;\mu\nu}. \qquad (8)$$

GR couples the matter content through the Einstein-Hilbert action (a linear function of the curvature scalar), and the equation of motion is obtained by the least action principle. The $f(T)$ theory seeks the coupling with the matter content in the same way, defining for the geometrical part, instead of a term linear in the torsion scalar, an arbitrary function of $T$, $f(T)$.
In this paper, we make the same considerations for coupling the geometrical part of $f(T)$ theory, the generalization of TT, through a function of the torsion scalar, $f(T)$, with the Lagrangian density of the matter content. Thus, we define the action of $f(T)$ theory as

$$S[e^{i}_{\;\;\mu},\Phi_{A}]=\int d^{4}x\; e\left[\frac{1}{16\pi}f(T)+\mathcal{L}_{Matter}(\Phi_{A})\right], \qquad (9)$$

where we use units in which $G=c=1$, and the $\Phi_{A}$ are the matter fields. Considering the action (9) as a functional of the fields $e^{i}_{\;\;\mu}$ and $\Phi_{A}$, and setting to zero the variation of the functional with respect to the field $e^{i}_{\;\;\mu}$, one obtains the following equation of motion [10]:

$$S_{\mu}^{\;\;\nu\rho}\,\partial_{\rho}T\,f_{TT}+\left[e^{-1}e^{i}_{\;\;\mu}\partial_{\rho}\left(e\,e_{i}^{\;\;\alpha}S_{\alpha}^{\;\;\nu\rho}\right)+T^{\alpha}_{\;\;\lambda\mu}S_{\alpha}^{\;\;\nu\lambda}\right]f_{T}+\frac{1}{4}\delta^{\nu}_{\mu}f=4\pi\mathcal{T}^{\nu}_{\mu}, \qquad (10)$$

where $\mathcal{T}^{\nu}_{\mu}$ is the energy-momentum tensor, $f_{T}=df(T)/dT$ and $f_{TT}=d^{2}f(T)/dT^{2}$. If we take $f(T)$ linear in $T$, the TT is recovered, with a cosmological constant. To obtain more general solutions in this spacetime, we take the matter content to be described by the energy-momentum tensor of an anisotropic fluid,

$$\mathcal{T}^{\nu}_{\mu}=\left(\rho+p_{t}\right)u_{\mu}u^{\nu}-p_{t}\,\delta^{\nu}_{\mu}+\left(p_{r}-p_{t}\right)v_{\mu}v^{\nu}, \qquad (11)$$

where $u^{\mu}$ is the four-velocity, $v^{\mu}$ the unit space-like vector in the radial direction, $\rho$ the energy density, $p_{r}$ the pressure in the direction of $v^{\mu}$ (radial pressure) and $p_{t}$ the pressure orthogonal to $v^{\mu}$ (tangential pressure). Since we are assuming an anisotropic spherically symmetric matter content, one has $p_{r}\neq p_{t}$ in general, and their equality corresponds to an isotropic fluid sphere.

In the next section, we make some considerations about the manifold symmetries in order to simplify the equations of motion and obtain the solutions specific to these symmetries.

## 3 Spherically Symmetric geometry

The line element of a spherically symmetric and static spacetime can be described, without loss of generality, as

$$dS^{2}=e^{a(r)}dt^{2}-e^{b(r)}dr^{2}-r^{2}\left(d\theta^{2}+\sin^{2}(\theta)\,d\phi^{2}\right). \qquad (12)$$

In order to rewrite the line element (12) in the form (2), invariant under Lorentz transformations, we define the tetrad matrix (3) as

$$\{e^{i}_{\;\;\mu}\}=\mathrm{diag}\left[e^{a(r)/2},\,e^{b(r)/2},\,r,\,r\sin(\theta)\right].$$
(13)

This choice of tetrad matrix is not unique; any choice that leaves the line element in the locally Lorentz invariant form (2) is admissible. Other choices have been made with non-diagonal matrices, as in references [19] and [37]. Using (13), one obtains $e=\det[e^{i}_{\;\;\mu}]=e^{(a+b)/2}\,r^{2}\sin(\theta)$, and with (4)-(8) we determine the torsion scalar and its derivative in terms of $r$:

$$T(r) = \frac{2e^{-b}}{r}\left(a'+\frac{1}{r}\right), \qquad (14)$$

$$T'(r) = \frac{2e^{-b}}{r}\left(a''-\frac{1}{r^{2}}\right)-T\left(b'+\frac{1}{r}\right), \qquad (15)$$

where the prime ($'$) denotes the derivative with respect to the radial coordinate $r$. One can now rewrite the equations of motion (10) for an anisotropic fluid as

$$4\pi\rho = \frac{f}{4}-\left[T-\frac{1}{r^{2}}-\frac{e^{-b}}{r}\left(a'+b'\right)\right]\frac{f_{T}}{2}, \qquad (16)$$

$$4\pi p_{r} = \left[T-\frac{1}{r^{2}}\right]\frac{f_{T}}{2}-\frac{f}{4}, \qquad (17)$$

$$4\pi p_{t} = \left[\frac{T}{2}+e^{-b}\left(\frac{a''}{2}+\left(\frac{a'}{4}+\frac{1}{2r}\right)\left(a'-b'\right)\right)\right]\frac{f_{T}}{2}-\frac{f}{4}, \qquad (18)$$

$$\frac{\cot\theta}{2r^{2}}\,T'\,f_{TT} = 0, \qquad (19)$$

where $p_{r}$ and $p_{t}$ are the radial and tangential pressures, respectively. In Eqs. (16)-(18) we used the imposition (19), which arises from the non-diagonal $\theta$-$r$ component of the equation of motion (10). This imposition does not appear in the static case of GR, but making use of it in (19), we get only the following possible solutions:

$$T' = 0\;\Rightarrow\; T=T_{0}, \qquad (20)$$

$$f_{TT} = 0\;\Rightarrow\; f(T)=a_{0}+a_{1}T, \qquad (21)$$

$$T' = 0,\; f_{TT}=0\;\Rightarrow\; T=T_{0},\; f(T)=f(T_{0}), \qquad (22)$$

which always falls back to a particular case of the Teleparallel Theory, with $f(T)$ constant or linear. In the next section, we determine new solutions of $f(T)$ theory by making some assumptions about the matter components $\rho$, $p_{r}$ and $p_{t}$.

## 4 New solutions for an anisotropic fluid in the Weitzenbock spacetimes

Several works have been done in cosmology, modelling and solving problems using $f(T)$ theory as a basis. In local and astrophysical phenomena, however, progress in obtaining new solutions is still slow. Recently, Deliduman and Yapiskan [11] argued that relativistic stars, such as neutron stars, could not exist in $f(T)$ theory, except in the trivial linear case, the usual Teleparallel Theory. However, Boehmer et al. [19] showed that relativistic star solutions do exist in certain cases.
Wang [18] also showed the existence of a class of solutions coupled with a Maxwell field in $f(T)$ theory. Similarly, Vasquez et al. [28] obtained some classes of rotating solutions of $f(T)$ theory, some of them coupled with the Maxwell field. In a previous paper, following the same idea, we obtained some classes of new solutions of $f(T)$ theory under specific conditions on the torsion and the radial pressure for anisotropic fluids [22]. To extend this idea, we show in this paper some classes of static spherically symmetric solutions of $f(T)$ theory, generalizing the conditions on the matter content so that it depends on the algebraic functions $f(T)$ and $f_{T}(T)$ and on algebraic functions of the radial coordinate $r$. To start with the simplest cases before performing the generalization, we first treat the general case of isotropy in the next subsection.

### 4.1 The isotropic case

Before discussing the general anisotropic case, we establish the general condition of isotropy for solutions of $f(T)$ theory with the metric (12). Imposing the equality $p_{r}=p_{t}$ in the expressions (17) and (18), we get

$$T(r)=\frac{2}{r^{2}}+e^{-b}\left[a''+\left(\frac{a'}{2}+\frac{1}{r}\right)\left(a'-b'\right)\right]. \qquad (23)$$

Taking $a'$ and $a''$ in terms of $T$, $b$ and their derivatives, through (14), we obtain

$$a'=\frac{r}{2}T(r)e^{b}-\frac{1}{r}, \qquad a''=\frac{e^{b}}{2}\left(T+rT'+rTb'\right)+\frac{1}{r^{2}}, \qquad (24)$$

which, substituted in (23), yields

$$\frac{B'(r)}{2r}\left[1-\frac{r^{2}T(r)}{2B(r)}\right]+\frac{B(r)}{2r^{2}}\left[1+\frac{r^{4}T^{2}(r)}{4B^{2}(r)}\right]+\frac{2}{r^{2}}\left[1-\frac{r^{2}}{4}T(r)+\frac{r^{3}}{4}T'(r)\right]=0, \qquad (25)$$

where $B(r)=e^{-b(r)}$. This is the general equation for a model with isotropic matter content in $f(T)$ theory, for the metric (12), first obtained in this work. As the differential equation (25) is nonlinear and too complicated for a direct integration, with some manipulation we consider three simplified cases:

1. For $T(r)=0$, where, from (14), $a'=-1/r$, the equation (25) becomes

$$B'(r)+\frac{1}{r}B(r)+\frac{4}{r}=0, \qquad (26)$$

whose general solution is

$$B(r)=e^{-b(r)}=\frac{c_{0}}{r}-4. \qquad (27)$$

The line element (12) becomes

$$dS^{2}=\frac{r_{0}}{r}\,dt^{2}-\left(\frac{c_{0}}{r}-4\right)^{-1}dr^{2}-r^{2}\,d\Omega^{2}, \qquad (28)$$

where $d\Omega^{2}=d\theta^{2}+\sin^{2}(\theta)\,d\phi^{2}$ and $c_{0}$ is a real constant.
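As a quick independent check (not part of the paper), equation (26), $B'(r)+B(r)/r+4/r=0$, and its quoted solution $B(r)=c_0/r-4$ can be verified symbolically:

```python
import sympy as sp

r, c0 = sp.symbols('r c0', positive=True)
B = c0 / r - 4                      # candidate solution B(r) = e^{-b(r)}, eq. (27)
ode = sp.diff(B, r) + B / r + 4 / r # left-hand side of eq. (26)
residual = sp.simplify(ode)         # should reduce to 0
```

The residual simplifies to zero for any constant $c_0$, confirming that (27) solves (26).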
With this, we have the following possibilities for the signature of the metric: a) when $c_{0}/r-4>0$, the signature is $(+,-,-,-)$; b) when $c_{0}/r-4<0$, the signature is $(+,+,-,-)$.⁴ [Footnote 4: Metrics with two timelike coordinates were studied in string models with Timelike T-Duality [38], in branes [39], in the Sigma model [41] and in other physical applications [40].] This solution was first obtained by Boehmer et al. [19], but with a general equation of isotropy different from ours, and with an incorrect limit.⁵ [Footnote 5: They choose the torsion as constant and take a limit in an equation similar to (25), which clearly does not lead to (26), due to the terms of (25) in which $T$ appears in a denominator.] Boehmer et al. obtained this solution by using that wrong limit, and did not draw up any analysis of it.

This solution could be viewed as a wormhole. We can observe this by following the same process as in [22]. The line element (12) can be put in the form

$$dS^{2}=e^{a(r)}dt^{2}-dl^{2}-r^{2}(l)\,d\Omega^{2}, \qquad (29)$$

where $a(r)$ is called the redshift function and, through the redefinition $dl^{2}=e^{b(r)}dr^{2}$, with $b(r)$ the metric function given in (12), one defines the so-called shape function $\beta(r)$. The conditions for the existence of a traversable wormhole are: a) the function $r(l)$ must possess a minimum value $r_{1}$ (the throat), where $\beta(r_{1})=r_{1}$; b) the flare-out condition holds at the throat; c) the redshift function has a finite value there; and finally d) asymptotic flatness, $\beta(r)/r\to 0$ as $r\to\infty$. The solution (28) satisfies conditions a), b) and c), but not condition d), since $\beta(r)/r$ does not vanish at infinity, and so it seems to be a wormhole, but not a traversable one. Moreover, given the signature of the solution in that region, it could not be a usual wormhole with the evolution given by an Einstein-Rosen bridge, and it is then discarded as non-physical. The solution with the other signature is not a wormhole, but is a possible solution with a null torsion scalar. The energy density and the pressure (isotropic case) are given by

$$\rho(r) = \frac{f(0)}{16\pi}+\frac{5f_{T}(0)}{8\pi r^{2}}, \qquad (30)$$

$$p_{r}(r) = p_{t}(r)=-\frac{f(0)}{16\pi}-\frac{f_{T}(0)}{8\pi r^{2}}. \qquad (31)$$

This matter content is divergent at $r=0$, so the solution is called singular.
The matter content must satisfy the weak energy condition (WEC) and the null energy condition (NEC), given by , and . Then, for guaranteeing the WEC and NEC, we need to have , or and , with , where . 2. Choosing the condition a′(r)=−1r+c0,c0∈R, (32) that generalizes the previous case, where we had , through the equation (14), (25) becomes B′(r)(1+c0r)+B(r)(1r+c20r−4c0)+4r=0. (33) The general solution of this equation is given by B(r) = r−1e−c0r(1+c0r)6(c1−k(r)180ec0)+1180c0r(154+51c0r+28c20r2+16c30r3+ (34) +6c40r4+c50r5), where and is a real constant. The solution of , coming from (32), is given by ea(r)=r0rec0r. (35) The expression of energy density, the radial and tangential pressures are too long and can not be written here. However, it is important to note that they are singular in . This is a new isotropic solution obtained for the first time in this work. 3. Taking the so-called quasi-global coordinate condition a(r)=−b(r), (36) from (16) and (17) one obtains the equality pr(r)=−ρ(r). (37) The isotropy requires , which, from (17) and (18) yields (23). Replacing the condition (36) in (14) and equating with the expression (23), we get the following differential equation b′′−(b′)2+2r2(1−eb)=0. (38) The solution of this equation is ea(r)=e−b(r)=1−c0r+c13r2, (39) where . This solution (39) behaves as the equation of state of dark energy, , with ρ(r)=a0−2a1c116π, (40) where we took into account the imposition (21). This is a new black hole solution obtained in this work. This solution is similar to the S-(A)dS one, for and . The conditions and are always satisfied, since in this case. The condition impose , for , and , for and . Here, the torsion scalar (14) cannot be a constant. A similar solution to this one was obtained in our previous paper [22]. It was obtained as a particular case of anisotropic solution for the choice of the constant radial pressure, which, by the quasi-global coordinate condition, resulted in a isotropized solution. 
But here, we have obtained a general isotropic solution for the quasi-global condition (36), which leads to a matter content that satisfies the equation of state of the dark energy, with a constant energy density (40). Comparing this later with our particular case previously obtained (solution () in [22]), it appears that the constant pressure must be equal to the energy density (40), which fixes the pressure in terms of the constants of the algebraic function and . ### 4.2 The anisotropic content case Now, as already shown in our previous paper [22], the equations of motion (16)-(18) allow us to establish the following generalized conditions ρ(r) = g1(r)16πf(T)+g2(r)8πfT(T)+g3(r)16π, (41) pr(r) = g4(r)16πf(T)+g5(r)8πfT(T)+g6(r)16π, (42) pt(r) = g7(r)16πf(T)+g8(r)8πfT(T)+g9(r)16π, (43) where the functions , with , are algebraic functions of only the radial coordinate . These generalized conditions for the matter content are first used here, resulting in new solutions originally obtained in this work. We will distinguish three main cases here: 1. When the energy density obeys the condition (41), equating (16) and (41), and taking into account the imposition (21), we get T(r)=a0a1[2k1(r)−1]−2k1(r)[g2(r)+g32a1−1r2−e−br(a′+b′)], (44) where . Three important sub-cases can be observed: 1. Making use of the coordinate condition (32), we obtain directly the solution (35) for . Equating (14) and (44), then substituting (35), we get the following differential equation (e−b)′ + x1(r)(e−b)+y1(r)=0, (45) x1(r) = c0(1k1(r)−1)+1r, (46) y1(r) = r[g2(r)+g3(r)2a1−1r2−a0a1(1−12k1(r))], (47) whose general solution is e−b(r) = exp(−∫x1(r)dr)[c1−∫exp(∫x1(r)dr)y1(r)dr], (48) where , and are given in (46) and (47) respectively. A particular case is when we choose k1(r)=c0rc0r−1,g2(r)=∑nh(n)rn−1−g3(r)2a1+1r2+a0a1(c0r+12c0r). (49) In this case, we get and , from which, using (48), yields e−b(r)=−h(−1)lnr−∑nh(n)(n+1)rn+1,n≠−1. 
(50) Now, if we use only the terms in which and , we regain a wormhole already obtained in [22], for and . Another particular case would be a generalization of several classes of traversable wormholes that connect two non-asymptotically flat regions. We explicit here two cases: 1. When the unique terms of (50) are for the orders and (), we get e−b(r)=−h(0)r−h(1)2r2. (51) This solution is a traversable wormhole. We can show this as follows: The shape function and its derivative are given by β(r)=r[1+r(h(0)+h(1)2r)],β′(r)=1+r(2h(0)+3h(1)2r). (52) From (51), we take , and solving, we obtain . As [42] is a condition for the minimum and in this case we get , which is greater than zero for , being a minimum for . The redshift function has a finite value in . The shape function satisfies and , for . In general, in order to get consecutive orders and , in (50), we obtain traversable wormholes with minimum in , with and . 2. When the have only the terms of orders and in (50), we get e−b(r)=−h(0)r−h(2)3r3. (53) This solution is a traversable wormhole. The function has a minimum in , for . The redshift function has a finite value in . The shape function satisfies and , for . In general, for obtaining consecutive orders and , in (50), we get traversable wormholes with minimum in , with and . We also have a multitude of other solutions of traversable wormholes in (48), that connect two non-asymptotically flat regions. 2. When the condition of coordinates is given by the quasi-global coordinate (36), the equation (14) yields the following equation (e−b)′+1r(e−b)−r2T(r)=0, (54) whose general solution is ea(r)=e−b(r)=c0r+12r∫r2T(r)dr,c0∈R, (55) which, from (44), the torsion scalar is given by T(r)=a0a1[2k1(r)−1]−2k1(r)[g2(r)+g32a1−1r2]. (56) We can show that a particular case that highlights the generalization of this solution. Taking k1(r)=a0a1+∑nh(n)rn2[a0a1−g2(r)−g3(r)2a1+1r2], (57) where are real constants and , the torsion scalar in (56) becomes . 
Substituting it in (55), we get the following particular case ea(r)=e−b(r)=c0r+h(−3)2rlnr+12∑nh(n)(n+3)rn+2, (58) with . Two specific cases of this particular case are: a) when , we regain the same result as obtained in [22], with ; b) when we only have the terms for the values of , and . In this case the solution is of type Reissner-Nordstrom-(Anti) de Sitter (RN-(A)dS), with the mass , the electric charge () and the cosmological constant . This particular case reproduce various known terms, with respect to the power of the radial coordinate , as the linear term [30], the logarithmic term (in GR [36], in theory [31] and in other modified gravities [32]), the term of fourth power [33, 35], in that of the nth order [33, 34, 36] among others. In fact, the general case is more comprehensive. If we take the particular case of the example b), with the unique terms and , for (type S-dS), we obtain the line element as dS2=(1+c0r+h06r2)dt2−(1+c0r+h06r2)−1dr2−r2dΩ2. (59) The horizon is obtained through . The energy density in this case is given by . Then, the total mass is given by . Making the match of the interior metric with an exterior one, of type S-dS, where , the horizon is then obtained on the form . With , we write the total mass and its differential as M=rH2(1+h06r2H), (60) dM=drH2(1+h02r2H). (61) The Hawking temperature, the entropy and its differential can be calculated through (59), as TH = g′004π√−g00g11∣∣r=rH=14π(2Mr2H+h03rH), (62) S = 14AH=14∫π0∫2π0√g22g33dθϕ∣∣r=rH=πr2H, (63) dS = 2πrHdrH. (64) From the expressions (61), (62) and (64), taking into account (60), we can show that the solution (59) obeys the first law of thermodynamics for black holes. dM=THdS. (65) This fact is not surprising. Recently, Miao Li and collaborators [47] have demonstrated that it can exist a violation of the first law of thermodynamics in theory, when the invariance by the local Lorentz transformation is broken, due to the term in the equations of motion (10). 
When the first law is violated, there exists entropy production. But for our specific case of the choice of a set of diagonal tetrads in (13), it appears the imposition (21), which eliminates the terms of in the equations of motion. Then, we observe that the first law is always obeyed, as shown by Mian Li and collaborators [47], with the entropy . Here, we obtained the entropy by (63), but this is due to the fact that we can use the symmetry of temporal translation in the action (9), and put (), then, eliminating the constant in . With this, we get , conciliating our result with the general conjecture of Miao and collaborators. The same thermodynamic analysis can be made for other black holes solutions obtained in this work. We will just explain this particular case, due to the wide rang of classes of solutions obtained in this work. 3. When the coordinate condition is given by b′=−1r, (66) the equation (14) yields T(r)=2r0r+2r0a′, (67) where is a positive integration constant. Substituting (67) into (44), we get a(r)=∫[1r(k1(r)+1k1(r)−1)+r0k1(r)(k1(r)−1)(g2(r)+g3(r)2a1−1r2)−r0a02a1(2k1(r)−1k1(r)−1)]dr, (68) and eb(r)=r0r. (69) Now, for a particular , if we choose k1(r)=a0a1+2r0ln(∑nh(n)rn)2[a0a1−g2(r)−g3(r)2a1+1r2], (70) we regain the results mentioned in the previous item according to the choice of the constants and , but with given in (69) and in (58), and which reproduces the various terms of this case. 2. When the tangential pressure obeys the condition (43), equating (18) and (43), considering the imposition (21), and making , we obtain T(r) = k2(r)[2(e−b(a′′2+(a′4+12r)(a′−b′))−g8(r))−g9(r)a1]−a0a1[k2(r)+1]. (71) A direct solution can be obtained by taking the condition (32) and substituting into (71), that leads to x2(r)(e−b)′+y2(r)(e−b)−z2(r)=0, (72) x2(r)=k2(r)4(c0+1r),y2(r)=k2(r)4(c204+12r2)−c0r, (73) z2(r)=k2(r)(g8(r)+g9(r)2a1)+(a02a1)[k2(r)+1]. 
(74) The general solution of (72) is e−b(r)=exp(−∫y2(r)x2(r)dr)[c1+∫(exp[∫y2(r)x2(r)dr])z2(r)x2(r)dr], (75) where , and are given in (73) and (74). For the particular case in which k2(r)=16c0rc20r2+2,g8(r)=(c0r+1)4∑nh(n)rn−1−a02a1(1+c20r2+216c0r)−g9(r)2a1, (76) we get e−b(r<
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.966992974281311, "perplexity": 634.8611797325341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519883.54/warc/CC-MAIN-20210120023125-20210120053125-00534.warc.gz"}
Math Help - Recurrence relation proof

1. Recurrence relation proof

For $n = 1,2,3,\ldots$ let $I_{n} = \int_0^1 \frac {x^{n-1}}{2-x}\, dx$. Writing $x^{n} = x^{n-1}(2-(2-x))$, show this sequence satisfies $I_{n+1} = 2I_{n} - \frac {1}{n}$.

Now, I can't quite get a foothold on this puzzle. I'm thinking there's a simple logical step, but I don't see it. Can someone help please? I wanted to try induction, but I don't see how to use the hint they've given, and the problem requires using it.

2. Re: Recurrence relation proof

Use the hint:

$I_{n+1} = \int_0^1 \frac{x^n}{2-x}\,dx = \int_0^1 \frac{x^{n-1}\left(2 - (2-x)\right)}{2-x}\,dx = \int_0^1 \frac{2x^{n-1}}{2-x}\,dx - \int_0^1 x^{n-1}\, \frac{2-x}{2-x}\,dx$

so $I_{n+1} = 2 I_{n} - \int_0^1 x^{n-1}\,dx$. Now integrate the last term: $\int_0^1 x^{n-1}\,dx = \frac{1}{n}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9884684681892395, "perplexity": 385.54369154784865}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447549415.47/warc/CC-MAIN-20141224185909-00057-ip-10-231-17-201.ec2.internal.warc.gz"}
# If b is rational and c is real, can $x^2 + bx + c$ have one rational root and one irrational root?

If $$b$$ is rational and $$c$$ is real, can $$x^2 + bx + c$$ have one rational root and one irrational root? Use only the definition of rational numbers to prove your answer.

I think that it can't, because since $$b$$ is rational, if two real roots exist then both are rational or both are irrational. However, I'm not sure how to prove this using only the definition of rational numbers, i.e., a number that can be expressed as $$\frac{x}{y}$$ where $$x$$ and $$y$$ are integers, $$y$$ nonzero. So you can't assume facts like rational + irrational = irrational.

• By real root you mean irrational root? – R.V.N. Feb 17 at 17:07
• And by "$c$ is real" do you mean "$c$ is irrational"? – Michael Cohen Feb 17 at 17:08
• Welcome to Mathematics Stack Exchange. Note that the sum of the roots is $-b$ – J. W. Tanner Feb 17 at 17:09
• @R.V.N. yes my bad – TurboAverage419 Feb 17 at 17:09
• @MichaelCohen no, c is any real number – TurboAverage419 Feb 17 at 17:09

Of course, note that all rational numbers are indeed real. I suppose you wanted to ask "Is it possible that one root is rational and the other is not?" The answer to that is then "no". Note that the sum of the two roots is $$-b \in \Bbb Q$$. Using the definition of rationals, it follows that the difference of two rationals is rational. Conclude that $$\text{rational} + \text{irrational} = \text{rational}$$ is not possible.

• Is rational numbers being closed under addition/subtraction enough to conclude rational + irrational = rational is not possible? Since we can't assume anything, it seems there is no proof that there does not exist an irrational number that can satisfy the above equation. – TurboAverage419 Feb 17 at 17:17
• If $q_1, q_2$ are rationals and $r$ an irrational such that $q_1 + r = q_2$, what do you get by subtracting $q_1$ from both sides?
– Aryaman Maithani Feb 17 at 17:20 • Oh I see now, if rational numbers are closed under subtraction that implies that an irrational number cannot be the difference of two rational numbers? – TurboAverage419 Feb 17 at 17:23 • Cannot be the difference of two rational numbers, yes – Aryaman Maithani Feb 17 at 17:23 • Argh, another typo! Thank you! – TurboAverage419 Feb 17 at 17:25 Hint: Suppose $$r$$ is a rational root and $$j$$ is an irrational root. $$r+j=-b\to j=-(b+r).$$ Can you see the problem? • Thanks, your comment rephrased what someone else said and helped clarify it! – TurboAverage419 Feb 17 at 17:26
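The hint can be checked concretely with sympy (the roots below are my own illustrative choices, not from the question): by Vieta's formula the sum of the roots of $x^2+bx+c$ is $-b$, so one rational and one irrational root would force $b$ to be irrational.

```python
import sympy as sp

def b_from_roots(r1, r2):
    # Vieta: for x^2 + b*x + c with roots r1, r2, the sum of roots is -b.
    return -(r1 + r2)

# Both roots rational -> b is rational.
assert b_from_roots(sp.Rational(1, 2), sp.Rational(3)).is_rational is True

# One rational root and one irrational root -> b = -(1 + sqrt(2)) is
# irrational, so a rational b cannot produce such a pair of roots.
b = b_from_roots(sp.Integer(1), sp.sqrt(2))
assert b.is_rational is False
```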
https://stats.stackexchange.com/questions/343550/backpropagation-partial-derivatives
# Backpropagation - partial derivatives I am currently reading the Neural networks and deep learning book by Michael Nielsen. I have a question regarding the backpropagation chapter: Background: He explains the influence of a neuron on the cost function by saying that there sits a demon in the neuron. The demon sits at the $j^{th}$ neuron in layer $l$. As the input to the neuron comes in, the demon messes with the neuron's operation. It adds a little change $\Delta z_j^l$ to the neuron's weighted input, so that instead of outputting $\sigma(z_j^l)$, the neuron instead outputs $\sigma (z_j^l+\Delta z_j^l)$. This change propagates through later layers in the network, finally causing the overall cost to change by an amount $\frac{\partial C}{\partial z_j^l} \Delta z_j^l$. Furthermore, he states that if $\frac{\partial C}{\partial z_j^l}$ has a large value (either positive or negative), then the demon can lower the cost quite a bit by choosing $\Delta z_j^l$ to have the opposite sign to $\frac{\partial C}{\partial z_j^l}$. By contrast, if $\frac{\partial C}{\partial z_j^l}$ is close to zero, then the demon can't improve the cost much at all by perturbing the weighted input $z_j^l$. So far, I understand this. $\frac{\partial C}{\partial z_j^l}$ quantifies the influence of $z_j^l$ on the cost function $C$. Question: However, afterwards he states that there is a heuristic sense in which $\frac{\partial C}{\partial z_j^l}$ is a measure of the error in the neuron. Why is $\frac{\partial C}{\partial z_j^l}$ a measure of error in the neuron? I thought that it is just the influence of the neuron output on the cost function. • Wouldn't that imply that influence equals error? • Aren't there cases in which we want certain neurons to have greater influence on the cost function? The expression $\frac{\partial C}{\partial z_j^l}$ represents how much a change in the weighted input of neuron $j$ in layer $l$ influences the cost $C$.
You are right that this is not equal to the absolute error caused by that neuron.¹ ¹ For example, take $f(x) = |x|$: this function has a derivative of the same magnitude everywhere (+1 or -1 on the positive and negative parts, respectively), regardless of how far from the minimum you are.
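The demon's perturbation can be simulated numerically for a single sigmoid neuron with quadratic cost (a toy setup of my own, not Nielsen's full network): the cost change from a small $\Delta z$ matches $\frac{\partial C}{\partial z}\,\Delta z$, and choosing $\Delta z$ with the opposite sign to the gradient lowers the cost.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cost(z, target):
    # Quadratic cost of a single sigmoid neuron's output.
    return 0.5 * (sigmoid(z) - target) ** 2

z, target = 0.8, 0.0

# Analytic gradient: dC/dz = (sigma(z) - target) * sigma'(z).
grad = (sigmoid(z) - target) * sigmoid(z) * (1.0 - sigmoid(z))

# The demon adds a small Delta z; the resulting cost change is
# approximately grad * dz, as the finite difference confirms.
dz = 1e-5
numeric = (cost(z + dz, target) - cost(z, target)) / dz
assert abs(numeric - grad) < 1e-5

# A step against the sign of the gradient lowers the cost.
assert cost(z - 0.1 * math.copysign(1, grad), target) < cost(z, target)
```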
http://math.stackexchange.com/questions/126518/plot-z-i-z-i-16-on-the-complex-plane/126524
# Plot $|z - i| + |z + i| = 16$ on the complex plane

Plot $|z - i| + |z + i| = 16$ on the complex plane. Conceptually I can see what is going on. I am going to be drawing the set of points whose combined distance to $i$ and $-i$ is $16$, which will form an ellipse. I was having trouble getting the equation of the ellipse algebraically. I get to the point: $x^2 + (y - 1) ^2 + 2\sqrt{x^2 + (y - 1)^2}\sqrt{x^2 + (y + 1)^2} + x^2 + (y+ 1)^2 = 256$ $2x^2 + 2y^2 + 2\sqrt{x^2 + (y - 1)^2}\sqrt{x^2 + (y + 1)^2} = 254$ It seems like I'm doing something the hard way. - The set of points with a constant sum of distances from two given points is an ellipse. A derivation of the equation of the ellipse, given the two points and the sum of distances, is shown e.g. in this question. – Martin Sleziak Mar 31 '12 at 9:51

Maybe there's a quicker way. The equation $|z-i| + |z+i| = 16$ is equivalent to $$\sqrt{x^2 + (y-1)^2} = 16 - \sqrt{x^2 + (y+1)^2}.$$ Squaring both sides you obtain $$x^2 + (y-1)^2 = 256 - 32\sqrt{x^2+(y+1)^2} + x^2 + (y+1)^2.$$ Some terms cancel out, hence you get $$8\sqrt{x^2 + (y+1)^2} = 64 + y,$$ and $$64(x^2 + (y+1)^2) = 64^2 + 128y + y^2.$$ Finally $$64x^2 + 63y^2 = 64 \cdot 63.$$ Now it is easy to write the ellipse equation $$\frac{x^2}{63} + \frac{y^2}{64} = 1.$$ -

Just keep going: $$2x^2+2y^2 + 2\sqrt{x^2 + (y - 1)^2}\sqrt{x^2 + (y + 1)^2} = 254$$ $$\sqrt{x^2 + (y - 1)^2}\sqrt{x^2 + (y + 1)^2} = 127 - (x^2+y^2)$$ $$(x^2 + (y - 1)^2)(x^2 + (y + 1)^2) = (127 - (x^2+y^2))^2$$ $$x^4 + x^2[(y - 1)^2 + (y + 1)^2] + [(y+1)(y-1)]^2 = 127^2 -2\cdot127\cdot(x^2+y^2)+ (x^2+y^2)^2$$ $$x^4 + 2x^2y^2 +2x^2 + (y^2-1)^2 = 127^2 -2\cdot127\cdot(x^2+y^2)+ x^4+y^4+2x^2y^2$$ $$x^4 + 2x^2y^2 +2x^2 + y^4-2y^2+1 = 127^2 -2\cdot127\cdot(x^2+y^2)+ x^4+y^4+2x^2y^2$$ $$2x^2 -2y^2+1 = 127^2 -2\cdot127\cdot(x^2+y^2)$$ $$x^2(2+2\cdot127) +y^2(-2+2\cdot127) = 127^2 -1$$ $$256x^2 +252y^2 = 16128$$ $$\frac {x^2} {63} +\frac {y^2} {64} = 1$$ - 177 should be 127 – Mark Bennet Mar 31 '12 at 9:52
@MarkBennet, indeed. Thanks. – davin Mar 31 '12 at 9:59

[plot omitted] from here, where I used $\sqrt{x^2 + (y-1)^2} = 16 - \sqrt{x^2 + (y+1)^2}$ from the currently accepted answer. - -1 because this is not the plot of the complex equation of the question – miracle173 Mar 31 '12 at 11:48 @miracle173, why? – draks ... Apr 5 '12 at 7:34 This is $|z|=8$ – Emre Apr 5 '12 at 7:51 @Emre it's not, it's an ellipse (with a flattening factor $g=1-\frac {b}{a}=\frac{1}{64}$)! See the link(tinyurl.com/clunqaw)! – draks ... Apr 5 '12 at 10:13 ok, it is an ellipse and i revoke the -1 – miracle173 Apr 5 '12 at 21:02

I think you are on the right track. Wolframalpha also doesn't have a "simpler form" of the algebraic equation. (see http://www.wolframalpha.com/input/?i=%28%7Cx%2Biy-i%7C%2B%7Cx%2Biy%2Bi%7C%29%5E2%3D256) Maybe this question is meant to be done on computer. Usually questions done by hand will ask you to "sketch" the graph. -

Sketching the graph gives a centrally symmetric ellipse with the imaginary axis as the major axis. The length of half the major axis is clearly 8 (square 64), and Pythagoras gives the square of half the minor axis as 63. This is reflected in the answer obtained by algebraic manipulation. $$2x^2+2y^2 + 2\sqrt{x^2 + (y - 1)^2}\sqrt{x^2 + (y + 1)^2} = 254$$ $$\sqrt{(x^2 + y^2 + 1) - 2y}\sqrt{(x^2 + y^2 + 1) + 2y} = 127 - x^2 - y^2$$ Square both sides: $$((x^2 + y^2 + 1) - 2y)((x^2 + y^2 + 1) + 2y) = (127 - x^2 - y^2)^2$$ $$(x^2+y^2+1)^2-4y^2 = 127^2+x^4+y^4-254x^2-254y^2+2x^2y^2$$ Cancelling as we go and gathering terms in the obvious way: $$256x^2+252y^2=127^2-1=128 \times 126$$ (difference of two squares) The divisions now become easy and we reach the canonical form: $$\frac {x^2} {63} + \frac {y^2} {64} = 1$$ - Don't we all love algebraic solutions...
The ellipse has an equation of the form $$(\frac {x} {a})^2 + (\frac {y} {b})^2 = 1$$ The foci lie on the y axis, therefore • a is a leg of a right triangle whose other leg is 1 and whose hypotenuse is 16/2: $$a = \sqrt{8^2-1}$$ • b is 16/2. The equation of the ellipse is $$(\frac {x} {\sqrt{63}})^2 + (\frac {y} {8})^2 = 1$$ Or, to write it as above $$\frac {x^2} {63} + \frac {y^2} {64} = 1$$ -
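The final equation can be sanity-checked numerically: parametrize the ellipse $\frac{x^2}{63} + \frac{y^2}{64} = 1$, read each point as a complex number, and confirm the distance sum to $\pm i$ is 16 (a verification sketch, not part of any answer above).

```python
import math

# Parametrize x^2/63 + y^2/64 = 1 and check |z - i| + |z + i| = 16.
for k in range(360):
    t = 2 * math.pi * k / 360
    z = complex(math.sqrt(63) * math.cos(t), 8 * math.sin(t))
    assert abs(abs(z - 1j) + abs(z + 1j) - 16) < 1e-9
```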
http://mathhelpforum.com/algebra/221206-i-m-pre-calc-but-have-forgotten-some-basic-algebra-help.html
# Math Help - I'm in Pre-Calc but have forgotten some Basic Algebra. Help? 1. ## I'm in Pre-Calc but have forgotten some Basic Algebra. Help? So I am currently in Pre-Calc and we first have to do a review of Algebra I, which I forgot most of. This assignment is effort only so I could easily just write garbage and get an A but I want to remember this. Could you just explain how I'd do a problem, not solve it for me. Links to problems (got an error when trying to insert picture) 1. http://i.imgur.com/uEyR5Xr.jpg 2. http://i.imgur.com/FUUQ6SY.jpg 3. http://i.imgur.com/0O0XF5H.jpg 4. http://i.imgur.com/xXcinfc.jpg 2. ## Re: I'm in Pre-Calc but have forgotten some Basic Algebra. Help? Rules of exponents: $a^ma^n= a^{m+n}$ $(a^m)^n= a^{mn}$ $\frac{1}{a^n}= a^{-n}$
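A quick numeric sanity check of these exponent rules (plain Python; the base and exponents are arbitrary choices of mine, picked as powers of two so floating-point equality is exact):

```python
a, m, n = 2.0, 5, 3

# a^m * a^n = a^(m+n)
assert a**m * a**n == a**(m + n)

# (a^m)^n = a^(m*n)
assert (a**m)**n == a**(m * n)

# 1 / a^n = a^(-n)
assert 1 / a**n == a**(-n)
```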
https://fr.maplesoft.com/support/help/maplesim/view.aspx?path=MultivariatePowerSeries/Truncate&L=F
MultivariatePowerSeries/Truncate - Maple Help MultivariatePowerSeries Truncate Truncate a power series or univariate polynomial over power series Calling Sequence Truncate(p, d) Truncate(u, d) Parameters p - power series generated by this package u - univariate polynomial over power series generated by this package d - (optional) non-negative integer Description • The command Truncate(p, d) returns the sum of all homogeneous parts of p of degree at most d. In other words, Truncate(p, d) returns the natural image of the power series p modulo the $(d+1)^{\mathrm{th}}$ power of the maximal ideal of the ring of power series. • If d is greater than the current precision of p, then the necessary extra terms are computed. • If d is not specified, all currently computed terms are used and no extra ones are computed. In other words, the default value of d is the precision of p. • The command Truncate(u, d) returns the polynomial obtained from u by replacing each coefficient p with the sum of the homogeneous parts of p of degree at most d. In other words, each coefficient is replaced by its image modulo the $(d+1)^{\mathrm{th}}$ power of the maximal ideal of the ring of power series. • If d is not specified, the precision of each coefficient is used. If the coefficients are currently known to different precisions, they will consequently be truncated at different degrees. • When using the MultivariatePowerSeries package, do not assign anything to the variables occurring in the power series and univariate polynomials over power series. If you do, you may see invalid results. Examples > with(MultivariatePowerSeries): We define the geometric power series in the variables $x$ and $y$. > a := GeometricSeries([x, y]) $a := \left[\text{PowerSeries of } \frac{1}{1-x-y} : 1 + x + y + \dots\right]$ (1) It is initially computed only to low precision.
> Truncate(a) $1 + x + y$ (2) If we update its precision, then the Truncate command returns more terms. > UpdatePrecision(a, 3) $\left[\text{PowerSeries of } \frac{1}{1-x-y} : 1 + x + y + x^2 + 2xy + y^2 + x^3 + 3x^2y + 3xy^2 + y^3 + \dots\right]$ (3) > Truncate(a) $x^3 + 3x^2y + 3xy^2 + y^3 + x^2 + 2xy + y^2 + x + y + 1$ (4) We can get lower precision by specifying the truncation degree. > Truncate(a, 2) $x^2 + 2xy + y^2 + x + y + 1$ (5) We define a univariate polynomial over power series involving $a$. > f := UnivariatePolynomialOverPowerSeries([GeometricSeries(x), GeometricSeries(y), a], z) $f := \left[\text{UnivariatePolynomialOverPowerSeries: } (1 + x + \dots) + (1 + y + \dots)\,z + (1 + x + y + x^2 + 2xy + y^2 + x^3 + 3x^2y + 3xy^2 + y^3 + \dots)\,z^2\right]$ (6) The constant and linear coefficients of $z$ are known to a different precision than the quadratic coefficient. By default, the Truncate command will return all known coefficients regardless of degree. > Truncate(f) $(x^3 + 3x^2y + 3xy^2 + y^3 + x^2 + 2xy + y^2 + x + y + 1)\,z^2 + (1 + y)\,z + 1 + x$ (7) If we specify the truncation degree as 2, then more terms of the constant and linear coefficients are computed and some terms of the quadratic coefficient are omitted.
> Truncate(f, 2) $(x^2 + 2xy + y^2 + x + y + 1)\,z^2 + (y^2 + y + 1)\,z + x^2 + x + 1$ (8) Compatibility • The MultivariatePowerSeries[Truncate] command was introduced in Maple 2021.
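The same truncation can be sketched outside Maple, e.g. in Python with sympy (a rough analogue of my own, not the Maple implementation): for the geometric series $\frac{1}{1-x-y}$, the homogeneous part of degree $n$ is $(x+y)^n$, so Truncate(a, d) corresponds to summing those parts up to degree d.

```python
import sympy as sp

x, y = sp.symbols('x y')

def truncate_geometric(d):
    # Sum of the homogeneous parts of 1/(1 - x - y) of degree <= d:
    # the degree-n homogeneous part of this geometric series is (x + y)^n.
    return sp.expand(sum((x + y)**n for n in range(d + 1)))

# Matches the Maple output of Truncate(a, 2) above.
assert truncate_geometric(2) == sp.expand(x**2 + 2*x*y + y**2 + x + y + 1)

# Raising the degree adds exactly the next homogeneous part, (x + y)^3,
# as in Truncate(a) after UpdatePrecision(a, 3).
assert truncate_geometric(3) - truncate_geometric(2) == sp.expand((x + y)**3)
```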
https://homework.zookal.com/questions-and-answers/for-each-case-determine-the-change-in-entropy-for-the-367930369
# Question: For each case determine the change in entropy for the surroundings and the universe ###### Question details For each case determine the change in entropy for the surroundings and the universe, when 1 mole of a diatomic gas expands from its initial volume of 5 L and 31.5 °C to: CASE (a) a final volume of 25 L reversibly and isothermally. CASE (b) irreversibly and isothermally against an external pressure of 1 atm. CASE (c) a final volume of 25 L reversibly and adiabatically. Yes, it is zero, but you have to demonstrate it. CASE (d) irreversibly and adiabatically against an external pressure of 1 atm. PLEASE, complete answers from CASE A to D.
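As a sketch of cases (a) and (b) using standard ideal-gas results (assuming R = 8.314 J/(mol·K); cases (c) and (d) still need the adiabatic temperature change and are not worked here):

```python
import math

R = 8.314             # J/(mol K), assumed gas constant
n = 1.0               # mol
T = 31.5 + 273.15     # K
V1, V2 = 5e-3, 25e-3  # m^3 (5 L and 25 L)

# Entropy is a state function, so for any isothermal path between the
# same two states: dS_sys = n R ln(V2/V1).
dS_sys = n * R * math.log(V2 / V1)   # ~13.4 J/K

# Case (a): reversible isothermal. The surroundings release q_rev at the
# same temperature, so their entropy change exactly cancels the system's.
dS_surr_a = -dS_sys
dS_univ_a = dS_sys + dS_surr_a       # exactly 0

# Case (b): irreversible isothermal expansion against p_ext = 1 atm.
# For an ideal gas dU = 0, so q = w = p_ext * (V2 - V1).
p_ext = 101325.0                     # Pa
q = p_ext * (V2 - V1)                # J
dS_surr_b = -q / T
dS_univ_b = dS_sys + dS_surr_b       # positive, as the second law requires

assert abs(dS_univ_a) < 1e-12
assert dS_univ_b > 0
```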
https://export.arxiv.org/abs/2103.12103
physics.app-ph # Title: Interplay between powder catchment efficiency and layer height in self-stabilized laser metal deposition Abstract: In laser metal deposition (LMD) the height of the deposited track can vary within and between layers, causing significant deviations during the process evolution. This variation has been mainly attributed to temperature accumulation within the part due to the deposition trajectory and part geometry. Previous works have shown that in certain conditions a self-stabilizing process occurs, maintaining a deposit height and a constant standoff distance between the part and the deposition nozzle. In this work, the link between the powder catchment efficiency and the deposition height stability is illustrated. To this purpose, an experimental system consisting of an online measurement of the specimen weight and a coaxial optical triangulation monitoring of the layer height was employed. Through analytical modeling the inline height measurement and the process parameters could be used for estimating the deposition efficiency in real-time, which was confirmed by the direct mass measurements. The data were used for understanding how the powder catchment efficiency varied and stabilized as the process proceeded in different conditions. The results confirm that the track height variations are caused mainly by powder catchment efficiency, which is governed by the melt pool relative position with respect to the powder cone and the laser beam. For a given set of parameters, the standoff distance can be estimated to achieve the highest powder catchment efficiency and a regular height through the build direction. Subjects: Applied Physics (physics.app-ph) Cite as: arXiv:2103.12103 [physics.app-ph] (or arXiv:2103.12103v1 [physics.app-ph] for this version)
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-molecular-science-5th-edition/chapter-11-chemical-kinetics-rates-of-reactions-questions-for-review-and-thought-topical-questions-page-523b/32b
## Chemistry: The Molecular Science (5th Edition) $2.5\times10^{-4}\ \mathrm{mol\ L^{-1}\ min^{-1}}$ Rate $=k[NH_{3}]^{0}=2.5\times10^{-4}\ \mathrm{mol\ L^{-1}\ min^{-1}}$. This is a zero-order reaction, so its rate is independent of the concentration of the reactant.
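Because the rate is constant, the concentration falls linearly: the standard zero-order integrated rate law is $[A]_t = [A]_0 - kt$ (the starting concentration below is an illustrative value of mine, not from the problem):

```python
k = 2.5e-4  # mol L^-1 min^-1, zero-order rate constant

def rate(conc):
    # Zero order: the rate does not depend on the concentration at all.
    return k

assert rate(0.1) == rate(1.0) == k

# Integrated zero-order law: [A]_t = [A]_0 - k*t, so the time to consume
# an (illustrative) 0.10 mol/L of reactant is [A]_0 / k = 400 min.
A0 = 0.10
t = A0 / k
assert abs(t - 400.0) < 1e-9
```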
https://www.physicsforums.com/threads/finding-the-solution-set-for-an-inequality.361924/
Finding the Solution Set for an Inequality 1. Dec 9, 2009 Learning_Math 1. I need to find the solution set for |(3x+2)/(x+3)|>3. 3. When I solve the inequality (3x+2)/(x+3)>3, I get 2>9, which is clearly false. When I solve the inequality (3x+2)/(x+3)> -3, I come up with the solution set (-inf, -11/6). My teacher is saying that the solution set is (-inf, -3)U(-3, -11/6). I just can't figure out how to get to that solution. I can't figure out where that -3 is coming from. In his sparse notes on my assignment, he says there are two subcases for each of the two cases in number 1. Those are when x < -3 and when x > -3. I just can't figure out how to use these cases. 2. Dec 9, 2009 The first step for solving $$|X| > a$$, for any expression X and number a, is to eliminate the absolute values with this: $$X < -a \text{ or } X > a$$ If you need to solve an inequality like (this is entirely made up for illustration) $$\frac x {x+1} > 5$$ \begin{align*} \frac x {x+1} - 5 & > 0 \\ \frac x {x+1} - \frac{5(x+1)}{x+1} & > 0 \\ \frac{x - (5x+5)}{x+1} & > 0 \\ \frac{-4x - 5}{x+1} & > 0 \\ \frac{(-1)(4x+5)}{x+1} & > 0\\ \frac{4x+5}{x+1} & < 0 \end{align*} I passed from the next-to-last to the last line by multiplying by (-1). These steps let you avoid the all-too-common problem of multiplying both sides of an inequality by a variable term when you don't know whether it's positive or negative. 3. Dec 9, 2009 Learning_Math Thanks for that setup. I will remember it for future use. I have also been completely overlooking that (3x+2)/(x+3) is undefined when x = -3. So that is how I get (-inf, -3)U(-3, -11/6) instead of just (-inf, -11/6).
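The teacher's solution set (-inf, -3)U(-3, -11/6) can be checked numerically by sampling points on either side of the boundaries (a quick sanity check, not a proof):

```python
def in_solution_set(x):
    # |(3x+2)/(x+3)| > 3; note x = -3 is excluded (division by zero).
    return abs((3*x + 2) / (x + 3)) > 3

points_inside = (-100.0, -4.0, -2.5, -2.0)   # in (-inf, -3) U (-3, -11/6)
points_outside = (-1.8, -1.5, 0.0, 10.0)     # to the right of -11/6 ~ -1.833

assert all(in_solution_set(x) for x in points_inside)
assert not any(in_solution_set(x) for x in points_outside)
```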
https://astronomy.stackexchange.com/questions/32015/how-did-early-astronomers-measure-distances
# How Did Early Astronomers Measure Distances? Prior to the era of radar and other forms of radio/RF/EM ranging, what approaches, methods, and techniques did early astronomers (e.g., Kepler, Cassini, Copernicus) use to measure the distance(s) from Earth to various objects (such as: Mars, Venus, Alpha Centauri, etc.)? Is there an especially informative article (say, Scientific American, Sky) or book that explains their efforts and results (e.g., how accurate were their computations?)? • References and citations are warmly welcomed! They were not very successful, but not for want of trying. The ancient Greeks tried to measure the angle between the sun and the moon when the moon is half full. They got a value of 87 degrees (when really it is 89.85). From this they could estimate the relative distances of the Moon and the sun. Further observations of the moon from different locations on Earth could, in principle, find the relative distance to the moon in terms of the radius of the Earth. And the radius of the Earth could be determined by noting the different lengths of shadows at the same time in different locations. Several Greeks attempted to put these together to find the distance of the sun, and while their methods were geometrically correct, their observations were not accurate enough to achieve an accurate value for the distance of the sun. Typically their estimates were only about 5% of the true distance. Attempts based on similar methods were made in other centres of astronomy, in China, India and the Middle East. They all tended to dramatically underestimate the distance. Even after the invention of the telescope allowed the observations to be done more accurately, the distance was usually underestimated. Only when careful measurements of the transit of Venus were made was Jérôme Lalande able to calculate the distance of Venus, and use Kepler's laws to derive the distance of the sun and other planets.
Kepler had created a model of the solar system that predicted the relative distances of each planet very successfully, but not the absolute distances. Nearly all other distances are based on the Earth-sun distance. The distances of the planets can be found using Kepler's laws. The actual distance of stars was unknown until we were able to measure the slight change in position of a star as the Earth orbits the sun. If you know the distance to the sun, you can calculate the distance to a star. Prior to the observation of this "stellar parallax", estimates of the distance to stars were essentially guesses, and usually significant underestimates. So prior to radar ranging, the methods were geometric, and based on a ladder of measurements. The size of the Earth was used to determine the distance to the moon and sun. Then the distance to the sun was used to find the distance to the stars. For more distant stars there can be considerable uncertainty about the distance. If a star is too far away to show parallax, then its true distance will not be known with any accuracy.
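The half-moon method reduces to simple trigonometry — the Sun/Moon distance ratio is $1/\cos\theta$ — so the consequence of measuring 87° instead of 89.85° can be reproduced directly, along with the parallax rung of the ladder (a sketch of the geometry, not of any historical computation):

```python
import math

def sun_moon_ratio(theta_deg):
    # When the Moon is exactly half full, the Earth-Moon-Sun angle is 90
    # degrees, so d_sun / d_moon = 1 / cos(theta), where theta is the
    # Moon-Earth-Sun angle measured from Earth.
    return 1.0 / math.cos(math.radians(theta_deg))

ancient = sun_moon_ratio(87.0)    # the Greeks' measured angle
modern = sun_moon_ratio(89.85)    # the value quoted above

assert round(ancient) == 19       # the classical estimate: Sun ~19x farther
assert round(modern) == 382       # vs. the true ratio of ~390

# Stellar parallax: distance in parsecs = 1 / parallax in arcseconds.
def parsecs(parallax_arcsec):
    return 1.0 / parallax_arcsec

# Proxima Centauri's parallax of roughly 0.768" gives about 1.3 pc.
assert abs(parsecs(0.768) - 1.302) < 0.01
```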
http://openstudy.com/updates/5109f84ee4b0d9aa3c465a7d
## Dido525 2 years ago Set up an integral for the volume of the solid obtained by rotating the region bounded by the parabolas x=8y-2y^2, x= 4y-y^2 about the x-axis. • This Question is Closed
1. Dido525 • 2 years ago I have no idea how to do this :( .
2. tkhunny • 2 years ago The maximum x-value for each is at y = 2. The intersections of the two are at (0,0) and (0,4). Would you like to chop it up into 3 pieces or simply apply Pappus' Theorem?
3. Dido525 • 2 years ago I can't use Pappus' theorem.
4. tkhunny • 2 years ago Why not? Specifically proscribed? Not yet presented? Or just not able?
5. Dido525 • 2 years ago We won't learn it.
6. soty2013 • 2 years ago @saifoo.khan
7. Dido525 • 2 years ago First of all, I need to sketch this thing. I can't figure out what the thing looks like.
8. tkhunny • 2 years ago Think it up yourself and surprise everyone! 3 pieces then. 1) Biggest Piece x from 4 to 8 and y from $$2 - \sqrt{\dfrac{8-x}{2}}$$ to $$2 + \sqrt{\dfrac{8-x}{2}}$$ 2) Top Piece x from 0 to 4 and y from $$2 + \sqrt{4-x}$$ to $$2 + \sqrt{\dfrac{8-x}{2}}$$ 3) Bottom Piece x from 0 to 4 and y from $$2 - \sqrt{\dfrac{8-x}{2}}$$ to $$2 - \sqrt{4-x}$$ Go!
9. tkhunny • 2 years ago Why not flip it around to where you are more familiar and solve that problem. y = 8x-2x^2 and y= 4x-x^2 and wrap it around the y-axis!
10.
Dido525 • 2 years ago Best Response You've already chosen the best response. 0 But that changes the entire question. 11. tkhunny • 2 years ago Best Response You've already chosen the best response. 3 No. The question is to calculate the volume. Unique answers do not care how you find them. It MUST be considered a valid methodology. 12. tkhunny • 2 years ago Best Response You've already chosen the best response. 3 Anyway, you can flip it around just to get a look at it, if you can't relate to the present condition. 13. Dido525 • 2 years ago Best Response You've already chosen the best response. 0 But if you switch the x and y's that should change the entire shape of the graph. 14. tkhunny • 2 years ago Best Response You've already chosen the best response. 3 That's silly. Why would you think that? Absolutely not the case. It is the EXACT SAME problem. 15. Dido525 • 2 years ago Best Response You've already chosen the best response. 0 |dw:1359609779642:dw| Something like that? REALLY bad sketch though. 16. tkhunny • 2 years ago Best Response You've already chosen the best response. 3 |dw:1359609887178:dw| A little better. You crossed the x-axis. That's no good. 17. Dido525 • 2 years ago Best Response You've already chosen the best response. 0 yeah I know. I felt lazy. 18. tkhunny • 2 years ago Best Response You've already chosen the best response. 3 Well, go ahead and do it, then. I gave you all the limits up above. Good luck. gtg 19. Dido525 • 2 years ago Best Response You've already chosen the best response. 0 Thanks! 20. Not the answer you are looking for? Search for more explanations. • Attachments: Find more explanations on OpenStudy ##### spraguer (Moderator) 5→ View Detailed Profile 23 • Teamwork 19 Teammate • Problem Solving 19 Hero • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
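Whatever setup one writes down from the three pieces above, it can be cross-checked with the shell method, which avoids the split entirely: a shell at height $y$ has radius $y$ and width $(8y-2y^2)-(4y-y^2)=4y-y^2$, so $V=2\pi\int_0^4 y(4y-y^2)\,dy$. A quick numerical sketch of this check (an addition for the reader, not part of the original thread):

```python
import math
from fractions import Fraction

# Shell method about the x-axis:
#   V = 2*pi * Integral_0^4 y * [(8y - 2y^2) - (4y - y^2)] dy
#     = 2*pi * Integral_0^4 (4y^2 - y^3) dy
def shell_volume():
    # exact antiderivative 4y^3/3 - y^4/4, evaluated with Fractions
    antideriv = lambda y: Fraction(4, 3) * y**3 - Fraction(1, 4) * y**4
    return 2 * math.pi * float(antideriv(Fraction(4)) - antideriv(Fraction(0)))

# Independent midpoint-rule approximation of the same integral.
def shell_volume_numeric(n=100_000):
    dy = 4.0 / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * dy
        total += y * ((8*y - 2*y**2) - (4*y - y**2)) * dy
    return 2 * math.pi * total

print(shell_volume())  # 128*pi/3, approximately 134.04
```

Both evaluations agree, which gives a target value for checking the three-piece washer setup.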
http://www.maplesoft.com/support/help/Maple/view.aspx?path=LinearAlgebra/Generic/Determinant
LinearAlgebra[Generic] - Maple Help

LinearAlgebra[Generic][Determinant] - compute the determinant of a square Matrix

Calling Sequence

Determinant[R](A)
Determinant[R](A, method=BerkowitzAlgorithm)
Determinant[R](A, method=MinorExpansion)
Determinant[R](A, method=BareissAlgorithm)
Determinant[R](A, method=GaussianElimination)

Parameters

R - the domain of computation
A - square Matrix of values in R

Description

• The parameter A must be a square (n x n) Matrix of values from R.
• The (indexed) parameter R, which specifies the domain of computation, a commutative ring, must be a Maple table/module which has the following values/exports:
  R[0] : a constant for the zero of the ring R
  R[1] : a constant for the (multiplicative) identity of R
  R[+] : a procedure for adding elements of R (n-ary)
  R[-] : a procedure for negating and subtracting elements of R (unary and binary)
  R[*] : a procedure for multiplying elements of R (binary and commutative)
  R[=] : a boolean procedure for testing if two elements of R are equal
• The optional argument method=... specifies the algorithm to be used. The specific algorithms are as follows:
  • method=MinorExpansion directs the code to use minor expansion. This algorithm uses O(n·2^n) arithmetic operations in R.
  • method=BerkowitzAlgorithm directs the code to use the Berkowitz algorithm. This algorithm uses O(n^4) arithmetic operations in R.
  • method=BareissAlgorithm directs the code to use the Bareiss algorithm. This algorithm uses O(n^3) arithmetic operations in R but requires exact division, i.e., it requires R to be an integral domain with the following operation defined:
    R[Divide] : a boolean procedure for dividing two elements of R, where R[Divide](a,b,'q') outputs true if b | a and optionally assigns q the quotient such that a = b·q.
  • method=GaussianElimination directs the code to use the Gaussian elimination algorithm. This algorithm uses O(n^3) arithmetic operations in R but requires R to be a field, i.e., the following operation must be defined:
    R[/] : a procedure for dividing two elements of R
• If the method is not given and the operation R[Divide] is defined, then the Bareiss algorithm is used; otherwise, if the operation R[/] is defined, then Gaussian elimination is used; otherwise, the Berkowitz algorithm is used.

Examples

> with(LinearAlgebra[Generic]):
> Z[0], Z[1], Z[`+`], Z[`-`], Z[`*`], Z[`=`] := 0, 1, `+`, `-`, `*`, `=`:
> A := Matrix([[2,1,4],[3,2,1],[0,0,5]]);

                                 [2  1  4]
                            A := [3  2  1]
                                 [0  0  5]

> Determinant[Z](A);

                                    5

> Q[0], Q[1], Q[`+`], Q[`-`], Q[`*`], Q[`/`], Q[`=`] := 0, 1, `+`, `-`, `*`, `/`, `=`:
> A := Matrix([[2,1,4,6],[3,2,1,7],[0,0,5,1],[0,0,3,8]]);

                                 [2  1  4  6]
                                 [3  2  1  7]
                            A := [0  0  5  1]
                                 [0  0  3  8]

> Determinant[Q](A);

                                   37

> Determinant[Q](A, method=BerkowitzAlgorithm);

                                   37
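For readers without Maple, the Bareiss fraction-free elimination described above can be sketched in a few lines of Python over the integers, where exact division is available. This is an illustration of the algorithm, not Maple's actual implementation:

```python
def bareiss_det(M):
    """Determinant by Bareiss fraction-free elimination.

    Works over any integral domain with exact division (here Python ints),
    using O(n^3) ring operations; every // below is an exact division,
    guaranteed by Sylvester's identity.
    """
    A = [list(row) for row in M]
    n = len(A)
    sign = 1
    prev = 1  # pivot from the previous elimination step
    for k in range(n - 1):
        if A[k][k] == 0:
            # find a nonzero pivot below and swap rows (flips the sign)
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0  # whole column is zero, so the matrix is singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[-1][-1]

# The two matrices from the Maple examples:
print(bareiss_det([[2, 1, 4], [3, 2, 1], [0, 0, 5]]))                          # 5
print(bareiss_det([[2, 1, 4, 6], [3, 2, 1, 7], [0, 0, 5, 1], [0, 0, 3, 8]]))   # 37
```

The key point, as in Maple's BareissAlgorithm method, is that all intermediate values stay in the ring: no fractions ever appear.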
http://www.map.mpim-bonn.mpg.de/index.php?title=Geometric_3-manifolds&oldid=3879
# Geometric 3-manifolds

## 1 Introduction

Let a group $G$ act on a manifold $X$ by homeomorphisms. A $(G,X)$-manifold is a manifold $M$ with a $(G,X)$-atlas, that is, a collection $\{(U_i,\phi_i): i\in I\}$ of homeomorphisms
$$\phi_i: U_i \rightarrow \phi_i(U_i) \subset X$$
onto open subsets of $X$ such that all coordinate changes
$$\gamma_{ij} = \phi_i \phi_j^{-1}: \phi_j(U_i \cap U_j) \rightarrow \phi_i(U_i \cap U_j)$$
are restrictions of elements of $G$.

Fix a basepoint $x_0 \in M$ and a chart $(U_0,\phi_0)$ with $x_0 \in U_0$, and let $\pi: \widetilde{M} \rightarrow M$ be the universal covering. These data determine the developing map
$$D: \widetilde{M} \rightarrow X,$$
which agrees with the analytic continuation of $\phi_0\pi$ along each path, in a neighborhood of the path's endpoint. If we change the initial data $x_0$ and $(U_0,\phi_0)$, the developing map $D$ changes by composition with an element of $G$.

If $\sigma \in \pi_1(M,x_0)$, analytic continuation along a loop representing $\sigma$ gives a chart $\phi_0^\sigma$ that is comparable to $\phi_0$, since they are both defined at $x_0$. Let $g_\sigma$ be the element of $G$ such that $\phi_0^\sigma = g_\sigma \phi_0$. The map
$$H: \pi_1(M,x_0) \rightarrow G, \qquad H(\sigma) = g_\sigma,$$
is a group homomorphism and is called the holonomy of $M$. If we change the initial data $x_0$ and $(U_0,\phi_0)$, the holonomy homomorphism $H$ changes by conjugation with an element of $G$.

A $(G,X)$-manifold is complete if the developing map $D: \widetilde{M} \rightarrow X$ is surjective. [Thurston1997] Section 3.4

Definition 1.1. A model geometry $(G,X)$ is a smooth manifold $X$ together with a Lie group $G$ of diffeomorphisms of $X$, such that:

a) $X$ is connected and simply connected;
b) $G$ acts transitively on $X$, with compact point stabilizers;
c) $G$ is not contained in any larger group of diffeomorphisms of $X$ with compact point stabilizers;
d) there exists at least one compact $(G,X)$-manifold.

[Thurston1997] Definition 3.8.1

A 3-manifold is said to be a geometric manifold if it is a $(G,X)$-manifold for a 3-dimensional model geometry $(G,X)$.

## 2 Construction and examples

Theorem 2.1. There are eight 3-dimensional model geometries:

- the round sphere: $X = S^3$, $G = O(4)$;
- Euclidean space: $X = \mathbb{R}^3$, $G = \mathbb{R}^3 \rtimes O(3)$;
- hyperbolic space: $X = H^3$, $G = PSL(2,\mathbb{C}) \rtimes \mathbb{Z}/2\mathbb{Z}$;
- $X = S^2 \times \mathbb{R}$, $G = O(3) \times (\mathbb{R} \rtimes \mathbb{Z}/2\mathbb{Z})$;
- $X = \mathbb{H}^2 \times \mathbb{R}$, $G = (PSL(2,\mathbb{R}) \rtimes \mathbb{Z}/2\mathbb{Z}) \times (\mathbb{R} \rtimes \mathbb{Z}/2\mathbb{Z})$;
- the universal covering of the unit tangent bundle of the hyperbolic plane: $G = X = \widetilde{PSL(2,\mathbb{R})}$;
- the Heisenberg group: $G = X = Nil = \left\{\begin{pmatrix}1&x&z\\0&1&y\\0&0&1\end{pmatrix} : x,y,z \in \mathbb{R}\right\}$;
- the 3-dimensional solvable Lie group $G = X = Sol = \mathbb{R}^2 \rtimes \mathbb{R}$, where $t \in \mathbb{R}$ acts by conjugation with $\begin{pmatrix}e^t&0\\0&e^{-t}\end{pmatrix}$.

[Thurston1997] Section 3.8

Outline of proof: Let $G'$ be the connected component of the identity of $G$, and let $G'_x$ be the stabiliser of $x \in X$. $G'$ acts transitively and $G'_x$ is a closed, connected subgroup of $SO(3)$.

Case 1: $G'_x = SO(3)$. Then $X$ has constant sectional curvature. The Cartan theorem implies that (up to rescaling) $X$ is isometric to one of $S^3$, $\mathbb{R}^3$, $H^3$.

Case 2: $G'_x \simeq SO(2)$. Let $V$ be the $G'$-invariant vector field such that, for each $x \in X$, the direction of $V_x$ is the rotation axis of $G'_x$. $V$ descends to a vector field on compact $(G,X)$-manifolds, therefore the flow of $V$ must preserve volume; in our setting this implies that the flow of $V$ acts by isometries. Hence the flowlines define a 1-dimensional foliation $\mathcal{F}$ with embedded leaves. The quotient $X/\mathcal{F}$ is a 2-dimensional manifold, which inherits a Riemannian metric such that $G'$ acts transitively by isometries. Thus $Y := X/\mathcal{F}$ has constant curvature and is (up to rescaling) isometric to one of $S^2$, $\mathbb{R}^2$, $H^2$. $X$ is a principal bundle over $Y$ with fiber $\mathbb{R}$ or $S^1$. The plane field $\tau$ orthogonal to $\mathcal{F}$ has constant curvature, hence it is either a foliation or a contact structure.

Case 2a: $\tau$ is a foliation. Thus $X$ is a flat bundle over $Y$. $Y$ is one of $S^2$, $\mathbb{R}^2$, $H^2$, hence $\pi_1 Y = 0$, which implies that $X = Y \times \mathbb{R}$.

Case 2b: $\tau$ is a contact structure. For $Y = S^2$ one would obtain for $G$ the group of isometries of $S^3$ that preserve the Hopf fibration. This is not a maximal group with compact stabilizers, thus there is no model geometry in this case. For $Y = \mathbb{R}^2$ one obtains $G = X = Nil$: namely, $G$ is the subgroup of the group of automorphisms of the standard contact structure $dz - x\,dy = 0$ on $\mathbb{R}^3$ consisting of those automorphisms which are lifts of isometries of the $x$-$y$-plane. For $Y = \mathbb{H}^2$ one obtains $G = X = \widetilde{PSL(2,\mathbb{R})}$.

Case 3: $G'_x = 1$. Then $X = G'/G'_x = G'$ is a Lie group. The only 3-dimensional unimodular Lie group which is not subsumed by one of the previous geometries is $G = Sol$.

## 3 Invariants

...

## 4 Classification/Characterization

A closed 3-manifold is called:

- irreducible, if every embedded 2-sphere bounds an embedded 3-ball;
- geometrically atoroidal, if there is no embedded incompressible torus;
- homotopically atoroidal, if there is no immersed incompressible torus.

Theorem 4.1 (Geometrization). Let $M$ be a closed, orientable, irreducible, geometrically atoroidal 3-manifold.

a) If $M$ is homotopically atoroidal, then it admits an $H^3$-geometry.
b) If $M$ is not homotopically atoroidal, then it admits (at least) one of the seven non-$H^3$-geometries.

Example 4.2 (Geometrization of mapping tori). Let $\Phi: \Sigma_g \rightarrow \Sigma_g$ be an orientation-preserving homeomorphism of the surface of genus $g$.

a) If $g = 1$, then the mapping torus $M_\Phi$ satisfies the following:
1. If $\Phi$ is periodic, then $M_\Phi$ admits an $\mathbb{R}^3$-geometry.
2. If $\Phi$ is reducible, then $M_\Phi$ contains an embedded incompressible torus.
3. If $\Phi$ is Anosov, then $M_\Phi$ admits a $Sol$-geometry.

b) If $g \ge 2$, then the mapping torus $M_\Phi$ satisfies the following:
1. If $\Phi$ is periodic, then $M_\Phi$ admits an $H^2 \times \mathbb{R}$-geometry.
2. If $\Phi$ is reducible, then $M_\Phi$ contains an embedded incompressible torus.
3. If $\Phi$ is pseudo-Anosov, then $M_\Phi$ admits an $H^3$-geometry.

## 5 Further discussion

...
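As a concrete instance of the Anosov case of Example 4.2 (an illustration added here, not part of the original page), one can take $\Phi$ to be the homeomorphism of $\Sigma_1 = T^2$ induced by the cat map:

```latex
% The linear torus homeomorphism induced by
\[
\Phi = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \in SL(2,\mathbb{Z}),
\qquad \operatorname{tr}\Phi = 3 > 2,
\]
% has real irrational eigenvalues \(\lambda^{\pm 1}\) with
% \(\lambda = (3+\sqrt{5})/2 > 1\), hence expanding and contracting
% eigendirections: \(\Phi\) is Anosov. By Example 4.2 a)3, the mapping
% torus \(M_\Phi = T^2 \times [0,1] \,/\, (x,1)\sim(\Phi x,0)\)
% admits a Sol-geometry.
```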
https://math.stackexchange.com/questions/1455993/gamma-triplication-formula-using-beta-function
# Gamma triplication formula using the Beta function

Recently I have seen that one can derive the Legendre duplication formula using the Beta function: http://mathworld.wolfram.com/LegendreDuplicationFormula.html

I am interested in deriving the triplication formula using the Beta function as well: $$\Gamma (3z)=(2 \pi)^{-1}3^{3z-1/2}\Gamma(z)\Gamma \left(z+\frac{1}{3}\right)\Gamma\left(z+\frac{2}{3}\right)$$ But I don't see how to do it, since in the article they use $m=n=z$, which immediately gives $2z$ in the denominator.

• My advice would be to split $(3n)!$ into three subproducts, depending on the remainder each term leaves when divided by $3$. – Lucian Sep 29 '15 at 14:02
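Independently of the Beta-function route being asked about, the triplication formula itself is easy to sanity-check numerically. A small sketch with Python's standard library (an addition for the reader, not an answer from the thread):

```python
import math

def triplication_rhs(z):
    """Right-hand side of the triplication formula:
    Gamma(3z) = (2*pi)^(-1) * 3^(3z - 1/2) * Gamma(z) * Gamma(z + 1/3) * Gamma(z + 2/3)
    """
    return (3**(3*z - 0.5) / (2*math.pi)
            * math.gamma(z) * math.gamma(z + 1/3) * math.gamma(z + 2/3))

# Agreement to floating-point accuracy at a few arbitrary points:
for z in (0.37, 1.25, 2.0):
    print(z, math.gamma(3*z), triplication_rhs(z))
```

At $z=2$, for instance, both sides reduce to $\Gamma(6)=120$.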
http://weblib.cern.ch/collection/ATLAS%20Conference%20Notes?ln=el&as=1
# ATLAS Conference Notes

Recent additions:

2019-11-14 07:34
Measurement of suppression of large-radius jets and its dependence on substructure in Pb+Pb at 5.02 TeV by the ATLAS detector
This note presents a measurement of the nuclear modification factor of large-radius jets in $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV Pb+Pb collisions by the ATLAS experiment. [...]
ATLAS-CONF-2019-056. - 2019.

2019-11-04 10:58
Longitudinal flow decorrelations in Xe+Xe collisions at $\sqrt{s_{\mathrm{NN}}}=5.44$ TeV with the ATLAS detector
The first measurement of longitudinal decorrelations of harmonic flow $v_n$ for $n=2$, 3 and 4 in Xe+Xe collisions at $\sqrt{s_{\mathrm{NN}}}=5.44$ TeV is obtained using the ATLAS detector at the LHC. [...]
ATLAS-CONF-2019-055. - 2019.

2019-11-04 10:52
Production of $\Upsilon(n\mathrm{S})$ mesons in Pb+Pb and $pp$ collisions at 5.02 TeV with ATLAS
The measurement of the production of the bottomonium states $\Upsilon(\mathrm{1S})$, $\Upsilon(\mathrm{2S})$ and $\Upsilon(\mathrm{3S})$ in Pb+Pb and $pp$ collisions at a center-of-mass energy per nucleon pair of $5.02~\mathrm{TeV}$ is presented. [...]
ATLAS-CONF-2019-054. - 2019. - 20 p.

2019-11-04 10:46
Measurement of azimuthal anisotropy of muons from charm and bottom hadrons in Pb+Pb collisions at $\sqrt{s_\mathrm{NN}}=5.02~\mathrm{TeV}$ with the ATLAS detector
Azimuthal anisotropies of muons from charm and bottom decays are measured in Pb+Pb collisions at $\sqrt{s_\mathrm{NN}}=5.02~\mathrm{TeV}$. [...]
ATLAS-CONF-2019-053. - 2019.

2019-11-04 10:41
Measurement of $Z$-tagged charged-particle yields in 5.02 TeV Pb+Pb and $pp$ collisions with the ATLAS detector
The yields of charged particles azimuthally balanced by a high-transverse-momentum ($p_\mathrm{T}$) $Z$ boson are measured in $pp$ and Pb+Pb collision data recorded by the ATLAS detector at the Large Hadron Collider. [...]
ATLAS-CONF-2019-052. - 2019.

2019-11-04 10:36
Measurement of non-exclusive dimuon pairs produced via $\gamma\gamma$ scattering in Pb+Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV with the ATLAS detector
Results are presented of a measurement of dimuons produced via $\gamma\gamma$ scattering processes in inelastic, non-ultra-peripheral Pb+Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV. [...]
ATLAS-CONF-2019-051. - 2019.

2019-10-17 10:59
Test of CP invariance in vector-boson fusion production of the Higgs boson in the $H\rightarrow\tau\tau$ channel in proton-proton collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector
A test of CP invariance in Higgs boson production via vector-boson fusion in the $H\rightarrow\tau\tau$ decay mode is presented. [...]
ATLAS-CONF-2019-050. - 2019.

2019-10-17 10:57
Constraints on the Higgs boson self-coupling from the combination of single-Higgs and double-Higgs production analyses performed with the ATLAS experiment
Constraints on the Higgs boson self-coupling are set by combining the single Higgs boson analyses targeting the $\gamma\gamma$, $ZZ^*$, $WW^*$, $\tau^+\tau^-$ and $b\bar{b}$ decay channels and the double Higgs boson analyses in the $b\bar{b}b\bar{b}$, $b\bar{b}\tau^+\tau^-$ and $b\bar{b}\gamma\gamma$ decay channels, using data collected at $\sqrt{s} = 13$ TeV with the ATLAS detector at the LHC. [...]
ATLAS-CONF-2019-049. - 2019. - 16 p.

2019-10-17 10:53
Study of $J/\psi p$ resonances in the $\Lambda^0_b \rightarrow J/\psi p K^-$ decays in $pp$ collisions at $\sqrt{s} = 7$ and 8 TeV with the ATLAS detector
A study of $J/\psi p$ resonances in the $\Lambda^0_b \rightarrow J/\psi p K^-$ decays with large $m(pK^-)$ invariant masses by the ATLAS experiment at the LHC is presented. [...]
ATLAS-CONF-2019-048. - 2019. - 24 p.

2019-10-17 10:45
Measurement of the production cross-section of $J/\psi$ and $\psi(2\mathrm{S})$ mesons at high transverse momentum in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector
The measurements of the prompt and non-prompt differential cross-sections of $J/\psi$ and $\psi(2\mathrm{S})$ mesons with $p_{T}$ between 60 and 360 GeV and rapidity range $|y|<2$ are reported. [...]
ATLAS-CONF-2019-047. - 2019.
https://mathematicalphysicsubu.com/page/2/
# Seminar “Superintegrability and structure of higher rank quadratic algebras” Speaker: Ian Marquette (University of Queensland) Date and time: September 25th, 16:00 h Place: Aula 24, Facultad de Ciencias # Coreductive Lie bialgebras and dual homogeneous spaces Quantum homogeneous spaces are noncommutative spaces with quantum group covariance. Their semiclassical counterparts are Poisson homogeneous spaces, which are quotient manifolds of Lie groups M=G/H equipped with an additional Poisson structure π which is compatible with a Poisson-Lie structure Π on G. Since the infinitesimal version of Π defines a unique Lie bialgebra structure δ on the Lie algebra 𝔤=Lie(G), in this new paper (arXiv:1909.01000) we exploit the idea of Lie bialgebra duality in order to introduce the notion of dual homogeneous space of a given homogeneous space M=G/H with respect to the Lie bialgebra δ. Then, by considering the natural notions of reductive and symmetric homogeneous spaces, we extend these concepts to the dual space, thus showing that an even richer duality framework arises. In order to analyse the physical implications of this new duality, the case of M being a Minkowski or (Anti-) de Sitter Poisson homogeneous spacetime is fully studied, and the corresponding dual reductive and symmetric spaces are explicitly constructed in the case of the well-known κ-deformation, where the cosmological constant Λ is introduced as an explicit parameter in order to describe all Lorentzian spaces simultaneously. In particular, the fact that the dual space is reductive is shown to provide a natural condition for the representation theory of the quantum analogue of M that ensures the existence of physically meaningful uncertainty relations between the noncommutative spacetime coordinates. Finally we show that, although the dual spaces are not in general endowed with an invariant metric, their geometry can be described by making use of K-structures.
# The κ-(A)dS noncommutative spacetime The (3+1)-dimensional κ-(A)dS noncommutative spacetime is explicitly constructed in this new paper (arXiv:1905.12358) by quantizing its semiclassical counterpart, which is the κ-(A)dS Poisson homogeneous space. Under minimal physical assumptions, it is explicitly proven that this is the only possible generalization to the case of non-vanishing cosmological constant of the well-known κ-Minkowski spacetime. The κ-(A)dS noncommutative spacetime is shown to have a quadratic subalgebra of local spatial coordinates whose first-order brackets in terms of the cosmological constant parameter define a quantum sphere, while the commutators between time and space coordinates preserve the same structure of the κ-Minkowski spacetime. When expressed in ambient coordinates, the quantum κ-(A)dS spacetime is shown to be defined as a noncommutative pseudosphere. # Life of cosmological perturbations in MDR models, and the prospect of travelling primordial gravitational waves In this new paper (arXiv:1905.08484) we follow the life of a generic primordial perturbation mode (scalar or tensor) subject to modified dispersion relations (MDR), as its proper wavelength is stretched by expansion. A necessary condition ensuring that travelling waves can be converted into standing waves is that the mode starts its life deep inside the horizon and in the trans-Planckian regime, then leaves the horizon as the speed of light corresponding to its growing wavelength drops, to eventually become cis-Planckian whilst still outside the horizon, and finally re-enter the horizon at late times. We find that scalar modes in the observable range satisfy this condition, thus ensuring the viability of MDR models in this respect. 
For tensor modes we find a regime in which this does not occur, but in practice it can only be realised for wavelengths in the range probed by future gravity wave experiments if the quantum gravity scale experienced by gravity waves goes down to the PeV range. In this case travelling, rather than standing, primordial gravity waves could be the tell-tale signature of MDR scenarios. # Seminar “A geometric approach to infinite-dimensional Lie bialgebras and Poisson-Lie groups with applications” Speaker: Javier de Lucas (University of Warsaw) Date and time: May 7th, 12:00 h Place: Aula 24, Facultad de Ciencias # Seminar “Quantum geometry of the integer lattice and Hawking effect” Speaker: Shahn Majid (Queen Mary University of London) Date and time: March 29th, 12:00 h Place: Aula 24, Facultad de Ciencias # Curvature as an integrable deformation In this work (arXiv:1903.09543), the generalization of (super)integrable Euclidean classical Hamiltonian systems to the two-dimensional sphere and the hyperbolic space by preserving their (super)integrability properties is reviewed. The constant Gaussian curvature of the underlying spaces is introduced as an explicit deformation parameter, thus allowing the construction of new integrable Hamiltonians in a unified geometric setting in which the Euclidean systems are obtained in the vanishing curvature limit. In particular, the constant curvature analogue of the generic anisotropic oscillator Hamiltonian is presented, and its superintegrability for commensurate frequencies is shown. As a second example, an integrable version of the Hénon-Heiles system on the sphere and the hyperbolic plane is introduced. Projective Beltrami coordinates are shown to be helpful in this construction, and further applications of this approach are sketched.
# Seminar “Localization and reference frames in kappa-Minkowski Spacetime” Speaker: Flavio Mercati (Università di Napoli Federico II, Italy) Date and time: March 21st, 12:00 h Place: Aula 24, Facultad de Ciencias # Relativistic compatibility of the interacting κ-Poincaré model and implications for the relative locality framework In this new paper (arXiv:1903.04593), we investigate the relativistic properties under boost transformations of the κ-Poincaré model with multiple causally connected interactions, both at the level of its formulation in momentum space only and when it is endowed with a full phase space construction, provided by the relative locality framework. Previous studies focussing on the momentum space picture showed that in the presence of just one interaction vertex the model is relativistic, provided that the boost parameter acting on each given particle receives a “backreaction” from the momenta of the other particles that participate in the interaction. Here we show that in the presence of multiple causally-connected vertices the model is relativistic if the boost parameter acting on each given particle receives a backreaction from the total momentum of all the particles that are causally connected, even those that do not directly enter the vertex. The relative locality framework constructs spacetime by defining a set of dual coordinates to the momentum of each particle and interaction coordinates as Lagrange multipliers that enforce momentum conservation at interaction events. We show that the picture is Lorentz invariant if one uses an appropriate “total boost” to act on the particles’ worldlines and on the interaction coordinates. The picture we develop also allows for a reinterpretation of the backreaction as the manifestation of the “total boost” action. Our findings provide the basis to consistently define distant relatively boosted observers in the relative locality framework.
# Seminar “Quantum mechanics and the covariance of physical laws in quantum reference frames” Speaker: Flaminia Giacomini (University of Vienna, Austria) Date and time: March 13th, 12:00 h Place: Aula 24, Facultad de Ciencias
http://www.scientificlib.com/en/Mathematics/LX/CauchyPrincipalValue.html
# Cauchy principal value

In mathematics, the Cauchy principal value, named after Augustin Louis Cauchy, is a method for assigning values to certain improper integrals which would otherwise be undefined.

Formulation

Depending on the type of singularity in the integrand f, the Cauchy principal value is defined as one of the following:

the finite number $$\lim_{\varepsilon\rightarrow 0+} \left[\int_a^{b-\varepsilon} f(x)\,\mathrm{d}x+\int_{b+\varepsilon}^c f(x)\,\mathrm{d}x\right]$$ where b is a point at which the behavior of the function f is such that $$\int_a^b f(x)\,\mathrm{d}x=\pm\infty$$ for any a < b and $$\int_b^c f(x)\,\mathrm{d}x=\mp\infty$$ for any c > b (one sign is "+" and the other is "−"; see plus or minus for precise usage of the notations ±, ∓);

or the finite number $$\lim_{a\rightarrow\infty}\int_{-a}^a f(x)\,\mathrm{d}x$$ where $$\int_{-\infty}^0 f(x)\,\mathrm{d}x=\pm\infty$$ and $$\int_0^\infty f(x)\,\mathrm{d}x=\mp\infty$$ (again, see plus or minus for precise usage of the notations ±, ∓).

In some cases it is necessary to deal simultaneously with singularities both at a finite number b and at infinity. This is usually done by a limit of the form $$\lim_{\varepsilon \rightarrow 0+} \left[\int_{b-\frac{1}{\varepsilon}}^{b-\varepsilon} f(x)\,\mathrm{d}x+\int_{b+\varepsilon}^{b+\frac{1}{\varepsilon}}f(x)\,\mathrm{d}x \right].$$

The principal value can also be defined in terms of contour integrals of a complex-valued function f(z), z = x + iy, with a pole on the contour. The pole is enclosed with a circle of radius ε, and the portion of the path outside this circle is denoted L(ε). Provided the function f(z) is integrable over L(ε) no matter how small ε becomes, the Cauchy principal value is the limit:[1] $$\mathrm{P} \int_{L} f(z) \ \mathrm{d}z = \int_L^* f(z)\ \mathrm{d}z = \lim_{\varepsilon \to 0 } \int_{L( \varepsilon)} f(z)\ \mathrm{d}z,$$ where two of the common notations for the Cauchy principal value appear on the left of this equation.
In the case of Lebesgue-integrable functions, that is, functions which are integrable in absolute value, these definitions coincide with the standard definition of the integral. Principal value integrals play a central role in the discussion of Hilbert transforms.[2]

Examples

Consider the difference in values of two limits: $$\lim_{a\rightarrow 0+}\left(\int_{-1}^{-a}\frac{\mathrm{d}x}{x}+\int_a^1\frac{\mathrm{d}x}{x}\right)=0,$$ $$\lim_{a\rightarrow 0+}\left(\int_{-1}^{-a}\frac{\mathrm{d}x}{x}+\int_{2a}^1\frac{\mathrm{d}x}{x}\right)=-\ln 2.$$ The former is the Cauchy principal value of the otherwise ill-defined expression $$\int_{-1}^1\frac{\mathrm{d}x}{x}{\ } \left(\mbox{which}\ \mbox{gives}\ -\infty+\infty\right).$$ Similarly, we have $$\lim_{a\rightarrow\infty}\int_{-a}^a\frac{2x\,\mathrm{d}x}{x^2+1}=0,$$ but $$\lim_{a\rightarrow\infty}\int_{-2a}^a\frac{2x\,\mathrm{d}x}{x^2+1}=-\ln 4.$$ The former is the principal value of the otherwise ill-defined expression $$\int_{-\infty}^\infty\frac{2x\,\mathrm{d}x}{x^2+1}{\ } \left(\mbox{which}\ \mbox{gives}\ -\infty+\infty\right).$$

Distribution theory

Let $$C_0^\infty(\mathbb{R})$$ be the set of smooth functions with compact support on the real line. Then the map $$\operatorname{p.\!v.}\left(\frac{1}{x}\right)\,: C_0^\infty(\mathbb{R}) \to \mathbb{C}$$ defined via the Cauchy principal value as $$\operatorname{p.\!v.}\left(\frac{1}{x}\right)(u)=\lim_{\varepsilon\to 0+} \int_{\mathbb{R}\setminus [-\varepsilon;\varepsilon]} \frac{u(x)}{x} \, \mathrm{d}x = \int_0^{+\infty} \frac{u(x)-u(-x)}{x}\, \mathrm{d}x \quad\text{ for }u\in C_0^\infty(\mathbb{R})$$ is a distribution. The map itself may sometimes be called the principal value (hence the notation p.v.). This distribution appears, for example, in the Fourier transform of the Heaviside step function. The principal value is not exclusively defined on smooth functions; it is enough that u be integrable, with compact support and differentiable at the point 0.
It is the inverse distribution of the function x, and is almost the only distribution with this property: $$xf = 1 \quad \Rightarrow \quad f = \operatorname{p.\!v.}\left(\frac{1}{x}\right) + K \delta$$ where K is a constant and δ the Dirac distribution.

More generally, the principal value can be defined for a wide class of singular integral kernels on the Euclidean space Rn. If K(x) has an isolated singularity at the origin, but is an otherwise "nice" function, then the principal value distribution is defined on compactly supported smooth functions by $$(\operatorname{p.\!v.} K)(f) = \lim_{\epsilon\to 0}\int_{\mathbb{R}^n\setminus B_\epsilon(0)} f(x)K(x)\,\mathrm{d}x.$$ Such a limit may not be well defined or, being well-defined, it may not necessarily define a distribution. It is, however, well-defined if K is a continuous homogeneous function of degree −n whose integral over any sphere centered at the origin vanishes. This is the case, for instance, with the Riesz transforms.

Nomenclature

The Cauchy principal value of a function f is written in several ways, varying between authors. Among these are: $$PV \int f(x)\,\mathrm{d}x,\quad \int_L^* f(z)\, \mathrm{d}z,\quad -\!\!\!\!\!\!\int f(x)\,\mathrm{d}x,$$ as well as the prefixes P, P.V., 𝒫, Pv, (CPV) and V.P.

See also

Augustin Louis Cauchy
Hilbert transform
Sokhatsky–Weierstrass theorem

References

1. Ram P. Kanwal (1996). Linear Integral Equations: Theory and Technique (2nd ed.). Boston: Birkhäuser. p. 191. ISBN 0817639403.
2. Frederick W. King (2009). Hilbert Transforms. Cambridge: Cambridge University Press. ISBN 978-0-521-88762-5.
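The symmetric-limit definition can be checked numerically. The sketch below (plain Python, with a hypothetical helper name `pv_integral`) approximates the principal value of the first example, ∫₋₁¹ dx/x, by excising a symmetric interval around the singularity and integrating the two remaining pieces with the midpoint rule; the divergent halves cancel, giving 0.

```python
def pv_integral(f, a, b, c, eps=1e-6, n=100_000):
    """Approximate the Cauchy principal value of the integral of f over
    [a, b], where f has a singularity at c: excise the symmetric interval
    (c - eps, c + eps) and apply the midpoint rule to each remaining piece."""
    def midpoint(lo, hi):
        h = (hi - lo) / n
        return h * sum(f(lo + (k + 0.5) * h) for k in range(n))
    return midpoint(a, c - eps) + midpoint(c + eps, b)

# PV of the otherwise ill-defined integral of 1/x over [-1, 1]:
# the symmetric excision makes the two divergent halves cancel exactly.
print(pv_integral(lambda x: 1.0 / x, -1.0, 1.0, 0.0))
```

An asymmetric excision (ε on one side, 2ε on the other) would instead converge to −ln 2, exactly as in the second limit of the Examples section.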
https://brilliant.org/problems/calvin-of-the-jungle/
Calvin Of The Jungle Calculus Level 3 Last summer, I went zip lining through the Ozark Mountains. There are two platforms; you can see the higher platform in the top left of the image, while the cameraman is standing on the lower platform. There is a rope tied between these two platforms, and I hung from a pulley suspended on the cable. In this way, I was propelled by gravity towards the lower platform. The horizontal distance between the platforms was 100 feet, and the vertical distance between the platforms was 15 feet. The rope joining the platforms was 102 feet long, and remained taut throughout my journey (the rope from me to either platform was a straight line). As I swung through the jungle, what kind of curve did I trace out?
https://portlandpress.com/clinsci/article/103/1/7/67357/Empirical-estimates-of-mean-aortic-pressure
Mean arterial pressure (MAP) is estimated at the brachial artery level by adding a fraction of pulse pressure (form factor = 0.33) to diastolic pressure. We tested the hypothesis that a fixed form factor can also be used at the aortic root level. We recorded systolic aortic pressure (SAP) and diastolic aortic pressure (DAP), and we calculated aortic pulse pressure (PP) and the time-averaged MAP in the aorta of resting adults (n = 73; age 43±14 years). Wave reflection was quantified using the augmentation index. The aortic form factor (range 0.35-0.53) decreased with age, MAP, PP and augmentation index (each P<0.001). The mean form factor value (0.45) gave a reasonable estimation of MAP (MAP = DAP+0.45PP; bias = 0±2mmHg), and the bias increased with MAP (P<0.001). An alternative formula (MAP = DAP+PP/3+5mmHg) gave a more precise estimation (bias = 0±1mmHg), and the bias was not related to MAP. This latter formula was consistent with the previously reported mean pulse wave amplification of 15mmHg, and with unchanged MAP and diastolic pressure from aorta to periphery. Multiple linear regression showed that 99% of the variability of MAP was explained by the combined influence of DAP and SAP, thus confirming major pressure redundancy. Results were obtained irrespective of whether the marked differences in heart period and extent of wave reflection between subjects were taken into account. In conclusion, the aortic form factor was strongly influenced by age, aortic pressure and wave reflection. An empirical formula (MAP = DAP+PP/3+5mmHg) that is consistent with mechanical principles in the arterial system gave a more precise estimate of MAP in the aorta of resting humans. Only two distinct pressure-powered functions were carried out in the (SAP, DAP, MAP, PP) four-pressure set. This content is only available as a PDF.
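The two competing estimators in the abstract are simple enough to state in code. The sketch below (hypothetical function names; pressures in mmHg, values illustrative rather than from the study) compares the fixed form-factor estimate with the paper's empirical formula for one pressure pair.

```python
def map_form_factor(sap, dap, ff=0.45):
    """MAP estimate from a fixed aortic form factor: MAP = DAP + ff * PP,
    where PP = SAP - DAP is the pulse pressure (all pressures in mmHg)."""
    return dap + ff * (sap - dap)

def map_empirical(sap, dap):
    """The paper's alternative formula: MAP = DAP + PP/3 + 5 mmHg."""
    return dap + (sap - dap) / 3.0 + 5.0

# Illustrative pressures: SAP = 120, DAP = 80 mmHg, so PP = 40 mmHg.
print(map_form_factor(120.0, 80.0))  # 98.0
print(map_empirical(120.0, 80.0))    # about 98.3
```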
http://mathhelpforum.com/advanced-statistics/193906-independent-exponential-random-variables.html
# Math Help - Independent exponential random variables

1. ## Independent exponential random variables

If $X_1$ and $X_2$ are independent exponential random variables with parameters $\lambda_{1}$ and $\lambda_{2}$, find the distribution of $Z=\frac{X_1}{X_2}$

Hints?

2. ## Re: Independent exponential random variables

Originally Posted by I-Think
If $X_1$ and $X_2$ are independent exponential random variables with parameters $\lambda_{1}$ and $\lambda_{2}$, find the distribution of $Z=\frac{X_1}{X_2}$

Hints?

Quotient of two random variables
http://www.math.uiuc.edu/~r-ash/Stat/StatLec1-5.pdf

3. ## Re: Independent exponential random variables

As in the reference reported by Mr Fantastic, if X and Y are independent random variables with p.d.f.s $p_{x}(*)$ and $p_{y}(*)$, then the random variable $Z=\frac{X}{Y}$ has p.d.f...

$p_{z}(z)= \int_{- \infty}^{+ \infty} |y|\ p_{x}(z y)\ p_{y}(y)\ dy$ (1)

If X and Y are exponential random variables then...

$p_{x}(x)= \lambda_{1}\ e^{- \lambda_{1}\ x}\ u(x)$ (2)

$p_{y}(y)= \lambda_{2}\ e^{- \lambda_{2}\ y}\ u(y)$ (3)

... where u(*) is the Heaviside step function, and (1) becomes after some steps...

$p_{z}(z)= \frac{\lambda_{1}}{\lambda_{2}}\ \frac{1}{(1+\frac{\lambda_{1}}{\lambda_{2}}\ z)^{2}}$ (4)

Surprisingly simple!... Merry Christmas from Serbia

$\chi$ $\sigma$
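Formula (4) is easy to check by simulation. The sketch below (hypothetical rates λ₁ = 2, λ₂ = 3) compares the empirical probability P(Z ≤ 1) of the simulated ratio with the analytic CDF F(z) = rz/(1 + rz), r = λ₁/λ₂, obtained by integrating (4) from 0 to z.

```python
import random

def pdf_ratio(z, lam1, lam2):
    """Density (4) of Z = X1/X2 for independent exponentials with
    rates lam1, lam2; valid for z > 0."""
    r = lam1 / lam2
    return r / (1.0 + r * z) ** 2

def cdf_ratio(z, lam1, lam2):
    """CDF obtained by integrating (4): F(z) = r z / (1 + r z)."""
    r = lam1 / lam2
    return r * z / (1.0 + r * z)

# Monte Carlo check with illustrative rates lam1 = 2, lam2 = 3.
random.seed(0)
lam1, lam2, n = 2.0, 3.0, 200_000
samples = (random.expovariate(lam1) / random.expovariate(lam2) for _ in range(n))
emp = sum(s <= 1.0 for s in samples) / n
print(emp, cdf_ratio(1.0, lam1, lam2))  # both close to 0.4
```

For r = 2/3 and z = 1, F(1) = (2/3)/(5/3) = 2/5, so both numbers should agree to within Monte Carlo noise.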
https://www.physicsforums.com/threads/circuits-kirchoffs.270031/
# Circuits, Kirchhoff's

1. Nov 6, 2008

### Sheneron

1. The problem statement, all variables and given/known data
http://img185.imageshack.us/my.php?image=p2827altvk4.gif
(b) Calculate the potential difference between points a and b.

3. The attempt at a solution
I have solved the circuit using Kirchhoff's laws; I had to solve for the current in the 2 ohm resistor. But I am stuck at this question. I surmised that the potential difference should be found using the 2 ohm resistor, since we solved for it specifically. I found the potential difference across that resistor, and that yielded the correct answer, but I have no clue why. I was wondering if someone could explain this to me? Thanks.

2. Nov 7, 2008

### Staff: Mentor

Not sure I understand your confusion. You solved for the voltages and currents in the circuit using KCL (most likely). That gives you the voltage across the 2 Ohm resistor and thus the current through it. Why don't you post your equations and solution, and point to what is confusing you about your final answer?

3. Nov 7, 2008

### Sheneron

Well my question is: why do I use the 2 ohm resistor to find the potential difference at points a and b?
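The actual circuit is only in the linked image, so the following is just a generic illustration of the method under discussion: KVL mesh analysis of a hypothetical two-mesh circuit (all component values made up) in which a shared 2 Ω resistor plays the role of the element between a and b. The point the thread is circling is that once the mesh currents are known, the a-b potential difference is just the shared resistor's value times the net current through it.

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve the 2x2 linear system A i = b by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det

# Hypothetical circuit: 12 V source and R1 = 4 ohm in mesh 1,
# R2 = 2 ohm shared between the meshes, R3 = 6 ohm in mesh 2.
# KVL around each mesh (clockwise mesh currents i1, i2):
#   (R1 + R2) i1 - R2 i2 = 12
#   -R2 i1 + (R2 + R3) i2 = 0
i1, i2 = solve_2x2(6.0, -2.0, -2.0, 8.0, 12.0, 0.0)

# Potential difference across the shared 2-ohm resistor:
v_ab = 2.0 * (i1 - i2)
print(i1, i2, v_ab)
```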
http://mathhelpforum.com/differential-geometry/77712-function-integrable-print.html
function integrable

• March 9th 2009, 07:41 AM
manjohn12

Suppose $a < c < d < b$. Let $k$ be any non-zero real number. Define $f: \mathbb{R} \to \mathbb{R}$ as follows: $f(x) = \begin{cases} 0 \ \ \ \ \ \text{if} \ x < c \\ k \ \ \ \ \ \text{if} \ c \leq x \leq d \\ 0 \ \ \ \ \ \text{if} \ d < x \end{cases}$

(a) Prove that $f$ is Riemann integrable on $[a,b]$. What is its integral?
(b) If every $<$ in the definition of $f$ were changed to $\leq$ and vice versa, how would the answer change?

For (a) the integral would be $k(d-c)$. Here is a proof.

Proof. Let $\epsilon >0$. Choose $\delta >0$ such that $\delta < \frac{\epsilon}{2|d-c|}$. Let $P = \{x_{0}, x_{1}, x_{2}, \ldots, x_{n-1}, x_{n} \}$ be a partition of $[a,b]$ with $||P|| < \delta$. Choose $x_{1}^{*}, x_{2}^{*}, \ldots , x_{n}^{*}$ arbitrarily such that $x_{i-1} \leq x_{i}^{*} \leq x_{i}$. Then $\mathcal{R}(f,P) = \sum_{i=1}^{n} f(x_{i}^{*})(x_{i}-x_{i-1})$. We have to break this up into three sums? Because ultimately we want to show that $|\mathcal{R}(f,P)-k(d-c)| < \epsilon$.

• March 9th 2009, 08:05 AM
HallsofIvy

For any such partition, you can make a finer partition by adding c and d as "break points". Once you do that, the three sums are simple.

• March 9th 2009, 08:07 AM
manjohn12

Quote: Originally Posted by HallsofIvy
For any such partition, you can make a finer partition by adding c and d as "break points". Once you do that, the three sums are simple.

Because then two sums are $0$ and we are only worried about the "middle" sum?
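HallsofIvy's point can also be seen numerically: for any fine enough partition, the Riemann sum is close to k(d−c), and the only error comes from the panels straddling the break points c and d. A quick sketch with illustrative values for a, b, c, d, k:

```python
def riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum of f over [a, b] on n equal panels."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

# Step function from the thread: k on [c, d], zero elsewhere.
a, b, c, d, k = 0.0, 1.0, 0.25, 0.75, 3.0
f = lambda x: k if c <= x <= d else 0.0

# Only the panels containing c and d contribute any error, so the
# sum converges to k * (d - c) = 1.5 as n grows.
print(riemann_sum(f, a, b, 100_000))
```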
http://questionbankcollections.blogspot.com/2011/06/ee2365-control-engineering-anna.html
### EE2365 Control Engineering ANNA UNIVERSITY QUESTION BANK, QUESTION PAPER, IMPORTANT QUESTIONS 2 MARKS and 16 MARKS QUESTIONS

Wednesday, June 29, 2011

1. What is control system?
A system consists of a number of components connected together to perform a specific function. In a system, when the output quantity is controlled by varying the input quantity, the system is called a control system.

2. What are the two major types of control system?
The two major types of control system are open loop and closed loop.

3. Define open loop control system.
The control system in which the output quantity has no effect upon the input quantity is called an open loop control system. This means that the output is not fed back to the input for correction.

4. Define closed loop control system.
The control system in which the output has an effect upon the input quantity so as to maintain the desired output value is called a closed loop control system.

5. What are the components of feedback control system?
The components of a feedback control system are the plant, feedback path elements, error detector and controller.

6. Distinguish between open loop and closed loop systems.
Open loop system:
1. Inaccurate
2. Simple and economical
3. Changes in output due to external disturbances are not corrected
4. Generally stable
Closed loop system:
1. Accurate
2. Complex and costlier
3. Changes in output due to external disturbances are corrected automatically
4. Great efforts are needed to design a stable system

7. Why is negative feedback invariably preferred in closed loop systems?
Negative feedback results in better stability in steady state and rejects any disturbance signals.

8. Define transfer function.
The transfer function of a system is defined as the ratio of the Laplace transform of the output to the Laplace transform of the input with zero initial conditions.

9. What are the basic elements used for modeling a mechanical translational system?
Mass, spring and dashpot.

10. What are the basic elements used for modeling a mechanical rotational system?
Moment of inertia J, dashpot with rotational frictional coefficient B, and torsional spring with stiffness K.

11. Write the force balance equation of an ideal mass element.
F = M d²x/dt²

12. Write the force balance equation of an ideal dashpot element.
F = B dx/dt

13. Write the force balance equation of an ideal spring element.
F = Kx

14. Name two types of electrical analogies for mechanical systems.
The two types of analogies for the mechanical system are the force-voltage and force-current analogies.

15. Write the analogous electrical elements in force-voltage analogy for the elements of a mechanical translational system.
a. Force F - voltage e
b. Velocity v - current i
c. Displacement x - charge q
d. Frictional coefficient B - resistance R
e. Mass M - inductance L
f. Stiffness K - inverse of capacitance, 1/C

16. Write the analogous electrical elements in force-current analogy for the elements of a mechanical translational system.
a. Force F - current i
b. Velocity v - voltage v
c. Displacement x - flux φ
d. Frictional coefficient B - conductance 1/R
e. Mass M - capacitance C
f. Stiffness K - inverse of inductance, 1/L

17. What is a block diagram?
A block diagram of a system is a pictorial representation of the functions performed by each component of the system and shows the flow of signals. The basic elements of a block diagram are the block, branch point and summing point.

18. What is the basis for framing the rules of block diagram reduction technique?
The rules for block diagram reduction technique are framed such that any modification made on the diagram does not alter the input-output relation.

19. What is a signal flow graph?
A signal flow graph is a diagram that represents a set of simultaneous algebraic equations. By taking the Laplace transform, the time-domain differential equations governing a control system can be transferred to a set of algebraic equations in the s-domain.

20. What is transmittance?
The transmittance is the gain acquired by the signal when it travels from one node to another node in a signal flow graph.

21. What are sink and source?
A source is the input node in the signal flow graph and it has only outgoing branches. A sink is an output node in the signal flow graph and it has only incoming branches.

22. Define non-touching loops.
Loops are said to be non-touching if they do not have common nodes.

23. Write Mason's gain formula.
Mason's gain formula states that the overall gain of the system is

T = (1/Δ) Σ_k P_k Δ_k

where
k = index over the forward paths in the signal flow graph
P_k = forward path gain of the kth forward path
Δ = 1 - [sum of individual loop gains] + [sum of gain products of all possible combinations of two non-touching loops] - [sum of gain products of all possible combinations of three non-touching loops] + ...
Δ_k = Δ for that part of the graph which is not touching the kth forward path.

24. What is a servomechanism?
A servomechanism is a feedback control system in which the output is mechanical position (or time derivatives of position: velocity and acceleration).

25. What is a stepper motor?
A stepper motor is a device which transforms electrical pulses into equal increments of rotary shaft motion called steps.

26. What is a servomotor?
The motors used in automatic control systems or in servomechanisms are called servomotors. They are used to convert an electrical signal into angular motion.

27. What is a synchro?
A synchro is a device used to convert an angular motion to an electrical signal or vice versa.
28. Name the test signals used in control systems.
The commonly used test input signals in control systems are the impulse, step, ramp, parabolic (acceleration) and sinusoidal signals.

29. What is a step signal?
The step signal is a signal whose value changes from zero to A at t = 0 and remains constant at A for t > 0.

30. What is a ramp signal?
The ramp signal is a signal whose value increases linearly with time from an initial value of zero at t = 0. The ramp signal represents a constant-velocity input.

31. What is a parabolic signal?
The parabolic signal is a signal whose value varies as the square of time from an initial value of zero at t = 0. The parabolic signal represents a constant-acceleration input.

32. What is the transient response?
The transient response is the response of the system while it changes from one state to another.

33. What is the steady state response?
The steady state response is the response of the system as time t approaches infinity.

34. What are the order and type number of a system?
The order of a system is the order of the differential equation governing the system; it can be obtained from the transfer function of the given system. The type number is the number of poles of the open loop transfer function at the origin.

35. Define damping ratio.
The damping ratio is defined as the ratio of the actual damping to the critical damping.

36. List the time domain specifications.
The time domain specifications are
i. Delay time
ii. Rise time
iii. Peak time
iv. Peak overshoot
v. Settling time

37. Define delay time.
The delay time is the time taken for the response to reach 50% of the final value for the very first time.

38. Define rise time.
The rise time is the time taken for the response to rise from 0% to 100% of the final value for the very first time (for an underdamped system).

39. Define peak time.
The peak time is the time taken for the response to reach the peak value for the first time.

40. Define peak overshoot.
Peak overshoot is defined as the ratio of the maximum peak value, measured from the final value, to the final value.

41. Define settling time.
Settling time is defined as the time taken by the response to reach and stay within a specified error band of the final value.

42. What is the need for a controller?
The controller is provided to modify the error signal for better control action.

43. What are the different types of controllers?
a. Proportional (P) controller
b. PI controller
c. PD controller
d. PID controller

44. What is a proportional controller?
It is a device that produces a control signal which is proportional to the input error signal.

45. What is a PI controller?
It is a device that produces a control signal consisting of two terms - one proportional to the error signal and the other proportional to the integral of the error signal.

46. What is a PD controller?
A PD controller is a proportional plus derivative controller which produces an output signal consisting of two terms - one proportional to the error signal and the other proportional to the derivative of the error signal.

47. What is the significance of the integral and derivative terms in a PID controller?
The proportional term stabilises the gain but produces a steady state error. The integral term reduces or eliminates the steady state error, and the derivative term improves the transient response.

48. Why is a derivative controller not used alone in control systems?
The derivative controller produces a control action based on the rate of change of the error signal, and it does not produce corrective measures for any constant error.

49. What is the disadvantage of a proportional controller?
The disadvantage of a proportional controller is that it produces a constant steady state error.

50. What is the effect of a PD controller on system performance?
The effect of a PD controller is to increase the damping ratio of the system, so the peak overshoot is reduced.

51. Why is a derivative controller not used alone in control systems?
The derivative controller produces a control action based on the rate of change of the error signal, and it does not produce corrective measures for any constant error. Hence a derivative controller is not used alone in control systems.

52. What is the effect of a PI controller on the system performance?
The PI controller increases the order of the system by one, which results in reducing the steady state error. But the system becomes less stable than the original system.

53. What is steady state error?
The steady state error is the value of the error signal e(t) as t tends to infinity.

54. What are static error constants?
Kp, Kv and Ka are called the static error constants.

55. What is the drawback of the static error coefficients?
The main drawback of the static error coefficients is that they do not show the variation of the error with time, and the input must be a standard input.

56. What are the three constants associated with steady state error?
Positional error constant Kp, velocity error constant Kv and acceleration error constant Ka.

57. What are the main advantages of the generalized error coefficients?
i. The steady state error is obtained as a function of time.
ii. The steady state error can be determined for any type of input.

58. What is frequency response?
The response of the system at steady state when the input to the system is a sinusoidal signal is called the frequency response.

50. List the different frequency domain specifications.
The frequency domain specifications are i) resonant peak, ii) resonant frequency, iii) bandwidth and iv) cut-off rate.

51. Define resonant peak (Mr).
The maximum value of the magnitude of the closed loop transfer function is called the resonant peak.

52. Define resonant frequency (fr).
The frequency at which the resonant peak occurs is called the resonant frequency.

53. What is bandwidth?
The bandwidth is the range of frequencies over which the magnitude of the closed loop response does not fall more than 3 dB below its zero-frequency value. The bandwidth is a measure of the ability of a feedback system to reproduce the input signal, of its noise-rejection characteristics, and of its rise time.
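The static error constants above can be checked numerically. The sketch below evaluates Kp, Kv and Ka for the type-1 plant G(s) = 50/(s(s + 10)) from Part B problem 19, approximating the s → 0 limits with a small value of s.

```python
# Static error constants for G(s) = 50/(s(s+10)) (Part B, problem 19).
# Kp = lim_{s->0} G(s), Kv = lim_{s->0} s*G(s), Ka = lim_{s->0} s^2*G(s).
def G(s):
    return 50.0 / (s * (s + 10.0))

s = 1e-8                      # small s to approximate the s -> 0 limit
Kp = G(s)                     # diverges: type-1 system, Kp -> infinity
Kv = s * G(s)                 # -> 50/10 = 5
Ka = s**2 * G(s)              # -> 0

ess_step = 1.0 / (1.0 + Kp)   # -> 0: no steady state error for a step
ess_ramp = 1.0 / Kv           # -> 0.2 for a unit ramp input
```

This matches the type-number rules in the notes: a type-1 system has zero steady state error for a step, a finite error (1/Kv) for a ramp, and an infinite error for a parabolic input.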
54. Define cut-off rate.
The slope of the log-magnitude curve near the cut-off frequency is called the cut-off rate. The cut-off rate indicates the ability of the system to distinguish the signal from noise.

55. Define gain margin.
The gain margin, Kg, is defined as the reciprocal of the magnitude of the open loop transfer function at the phase cross-over frequency. Gain margin Kg = 1 / |G(jwpc)|.

56. Define phase cross-over frequency.
The frequency at which the phase of the open loop transfer function crosses -180° is called the phase cross-over frequency, wpc.

57. What is phase margin?
The phase margin is the amount of additional phase lag at the gain cross-over frequency required to bring the system to the verge of instability.

58. Define gain cross-over frequency.
The gain cross-over frequency, wgc, is the frequency at which the magnitude of the open loop transfer function is unity.

59. What is a Bode plot?
The Bode plot is the frequency response plot of the transfer function of a system. A Bode plot consists of two graphs: one is the plot of the magnitude of the sinusoidal transfer function versus log w, and the other is the plot of the phase angle of the sinusoidal transfer function versus log w.

60. What are the main advantages of the Bode plot?
a. Multiplication of magnitudes is converted into addition.
b. A simple method for sketching an approximate log-magnitude curve is available.
c. It is based on asymptotic approximation. Such approximation is sufficient if rough information on the frequency response characteristic is needed.
d. The phase angle curves can be easily drawn if a template for the phase angle curve of 1 + jw is available.

61. Define corner frequency.
The frequency at which the two asymptotes meet in a magnitude plot is called the corner frequency.

62. Define phase lag and phase lead.
A negative phase angle is called phase lag. A positive phase angle is called phase lead.

63. What are M circles?
The magnitude of the closed loop transfer function with unity feedback can be shown to trace a circle for every value of M.
These circles are called M circles.

64. What are N circles?
If the phase of the closed loop transfer function with unity feedback is α, then tan α will trace a circle for every value of α; these circles are called N circles.

65. What is the Nichols chart?
The chart consisting of the M and N loci in the log-magnitude versus phase diagram is called the Nichols chart.

66. What are the two contours of the Nichols chart?
The Nichols chart consists of M and N contours superimposed on an ordinary graph. The M contours are loci of constant closed loop magnitude in decibels, and the N contours are loci of constant closed loop phase angle.

67. How are the resonant peak (Mr), resonant frequency (wr) and bandwidth determined from the Nichols chart?
The resonant peak is given by the value of the M contour which is tangent to the G(jw) locus. The resonant frequency is given by the frequency of G(jw) at the tangent point. The bandwidth is given by the frequency corresponding to the intersection point of G(jw) and the -3 dB M contour.

68. What are the advantages of the Nichols chart?
a. It is used to find the closed loop frequency response from the open loop frequency response.
b. The frequency domain specifications can be determined from the Nichols chart.
c. The gain of the system can be adjusted to satisfy the given specifications.

69. What is the Nyquist contour?
The contour that encloses the entire right half of the s-plane is called the Nyquist contour.

70. State the Nyquist stability criterion.
If the Nyquist plot of the open loop transfer function G(s) corresponding to the Nyquist contour in the s-plane encircles the critical point -1 + j0 in the counter-clockwise direction as many times as the number of right-half s-plane poles of G(s), the closed loop system is stable.

71. What are the two segments of the Nyquist contour?
a. A finite line segment C1 along the imaginary axis.
b. An arc C2 of infinite radius.

72. What are the effects of adding a zero to a system?
Adding a zero to a system results in a pronounced early peak in the system response, whereby the peak overshoot increases appreciably.

73. State the magnitude criterion.
The magnitude criterion states that s = sa will be a point on the root locus if, for that value of s, |G(s)H(s)| = 1.

74. State the angle criterion.
The angle criterion states that s = sa will be a point on the root locus if, for that value of s, the angle of G(s)H(s) is an odd multiple of 180°.

75. Define BIBO stability.
A linear relaxed system is said to have BIBO stability if every bounded input results in a bounded output.

76. What is the necessary condition for stability?
The necessary condition for stability is that all the coefficients of the characteristic polynomial be positive.

77. What is the necessary and sufficient condition for stability?
The necessary and sufficient condition for stability is that all of the elements in the first column of the Routh array be positive.

78. What is quadrant symmetry?
The symmetry of roots with respect to both the real and imaginary axes is called quadrant symmetry.

79. What is a limitedly stable system?
If, for a bounded input signal, the output has constant-amplitude oscillations, then the system may be stable or unstable under some limited constraints. Such a system is called a limitedly stable system.

80. Define relative stability.
Relative stability is the degree of closeness of the system to instability; it is an indication of the strength or degree of stability.

81. What are root loci?
The paths taken by the roots of the characteristic equation when the open loop gain K is varied from 0 to ∞ are called root loci.

82. What is a dominant pole?
The poles lying close to, or on, the imaginary axis are the dominant poles.

83. What are the main significances of the root locus?
a. The root locus technique is used for stability analysis.
b. Using the root locus technique, the range of values of K for a stable system can be determined.

84. What are the two types of compensation?
a. Cascade (series) compensation
b. Feedback compensation or parallel compensation

85. What are the three types of compensators?
a. Lag compensator
b. Lead compensator
c. Lag-lead compensator

86. What are the uses of a lead compensator?
a. Speeds up the transient response.
b. Increases the margin of stability of a system.
c. Increases the system error constant to a limited extent.

87. What is the use of a lag compensator?
To improve the steady state behaviour of a system while nearly preserving its transient response.

88. When is a lag-lead compensator required?
The lag-lead compensator is required when both the transient and the steady state response of a system have to be improved.

89. What is a compensator?
A device inserted into the system for the purpose of satisfying the specifications is called a compensator.

Part B

1. Determine the transfer functions X1(S)/F(S) and X2(S)/F(S) of the mechanical system shown in figure.
2. Write the governing differential equations of the mechanical system shown in figure.
3. Write the governing differential equations of the mechanical system shown in figure.
4. Write the governing differential equations of the mechanical system shown in figure.
5. Obtain the closed loop transfer function C(S)/R(S) of the system whose block diagram is shown in figure. Use the block diagram reduction technique and verify the transfer function with the signal flow graph technique.
6. Derive the transfer function of
i. an armature-controlled d.c. motor
ii. a field-controlled d.c. motor
with the necessary block diagrams.
7. Explain the rules for block diagram reduction.
8. Determine the transfer function of the system shown in the following figure.
9. Measurements conducted on a servomechanism show the system response to be C(t) = 1 + 0.2 e^(-60t) - 1.2 e^(-10t) when subjected to a unit step input.
i. Obtain the expression for the closed loop transfer function.
ii.
Determine the undamped natural frequency and the damping ratio of the system.

10. Obtain the unit impulse response and the unit step response of a unity feedback system whose open loop transfer function is G(s) = (2s + 1)/s².

11. Determine the damping factor and natural frequency of the system when K = 0. What is the steady state error resulting from a unit ramp input? Determine the derivative feedback constant K which will increase the damping factor of the system to 0.6. What is the steady state error to a unit ramp input with this setting?

12. Derive an expression for the time response of an underdamped second-order system subjected to a unit step input. Plot the response and mark the time domain specifications on it. Define rise time, peak time, peak overshoot and settling time.

13. Enumerate the advantages of the generalized error coefficients, and determine the generalized error coefficients and the steady state error for a system whose open loop transfer function is G(s) = 1/(s(s + 1)(s + 10)) and whose feedback transfer function is H(s) = (s + 2), with input r(t) = 6 + t + t².

14. The open loop transfer function of a unity feedback system is given by G(s) = 20/(s² + 5s + 6). Determine the damping ratio, maximum overshoot, rise time and peak time. Derive the formulae used.

15. Determine the time response specifications and the expression for the output of the system described by the differential equation d²y/dt² + 5 dy/dt + 16y = 19x for a unit step input (y is the output and x the input).

16. A unity feedback heat treatment system has G(s) = 10000/[(1 + s)(1 + 0.5s)(1 + 0.02s)]. The output set point is 500 °C. What is the steady state temperature?

17. A unity feedback system is characterised by the open loop transfer function G(s) = 1/(s(1 + 0.5s)(1 + 0.2s)).
Determine the steady state errors for unit step, unit ramp and unit acceleration inputs.

18. Derive the unit step, unit ramp and unit impulse responses of a first-order system and draw the curves.

19. Evaluate the static error coefficients for a unity feedback system having the forward path transfer function G(s) = 50/(s(s + 10)).

20. For the system shown in the figure, what is the steady state error for a unit step input?

21. A second-order mechanical system is represented by the transfer function θ(s)/T(s) = 1/(Js² + fs + K). A step input of 10 N-m is applied to the system.

22. The open loop transfer function of a unity feedback system is given by G(s) = 10(s + 3)/(s(s + 2)(s² + 4s + 100)). Draw the Bode plot and hence find the gain margin and phase margin.

23. Draw the Bode plot for the function G(s) = Ks/[(1 + 0.2s)(1 + 0.02s)]. Determine the value of K for a gain cross-over frequency of 20 rad/sec.

24. The open loop transfer function of a unity feedback system is G(s) = 100(1 + 0.2s)/(s(1 + 0.1s)). Draw the Bode plot and hence find the gain margin and phase margin.

25. Draw the Bode plot of the system whose open loop transfer function is G(s)H(s) = K/(s(1 + s)(1 + 0.1s)(1 + 0.02s)). Determine the value of K for a gain margin of 10 dB.

26. Sketch the Bode plot for a unity feedback system characterised by G(s)H(s) = (K(1 + 0.2s)(1 + 0.025s))/(s(1 + 0.01s)(1 + 0.005s)).

27. Sketch the root locus for the unity feedback system whose open loop transfer function is given by  .

28. Determine the range of K for stability of the unity feedback system whose open loop transfer function is  .

29. Sketch the root locus for the unity feedback system whose open loop transfer function is given by  .

30.
Construct the root locus for a system with G(s) = [K(s + 9)]/[s(s² + 4s + 11)] and H(s) = 1. Locate the closed loop poles so that the dominant closed loop poles have a damping ratio of 0.5. Determine the corresponding value of the gain K.

31. Sketch the Nyquist plot for a feedback system with the open loop transfer function G(s)H(s) = [K(s + 3)(s + 5)]/[(s - 2)(s - 4)]. Determine the range of K for which the system is stable.

32. Design a phase lead compensator for a system with open loop transfer function G(s)H(s) = K/(s(s + 2)) so that the velocity error constant is 20 sec⁻¹, the phase margin is at least 50° and the gain margin is at least 10 dB.

33. The open loop transfer function of a unity feedback system is given by G(s) = K/[(s + 2)(s + 4)(s² + 6s + 25)]. By applying the Routh criterion, discuss the stability of the closed loop system as a function of K. Determine the value of K which will cause sustained oscillations in the system. What are the corresponding oscillation frequencies?

34. Construct the root locus for the function G(s)H(s) = [K(s + 2)]/(s + 2)² and discuss the stability of the system.

35. Sketch the Nyquist plot for a system with the open loop transfer function G(s)H(s) = [K(1 + 0.4s)(s + 1)]/[(1 + 8s)(s - 1)]. Determine the range of K for which the system is stable.

36. Design a phase lag compensator so that the system G(s)H(s) = 100/[s(s + 1)] will have a phase margin of 15°.

37. For a system with characteristic equation s⁴ + 22s³ + 10s² + s + K = 0, discuss the stability of the system as a function of K. Obtain the marginal value of K for stability, and the frequency of oscillations at that value of K.
# Thread: Markov chain for tossing coins

1. ## Markov chain for tossing coins

Question: A coin is tossed until five consecutive heads appear. Model this process as a Markov chain where the states are the numbers of consecutive heads (0, 1, ..., 5).
a. Find the probability that it takes 10 or fewer tosses to observe five consecutive heads.
b. Find the mean number of tosses it takes to obtain five consecutive heads.
What I know - the probability of getting 5 H in a row is 1/32. Can you help?

2. ## Re: Markov chain for tossing coins

Start - State 0
Flip a tails = stay in State 0, probability 50%
Flip a heads = move to State 1, probability 50%
Next?

3. ## Re: Markov chain for tossing coins

Thanks for answering. So, if I flip a tails I have to start again, right? But if I get heads I flip again, and I either end up with a tails and have to start again, or get heads and keep going. How do I summarize this in a chart? Does it have 0-10 on the top and on the side, and then I just work out the probabilities within it?

4. ## Re: Markov chain for tossing coins

After working on this for some time, I am wondering what the matrix looks like.

5. ## Re: Markov chain for tossing coins

Originally Posted by CountingPenguins
Question: A coin is tossed until five consecutive heads appear. Model this process as a Markov chain where the states are the numbers of consecutive heads (0, 1, ..., 5).
a. Find the probability that it takes 10 or fewer tosses to observe five consecutive heads.
b. Find the mean number of tosses it takes to obtain five consecutive heads.
What I know - the probability of getting 5 H in a row is 1/32. Can you help?

The states are 0, 1, 2, 3, 4, 5. The probability of transition from state n to state n+1, for n = 0, 1, 2, 3, 4, is 0.5; the probability of transition from state n to state 0, for n = 1, 2, 3, 4, is 0.5. State 5 is absorbing (the transition probability from 5 to 5 is 1), and the transition probability from state 0 to state 0 is 0.5. All other transition probabilities are 0.

Now write this as a transition matrix.

CB

6.
## Re: Markov chain for tossing coins

Thank you! I got .015625 for part a using .5^(5+1), and 62 coin flips on average to get 5 in a row - using 2^(5+1) - 2.

7. ## Re: Markov chain for tossing coins

Originally Posted by CountingPenguins
Thank you! I got .015625 for part a using .5^(5+1), and 62 coin flips on average to get 5 in a row - using 2^(5+1) - 2.

How do you argue that those are the answers? Post what you have for the transition matrix A, and, given that before the first flip you are in state 0, what is [1,0,0,0,0,0]A^10?

CB
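CB's suggested check can be carried out numerically. The sketch below (not part of the original thread) builds the transition matrix described in post 5, computes [1,0,0,0,0,0]A^10 for part (a), and solves the first-step equations for the mean. It confirms the mean of 62 tosses, but gives 7/64 = 0.109375 for part (a), suggesting the .5^(5+1) answer is worth revisiting.

```python
# Markov chain for "five consecutive heads" (states 0..5, state 5 absorbing),
# following the transition probabilities described in post 5.
N = 5  # target run length

# Build the (N+1) x (N+1) transition matrix A.
A = [[0.0] * (N + 1) for _ in range(N + 1)]
for i in range(N):
    A[i][0] = 0.5        # tails: fall back to state 0
    A[i][i + 1] = 0.5    # heads: one more consecutive head
A[N][N] = 1.0            # state 5 is absorbing

# Part (a): probability of five heads within 10 tosses = last entry of p*A^10.
p = [1.0] + [0.0] * N    # in state 0 before the first flip
for _ in range(10):
    p = [sum(p[i] * A[i][j] for i in range(N + 1)) for j in range(N + 1)]
prob_within_10 = p[N]    # 0.109375 = 7/64

# Part (b): mean tosses via first-step analysis.
# E_k = 1 + 0.5*E_{k+1} + 0.5*E_0 with E_N = 0; write E_k = a + b*E_0,
# back-substitute from E_N down to E_0, then solve E_0 = a + b*E_0.
a, b = 0.0, 0.0          # represents E_N = 0
for _ in range(N):
    a, b = 1 + 0.5 * a, 0.5 + 0.5 * b
mean_tosses = a / (1 - b)  # 62.0, matching 2^(5+1) - 2
```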
# Search for domain wall dark matter with atomic clocks on board global positioning system satellites

Nature Communications, volume 8, Article number: 1195 (2017)

## Abstract

Cosmological observations indicate that dark matter makes up 85% of all matter in the universe, yet its microscopic composition remains a mystery. Dark matter could arise from ultralight quantum fields that form macroscopic objects. Here we use the global positioning system as a ~50,000 km aperture dark matter detector to search for such objects in the form of domain walls. Global positioning system navigation relies on precision timing signals furnished by atomic clocks. As the Earth moves through the galactic dark matter halo, interactions with domain walls could cause a sequence of atomic clock perturbations that propagate through the satellite constellation at galactic velocities ~300 km s⁻¹. Mining 16 years of archival data, we find no evidence for domain walls at our current sensitivity level. This improves the limits on certain quadratic scalar couplings of domain wall dark matter to standard model particles by several orders of magnitude.

## Introduction

Despite the overwhelming cosmological evidence for the existence of dark matter (DM), there is as of yet no definitive evidence for DM in terrestrial experiments. Multiple cosmological observations suggest that ordinary matter makes up only about 15% of the total matter in the universe, with the remaining portion composed of DM1. All the evidence for DM (e.g., galactic rotation curves, gravitational lensing, the cosmic microwave background) comes from galactic or larger scale observations through the gravitational pull of DM on ordinary matter1. Extrapolation from the galactic to laboratory scales presents a challenge because of the unknown nature of DM constituents. Various theories postulate additional non-gravitational interactions between standard model (SM) particles and DM.
Ambitious programs in particle physics have mostly focused on (so far unsuccessful) searches for weakly interacting massive particle (WIMP) DM candidates with 10–10³ GeV c⁻² masses (c is the speed of light) through their energy deposition in particle detectors2. The null results of the WIMP searches have partially motivated an increased interest in alternative DM candidates, such as ultralight fields. These fields, in contrast to particle candidates, act as coherent entities on the scale of an individual detector. Here we focus on ultralight fields that may cause apparent variations in the fundamental constants of nature. Such variations in turn lead to shifts in atomic energy levels, which may be measurable by monitoring atomic frequencies3,4,5. Such monitoring is performed naturally in atomic clocks, which tell time by locking the frequency of externally generated electromagnetic radiation to atomic frequencies. Here, we analyse time as measured by atomic clocks on board global positioning system (GPS) satellites to search for DM-induced transient variations of fundamental constants6. In effect we use the GPS constellation as a ~50,000 km-aperture DM detector. Our DM search is one example of using GPS for fundamental physics research; another recent example includes placing limits on gravitational waves7. GPS works by broadcasting microwave signals from nominally 32 satellites in medium-Earth orbit. The signals are driven by an atomic clock (either based on Rb or Cs atoms) on board each satellite. By measuring the carrier phases of these signals with a global network of specialised GPS receivers, the geodetic community can position stations at the 1 mm level for purposes of investigating plate tectonics and geodynamics8. As part of this data processing, the time differences between satellite and station clocks are determined with <0.1 ns accuracy9. Such high-quality timing data for at least the past decade are publicly available and are routinely updated.
Here we analyse data from the Jet Propulsion Laboratory10. A more detailed overview of the GPS architecture and data processing relevant to our search is given in Supplementary Note 1, which includes Supplementary Figs. 1 and 2 and Supplementary Table 1. The large aperture of the GPS network is well suited to search for macroscopic DM objects, or clumps. Examples of clumpy DM candidates are numerous: topological defects (TDs)11,12, Q-balls13,14,15, solitons16,17, axion stars18,19 and other stable objects formed due to dissipative interactions in the DM sector. For concreteness, we consider specifically TDs. Each TD type (monopoles, strings or domain walls) would exhibit a transient in GPS data with a distinct signature. Topological defects may be formed during the cooling of the early universe through a spontaneous symmetry breaking phase transition11,12. Technically, this requires the existence of hypothesised self-interacting DM fields, φ. While the exact nature of TDs is model-dependent, the spatial scale of the DM object, d, is generically given by the Compton wavelength of the particles that make up the DM field, $d = \hbar/(m_\varphi c)$, where $m_\varphi$ is the field particle mass and $\hbar$ is the reduced Planck constant. The fields that are of interest here are ultralight: for an Earth-sized object the mass scale is $m_\varphi \sim 10^{-14}\,\mathrm{eV}\,c^{-2}$, hence the probed parameter space is complementary to that of WIMP searches2, as well as searches for other DM candidates20,21,22. Searches for TDs have been performed via their gravitational effects, including gravitational lensing23,24,25. Limits on TDs have been placed by the Planck26 and Background Imaging of Cosmic Extragalactic Polarization 2 (BICEP2)27 collaborations from fluctuations in the cosmic microwave background. So far the existence of TDs is neither confirmed nor ruled out. The past few years have brought several proposals for TD searches via their non-gravitational signatures6,28,29,30,31,32.
Here, we report the results of the search for domain walls, quasi-2D cosmic structures. As a result, we improve the limits on certain quadratic scalar couplings of domain wall DM to standard model particles by several orders of magnitude.

## Results

### Domain wall theory and expected signal

We focus on the search for domain walls, since they would leave the simplest DM signature in the data. General signature matching for the vast set of GPS data has proven to be computationally expensive and is in progress. While we interpret our results in terms of domain wall DM, we remark that our search applies equally to the situation where walls are closed on themselves, forming a bubble that has a transverse size significantly exceeding the terrestrial scale. The galactic structure formation in that case may occur as per conventional cold dark matter theory33, since from the large-distance perspective the bubbles of domain walls behave as point-like objects. We employ the known properties of the DM halo to model the statistics of encounters of the Earth with TDs. Direct measurements34 of the local dark matter density give 0.3 ± 0.1 GeV cm⁻³, and we adopt the value of $\rho_{\rm DM} \approx 0.4$ GeV cm⁻³ for definitiveness. According to the standard halo model, in the galactic rest frame the velocity distribution of DM objects is isotropic and quasi-Maxwellian, with dispersion35 $v \simeq 290$ km s⁻¹ and a cut-off above the galactic escape velocity of $v_{\rm esc} \simeq 550$ km s⁻¹. The Milky Way rotates through the DM halo, with the Sun moving at ~220 km s⁻¹ towards the Cygnus constellation. For the goals of this work we can neglect the much smaller orbital velocities of the Earth around the Sun (~30 km s⁻¹) and of the GPS satellites around the Earth (~4 km s⁻¹). Thereby one may think of a TD wind impinging upon the Earth, with typical relative velocities $v_g \sim 300$ km s⁻¹.
Assuming the standard halo model, the vast majority of events (~ 95%) would come from the forward-facing hemisphere centred about the direction of the Earth's motion through the galaxy, with typical transit times through the GPS constellation of about 3 min. An example of a domain wall crossing is shown in Fig. 1. Note that we make an additional assumption that the distribution of wall velocities is similar to the standard halo model, which is expected if gravity is the main force governing wall dynamics within the galaxy. However, even if this distribution is somewhat different, the qualitative feature of a TD wind is not expected to change.

A positive DM signal can be visualised as a coordinated propagation of clock glitches at galactic velocities through the GPS constellation (see Fig. 2). The powerful advantage of working with the network is that non-DM clock perturbations do not mimic this signature. The only systematic effect with propagation velocities comparable to v_g is the solar wind36, an effect that is simple to exclude based on its distinct directionality from the Sun and the fact that the solar wind does not affect the satellites in the Earth's shadow.

As the nature of non-gravitational interactions of DM with ordinary matter is unknown, we take a phenomenological approach that respects Lorentz and local gauge invariance. We consider quadratic scalar interactions between the DM objects and clock atoms that can be parameterised in terms of shifts in the effective values of fundamental constants6. The relevant combinations of fundamental constants include the dimensionless electromagnetic fine-structure constant $\alpha = e^2/\hbar c \approx 1/137$ (e is the elementary charge), the ratio m_q/Λ_QCD of the light quark mass to the quantum chromodynamics (QCD) energy scale, and the electron and proton masses, m_e and m_p.
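The quoted ~3 min transit time follows from the constellation geometry. A quick estimate (the constellation diameter here is an assumed round number, not taken from the paper):

```python
# Rough transit-time estimate: a wall moving at ~300 km/s crossing the
# ~5e4 km aperture of the GPS constellation takes on the order of 3 minutes.
D_GPS_KM = 5.0e4      # assumed diameter of the GPS constellation, km
V_WALL_KM_S = 300.0   # typical relative (galactic) velocity, km/s

transit_s = D_GPS_KM / V_WALL_KM_S
print(f"transit time ~ {transit_s:.0f} s (~{transit_s/60:.1f} min)")
```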
With the quadratic scalar coupling, the relative change in the local value of each such fundamental constant is proportional to the square of the DM field,

$\frac{\delta X(\mathbf{r},t)}{X} = \Gamma_X\,\varphi(\mathbf{r},t)^2,$ (1)

where Γ_X is the coupling constant between dark and ordinary matter, with X = α, m_e, m_p, m_q/Λ_QCD (see Supplementary Note 2 for further details). As the DM field vanishes outside the TD, the apparent variations in the fundamental constants occur only while the TD overlaps with the clock. This temporary shift in the fundamental constants leads in turn to a transient shift in the atomic energy levels referenced by the clocks, which may be measurable by monitoring atomic frequencies3,4,5. The frequency shift can be expressed as

$\frac{\delta\omega(\mathbf{r},t)}{\omega_c} = \sum_X K_X \frac{\delta X(\mathbf{r},t)}{X},$ (2)

where ω_c is the unperturbed clock frequency and the K_X are known coefficients of sensitivity to effective changes in the constant X for a particular clock transition37. It is worth noting that the values of the sensitivity coefficients K_X depend on the experimental realisation. Here we compare spatially separated clocks (to be contrasted with the conventional frequency-ratio comparisons3,4,5), and thus the values of K_X we use differ somewhat from those elsewhere in the literature37; full details are presented in Supplementary Note 2. For example, for the microwave frequency 87Rb clocks on board the GPS satellites, the sensitivity coefficients give

$\left(\frac{\delta\omega}{\omega_c}\right)_{\rm Rb} = \left(4.34\,\Gamma_\alpha - 0.019\,\Gamma_q + \Gamma_{e/p}\right)\varphi^2 \equiv \Gamma_{\rm eff}^{\rm Rb}\,\varphi^2,$ (3)

where we have introduced the short-hand notations $\Gamma_q \equiv \Gamma_{m_q/\Lambda_{\rm QCD}}$ and $\Gamma_{e/p} \equiv 2\Gamma_{m_e} - \Gamma_{m_p}$, and the effective coupling constant Γ_eff ≡ Σ_X K_X Γ_X. From Eqs. (1) and (2), the extreme TD-induced frequency excursion, δω_ext, is related to the field amplitude φ_max inside the defect as $\delta\omega_{\rm ext} = \Gamma_{\rm eff}\,\omega_c\,\varphi_{\rm max}^2$. Further, assuming that a particular TD type saturates the DM energy density, we have6 $\varphi_{\rm max}^2 = \hbar c\,\rho_{\rm DM}\,T\,v_g\,d$.
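The linear combination in Eq. (3) can be sketched as a small helper (the Γ_X input values below are placeholders, only the Rb K_X coefficients are from the text):

```python
# Effective coupling for the GPS Rb clocks, Eq. (3): Gamma_eff is a fixed
# linear combination of the individual couplings Gamma_X weighted by the
# clock's sensitivity coefficients K_X.
K_RB = {"alpha": 4.34, "q": -0.019, "e_p": 1.0}

def gamma_eff(gammas, K=K_RB):
    """Gamma_eff = sum_X K_X * Gamma_X (units of 1/energy^2)."""
    return sum(K[x] * gammas.get(x, 0.0) for x in K)

# If only the alpha coupling is nonzero, Gamma_eff = 4.34 * Gamma_alpha:
print(gamma_eff({"alpha": 1.0}))
```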
Here, T is the average time between consecutive encounters of a clock with DM objects, which, for a given ρ_DM, depends on the energy density inside the defect6, $\rho_{\rm inside} = \rho_{\rm DM}\,T\,v_g/d$. Thus the expected DM-induced fractional frequency excursion reads

$\frac{\delta\omega_{\rm ext}}{\omega_c} = \Gamma_{\rm eff}\,\hbar c\,\rho_{\rm DM}\,v_g\,T\,d,$ (4)

which is valid for TDs of any type (monopoles, walls and strings). The frequency excursion is positive for Γ_eff > 0 and negative for Γ_eff < 0. The key qualifier for the preceding Eq. (4) is that one must be able to distinguish between the clock noise and DM-induced frequency excursions. Discriminating between the two sources relies on measuring time delays between DM events at network nodes. Indeed, if we consider a pair of spatially separated clocks (Fig. 2), the DM-induced frequency shift of Eq. (2) translates into a distinct pattern. The velocity of the sweep is encoded in the time delay between the two DM-induced spikes, and it must lie within the boundaries predicted by the standard halo model. Generalisation to the multi-node network is apparent (see the GPS-specific discussion below). The distributed response of the network encodes the spatial structure and kinematics of the DM object, and its coupling to atomic clocks.

### Analysis and search

Working with GPS data introduces several peculiarities into the above discussion (see Supplementary Note 1 for details). The most relevant is that the available GPS clock data are clock biases (i.e., time differences between the satellite and reference clocks) $S^{(0)}(t_k)$, sampled at times (epochs) t_k every 30 s. Thus we cannot access the continuously sampled clock frequencies as in Fig. 2. Instead, we formed discretised pseudo-frequencies $S^{(1)}(t_k) \equiv S^{(0)}(t_k) - S^{(0)}(t_{k-1})$.
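The differencing step can be sketched on synthetic data (a toy noise model, not real GPS biases): a thin-wall crossing adds a step to the clock bias, and first-differencing turns that step into a solitary spike.

```python
import numpy as np

# Forming pseudo-frequencies S1 from 30-s clock-bias samples S0.
# Toy model: random-walk clock bias plus a DM-induced 0.8 ns step.
rng = np.random.default_rng(1)
n_epochs = 200
bias = np.cumsum(rng.normal(0.0, 0.05, n_epochs))  # clock bias noise, ns
bias[120:] += 0.8                                  # step from a wall crossing

s1 = np.diff(bias)                 # S1(t_k) = S0(t_k) - S0(t_{k-1})
spikes = np.flatnonzero(np.abs(s1) > 0.5)
print("spike epochs:", spikes)     # the bias step appears as a single spike
```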
Then the signal is especially simple if the DM object transit time through a given clock, d/v_g, is smaller than the 30-s epoch interval (i.e., thin DM objects with $d \lesssim 10^4$ km, roughly the size of the Earth), since in this case S^(1) collapses into a solitary spike at t_k if the DM object was encountered during the (t_{k−1}, t_k) interval. The exact time of interaction within this interval is treated as a free parameter. One of the expected S^(1) signatures for a thin domain wall propagating through the GPS constellation is shown in Fig. 3a. This signature was generated for a domain wall incident with v = 300 km s−1 from the most probable direction. The derivation of the specific expected domain wall signal is presented in Supplementary Note 3, which includes Supplementary Fig. 3. As the DM response of the Rb and Cs satellite clocks can differ due to their distinct effective coupling constants Γ_eff, we treated the Cs and Rb satellites as two sub-networks, and performed the analysis separately. Within each sub-network we chose the clock on board the most recently launched satellite as the reference because, as a rule, such clocks are the least noisy among all the clocks in orbit.

To search for domain wall signals, we analysed the S^(1) GPS data streams in two stages. At the first stage, we scanned all the data from May 2000 to October 2016, searching for the most general patterns associated with a domain wall crossing, without taking into account the order in which the satellites were swept. We required at least 60% of the clocks to experience a frequency excursion at the same epoch, which would correspond to when the wall crossed the reference clock (vertical blue line in Fig. 3a). This 60% requirement is a conservative choice based on the GPS constellation geometry, and ensures sensitivity to walls with relative speeds of up to $v \lesssim 700$ km s−1.
Then, we checked whether these clocks also exhibit a frequency excursion of similar magnitude (accounting for clock noise) and opposite sign anywhere else within a given time window (red tiles in Fig. 3a). Any epoch for which these criteria were met was counted as a potential event. We considered time windows corresponding to sweep durations through the GPS constellation of up to 15,000 s, which is sufficiently long to ensure sensitivity to walls moving at relative velocities down to v ~ 4 km s−1 (given that <0.1% of DM objects are expected to move with velocities outside of this range). Further details of the employed search technique are presented in the Methods section and Supplementary Note 4.

The tiled representation of the GPS data stream depends on the chosen signal cut-off $S_{\rm cut}^{(1)}$ (see Fig. 3). We systematically decreased the cut-off values and repeated the above procedure. Above a certain threshold, $S_{\rm thresh}^{(1)}$, no potential events were seen. This process is demonstrated for a single arbitrarily chosen data window in Fig. 3b, c. The thresholds for the Rb and Cs sub-networks above which no potential events were seen are $S_{\rm thresh}^{(1)}({\rm Rb}) = 0.48$ ns and $S_{\rm thresh}^{(1)}({\rm Cs}) = 0.56$ ns for v ≈ 300 km s−1 sweeps.

The second stage of the search involved analysing the potential events in more detail, so that we might elevate their status to candidate events if warranted by the evidence. We examined a few hundred potential events with S^(1) magnitudes just below $S_{\rm thresh}^{(1)}$ by matching the data streams against the expected patterns; one such example is shown in Fig. 3a. At this second stage, we accounted for the ordering and time at which each satellite clock was affected. The velocity vector and wall orientation were treated as free parameters within the bounds of the standard halo model.
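The velocity coverage implied by these two criteria can be estimated directly (the constellation diameter below is an assumed round number, not from the paper):

```python
# Velocity coverage of the search: the 60% same-epoch requirement caps the
# fastest detectable walls, while the longest search window (15,000 s)
# sets the slowest wall that still sweeps the whole network.
D_GPS_KM = 5.0e4                    # assumed GPS constellation diameter, km
V_MAX_KM_S = 700.0                  # from the 60% same-epoch requirement
J_W_MAX_S = 15000.0                 # longest considered time window, s

v_min_km_s = D_GPS_KM / J_W_MAX_S   # slowest wall still crossing in one window
print(f"coverage: ~{v_min_km_s:.1f} to ~{V_MAX_KM_S:.0f} km/s")
```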
As a result of this pattern matching, we found that none of these events were consistent with domain wall DM; thus we have found no candidate events at our current sensitivity. Analysing the numerous potential events well below $S_{\rm thresh}^{(1)}$ has proven to be substantially more computationally demanding, and is beyond the scope of the current work.

## Discussion

As we did not find evidence for encounters with domain walls at our current sensitivity, there are two possibilities: either DM of this nature does not exist, or the DM signals are below our sensitivity. In the latter case we may constrain the possible range of the coupling strengths Γ_eff. For the discrete pseudo-frequencies, and considering the case of thin domain walls, Eq. (4) becomes

$\Gamma_{\rm eff} < \frac{S_{\rm thresh}^{(1)}}{\sqrt{\pi}\,\hbar c\,\rho_{\rm DM}\,T\,s(d)\,d^2}.$ (5)

Our technique is not equally sensitive to all values of the wall width, d, or the average time between collisions, T. This is directly taken into account by introducing a sensitivity function, s(d) ∈ [0, 1], that is included in Eq. (5) to determine the final limits at the 90% confidence level. For example, the smallest width is determined by the servo-loop time of the GPS clocks, i.e., by how quickly the clock responds to changes in the atomic frequencies. In addition, we are sensitive to events that occur less frequently than once every ~ 150 s (so the expected patterns do not overlap), which places a lower bound on T. Further, we incorporate the expected event statistics into Eq. (5). Details are presented in the Methods section.

Our results are presented in Fig. 4. To be consistent with previous literature6,38, the limits are presented for the effective energy scale $\Lambda_{\rm eff} \equiv 1/\sqrt{\Gamma_{\rm eff}}$. Further, on the assumption that the coupling strength Γ_α dominates over the other couplings in the linear combination in Eq. (3), we place limits on Λ_α. The resulting limits are shown in Fig. 5, together with existing constraints38,39.
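A rough numerical illustration of the Eq. (5) limit, with sample parameters and s(d) set to 1 (these are assumed inputs for scale only, not the paper's final numbers):

```python
import math

# Illustration of the Eq. (5) limit and the corresponding energy scale
# Lambda_eff = 1/sqrt(Gamma_eff). Sample parameters; s(d) = 1 assumed.
HBARC_EV_M = 1.97327e-7        # hbar*c in eV*m
RHO_DM_EV_M3 = 4.0e14          # 0.4 GeV/cm^3 expressed in eV/m^3
S_THRESH_S = 0.48e-9           # Rb sub-network threshold, in seconds
T_AVG_S = 86400.0              # assumed average time between events: 1 day
D_M = 1.0e7                    # wall width d = 1e4 km, in metres

gamma_max = S_THRESH_S / (math.sqrt(math.pi) * HBARC_EV_M * RHO_DM_EV_M3
                          * T_AVG_S * D_M**2)      # upper bound, eV^-2
lam_eff_tev = 1.0 / math.sqrt(gamma_max) / 1.0e12  # excluded below this scale
print(f"excluded below Lambda_eff ~ {lam_eff_tev:.1e} TeV")
```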
For certain parameters, our limits exceed the 10^7 TeV level; astrophysical limits39 on Λ_α, which come from stellar and supernova energy-loss observations40,41, have not exceeded ~ 10 TeV. The derived constraints on Λ_α can be translated into a limit on the transient variation of the fine-structure constant,

$\frac{\delta\alpha}{\alpha} = \frac{\hbar c\,\rho_{\rm DM}\,v_g\,T\,d}{\Lambda_\alpha^2\,K_\alpha},$ (6)

which for d = 10^4 km corresponds to $\delta\alpha/\alpha \lesssim 10^{-12}$. Because of the scaling of the constraints on Λ_X, this result is independent of T, and scales inversely with d (within the region of applicability). It is worth contrasting this constraint with results from searches for slow linear drifts of fundamental constants. For example, the search5 resulting in the most stringent limits on long-term drifts of α was carried out over a year and led to $\delta\alpha/\alpha \lesssim 3\times 10^{-17}$. Such long-term limits apply only for very thick walls, of thickness $d \gg v_g \times 1\,{\rm yr} \sim 10^{10}$ km, which are outside our present discovery reach.

Further, by combining our results from the Rb and Cs GPS sub-networks with the recent limits on Λ_α from an optical Sr clock38, we also place independent limits on Λ_e/p and Λ_q; for details, see Supplementary Note 5. These limits are presented in Fig. 6 as a function of the average time between events. For certain values of the d and T parameters, we improve current bounds on Λ_e/p by a factor of ~ 10^5 and for the first time establish limits on Λ_q. While we have improved the current constraints on DM-induced transient variation of fundamental constants by several orders of magnitude, it is possible that DM events remain undiscovered in the data noise. Our current threshold $S_{\rm thresh}^{(1)}$ is larger than the GPS data noise by a factor of ~ 5–20, depending on which clocks/time periods are examined. By applying a more sophisticated statistical approach with greater computing power, we expect to improve our sensitivity by up to two orders of magnitude.
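The thick-wall threshold quoted above follows from a one-line estimate (rough numbers):

```python
# Order-of-magnitude check: walls thick enough that a crossing mimics a
# year-long drift rather than a transient satisfy d >> v_g * 1 yr.
V_G_KM_S = 300.0     # typical galactic relative velocity, km/s
YEAR_S = 3.156e7     # one year in seconds

d_drift_km = V_G_KM_S * YEAR_S
print(f"d >> {d_drift_km:.1e} km")   # ~1e10 km, beyond the thin-wall reach
```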
Indeed, the sensitivity of the search is statistically determined by the number of clocks in the network, N_clocks, and the Allan deviation42 evaluated at the data sampling interval τ_0 = 30 s, σ_y(τ_0):

$S^{(1)} \lesssim \frac{\sigma_y(\tau_0)\,\tau_0}{\sqrt{N_{\rm clocks}}},$ (7)

or, combining with Eq. (5),

$\Lambda_X \lesssim d\,\sqrt{\hbar c\,\rho_{\rm DM}\,T\,K_X\,\frac{\sqrt{N_{\rm clocks}}}{\sigma_y(\tau_0)\,\tau_0}}.$ (8)

Note that this estimate differs from a previous estimate6: while arriving at Eq. (7), we assumed a more realistic white frequency noise (instead of white phase noise). The projected discovery reach of GPS data analysis is presented in Fig. 5. Prospects for the future include incorporating substantially improved clocks on next-generation satellites, increasing the network density with other Global Navigation Satellite Systems, such as the European Galileo, the Russian Global Navigation Satellite System (GLONASS) and the Chinese BeiDou, and including networks of laboratory clocks38,43. Such an expansion could bring the total number of clocks to ~ 100. Moreover, the GPS search can be extended to other TD types (monopoles and strings), as well as to different DM models, such as virialized DM fields32,44.

In summary, by using GPS as a dark matter detector, we have substantially improved the current limits on transient variations of fundamental constants induced by domain wall DM. Our approach relies on mining an extensive set of archival data, using existing infrastructure. As direct DM searches widen to include alternative DM candidates, it is anticipated that the mining of time-stamped archival data, especially from laboratory precision measurements, will play an important role in verifying or excluding the predictions of various DM models45. In the future, our approach can be used for a DM search with nascent networks of laboratory atomic clocks that are orders of magnitude more accurate than the GPS clocks43.
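The Eq. (7) statistical floor can be evaluated for representative numbers; the Allan deviation and sub-network size below are assumed illustrative values, not measured ones:

```python
import math

# Statistical sensitivity floor, Eq. (7): S1 limited by the clock Allan
# deviation at tau_0 = 30 s, averaged over the network.
SIGMA_Y_30S = 1.0e-11   # assumed sigma_y(tau_0) for a GPS Rb clock
TAU0_S = 30.0           # data sampling interval, s
N_CLOCKS = 30           # rough size of one GPS sub-network

s1_floor_ns = SIGMA_Y_30S * TAU0_S / math.sqrt(N_CLOCKS) * 1e9
print(f"S1 statistical floor ~ {s1_floor_ns:.3f} ns")
```

With these inputs the floor comes out roughly an order of magnitude below the 0.48 ns Rb threshold, consistent with the factor ~5–20 quoted in the text.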
## Methods

### Crossing duration distribution

Before placing limits on Γ_eff, we must account for the fact that we do not have equal sensitivity to each domain wall width d, or, equivalently, to each crossing duration

$\tau \equiv \frac{d}{v_\perp},$ (9)

where v_⊥ is the component of the velocity perpendicular to the face of the DM wall. This is due in part to aspects of the clock hardware and operation, the time resolution (data sampling frequency), and the employed search method. Therefore, for a given d, we must determine the proportion of events that have crossing durations within the range τ_min to τ_max, where τ_min(max) is the minimum (maximum) crossing duration to which this method is sensitive.

There are two factors that determine τ_min. The first is the servo-loop time—the fastest perturbation that can be recorded by the clock. The servo-loop time is manually adjusted by military operators and is not available to us for a given epoch; however, it is known46,47 to be between 0.01 and 0.1 s. As such, we consider the best- and worst-case scenarios:

$\tau_{\min}^{\rm Servo}({\rm best}) = 0.01\ {\rm s}, \qquad \tau_{\min}^{\rm Servo}({\rm worst}) = 0.1\ {\rm s}.$ (10)

Note that below τ_Servo we still have sensitivity to DM events; however, the sensitivity in this region is determined by the response of the quartz oscillator to the temporary variation in fundamental constants. The resulting limits for crossing durations shorter than the servo-loop time are generally weaker than the existing astrophysics limits (see Supplementary Note 5, including Supplementary Figs. 4–8), so we consider this case no further. The second condition that affects τ_min is the clock degeneracy: the employed GPS data set has only 30-s resolution, so any clocks that are affected within 30 s of the reference clock will not exhibit any DM-induced frequency excursion in their data; see satellites 15 and 16 in Fig. 3a.
For <60% of the clocks to experience the jump, the (thin-wall) DM object would have to be travelling at over 700 km s−1, which is close to the galactic escape velocity (for head-on collisions), so the degeneracy does not affect the derived limits in a substantial way. (In fact, assuming the standard halo model, <0.1% of events are expected to have v_⊥ > 700 km s−1.) This velocity corresponds to a crossing duration for the entire network of ~ 70 s. Transforming this to the crossing duration for a single clock, τ, amounts to multiplying by the ratio d/D_GPS:

$\tau_{\min}^{\rm Degen.} = \frac{d}{D_{\rm GPS}}\,70\ {\rm s}.$ (11)

For the thickest walls we consider (~ 10^4 km), this leads to a $\tau_{\min}^{\rm Degen.}$ of 14 s. Combining the servo-loop and degeneracy considerations, we arrive at the expression

$\tau_{\min}^{\rm best} = \max\!\left(0.01\ {\rm s},\ \frac{d}{D_{\rm GPS}}\,70\ {\rm s}\right), \qquad \tau_{\min}^{\rm worst} = \max\!\left(0.1\ {\rm s},\ \frac{d}{D_{\rm GPS}}\,70\ {\rm s}\right).$ (12)

For walls thicker than d ~ 100 km, τ_min is determined by the (d/D_GPS) 70 s term.

As to the maximum crossing duration, there are also two factors that affect τ_max. First, the wall must pass each clock in less than the sampling interval of 30 s—this is the condition for the wall to be considered thin:

$\tau_{\max}^{\rm thin} = 30\ {\rm s}.$ (13)

If a wall takes longer than 30 s to pass a clock, the simple single-data-point signals shown in Fig. 3 would become more complicated, and would require a more detailed pattern-matching technique. Second, we only consider time windows, J_w, of a certain size in our analysis (see Supplementary Note 4). If a wall moves so slowly that it does not sweep all the clocks within this window, the event would be missed:

$\tau_{\max}^{\rm window} = J_w\,\frac{d}{D_{\rm GPS}}.$ (14)

Therefore, the overall expression for τ_max is

$\tau_{\max} = \min\!\left(30\ {\rm s},\ J_w\,\frac{d}{D_{\rm GPS}}\right).$ (15)

Making J_w large, however, also tends to increase $S_{\rm thresh}^{(1)}$ (since there is a higher chance that a large window will satisfy the condition for a potential event).
By performing the analysis for multiple values of J_w, we can probe the largest portion of the parameter space; for further details see Supplementary Note 4 and Supplementary Tables 2 and 3. In this work, we consider windows of J_w up to 500 epochs (15,000 s), which corresponds to a minimum velocity of ~ 4 km s−1, roughly the orbital speed of the satellites. This has a negligible effect on our sensitivity, since <0.1% of walls are expected to have v_⊥ < 4 km s−1.

### Domain wall width sensitivity

Assuming the standard halo model, the relative scalar velocity distribution of DM objects that cross the GPS network is quasi-Maxwellian,

$f_v(v) = C\,\frac{v^2}{v_c^3}\left[\exp\!\left(-\frac{(v - v_c)^2}{v_c^2}\right) - \exp\!\left(-\frac{(v + v_c)^2}{v_c^2}\right)\right],$ (16)

where v_c = 220 km s−1 is the Sun's velocity in the halo frame, and C is a normalisation constant. The form of Eq. (16) is a consequence of the motion of the reference frame. However, the distribution of interest for domain walls is the perpendicular velocity distribution of the walls that cross the network,

$f_\perp(v_\perp) = C'\int_{v_\perp}^{\infty} f_v(v)\,\frac{v_\perp}{v^2}\,{\rm d}v,$ (17)

where v_⊥ is the component of the wall's velocity that is perpendicular to the wall, and C′ is a normalisation constant. Note that this is not the distribution of perpendicular velocities in the galaxy—instead, it is the distribution of perpendicular velocities of walls that are expected to cross paths with the GPS constellation (walls with velocities close to parallel to the face of the wall are less likely to encounter the GPS satellites, while walls with higher velocities are more likely to). Now, define a function f_τ(d, τ) such that the integral $\int_{\tau_a}^{\tau_b} f_\tau(d,\tau)\,{\rm d}\tau$ gives the fraction of events due to walls of width d that have crossing durations between τ_a and τ_b. This function must satisfy the normalisation $\int_0^\infty f_\tau(d,\tau)\,{\rm d}\tau = 1$ for all d, and is given by

$f_\tau(d,\tau) = \frac{d}{\tau^2}\,f_\perp(v_\perp), \qquad v_\perp = \frac{d}{\tau}.$ (18)

Plots of the velocity and crossing-time distributions are given in Fig. 7.
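The distributions of Eqs. (16) and (17), and the resulting sensitivity integral, can be evaluated numerically. The sketch below is a toy discretisation (not the paper's code), with an assumed constellation diameter D_GPS = 5×10⁴ km; it exploits the change of variables τ = d/v_⊥, under which the crossing-duration window maps to a perpendicular-velocity window:

```python
import numpy as np

# Numerical sketch of Eqs. (16)-(19): quasi-Maxwellian speed distribution
# f_v, perpendicular-velocity distribution f_perp for network-crossing
# walls, and the sensitivity function s(d).
V_C, D_GPS_KM = 220.0, 5.0e4   # Sun's halo-frame speed; assumed D_GPS, km

def trapz(y, x):
    """Trapezoidal integral (np.trapz was removed in NumPy 2.0)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

v = np.linspace(1.0, 2000.0, 2000)   # km/s grid
f_v = (v**2 / V_C**3) * (np.exp(-((v - V_C) / V_C)**2)
                         - np.exp(-((v + V_C) / V_C)**2))
f_v /= trapz(f_v, v)                 # fixes the normalisation constant C

# Eq. (17): f_perp(vp) = C' * integral_{vp}^{inf} f_v(v) * vp / v^2 dv
f_perp = np.array([trapz(f_v[v >= vp] * vp / v[v >= vp]**2, v[v >= vp])
                   for vp in v])
f_perp /= trapz(f_perp, v)           # fixes C'

def s_of_d(d_km, servo_s=0.01, j_w_s=15000.0):
    """Eq. (19), via tau = d/v_perp: integrate f_perp over the window."""
    tau_min = max(servo_s, d_km * 70.0 / D_GPS_KM)   # Eq. (12), best case
    tau_max = min(30.0, j_w_s * d_km / D_GPS_KM)     # Eq. (15)
    mask = (v >= d_km / tau_max) & (v <= d_km / tau_min)
    return trapz(f_perp[mask], v[mask])

print(f"s(d = 1e4 km) ~ {s_of_d(1.0e4):.2f}")
```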
Then, our sensitivity at a particular wall width is

$s(d) = \int_{\tau_{\min}}^{\tau_{\max}} f_\tau(d,\tau)\,{\rm d}\tau.$ (19)

Plots of the sensitivity function for several parameter choices are presented in Fig. 8.

### Data availability

We used publicly available GPS timing and orbit data for the past 16 years from the Jet Propulsion Laboratory10.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Bertone, G., Hooper, D. & Silk, J. Particle dark matter: evidence, candidates and constraints. Phys. Rep. 405, 279–390 (2005).
2. Liu, J., Chen, X. & Ji, X. Current status of direct dark matter detection experiments. Nat. Phys. 13, 212–216 (2017).
3. Rosenband, T. et al. Frequency ratio of Al+ and Hg+ single-ion optical clocks; metrology at the 17th decimal place. Science 319, 1808–1812 (2008).
4. Huntemann, N. et al. Improved limit on a temporal variation of m_p/m_e from comparisons of Yb+ and Cs atomic clocks. Phys. Rev. Lett. 113, 210802 (2014).
5. Godun, R. M. et al. Frequency ratio of two optical clock transitions in 171Yb+ and constraints on the time variation of fundamental constants. Phys. Rev. Lett. 113, 210801 (2014).
6. Derevianko, A. & Pospelov, M. Hunting for topological dark matter with atomic clocks. Nat. Phys. 10, 933–936 (2014).
7. Aoyama, S., Tazai, R. & Ichiki, K. Upper limit on the amplitude of gravitational waves around 0.1 Hz from the global positioning system. Phys. Rev. D 89, 067101 (2014).
8. Blewitt, G. in Treatise on Geophysics (ed. Schubert, G.), 307–338 (Elsevier, 2015).
9. Ray, J. & Senior, K. Geodetic techniques for time and frequency comparisons using GPS phase and code measurements. Metrologia 42, 215–232 (2005).
10. Murphy, D. et al. JPL Analysis Center Technical Report (in IGS Technical Report) 2015, 77 (2015).
11. Kibble, T. Some implications of a cosmological phase transition. Phys. Rep. 67, 183–199 (1980).
12.
Vilenkin, A. Cosmic strings and domain walls. Phys. Rep. 121, 263–315 (1985).
13. Coleman, S. Q-balls. Nucl. Phys. B 262, 263–283 (1985).
14. Kusenko, A. & Steinhardt, P. J. Q-ball candidates for self-interacting dark matter. Phys. Rev. Lett. 87, 141301 (2001).
15. Lee, K., Stein-Schabes, J. A., Watkins, R. & Widrow, L. M. Gauged Q balls. Phys. Rev. D 39, 1665–1673 (1989).
16. Marsh, D. J. E. & Pop, A.-R. Axion dark matter, solitons and the cusp-core problem. Mon. Not. R. Astron. Soc. 451, 2479–2492 (2015).
17. Schive, H.-Y., Chiueh, T. & Broadhurst, T. Cosmic structure as the quantum interference of a coherent dark wave. Nat. Phys. 10, 496–499 (2014).
18. Hogan, C. & Rees, M. Axion miniclusters. Phys. Lett. B 205, 228–230 (1988).
19. Kolb, E. W. & Tkachev, I. I. Axion miniclusters and bose stars. Phys. Rev. Lett. 71, 3051–3054 (1993).
20. Bozek, B., Marsh, D. J. E., Silk, J. & Wyse, R. F. G. Galaxy UV-luminosity function and reionization constraints on axion dark matter. Mon. Not. R. Astron. Soc. 450, 209–222 (2015).
21. Arvanitaki, A. & Dubovsky, S. Exploring the string axiverse with precision black hole physics. Phys. Rev. D 83, 044026 (2011).
22. Arvanitaki, A. & Geraci, A. A. Resonant detection of axion mediated forces with nuclear magnetic resonance. Phys. Rev. Lett. 113, 161801 (2014).
23. Schneider, P., Ehlers, J. & Falco, E. E. Gravitational Lenses (Springer-Verlag, 1992).
24. Vilenkin, A. & Shellard, E. Cosmic Strings and Other Topological Defects (Cambridge University Press, 1994).
25. Cline, D. B. (ed.). Sources and Detection of Dark Matter and Dark Energy in the Universe. Fourth International Symposium Held at Marina del Rey, CA, USA, February 23–25, 2000 (Springer, 2001).
26. The Planck Collaboration. Planck 2013 results. XXV. Searches for cosmic strings and other topological defects. Astron. Astrophys. 571, A25 (2014).
27. The BICEP2 Collaboration.
Detection of B-mode polarization at degree angular scales by BICEP2. Phys. Rev. Lett. 112, 241101 (2014).
28. Pospelov, M. et al. Detecting domain walls of axionlike models using terrestrial experiments. Phys. Rev. Lett. 110, 021803 (2013).
29. Pustelny, S. et al. The global network of optical magnetometers for exotic physics (GNOME): a novel scheme to search for physics beyond the standard model. Ann. Phys. 525, 659–670 (2013).
30. Hall, E. D. et al. Laser interferometers as dark matter detectors. Preprint at http://arxiv.org/abs/ (2016).
31. Stadnik, Y. V. & Flambaum, V. V. Searching for topological defect dark matter via nongravitational signatures. Phys. Rev. Lett. 113, 151301 (2014).
32. Kalaydzhyan, T. & Yu, N. Extracting dark matter signatures from atomic clock stability measurements. Preprint at http://arxiv.org/abs/ (2017).
33. Blumenthal, G. R., Faber, S. M., Primack, J. R. & Rees, M. J. Formation of galaxies and large-scale structure with cold dark matter. Nature 311, 517–525 (1984).
34. Bovy, J. & Tremaine, S. On the local dark matter density. Astrophys. J. 756, 89 (2012).
35. Freese, K., Lisanti, M. & Savage, C. Colloquium: annual modulation of dark matter. Rev. Mod. Phys. 85, 1561 (2013).
36. Meyer-Vernet, N. Basics of the Solar Wind (Cambridge University Press, 2007).
37. Flambaum, V. V. & Dzuba, V. A. Search for variation of the fundamental constants in atomic, molecular, and nuclear spectra. Can. J. Phys. 87, 25–33 (2009).
38. Wcisło, P. et al. Experimental constraint on dark matter detection with optical atomic clocks. Nat. Astron. 1, 0009 (2016).
39. Olive, K. A. & Pospelov, M. Environmental dependence of masses and coupling constants. Phys. Rev. D 77, 043524 (2008).
40. Raffelt, G. G. Particle physics from stars. Annu. Rev. Nucl. Part. Sci. 49, 163–216 (1999).
41. Hirata, K. S. et al. Observation in the Kamiokande-II detector of the neutrino burst from supernova SN1987A. Phys. Rev.
D 38, 448–458 (1988).
42. Barnes, J. A. et al. Characterization of frequency stability. IEEE Trans. Instrum. Meas. 20, 105–120 (1971).
43. Riehle, F. Optical clock networks. Nat. Photonics 11, 25–31 (2017).
44. Derevianko, A. Detecting dark matter waves with precision measurement tools. Preprint at http://arxiv.org/abs/ (2016).
45. Budker, D. & Derevianko, A. A data archive for storing precision measurements. Phys. Today 68, 10–11 (2015).
46. Griggs, E., Kursinski, E. R. & Akos, D. Short-term GNSS satellite clock stability. Radio Sci. 50, 813–826 (2015).
47. Dupuis, R. T., Lynch, T. J. & Vaccaro, J. R. in 2008 IEEE International Frequency Control Symposium, 655–660 (2008).
48. Wolfram Research, Inc. Mathematica, Version 11.2 (Champaign, IL, 2017).

## Acknowledgements

We thank A. Sushkov and Y. Stadnik for discussions, and J. Weinstein for his comments on the manuscript. We acknowledge the International GNSS Service for the GPS data acquisition. We used JPL's GIPSY software for the orbit reference frame conversion. This work was supported in part by the US National Science Foundation grant PHY-1506424.

## Author information

### Affiliations

1. Department of Physics, University of Nevada, Reno, NV, 89557, USA
   Benjamin M. Roberts, Geoffrey Blewitt, Conner Dailey, Mac Murphy, Alex Rollings, Wyatt Williams & Andrei Derevianko
2. Nevada Geodetic Laboratory, Nevada Bureau of Mines and Geology, University of Nevada, Reno, NV, 89557, USA
   Geoffrey Blewitt
3. Department of Physics and Astronomy, University of Victoria, Victoria, BC, Canada, V8P 1A1
   Maxim Pospelov
4. Perimeter Institute for Theoretical Physics, Waterloo, ON, Canada, N2J 2W9
   Maxim Pospelov
5. National Institute of Standards and Technology, Boulder, CO, 80305, USA
   Jeff Sherman

### Contributions

B.M.R., G.B., M.P., J.S. and A.D. contributed to the concept and design of the work. B.M.R., C.D., M.M., A.R., W.W. and A.D.
contributed to the data processing and analysis. The manuscript was mainly written by B.M.R., G.B. and A.D.

### Competing interests

The authors declare no competing financial interests.

### Corresponding author

Correspondence to Andrei Derevianko.
https://hal.inria.fr/hal-00692499
# Knockout Prediction for Reaction Networks with Partial Kinetic Information

* Corresponding author

Abstract: In synthetic biology, a common application field for computational methods is the prediction of knockout strategies for reaction networks. The major challenge there is the lack of information on reaction kinetics. In this paper, we propose an approach, based on abstract interpretation, to predict candidates for reaction knockouts, relying only on partial kinetic information. We consider the usual deterministic steady state semantics of reaction networks and a few general properties of reaction kinetics. We introduce a novel abstract domain over pairs of real domain values to compute the differences between steady states that are reached before and after applying some knockout. We show that this abstract domain allows us to predict correct knockout strategy candidates independent of any particular choice of reaction kinetics. Our predictions remain candidates, since our abstract interpretation over-approximates the solution space. We provide an operational semantics for our abstraction in terms of constraint satisfaction problems and illustrate our approach on a realistic network. An extended version of this paper with appendices containing detailed information on proofs and on the example model is provided at http://www.lifl.fr/BioComputing/extendedPapers/vmcai13.pdf.

Keywords: Abstract interpretation, deterministic semantics, steady state, constraint satisfaction, synthetic biology.

Document type: Conference paper. 14th International Conference on Verification, Model Checking, and Abstract Interpretation, Jan 2013, Rome, Italy.
Springer, 7737, pp. 355-374, 2013, Lecture Notes in Computer Science. https://hal.inria.fr/hal-00692499

### Citation

Mathias John, Mirabelle Nebut, Joachim Niehren. Knockout Prediction for Reaction Networks with Partial Kinetic Information. 14th International Conference on Verification, Model Checking, and Abstract Interpretation, Jan 2013, Rome, Italy. Springer, 7737, pp. 355-374, 2013, Lecture Notes in Computer Science. <hal-00692499>
http://support.sas.com/documentation/cdl/en/statug/68162/HTML/default/statug_logistic_details12.htm
# The LOGISTIC Procedure

### Model Fitting Information

For the jth observation, let $\hat{p}_j$ be the estimated probability of the observed response. The three criteria displayed by the LOGISTIC procedure are calculated as follows:

• –2 log likelihood:

$-2 \textrm{ Log L} = -2 \sum_j \frac{w_j f_j}{\sigma^2} \log(\hat{p}_j)$

where $w_j$ and $f_j$ are the weight and frequency values of the jth observation, and $\sigma^2$ is the dispersion parameter, which equals 1 unless the SCALE= option is specified. For binary response models that use events/trials MODEL statement syntax, this is

$-2 \textrm{ Log L} = -2 \sum_j \frac{w_j f_j}{\sigma^2} \left[ r_j \log(\hat{p}_j) + (n_j - r_j) \log(1 - \hat{p}_j) \right]$

where $r_j$ is the number of events, $n_j$ is the number of trials, $\hat{p}_j$ is the estimated event probability, and the statistic is reported both with and without the constant term.

• Akaike’s information criterion:

$\textrm{AIC} = -2 \textrm{ Log L} + 2p$

where p is the number of parameters in the model. For cumulative response models, $p = k + s$, where k is the total number of response levels minus one and s is the number of explanatory effects. For the generalized logit model, $p = k(s + 1)$.

• Schwarz (Bayesian information) criterion:

$\textrm{SC} = -2 \textrm{ Log L} + p \log\Big( \sum_j f_j n_j \Big)$

where p is the number of parameters in the model, $n_j$ is the number of trials when events/trials syntax is specified, and $n_j = 1$ with single-trial syntax.

The AIC and SC statistics give two different ways of adjusting the –2 Log L statistic for the number of terms in the model and the number of observations used. These statistics can be used when comparing different models for the same data (for example, when you use the SELECTION= STEPWISE option in the MODEL statement). The models being compared do not have to be nested; lower values of the statistics indicate a more desirable model. The difference in the –2 Log L statistics between the intercepts-only model and the specified model has a chi-square distribution with $p - k$ degrees of freedom under the null hypothesis that all the explanatory effects in the model are zero, where p is the number of parameters in the specified model and k is the number of intercepts. The likelihood ratio test in the "Testing Global Null Hypothesis: BETA=0" table displays this difference and the associated p-value for this statistic.
The score and Wald tests in that table test the same hypothesis and are asymptotically equivalent; for more information, see the sections Residual Chi-Square and Testing Linear Hypotheses about the Regression Coefficients.
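For single-trial data with unit weights and frequencies and dispersion $\sigma^2 = 1$, the three criteria reduce to simple sums. A plain-Python sketch (the fitted probabilities below are made-up illustrative values, not PROC LOGISTIC output):

```python
import math

# Assumed estimated probabilities of the *observed* response for six
# single-trial observations (w_j = f_j = 1, dispersion sigma^2 = 1).
p_hat = [0.8, 0.7, 0.6, 0.9, 0.4, 0.75]
n_obs = len(p_hat)       # with single-trial syntax, n_j = 1, so sum f_j n_j = n_obs
n_params = 2             # e.g. one intercept plus one slope

neg2_log_l = -2 * sum(math.log(p) for p in p_hat)   # -2 Log L
aic = neg2_log_l + 2 * n_params                     # AIC = -2 Log L + 2p
sc = neg2_log_l + n_params * math.log(n_obs)        # SC  = -2 Log L + p log(sum f_j n_j)

print(neg2_log_l, aic, sc)
```

Since the penalty terms differ only in `2` versus `log(n_obs)` per parameter, SC penalizes extra parameters more heavily than AIC whenever the number of observations exceeds about 7.4 ($e^2$).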
http://mathoverflow.net/questions/35643/conjugate-gradient-for-a-slightly-singular-system
# Conjugate Gradient for a “slightly” singular system

Suppose I have a symmetric $N \times N$ matrix A which has a one-dimensional null space $N$. A is positive definite on $N^\bot$. In my case $N$ is the space of constant vectors (i.e. generated by the all-ones vector). I have to solve the problem $Ax = b$, with $b \in R(A)$, which has infinitely many solutions. I am looking for the minimum norm solution. The matrix $A$ is very large and sparse; direct methods aren't really an option. The rank-deficient least squares algorithms I have seen also appear to be prohibitive. I was solving a non-singular version of this problem with Conjugate Gradient. Is there any way I can modify the algorithm to solve this particular problem?

EDIT: Boiling the problem down to the bone, the question is: if $A$ is positive semi-definite with exactly one 0 eigenvalue, does CG work? Thanks. - mathoverflow.net/questions/7903 –  J. M. Aug 15 '10 at 13:00

I'd suggest you shift away the singularity: solve $(A+ee^T)y=b$ instead and then orthogonalize $y$ with respect to $e$ to get $x$. $A+ee^T$ is not sparse but you can compute matrix-vector products cheaply, and that's all you need for CG. EDIT: forgot to define it, $e$ is the vector of all ones. - This is exactly what I did. With the help of a preconditioner it converges sufficiently fast. In hindsight it was the obvious route to take. Thanks for pointing it out to me. –  RadonNikodym Aug 16 '10 at 8:21

Plain CG, without the regularization, will converge to the pseudo-inverse solution provided you start the iteration with $y=0$. The regularized (preconditioned) solution will be different and the operation count will depend on the sparsity, so it's good to maintain this if you can. I guess partly it must depend on whether you can tolerate a solution that has a nonzero component in the null space. - You can add a linear constraint, stipulating that your solution should be orthogonal to the null space, assuming it is known to you ahead of time. -
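The rank-one shift suggested in the accepted answer is easy to try numerically. A NumPy/SciPy sketch, using a random weighted graph Laplacian as a stand-in for $A$ (its null space is exactly the constant vectors when the graph is connected — an assumption made purely for illustration):

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
n = 50

# Symmetric PSD matrix with null space span{e}: a graph Laplacian
# of a random weighted graph with all-positive weights (connected).
W = rng.random((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
A = np.diag(W.sum(axis=1)) - W

e = np.ones(n)
x_true = rng.standard_normal(n)
x_true -= e * (e @ x_true) / n   # minimum-norm target, orthogonal to e
b = A @ x_true                   # guarantees b lies in the range of A

# CG on the shifted operator A + e e^T, formed matrix-free so sparsity
# of A would be preserved in a real application.
shifted = LinearOperator((n, n), matvec=lambda v: A @ v + e * (e @ v))
y, info = cg(shifted, b, atol=0.0)
x = y - e * (e @ y) / n          # project out the null-space component
```

Because $b \perp e$, the $e$-component of $y$ is already (essentially) zero — taking the inner product of $(A + ee^T)y = b$ with $e$ and using $Ae = 0$ forces $e^Ty = 0$ — so the final projection mostly just cleans up rounding.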
https://www.physicsforums.com/threads/proton-collision-problem.543229/
# Proton collision problem

1. Oct 23, 2011 ### loganblacke

1. The problem statement, all variables and given/known data: A proton with a speed of 1.23×10^4 m/s is moving from ∞ directly towards another proton. Assuming that the second proton is fixed in place, find the position where the moving proton stops momentarily before turning around.

2. Relevant equations: Potential Energy - U/q = V

3. The attempt at a solution: Clueless.

2. Oct 23, 2011 ### Delphi51 The proton is moving initially so it has kinetic energy. You must think about what energy change takes place as it is repelled by the other proton. The calculation itself will be easy.

3. Oct 23, 2011 ### cmb :surprised ...in a little proton vice??

4. Oct 23, 2011 ### loganblacke Okay so conservation of energy, I get that. E_k = 0.5(1.67×10^-27 kg)(1.23×10^4 m/s)^2. When the proton stops, E_k = 0. I'm not understanding how this relates to the position of the proton when it stops.

5. Oct 23, 2011 ### Delphi51 It lost kinetic energy. But energy is conserved; it must have changed into some other kind of energy. It is the energy due to electric charges being near each other and is called electric potential energy. It depends on the distance between charges. Write "initial kinetic energy = electric potential energy when stopped". Put in the detailed formulas for both types of energy. Wikipedia has the potential energy one here: http://en.wikipedia.org/wiki/Electric_potential_energy Then you can solve for the distance between charges.

6. Oct 23, 2011 ### loganblacke I really really appreciate your help with this but it appears as if there are 50 equations for electric potential energy. I'm assuming the initial kinetic energy would be the ½mv^2, however it isn't obvious to me which equation to use for the electric potential energy..

7. Oct 23, 2011 ### loganblacke I tried using ½mv^2 = q·(1/(4πε_0))·(Q/r)... and then solving for r but the answer I ended up with was wrong.

8.
Oct 23, 2011 ### Delphi51 It is the first equation in the Wikipedia article (link in previous post). Your choice of the form with k or the form with 1/(4πε_0).

9. Oct 24, 2011 ### loganblacke I tried using both equations and I'm still getting the wrong answer. Here is what I got..

(1/(4π·(8.854×10^-12)))·((Q1·Q2)/r) = 0.5·(1.67×10^-27)·(1.23×10^4)^2

Q1 and Q2 would be the charge of each proton, which I think is 1.6×10^-19. If I solve for r, I get 4.92×10^18. Another website said that the charge of a proton is e, so I tried that as well and got 1.89×10^-30.

10. Oct 24, 2011 ### Delphi51 The elementary charge IS 1.6×10^-19, so you should not have got a different answer that way! You know, it is much easier to solve the equation before putting the numbers in - just easier to manipulate letters than lengthy numbers.

k·e²/R = ½m·v²
R = 2k·e²/(m·v²)

Putting in the numbers at this stage, I get an answer in nanometers.

11. Oct 24, 2011 ### loganblacke Finally got it right, 1.821×10^-9. Apparently e and e on the graphing calculator are two very different things!

12. Oct 24, 2011 ### Delphi51 Looks good! Yes, the e on the calculator is 2.718, the base of the natural logarithm.
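The final formula R = 2k·e²/(m·v²) is easy to sanity-check numerically; a minimal sketch using the constants quoted in the thread:

```python
# Closest-approach distance of a proton fired at a fixed proton,
# from (1/2) m v^2 = k e^2 / R  =>  R = 2 k e^2 / (m v^2).
k = 8.99e9     # Coulomb constant, N m^2 / C^2
e = 1.60e-19   # elementary charge, C
m = 1.67e-27   # proton mass, kg
v = 1.23e4     # initial speed, m/s

R = 2 * k * e**2 / (m * v**2)
print(R)  # ~1.82e-9 m, i.e. about 1.8 nm, matching the thread's answer
```

This also confirms Delphi51's remark that the answer lands in the nanometer range.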
http://math.stackexchange.com/questions/497616/is-there-an-explicit-formula-for-the-bernoulli-numbers-that-doesnt-implicitly-r
# Is there an explicit formula for the Bernoulli numbers that doesn't implicitly recapitulate Faulhaber's formula? So, I'm familiar with the "standard" explicit formula for the Bernoulli numbers: $$B_m (n) = \sum^m_{k=0}\sum^k_{v=0}(-1)^v {k \choose v} {(n+v)^m \over k+1}$$ where choosing $n=0$ gives the Bernoulli numbers of the first kind and $n=1$ gives the Bernoulli numbers of the second kind. The term $(n+v)^m$ used in this formula basically means calculating Faulhaber's formula in a slight disguise, which of course depends on the Bernoulli numbers again. Wikipedia claims that the paper by Louis Saalschütz found at http://digital.library.cornell.edu/cgi/t/text/text-idx?c=math;idno=00450002 has some 38 other explicit formulae for the Bernoulli numbers, but unfortunately I can't read German and would not trust my understanding of the given formulae as a result. Is there any known explicit formula that doesn't rely on Faulhaber's formula the way the given formula does? Alternately, is there an English translation of that paper accessible anywhere? - In your mind does $e=\sum_{k\ge0}1/k!$ not 'implicitly recapitulate' the other definition $e=\lim(1+1/n)^n$ as $n\to\infty$? – oldrinb Sep 18 '13 at 15:47 Not necessarily, although of course they are related. The fact is that you can calculate those expressions separately from one another, whereas using the expression I gave to calculate the coefficients for Faulhaber's formula requires you to employ Faulhaber's formula, which is the sort of dependence I would like to avoid. – David Sep 18 '13 at 15:51 These aren't Bernoulli numbers, they are Bernoulli polynomials. Which would you like an expression for, the former or the latter? In either case, do some of the formulas on the wiki page entice you? en.wikipedia.org/wiki/Bernoulli_polynomials – Alex R. Sep 18 '13 at 19:14
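The displayed double sum is straightforward to transcribe into exact rational arithmetic; a small Python sketch of the formula as quoted in the question (using Python's convention that `0**0 == 1`, which the $m = 0$ case needs):

```python
from fractions import Fraction
from math import comb

def bernoulli(m, n=0):
    """B_m(n) via the double-sum formula quoted in the question:
    sum over k <= m and v <= k of (-1)^v C(k,v) (n+v)^m / (k+1)."""
    total = Fraction(0)
    for k in range(m + 1):
        for v in range(k + 1):
            total += Fraction((-1) ** v * comb(k, v) * (n + v) ** m, k + 1)
    return total
```

With `n=0` this gives `bernoulli(1) == Fraction(-1, 2)` and with `n=1` it gives `Fraction(1, 2)` — the two sign conventions mentioned in the question — while `bernoulli(2)` returns `Fraction(1, 6)`.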
https://mikepawliuk.ca/2012/10/10/stevos-forcing-class-fall-2012-class-6/
# Stevo’s Forcing Class Fall 2012 – Class 6

(This is the sixth lecture in Stevo Todorcevic’s Forcing class, held in the fall of 2012. You can find the fifth lecture here. Quotes by Stevo are in dark blue; some are deep, some are funny, some are paraphrased so use your judgement. As always I appreciate any type of feedback, including reporting typos, in the comments below.)

## Caliber Properties

Definition. $\mathbb{P}$ has caliber $(\theta, \lambda, \kappa)$ if every $\theta$-sized family $F \subseteq \mathbb{P}$ can be refined to a $\lambda$-sized $G \subseteq F$ such that every family $H \subseteq G$ of size $\kappa$ has a lower bound in $\mathbb{P}$.

Examples:

• $\mathbb{P}$ has pre-caliber $\theta$ iff $\mathbb{P}$ has caliber $(\theta, \theta, <\omega)$.
• $\mathbb{P}$ has property $K_\theta$ iff $\mathbb{P}$ has caliber $(\theta, \theta, 2)$.
• $\mathbb{P}$ has property K(naster) iff $\mathbb{P}$ has caliber $(\omega_1, \omega_1, 2)$.

Topological Definition (Šanin): A compact space $K$ is $\theta$-caliber iff the poset of non-empty open subsets of $K$ has pre-caliber $\theta$. This definition was introduced because separability is not preserved by products, but caliber properties are preserved.

Exercise: Show that Cohen $\times$ Random has caliber $(\omega_1, \omega, \omega)$ while Cohen * Random does not.

## The ccc Functor is ccc

Here is a summary of the notation from Class 5, where we defined this functor.
We are starting with a powerfully ccc poset $\mathbb{P}$ of cardinality $<\theta_2$ and we are building its specializing poset $\mathcal{S}(\mathbb{P})$, where:

• $d: [\kappa]^{<\omega} \rightarrow \mathbb{P}$ is such that $d[X]^{<\omega} = \mathbb{P}$ for all $X \in [\kappa]^\kappa$;
• Fix a 1-1 sequence of reals $\{r_\xi : \xi < \kappa\} \subseteq \{0,1\}^\omega$;
• $c: \kappa \times \kappa \rightarrow \omega$ is not constant on any product of two infinite sets;
• $c^* (\alpha_0, ..., \alpha_{k-1}) := \max \{c (\alpha_i, \alpha_j) : i \neq j < k\}$;
• $\mathcal{S}(\mathbb{P}) \subseteq [\kappa]^{<\omega}$;
• For $F \in [\kappa]^{<\omega}$ and $\vec s \in (2^{<\omega})^{<\omega}$ of some length $k$, define $\mathbb{P}_F(\vec s) := \{d(\alpha_0, ..., \alpha_{k-1}) : \forall i < k, \alpha_0 < ... < \alpha_{k-1} \in F, r_{\alpha_i} \upharpoonright c^* (\alpha_0, ..., \alpha_{k-1}) = s_i \}$;
• $F \in \mathcal{S}(\mathbb{P})$ iff $\forall \vec s \in (2^{<\omega})^{<\omega}$, $\mathbb{P}_F(\vec s)$ is centred in $\mathbb{P}$.

Claim: $\mathcal{S}(\mathbb{P})$ is a (powerfully) ccc poset.

proof. Let $\mathcal{F} \subseteq \mathcal{S}(\mathbb{P})$ be a given uncountable family forming a $\Delta$-system with some root $D$. We may assume (by refining) that for some $m < \omega$ and all $F \in \mathcal{F}$:

• $\Delta(r_\xi, r_\eta) < m$, for all $\xi \neq \eta$;
• $c(\xi, \eta) < m$, for all $\xi \neq \eta$.

We may also assume that for some $n$, $\vert F \vert = n < \omega$ for all $F \in \mathcal{F}$, and

• $r_\xi \upharpoonright m \neq r_\eta \upharpoonright m$, for all $\xi \neq \eta$;
• $F \mapsto \{r_\xi \upharpoonright m : \xi \in F\}$, for $F \in \mathcal{F}$, is a constant mapping, and in fact $\forall F,G \in \mathcal{F}$ we get $r_{F(i)} \upharpoonright m = r_{G(i)} \upharpoonright m$, for all $i < n$ (listing $F = \{F(0), ..., F(n-1)\}$).
“Now we do something unusual.” Using the fact that $\mathbb{P}^k$ is ccc for $k = \vert (2^{\leq m})^{\leq n}\vert$, we may assume that for each $\vec s \in (2^{\leq m})^{\leq n}$ the family $\bigcup \{\bigcup\mathbb{P}_F (\vec s) : F \in \mathcal{F}\}$ is centred in $\mathbb{P}$. This exists in a forcing extension by $\mathbb{P}$, where $\mathbb{P}$ becomes $\sigma$-centred. “This looks like I’m cheating, going to a forcing extension. But I’m only looking for 2 compatible elements. Use absoluteness of the colouring’s property.”

Find $\overline{m} > m$ and two uncountable (“this is a separable metric space thing”) $\mathcal{F}_0, \mathcal{F}_1 \subseteq \mathcal{F}$ such that $\forall F \in \mathcal{F}_0, \forall G \in \mathcal{F}_1, \forall i < n$ we have $\Delta(r_{F(i)}, r_{G(i)}) < \overline{m}$. “Here the key comes.” By the property of $c$ given in a previous lemma, there are $F \in \mathcal{F}_0, G \in \mathcal{F}_1$ such that $c(\alpha, \beta) > \overline{m}$ for $\alpha \in F \setminus D$ and $\beta \in G \setminus D$, because of the unbounded property of $c$.

Claim: $F \cup G \in \mathcal{S}(\mathbb{P})$.

“What does it mean? You have to check the defining property of $\mathcal{S}(\mathbb{P})$.” Take $\vec s \in (2^{<\omega})^{<\omega}$, where $\vec s = (s_0, ..., s_k)$ and $\vert s_i \vert = \vert s_j \vert = l$, $\forall i,j$, for some particular $l$. There are three cases:

1. $l < m$;
2. $m \leq l < \overline{m}$;
3. $\overline{m} \leq l$.

“The important thing is what these numbers represent.” – Ivan Khatchatourian

Case 1 [$l < m$]: Claim: $\mathbb{P}_{F\cup G} (\vec s) = \mathbb{P}_{F} (\vec s) \cup \mathbb{P}_{G} (\vec s)$.

Let $d (\alpha_0, ..., \alpha_k) \in \mathbb{P}_{F\cup G} (\vec s)$, which means $\{\alpha_0, ..., \alpha_k\} \subseteq F\cup G$ and $r_{\alpha_i}\upharpoonright l = s_i$ for all $i \leq k$. “Okay, so let us see… Why you cannot start in one and go to another?” [Picture argument to be filled in later] Then $\{\alpha_0, ..., \alpha_k\} \subseteq F$ or $\{\alpha_0, ..., \alpha_k\} \subseteq G$.
Case 2 [$m \leq l < \overline{m}$]: Claim: $\mathbb{P}_{F \cup G}(\vec s) = \emptyset$.

“Again, the picture tells you much more than you can get by chasing definitions.” Take $d (\alpha_0, ..., \alpha_k) \in \mathbb{P}_{F\cup G} (\vec s)$, with $r_{\alpha_i} \upharpoonright l = s_i$ for all $i \leq k$, where $l = c^* (\alpha_0, ..., \alpha_k)$. There are no $i,j$ such that $\alpha_i \in F \setminus D$ and $\alpha_j \in G \setminus D$. So $\{\alpha_i\} \subseteq F$ or $\{\alpha_i\} \subseteq G$.

“The main thing is now comes case 3.”

Case 3 [$\overline{m} \leq l$]: Claim: $\vert \mathbb{P}_{F \cup G} (\vec s) \vert \leq 1$.

“Suppose not, so you have two possibilities:” $\{\alpha_0, ..., \alpha_k\}, \{\alpha_0^\prime, ..., \alpha_k^\prime\} \subseteq F \cup G$ and $c^* (\alpha_0, ..., \alpha_k) = l = c^* (\alpha_0^\prime, ..., \alpha_k^\prime)$, and $r_{\alpha_i}\upharpoonright l = s_i = r_{\alpha_i^\prime} \upharpoonright l$ for all $i \leq k$. If $\alpha_i \neq \alpha_i^\prime$, then $\alpha_i \in F \setminus D, \alpha_i^\prime \in G \setminus D$. “The projections $r_\alpha \mapsto r_\alpha \upharpoonright m$ are 1-1 for $\alpha \in F$. The place below where they split is below $\overline{m}$, a contradiction.” [QED]

“It is a slippery argument but … *Shrug*.”

“Showing that $\mathcal{S}(\mathbb{P})$ is powerfully ccc is the same argument.”

Corollary. TFAE:

• MA$(\aleph_1)$;
• Every compact ccc space has caliber $\aleph_1$;
• Every ccc poset has precaliber $\aleph_1$.

The proof of this is like the previous proof comparing $\mathfrak{m}_{SH}$ and $\mathfrak{m}$.

Note that this functor also specializes trees: Let $X \subseteq \mathcal{S}(\mathbb{P})$ be a centred subset of size $\kappa$. Then $\Gamma := \bigcup X \in [\kappa]^\kappa$ and $[\Gamma]^{<\omega} \subseteq \mathcal{S}(\mathbb{P})$ and $d [\Gamma]^{<\omega} = \mathbb{P}$ and

$\displaystyle \bigcup_{\vec s \in (2^{<\omega})^{<\omega}} \mathbb{P}_X (\vec s) = \mathbb{P}$

Question: ATFE? (Are The Following Equivalent)

• MA$(\aleph_1)$;
• Every ccc poset has property K.
This involves checking something for every pair, as the previous proof shows.

“Now we go to the CH.” “Log of weight is density.”

Definition: $\mathfrak{c}_d := \sup \{ d(K) : K \textrm{ is compact, ccc, weight }\leq \mathfrak{c}\}$.

Note that it is comparable to replace weight with $\pi$-weight here.

Question: Is $\mathfrak{c}_d = \mathfrak{c}$? In combinatorial language, how many centred sets do you need to cover a poset?

## Some Combinatorial CHs

CH$^\omega$: $\mathfrak{c}_d = \aleph_1$, i.e. every ccc poset of size $\leq \mathfrak{c}$ is $\aleph_1$-centred.

CH$^n$: Every ccc poset of size $\leq \mathfrak{c}$ is $\aleph_1$-$n$-linked.

CH$^2$: Every ccc poset of size $\leq \mathfrak{c}$ is $\aleph_1$-2-linked.

Open Question: Are they different? “We will see that most consequences of CH are actually consequences of these forms of CH.”

Theorem: CH$^3$ implies $\theta_2 = \aleph_2$.

“These give you another way to create ccc posets. Here ccc is a quite weak condition.”

Theorem*: If every ccc poset has property $K_{\mathfrak{b}^+}$, then $\mathfrak{b} \leq \theta_2$. In this course we pretend $\mathfrak{c} = \theta_2$.

Theorem**: There is a productively ccc poset $\mathbb{P}$ without property $K_{\mathfrak{b}^+}$.

So plugging in CH$^3$ gives $\mathfrak{b} \leq \aleph_1$, i.e. $\mathfrak{b} = \aleph_1$, so $\mathfrak{b}^+ = \aleph_2$.

Corollary: CH$^2$ implies $\mathfrak{b} = \aleph_1$.

(Downward) Lemma: Suppose $(X, \leq_W)$ is a well-ordered separable metric space. “We know well-orderedness usually has nothing to do with metric spaces.” Let $F: X \rightarrow \textrm{Exp}(X)$ be a (closed) set mapping which respects $\leq_W$, i.e. $F(x) \subseteq \{ y : y\leq x\}$. Let $\mathbb{P} = \{ A \in [X]^{<\omega}: \forall x \neq y \in A, y \notin F(x)\}$ be the collection of finite $F$-free sets, ordered by reverse inclusion. THEN, $\mathbb{P}$ is productively ccc.

Recall that $\mathbb{P} \times \dot{\mathbb{Q}}$ is ccc iff $\vdash_\mathbb{P}$ “$\mathbb{Q}$ is ccc”.

proof. Let $A \subseteq \mathbb{P}$ be a given uncountable family of finite $F$-free subsets of $X$.
We need to find $a \neq b$ such that $a \cup b$ is $F$-free. “You see, being free is a 2-dimensional property.” As usual, by forming a $\Delta$-system and removing the root, we may assume that elements of $A$ are pairwise disjoint and of the same size $n$. “Showing they’re free on a tail is enough.” Think of $A$ as a subset of $X^n$. Let $A_0 \subseteq A$ be a countable dense subset of $A$. Since $X$ is a separable metric space, we may assume that for some fixed sequence $V_0, ..., V_{n-1}$ of pairwise disjoint open subsets of $X$ we have that

• $A \subseteq V_0 \times V_1 \times ... \times V_{n-1}$; and
• for all $a \in A$ and $i < n$ we have $F(a(i)) \cap V_j = \emptyset$ for $j \neq i$.

“We only need to check that $a(i)$ and $b(i)$ are compatible, by separability. That is, we don’t need to check $a(i)$ against $b(j)$, for $i \neq j$.” Now we use the well-ordering. “There is always this ‘moreover’ business.” Moreover, we may assume that for all $i < n$ we have $\textrm{otp}\{a(i): a \in A\} = \omega_1$. Find $c \in A$ such that for all $i < n$ and $a \in A_0$ we have $a(i) <_W c(i)$. (For this, use uncountability.) Take $b \in A$ such that $b(i) >_W c(i)$, $\forall i < n$. Then

$\displaystyle b \in (X \setminus F(c_0)) \times (X \setminus F(c_1)) \times ... \times (X \setminus F(c_{n-1})) =: \vec U$

“Closed sets look downwards; they don’t hit the $b(i)$’s.” So there is $a \in A_0$ such that $a \in \vec U$. So $a \cup c$ is $F$-free. “$a$ cannot hit $c$ because it is above. $c$ cannot hit $a$ because I picked it out. $b$ was just for non-emptiness.” [QED]

Question: Do we need the well-ordering? It is unnatural for metric spaces.

Remark: Changing this lemma to arbitrary posets is equivalent to OCA.

“Now we come to this example there.”

Example: Let $X \subseteq \omega^{\uparrow \omega}$ be a $<^*$-well-ordered, $<^*$-unbounded family of cardinality $\mathfrak{b}$. Define $F: X \rightarrow\textrm{Exp}(X)$ by $F(x) := \{y \in X : \forall n, y(n) \leq x(n)\} \subseteq \{y \in X : y \leq^* x\}$.
So the poset $\mathbb{P}$ of finite $F$-free sets is ccc. Recall from oscillation theory: $\{\{x\}: x \in X\} \subseteq \mathbb{P}_X$ does not have a 2-linked subset of size $\mathfrak{b}$. [Assume $\mathfrak{b}^+ < \theta_2$.] Apply the functor $\mathbb{P} \mapsto \mathcal{S}(\mathbb{P})$ to the poset $\mathbb{P}_X$ to get a poset $\mathcal{S}(\mathbb{P})$ of size $\mathfrak{b}^+$ with no 3-linked subset of size $\mathfrak{b}^+$. “We are losing one dimension.”

Aside: “CH$^3$ seems to be quite strong.” Some consequences of CH$^3$:

• $\textrm{cf}(\mathfrak{c}) = \aleph_1$;
• Every $f: \mathbb{R} \rightarrow \mathbb{R}$ can be decomposed into $\aleph_1$ continuous sub-functions.

CH$^\omega$ implies the density of the Lebesgue measure algebra is $\aleph_1$. This gives you Luzin sets, Sierpiński functions, etc.

## 3 thoughts on “Stevo’s Forcing Class Fall 2012 – Class 6”

1. Chris Eagle says: Hey Mike, I think that the notion of caliber was introduced by Sanin (in fact, the S should have a hacek on it), not Shannon. Thanks for posting these notes, it’s very helpful!

1. Micheal Pawliuk says: Fixed! Thanks for finding that.
https://www.italki.com/post/question-163224
xarmanla

What is the meaning of the following: لمح البصر Is the expression common in modern standard Arabic?

Oct 9, 2012 1:48 AM

Yeah, it is common. It is an expression describing the speed of something: "the speed of lamh el basar" means something is so fast that you cannot see it, like the speed of light. October 9, 2012

It's an idiom in Arabic, and as you said it means "in the blink of an eye", used for something that happened very fast. It is common in modern standard Arabic, and another equivalent is فى غمضه عين, which is common in the Egyptian dialect. October 12, 2012

It means something is so fast that you cannot see it, like the speed of light, or simply that something happens very quickly. October 9, 2012

I think it means "in a glimpse". October 9, 2012

It describes something happening very fast and in a very short time, for example lightning: lightning happens very fast and lasts a very short time. Is the expression common in modern standard Arabic? You will hear it, and there are other phrases with the same meaning that are more common, like طرفة عين, غمضة عين. October 14, 2012
http://math.stackexchange.com/questions/146467/graph-measurability-difficult-but-interesting
# Graph measurability (difficult but interesting)

Let $\mu(\cdot)$ be a probability measure on $W \subseteq \mathbb{R}^p$, so that $\int_W \mu(dw) = 1$. Consider a locally bounded function $f: \mathbb{R}^n \times \mathbb{R}^m \times W \rightarrow \mathbb{R}^n$ such that

$\forall w \in W$, $\ (x,y) \mapsto f(x,y,w)$ is continuous;

$\forall (x,y) \in \mathbb{R}^{n} \times \mathbb{R}^m$, $\ w \mapsto f(x,y,w)$ is measurable.

Also consider a locally bounded function $g: \mathbb{R}^n \rightarrow \mathbb{R}^m$. For $\epsilon: \mathbb{R}^n \rightarrow \mathbb{R}_{\geq 0}$ continuous and positive definite, define the set-valued maps $$F(x,y,w) := f( x + \epsilon(x) \bar{\mathbb{B}},y,w) + \epsilon(x) \bar{\mathbb{B}}$$ $$G(x) := g( x + \epsilon(x)\bar{\mathbb{B}} ) + \epsilon(x)\bar{\mathbb{B}}$$ $$H(x,y,w):= \left( \begin{array}{c} F(x,y,w) \\ G( F(x,y,w) ) \end{array} \right)$$ Prove that the map $$w \mapsto graph\left( H(\cdot,\cdot,w) \right):= \{ (x,y,z) \in \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^{n+m} \mid z \in H(x,y,w) \}$$ is measurable.

Definition of measurability for a set-valued map: a set-valued mapping $S: T \rightrightarrows \mathbb{R}^q$ is measurable if for every open set $\mathcal{O} \subset \mathbb{R}^q$ the set $S^{-1}(\mathcal{O}) \subset T$ is measurable, i.e. $S^{-1}(\mathcal{O}) \in \mathcal{A}$ (some $\sigma$-field of subsets of $T$). In particular, the set $dom\, S = S^{-1}(\mathbb{R}^q)$ must be measurable.
https://www.physicsforums.com/threads/calculate-the-mass-of-co2-produced-from-the-complete-combustion.339017/
# Calculate the mass of CO2 produced from the complete combustion

1. Sep 20, 2009

### Zhalfirin88

1. The problem statement, all variables and given/known data

Gasoline consists primarily of octane, C8H18. Calculate the mass of CO2 produced from the complete combustion of 3.79 L (1.00 gallon) of gasoline (assume octane, density = 0.756 g/mL) with excess O2.

3. The attempt at a solution

The balanced chemical equation is: 2C8H18 + 25O2 $$\rightarrow$$ 16CO2 + 18H2O

For my work:

$$3790mL (octane)* \frac{0.756g (octane)}{mL} * \frac{1 mol (octane)}{114.144g (octane)} * \frac{8 mol (CO_2)}{1 mol (octane)} * \frac {44.00g CO_2}{1 mol CO_2}$$ $$= 8835.8957g CO_2$$

Is it correct? What about significant figures?

Last edited: Sep 20, 2009

2. Sep 20, 2009

### Bohrok

Re: Stoich

Looks correct. Since you start with a number with 3 significant figures and use a conversion factor with 3 sig figs, use three in your answer.
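Zhalfirin88's unit-factor chain can be checked numerically. A minimal sketch using the thread's own molar-mass values (treat them as approximate):

```python
# Combustion of octane: 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O
volume_ml = 3790.0        # 1.00 gallon of gasoline, in mL
density = 0.756           # g/mL, given for octane
molar_octane = 114.144    # g/mol, the value used in the thread
molar_co2 = 44.00         # g/mol

mass_octane = volume_ml * density          # g of octane
mol_octane = mass_octane / molar_octane    # mol of octane
mol_co2 = mol_octane * 8                   # 16/2 = 8 mol CO2 per mol octane
mass_co2 = mol_co2 * molar_co2             # g of CO2

print(round(mass_co2, 1))   # ≈ 8835.9 g, i.e. 8.84e3 g to 3 sig figs
```

To three significant figures, as Bohrok advises, the answer would be reported as 8.84 × 10³ g.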
https://www.physicsforums.com/threads/help-me-build-a-physics-toys.83808/
# Homework Help: Help me build a physics toy

1. Aug 1, 2005

### rouvin

Guys, I'm having trouble making a physics toy. I don't have any idea what to do; please help me. Any suggestions, examples, and websites will do. I need this to graduate. Thanks for the help, guys.

2. Aug 1, 2005

### Maxos

Try with a reversing top.

3. Aug 1, 2005

### EnumaElish

Last edited by a moderator: May 2, 2017
http://www.physics-astronomy.com/2016/03/latest-cern-results-indicate-there-is.html
New experiments and research are always pushing the limits of accepted theories until we reach a point where they no longer work. And the new results from CERN show that we might be on the verge of new physics. The new data from CERN concern a particle called the B meson. The present theory of particle physics, the Standard Model, makes very definite predictions about the rate and angle at which the B meson decays, but these just don't match what has been observed in the experiment.

Image: Computer recreation of an unusual decay of the Bs meson in the LHCb detector. CERN

The Standard Model of particle physics collects all the known subatomic particles into a single theory. There are six quarks (up, down, charm, strange, top, bottom), six leptons (the electron, muon, and tau, and their corresponding neutrinos), the force-carrying particles (gluons, photons, and the Z and W bosons), and the Higgs boson. The B meson is a bound state of a down quark and a bottom antiquark, and once produced, it decays in only about 0.0015 nanoseconds. According to present theory, B mesons can decay into definite sets of particles, with definite energies and emission angles. However, this experiment showed evidence for a pattern of decay that had not been observed before and was not predicted by the model.

"Up to now all measurements match the predictions of the Standard Model. However, we know that the Standard Model cannot explain all the features of the Universe," Professor Mariusz Witek, one of the co-authors of the research paper, said in a statement. "How did the dominance of matter over antimatter in the universe come about? What is dark matter? Those questions remain unanswered. What's more, the force we all experience every day, gravity, isn't even included in the model."

Even though this paper, published in the Journal of High Energy Physics, is certainly promising, the detection is not yet a discovery.
The signal was observed with a confidence level of 3.4 sigma. Physicists only claim a discovery after a signal passes the 5 sigma mark, which corresponds to a probability of less than about one in 3.5 million that the signal is just a statistical fluke. CERN has now initiated a new stage of high-energy collisions that will hopefully deliver a deeper look into a potential bold new world beyond the Standard Model.

This post was written by Umer Abrar.
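The sigma-to-odds figures quoted here can be reproduced from the one-sided Gaussian tail probability; a short sketch using only the standard library:

```python
import math

def one_sided_p(n_sigma):
    """Probability of a Gaussian fluctuation at least n_sigma above the mean."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

print(one_sided_p(3.4))      # evidence level reported for the B meson anomaly
print(1 / one_sided_p(5.0))  # roughly 3.5 million: the discovery-threshold odds
```

At 3.4 sigma the chance of a fluke is about 1 in 3000, which is why the result counts only as evidence, not a discovery.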
http://math.stackexchange.com/questions/403032/why-is-this-a-topology-on-y/403051
Why is this a topology on $Y$?

Let $f$ be a continuous map from $X$ onto $Y$, and let $f$ be a quotient map. Why is the family $\tau_q$ of all $f(U)$, where $U$ is open in $X$, a topology on $Y$?

What have you tried so far? –  Daniel Rust May 26 '13 at 16:41

I can't prove that $\tau _q$ is closed under finite intersection. –  user73564 May 26 '13 at 16:42

I am confused by the hypotheses. You say $f$ must be a continuous map, which implies that $Y$ came with a topology. But this can't possibly be used anywhere, since we are only being asked to check that another collection of subsets of $Y$ forms a topology. –  user29743 May 26 '13 at 16:44

All sets $O$ that are open in the original topology on $Y$ (which must be assumed to exist, or continuity of $f$ does not make any sense) are open under this definition, since for onto maps $O = f[f^{-1}[O]]$. Don't yet see how to use this. –  Henno Brandsma May 26 '13 at 16:55

Does it change anything if we suppose that $f$ is a quotient map? (I edited my original post) –  user73564 May 26 '13 at 17:04

It is not, in fact, a topology. For a simple counterexample, let $X=\{1,2,3,4\}$ with the topology $\{ \{1,2\}, \{3,4\}, X, \emptyset\}$. Let $Y=\{1,2,3\}$ with the indiscrete topology. Define $f$ by $f(1)=1, f(2)=f(3)=2, f(4)=3$. Now, $$\tau_q = \{ Y, \emptyset, \{1,2\}, \{2,3\}\}$$ which is not closed under intersection. Edit: $f$ is also a quotient map.

We can suppose in addition that $f$ is a quotient map, just in case that changes something... thank you –  user73564 May 26 '13 at 17:24
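The counterexample can also be verified mechanically; a small sketch that builds $\tau_q$ and checks closure under intersection:

```python
from itertools import combinations

# Topology on X = {1,2,3,4} and the map f from the counterexample
open_sets_X = [frozenset({1, 2}), frozenset({3, 4}),
               frozenset({1, 2, 3, 4}), frozenset()]
f = {1: 1, 2: 2, 3: 2, 4: 3}

# tau_q = images of the open sets of X under f
tau_q = {frozenset(f[x] for x in U) for U in open_sets_X}
print(sorted(map(set, tau_q), key=len))

# Closed under pairwise intersection?
closed = all(A & B in tau_q for A, B in combinations(tau_q, 2))
print(closed)  # False: {1,2} ∩ {2,3} = {2} is not the image of any open set
```

The check confirms that $\tau_q$ contains $\{1,2\}$ and $\{2,3\}$ but not their intersection $\{2\}$.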
https://lagrida.com/Calculate_Gaps_Cardinality.html
date: 03/01/2019 Author: LAGRIDA Yassine

# Calculate Gaps Cardinality

In everything that follows, let $b_1, b, \alpha_1, \alpha, p, q \in \mathbb{P}$, where $b$ is the next prime after $b_1$, $\alpha$ is the next prime after $\alpha_1$, and $q$ is the next prime after $p$.

Suppose that a gap $g$ of $d$ consecutive elements appears for the first time in $\mathcal{B}_{b_1}$.

Let $s = \#\{(\beta_1,\beta_2,\cdots,\beta_d) \in g \text{ and } (\beta_1,\beta_2,\cdots,\beta_d) \in \mathcal{B}_{b_1}^d\}$.

For $q > b_1$, let:

$C_q = \#\{(\beta_1,\beta_2,\cdots,\beta_d) \in g \text{ and } (\beta_1,\beta_2,\cdots,\beta_d) \in \mathcal{B}_{q}^d\}$

and $C_p = \#\{(\beta_1,\beta_2,\cdots,\beta_d) \in g \text{ and } (\beta_1,\beta_2,\cdots,\beta_d) \in \mathcal{B}_{p}^d\}$

Let $E_{p,k},\,\,k=1,2,\cdots,n$ be a series depending on $p$. From the article "Prime Numbers Construction", there exist series $E_{p,k},\,\,k=1,2,\cdots,n$ such that:

$C_q = q C_p - d C_p + \sum_{k=1}^{n} E_{p,k}$

Descending to $b$, we have:

$C_q = s {\small \left( \prod_{\substack{b \leq a \leq q \\ \text{a prime}}} {\normalsize (a-d)} \right)} + \left( \sum_{k=1}^{n} E_{p,k} \right) + \sum_{\substack{b_1 \leq \alpha_1 < p \\ \alpha_1\text{ prime}}} \left( \sum_{k=1}^{n} E_{\alpha_1,k} \right) {\small \left( \prod_{\substack{\alpha < a \leq q \\ \text{a prime}}} {\normalsize (a-d)} \right)}$

One can show, using the Chinese remainder theorem, that:

$C_q \leq f(d) {\small \left( \prod_{\substack{a \leq q \\ \text{a prime, }a \neq d}} {\normalsize (a-d)} \right)}$

where $f(d)$ is a function depending on the gap.
If $d=2$ and we consider the gap $(b,b+m)$, then we have:

$C_q \leq {\small \left( \prod_{\substack{a | m \\ \text{a prime}}} {\normalsize \frac{a-1}{a-2}} \right)} {\small \left( \prod_{\substack{a \leq q \\ \text{a prime}}} {\normalsize (a-2)} \right)}$

Example: let us calculate $\#\{(b,b+8) \in \mathcal{B}_q^2\}$.

We have $8 = 0+2+6 = 0+6+2 = 0+4+4$. Note that the gap $(b,b+4,b+8)$ is impossible in $\mathcal{B}_q$, because necessarily one of $b, b+4, b+8$ is divisible by $3$. So we need to calculate $\#\{(b,b+2,b+8) \in \mathcal{B}_q^3\}$ and $\#\{(b,b+6,b+8) \in \mathcal{B}_q^3\}$.

Calculating $\#\{(b,b+2,b+8) \in \mathcal{B}_q^3\}$: the gap $(b,b+2,b+8)$ first appears for $q = 7$, with $s=5$.

We have $0+2+8 = 0+2+6+8$, so we should also calculate $\#\{(b,b+2,b+6,b+8) \in \mathcal{B}_q^4\}$. This was already computed, and we have $\#\{(b,b+2,b+6,b+8) \in \mathcal{B}_q^4\} = {\small \left( \prod_{\substack{5 \leq a \leq q \\ \text{a prime}}} {\normalsize (a-4)} \right)} = E_{p,1}$.

Then $C_q = (q-3) C_p + E_{p,1}$, so $C_q = 5$ for $q=7$, and for $q \geq 11$:

$C_q = 5 {\small \left( \prod_{\substack{11 \leq a \leq q \\ \text{a prime}}} {\normalsize (a-3)} \right)} + {\small \left( \prod_{\substack{5 \leq a \leq p \\ \text{a prime}}} {\normalsize (a-4)} \right)} + \sum_{\substack{7 \leq \alpha_1 < p \\ \alpha_1\text{ prime}}} {\small \left( \prod_{\substack{5 \leq a \leq \alpha_1 \\ \text{a prime}}} {\normalsize (a-4)} \right)} {\small \left( \prod_{\substack{\alpha < a \leq q \\ \text{a prime}}} {\normalsize (a-3)} \right)}$

Calculating $\#\{(b,b+6,b+8) \in \mathcal{B}_q^3\}$: the distances are symmetric about the center, and this gap does not pass through the center of the primorial of $q$, so we have:

$\#\{(b,b+6,b+8) \in \mathcal{B}_q^3\} = \#\{(b,b+2,b+8) \in \mathcal{B}_q^3\}$

Returning to the number of elements satisfying the gap $(b,b+8)$ in $\mathcal{B}_q$, let $p$ be the next prime after $p_1$.
Let $E_{p,1} = 2 \times 5 = 10$ if $p=7$, and for $p \geq 11$:

$E_{p,1} = 2 \left( 5 {\small \left( \prod_{\substack{11 \leq a \leq p \\ \text{a prime}}} {\normalsize (a-3)} \right)} + {\small \left( \prod_{\substack{5 \leq a \leq p_1 \\ \text{a prime}}} {\normalsize (a-4)} \right)} + \sum_{\substack{7 \leq \alpha_1 < p_1 \\ \alpha_1\text{ prime}}} {\small \left( \prod_{\substack{5 \leq a \leq \alpha_1 \\ \text{a prime}}} {\normalsize (a-4)} \right)} {\small \left( \prod_{\substack{\alpha < a \leq p \\ \text{a prime}}} {\normalsize (a-3)} \right)} \right)$

(we are adding the gaps $(b,b+6,b+8)$ and $(b,b+2,b+8)$)

The gap $(b,b+8)$ appears for the first time for $q=7$, with $s = C_7 = 2$, and for $q \geq 11$:

$C_q = 2 {\small \left( \prod_{\substack{11 \leq a \leq q \\ \text{a prime}}} {\normalsize (a-2)} \right)} + E_{p,1} + \sum_{\substack{7 \leq \alpha_1 < p \\ \alpha_1\text{ prime}}} \left( E_{\alpha_1,1} \right) {\small \left( \prod_{\substack{\alpha < a \leq q \\ \text{a prime}}} {\normalsize (a-2)} \right)}$
https://www.physicsforums.com/threads/general-problems-with-coriolis-force.60230/
General problems with Coriolis force

1. Jan 18, 2005

Dracovich

Ok, I'm having some general problems with solving Coriolis problems. The general way of writing the Coriolis force is

$$F_c = -2*m*(\vec{\omega} \times \vec{v})$$

But most questions I get about the Coriolis force involve some information about the latitude on Earth, and I fail to see where this comes into the equation. The chapter discussing the Coriolis force does not seem to touch on this, and the two examples in my book both use the equator, so they are no help.

Also, I have some problems visualising which way the Coriolis force acts; more generally, I have a problem seeing which way a cross product points. It's perpendicular to the two vectors, but which way (+ or - axis)?

Just as an example, this is the most basic problem from my book (which I can't solve, since I can't figure out the latitude thing): "A vehicle of mass 2000kg is travelling due north at 100km/h at latitude 60°. Determine the magnitude and direction of the Coriolis force on the vehicle."

**edit** forgot to put the minus sign on the formula.

Last edited: Jan 18, 2005

2. Jan 18, 2005

MathStudent

The right-hand rule can be used to find the direction of a cross product of two vectors. I'll try to describe it as follows: in the case of $\vec{\omega} \times \vec{v}$, use your pointer finger (of your RIGHT hand) to point in the direction of $\vec{\omega}$; your middle finger should then point in the direction of $\vec{v}$ (only the part of $\vec{v}$ that is perpendicular to $\vec{\omega}$). Stick your thumb out so that it's perpendicular to both your pointer and middle fingers. The direction of your thumb is the direction of $\vec{\omega} \times \vec{v}$.

3. Jan 18, 2005

Dracovich

Ahh thanks, that should make life easier for me :) Now just the question of latitude...

4. Jan 18, 2005

Andrew Mason

The latitude is needed in order to determine the cross product:

$$\vec{\omega} \times \vec{v}$$

The velocity vector points toward the north pole along the surface, and the angular velocity is parallel to the earth's axis. To do the cross product you need to know that angle, which is just the latitude.

AM
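Andrew Mason's recipe can be carried out explicitly in a local east-north-up frame. A sketch of the numbers for the quoted problem (Earth's rotation rate of about 7.292e-5 rad/s is assumed, since the thread doesn't state it):

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

m = 2000.0                  # kg
v = 100 / 3.6               # 100 km/h northward, in m/s
lat = math.radians(60)      # latitude
Omega = 7.292e-5            # Earth's angular speed, rad/s

# In the local (east, north, up) frame at this latitude:
omega = (0.0, Omega * math.cos(lat), Omega * math.sin(lat))
vel = (0.0, v, 0.0)         # due north

F = tuple(-2 * m * c for c in cross(omega, vel))
print(F[0])                 # ≈ +7.0 N: the force points due east
```

Only the vertical component of $\vec{\omega}$, $\Omega \sin(\text{latitude})$, contributes to the horizontal deflection, which is why the latitude enters the answer.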
https://www.gradesaver.com/textbooks/science/physics/fundamentals-of-physics-extended-10th-edition/chapter-34-images-problems-page-1038/15b
## Fundamentals of Physics Extended (10th Edition)

$i=-4.4cm$

"The distance between the focal point and the mirror" is defined as the focal length. Since the mirror is convex, the value of the focal length is negative; therefore, the focal length is $-8.0cm$. To find the image distance, use the formula $$\frac{1}{f}=\frac{1}{p}+\frac{1}{i}$$ Solving for $i$ yields $$\frac{p}{pf}-\frac{f}{pf}=\frac{1}{i}$$ $$\frac{p-f}{pf}=\frac{1}{i}$$ $$i=\frac{pf}{p-f}$$ Substituting the values $f=-8.0cm$ and $p=+10cm$ yields $$i=\frac{(10cm)(-8.0cm)}{10cm+8cm}=-4.4cm$$
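The substitution can be double-checked with a two-line script (a sketch of the same mirror-equation rearrangement):

```python
f = -8.0    # cm, focal length of the convex mirror (negative by convention)
p = 10.0    # cm, object distance

i = p * f / (p - f)   # image distance from 1/f = 1/p + 1/i
print(round(i, 1))    # -4.4: a virtual image 4.4 cm behind the mirror
```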
https://motls.blogspot.com/2013/08/in-defense-of-five-standard-deviations.html
## Monday, August 19, 2013 ... /////

### In defense of five standard deviations

Originally posted on August 12th. The second part was added at the end. The third part. Last, fourth part.

Five standard deviations are cute. However, Tommaso Dorigo wrote the first part of his two-part "tirade against the five sigma", Demistifying The Five-Sigma Criterion. I mostly disagree with his views. The disagreement begins with the first word of the title ;-) which I would personally write as "demystifying", because what we're removing is mystery rather than mist (although the two are related words for "fog"). He "regrets" that popular science writers tried to explain the five-sigma criterion to the public. I think they should be praised for this particular thing, because the very idea that experimental data are uncertain and that scientists must work hard and quantitatively to find out when the certainty is really sufficient is one of the most universal insights people should know about real-world science.

When I was a high school kid, I mostly disliked all this science about error margins, uncertainties, standard deviations, and noise. This sentiment of mine must have been a rather general symptom of a theorist. Error margins are messy. They're the cup of tea of the sloppy experimenters, while the pure and saintly theorist only works with perfect theories making perfect predictions about a perfectly behaving Universe. Of course, sometime early in college, I was forced to get dirty a little bit, too. You can't really do any empirical research without some attention paid to error margins and probabilities that disagreements are coincidental. As far as I know, the calculation of standard deviations was one of the things that I did not learn from any self-studies – these topics just didn't previously look beautiful and important to me – and the official institutionalized education system had to improve my views.
The introduction to error margins and probabilistic distributions in physics was a theoretical introduction to our experimental lab courses. It was taught by experimenters and I suppose that it was no accident because they were more competent in this business than most of the typical theorists. At any rate, I found out that the manipulations with the probability distributions were a nice and exact piece of maths by themselves – even though they were developed to describe other, real things, that were not certain or sharp – and I enjoyed finding my own derivations of the formulae (the standard deviations for the coefficients resulting from linear regression were the most complex outcomes of this fun research). At any rate, hypotheses predict that a quantity $X$ should be equal to $\bar X\pm \Delta X$ if I use simplified semi-laymen's conventions. The error margin – well, the standard deviation – $\Delta X$ is never zero because our knowledge of the theory, its implications, or the values of the parameters we have to insert to the theory are never perfect. Similarly, the experimenters measure the value to be $X_{\rm obs}$ where the subscript stands for "observed". The measurement also has its error margin. The error margin has two main components, the "statistical error" and the "systematic error". The "total error" for a single experiment may always be calculated (using the Pythagorean theorem) as the hypotenuse of the triangle whose legs are the statistical error and the systematic error, respectively. The difference between the statistical error and the systematic error is that the statistical error contains all the contributions to the error that tend to "average out" when you're repeating the measurement many times. They're averaging out because they're not correlated with each other so about one-half of the situations are higher than the mean and one-half of them are lower than the mean etc. and most of the errors cancel. 
In particular, if you repeat the same dose of experiments $N$ times, the statistical error decreases $\sqrt{N}$ times. For example, the LHC has to collect many collisions because the certainty of its conclusions and discoveries is usually limited by the "statistics" – by their having an insufficient number of events that can only draw a noisy caricature of the exact graphs – so it has to keep on collecting new data. If you want the relative accuracy (or the number of sigmas) to be improved $K$ times, you have to collect $K^2$ times more collisions. It's that simple. On the other hand, the systematic error is an error that always stays the same if you repeat the experiment. If the CERN folks had incorrectly measured the circumference of the LHC to be 27 kilometers rather than 24.5 kilometers, this would influence most of the calculations, and the 10% error wouldn't go away even after you perform quadrillions of collisions. All of them are affected in the same way. Averaging over many collisions doesn't help you. Even the opinions of two independent teams – ATLAS and CMS – are incapable of fixing the bug because the teams aren't really independent in this respect, as both of them use the wrong circumference of the LHC. (This example is a joke, of course: the circumference of the LHC is known much, much more accurately; but the universal message holds.) When you're adding error margins from two "independent" experiments, like from the ATLAS collisions and the CMS collisions, you may add the statistical errors for "extensive" quantities (e.g. the total number of all collisions or collisions of some kind by both detectors) by the Pythagorean theorem. It means that the statistical errors in "intensive quantities" (like fractions of the events that have a property) decrease as $1/\sqrt{N}$ where $N$ is the number of "equal detectors".
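The scaling rules above can be put into a short sketch (the numbers are hypothetical): statistical errors shrink as $1/\sqrt{N}$, a shared systematic floor does not, and the two combine in quadrature.

```python
import math

def total_error(stat_single, syst, n_repeats):
    """Combine errors for n_repeats of the same measurement:
    the statistical part averages down, the systematic part does not."""
    stat = stat_single / math.sqrt(n_repeats)
    return math.hypot(stat, syst)   # add in quadrature

# A 1.0-unit statistical error per measurement with a 0.2-unit systematic floor:
for n in (1, 100, 10000):
    print(n, total_error(1.0, 0.2, n))
# The total approaches the 0.2 systematic floor and never goes below it.
```

This is exactly why collecting more collisions stops helping once the systematic error dominates.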
However, the systematic errors have to be added linearly, so the systematic errors of "intensive" quantities don't really drop and stay constant when you add more detectors. Only once you calculate the total systematic and statistical errors in this non-uniform way may you add them (total statistical and total systematic) via the Pythagorean theorem (physicists say "add them in quadrature").

So far, all the mean values and standard deviations are given by universal formulae that don't depend at all on the character or shape of the probabilistic distribution. For a distribution $\rho(X)$, the normalization condition, the mean value, and the standard deviation are given by$\eq{ 1 & = \int dX\,\rho(X) \\ \bar X &= \int dX\,X\cdot \rho(X) \\ (\Delta X)^2 &= \int dX\,(X-\bar X)^2\cdot\rho (X) }$ Note that the integral $\int dX\,\rho(X)$ with the extra insertion of any quadratic function of $X$ is a combination of these three quantities. The Pythagorean rules for the standard deviations may be shown to hold independently of the shape of $\rho(X)$ – it doesn't have to be Gaussian.

However, we often want to calculate the probability that the difference between the theory and the experiment was "this high" (whether the probability is high enough so that it could appear by chance) – this is the ultimate reason why we talk about standard deviations at all. And to translate the "number of sigmas" to "probabilities" or vice versa is something that requires us to know the shape of $\rho(X)$ – e.g. whether it is Gaussian. There's a 32% risk that the deviation from the central value exceeds 1 standard deviation (in either direction), a 5% risk that it exceeds 2 standard deviations, 0.27% that it exceeds 3 standard deviations, 0.0063% that it exceeds 4 standard deviations, and 0.000057%, which is about 1 part in 1.7 million, that it exceeds five standard deviations.
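The percentages just quoted follow from the two-sided Gaussian tail; a quick sketch reproducing them with only the standard library:

```python
import math

def p_exceed(n_sigma):
    """Two-sided probability that a Gaussian deviation exceeds n_sigma."""
    return math.erfc(n_sigma / math.sqrt(2))

for n in range(1, 6):
    print(n, p_exceed(n))
# 1 -> 0.3173..., 2 -> 0.0455..., 3 -> 0.0027...,
# 4 -> 6.33e-05, 5 -> 5.73e-07  (about 1 in 1.7 million)
```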
So far, Dorigo wrote roughly four criticisms against the five-sigma criterion:

• five sigma is a pure convention
• the systematic errors may be underestimated, which results in a dramatic exaggeration of our certainty (we shouldn't be this sure!)
• the distributions are often non-Gaussian, which also means that we should be less sure than we are
• systematic errors don't drop when datasets are combined, and some people think that they do

You see that this set of complaints is a mixed bag, indeed. Concerning the first one, yes, five sigma is a pure convention, but an important point is that it is damn sensible to have a fixed convention. Particle physics and a few other hardcore hard disciplines of (usually) physical sciences require 5 sigma, i.e. a risk of 1 in 1.7 million that we have a false positive, and that's a reasonably small risk that allows us to build on previous experimental insights.

The key point is that it's healthy to have the same standards for discoveries of anything (e.g. 1 in 1.7 million) so that we don't lower the requirements in the case of potential discoveries we would be happy about; the certainty can't be too small because the science would be flooded with wrong results obtained from noise and subsequent scientific work building on such wrong results would be ever more rotten; and the certainty can't ever be "quite" 100% because that would require an infinite number of infinitely large and accurate experiments and that's impossible in the Universe, too. We're gradually getting certain that a claim is right but this "getting certain" is a vague subjective process. Science in the sociological or institutionalized sense has formalized it so that particle physics allows you to claim a discovery once your certainty surpasses a particular threshold. It's a sensible choice.
If the convention were six sigma, many experiments would have to run for a time longer by 44% or so – recall that the required number of collisions grows as the square of the number of sigmas – before they would reach the discovery levels, but the qualitative character of the scientific research wouldn't be too different. However, if the standard in high-energy physics were 30 sigma, we would still be waiting for the Higgs discovery today (even though almost everyone would know that we're waiting for a silly formality). If the standard were 2 sigma, particle physics would start to resemble soft sciences such as medical research or climatology and particle physicists would melt into stinky decaying jellyfish, too. (This isn't meant to be an insulting comparison of climatology to other scientific disciplines because this comparison can't be made at all; a more relevant comparison is the comparison of AGW to other religions and psychiatric diseases.)

Concerning Tommaso's second objection, namely that some people underestimate systematic errors, well, he is right and this blunder may shoot their certainty about a proposition through the roof even though the proposition is wrong. But you can't really blame this bad outcome – whenever it occurs – on the statistical methods and conventions themselves because you need some statistical methods and conventions. You must only blame it on the incorrect quantification of the systematic error.

The related point is the third one, namely that the systematic errors don't have to be normally distributed (i.e. with a distribution looking like the Gaussian). When the distributions have thick tails and you have several ways to calculate the standard deviations, you had better choose the largest one. However, I need to say that Tommaso heavily underestimates the Gaussian, normal distribution. While he says that it has "some merit", he thinks that it is "just a guess".
Well, this sentence of his is inconsistent and I will explain a part of the merit below – the central limit theorem that says that pretty much any sufficiently complicated quantity influenced by many factors will be normally distributed.

Concerning Tommaso's last point, well, yes, some people don't understand that the systematic errors don't become any better when many more events or datasets are merged. However, the right solution is to make them learn how to deal with the systematic errors; the right solution is not to abandon the essential statistical methods just because someone didn't learn them properly. Empirical science can't really be done without them. Moreover, while one may err on the side of hype – one may underestimate the error margins and overestimate his certainty – he may err on the opposite, cautious side, too. He may overstate the error margins and $p$-values and deny the evidence that is actually already available. Both errors may turn one into a bad scientist.

Now, let me return to the Gaussian, normal distribution. What I want to tell you about – if you haven't heard of it – is the central limit theorem. It says that if a quantity $X$ is a sum of many ($M\to\infty$) terms whose distribution is arbitrary (the distributions for individual terms may actually differ but I will only demonstrate a weaker theorem that assumes that the distributions coincide), then the distribution of $X$ is Gaussian i.e. normal i.e. $\rho(X) = C\exp\zav{ - \frac{(X-\bar X)^2}{2(\Delta X)^2} }$ i.e. the exponential of a quadratic function of $X$. If you need to know, the normalization factor is $C=1/\left((\Delta X)\sqrt{2\pi}\right)$.

Why is this central limit theorem true? Recall that we are assuming$X = \sum_{i=1}^M S_i.$ You may just add some bars (i.e.
integrate both sides of the equation over $X$ with the measure $dX\,\rho(X)$: the integration is a linear operation) to see that $\bar X = \sum_{i=1}^M \bar S_i.$ It's almost equally straightforward (trivial manipulations with integrals whose measure is still $dX\,\rho(X)$ or similarly for $S_i$ and that have some extra insertions that are quadratic in $S_i$ or $X$) to prove that$(\Delta X)^2 = \sum_{i=1}^M (\Delta S_i)^2$ assuming that $S_i,S_j$ are independent of each other for $i\neq j$ i.e. that the probability distribution for all $S_i$ factorizes to the product of probability distributions for individual $S_i$ terms. Here we're assuming that the error included in $S_i$ is a "statistical error" in character. So the mean value and the standard deviation of $X$, the sum, are easily determined from the mean values and the standard deviations of the terms $S_i$. These identities don't require any distribution to be Gaussian, I have to emphasize again. Without a loss of generality, we may linearly redefine all variables $S_i$ and $X$ so that their mean values are zero and the standard deviations of each $S_i$ are one. Recall that we are assuming that all $S_i$ have the same distribution that doesn't have to be Gaussian. We want to know the shape of the distribution of $X$. An important fact to realize is that the probabilistic distribution for a sum is given by the convolution of the probability distributions of individual terms. Imagine that $X=S_1+S_2$; the arguments below hold for many terms, too. Then the probability that $X$ is between $X_K$ and $X_K+dX$ is given by the integral over $S_1$ of the probability that $S_1$ is in an infinitesimal interval and $S_2$ is in some other corresponding interval for which $S_1+S_2$ belongs to the desired interval for $X$. The overall probability distribution is given by $\rho(X_K) = \int dS_1 \rho_S(S_1) \rho_S(X_K-S_1).$ You should think why it's the case. 
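The convolution rule for the sum of two independent variables is easy to check in the discrete case; here is a minimal sketch with two dice (my example, not from the text):

```python
# Probability mass function of one fair die.
die = {k: 1 / 6 for k in range(1, 7)}

def convolve(p, q):
    """pmf of the sum of two independent discrete random variables:
    out[s] = sum over x of p[x] * q[s - x], the discrete convolution."""
    out = {}
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] = out.get(x + y, 0.0) + px * qy
    return out

two_dice = convolve(die, die)  # two_dice[7] == 6/36, the familiar peak
```

The resulting pmf on totals 2 through 12 is exactly the triangular distribution every backgammon player knows, and it was obtained purely by convolving two flat ones.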
At any rate, the integral on the right hand side is called the convolution. If you know some maths, you must have heard that there's a nice identity involving convolutions and the Fourier transform: the Fourier transform of a convolution is the product of the Fourier transforms! So instead of $\rho(X_K)$, we may calculate its Fourier transform and it will be given by a simple product (we return to the general case of $M$ terms immediately)$\tilde \rho(P) = \prod_{i=1}^M \tilde\rho_S(T_i).$ Here, $P$ and $T_i$ are the Fourier momentum-like dual variables to $X$ and $S_i$; because all the terms contribute to one sum, every factor is evaluated at the same point, $T_i=P$.

However, now we're almost finished because the product of many ($M$) equal factors may be rewritten in terms of an exponential. If $\tilde\rho_S(T_i)=\exp(W_i)$ where $W_i$ is a function of $T_i$, then the product of $M$ equal factors is just $\exp(MW_i)$ and the funny thing is that this becomes totally negligible wherever $-MW_i\gg 1$, i.e. as soon as $W_i$ drops visibly below its maximum. So we only need to know how the right hand side behaves in the vicinity of the maximum of $W_i$ as a function of $T_i$. A generic function $W_i$ may be approximated by a quadratic function over there, which means that both sides of the equation above will be well approximated by $C_1\exp(-MC_2 T_i^2)$ for $M\to\infty$.

It's the Gaussian and if you make the full calculation, the Gaussian will inevitably come out as shifted, stretched or shrunk, and renormalized so that $X$, the sum, has the previously determined mean value and standard deviation, and the probability distribution for $X$ is normalized. Just to be sure, the Fourier transform of a Gaussian is another Gaussian so the Gaussian shape rules regardless of the variables (or dual variables) we use.

So there's a very important – especially in the real world – class of situations in which the quantity $X$ may be assumed to be normally distributed. The normal distribution isn't just a random distribution chosen by some people who liked its bell-like shape or wanted to praise Gauss.
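The theorem is easy to watch numerically: sum many decidedly non-Gaussian terms (uniform ones below; the specific sizes M and N are arbitrary choices of mine) and the sums develop the Gaussian fractions, e.g. roughly 68.3% of them land within one standard deviation of the mean.

```python
import random
import statistics

random.seed(0)
M, N = 50, 20_000  # terms per sum, number of sums

# Each sample is a sum of M uniform(0,1) terms -- none of them Gaussian.
sums = [sum(random.random() for _ in range(M)) for _ in range(N)]

mu = statistics.fmean(sums)     # close to M/2
sigma = statistics.stdev(sums)  # close to sqrt(M/12)
within_1_sigma = sum(abs(s - mu) < sigma for s in sums) / N
# within_1_sigma comes out near 0.683, the Gaussian value
```

A flat distribution has 58% of its probability within one "standard deviation" of its center; after summing fifty flat terms, the fraction has already converged to the Gaussian 68%.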
It's the result of normal operations we experience in the real world – that's why it's called normal. The more complicated factors influencing $X$ you consider – and they may be theoretical or experimental factors of many kinds – the more likely it is that the Gaussian distribution becomes a rather accurate approximation for the distribution of $X$. Whenever $X$ may be written as the sum of many terms with their error margins (even though the inner structure of these terms may have a different, nonlinear character etc.; and the sum itself may be replaced by a more general function because if it has many variables and the relevant vicinity of $X$ is narrow, the linearization becomes OK and the function may be well approximated by a linear combination, i.e. effectively a sum, anyway), the normal distribution is probably legitimate.

Only if the last operation producing $X$ is "nonlinear" – if $X$ is a nonlinear function of a sum of many terms etc. – or if you have another specific reason to think that $X$ is not normally distributed, should you point this fact out and take it into account. But Tommaso's fight against the normal distribution as the "default reaction" is completely misguided because pretty much no confidence levels in science could be calculated without the – mostly justifiable – assumption that the distribution is normal. Tommaso decided to throw the baby out with the bathwater. He doesn't want an essential technique to be used. He pretty much wants to discard some key methodology but as a typical whining leftist, he has nothing constructive or sensible to offer for the science he wants to ban.

Second part

Dorigo's second part of the article is insightful and less controversial. He reviews a nice 1968 Arthur Rosenfeld paper showing that the number of fake discoveries pretty much agrees with the expectations – some false positives are bound to happen due to the number of histograms that people are looking at.
Sometimes experimenters tend to improve their evidence by ad hoc cuts if they get excited by the idea that they have made a discovery. Too bad.

Dorigo argues that the five-sigma criterion should be replaced by a floating requirement. This has various arguments backing it. One of them is that people have differing prior subjective probabilities quantifying how plausible or almost inevitable they consider a possible result. Of course extraordinary claims require extraordinary evidence while almost robustly known and predicted ones require weaker evidence. It's clear that people get convinced by some experimental claims at a lower number of sigmas than by other claims. But I wouldn't institutionalize this variability because due to the priors' intrinsically subjective character, it's extremely hard to agree on the "right priors".

He also mentions OPERA, which made the ludicrous claim about the superluminal neutrinos that was called bogus on this blog from the very beginning. It was a six-sigma result (60 nanoseconds with a 10-nanosecond error), we were told. Dorigo blames it on the five-sigma standards. But this is just silly. Whatever statistical criterion you introduce for a "discovery", you will never fully protect physics against a silly human error that may introduce an arbitrarily large discrepancy into the results – against stupid errors such as the loosened cable. I wouldn't even count it as a systematic error; it's just a stupid human error that can't really be quantified because nothing guarantees that it remains smaller than a bound. So I think it's irrational to mix the debate about the statistical standards with the debate about loosened cables and similar blunders that cripple the quality of an experimental work "qualitatively" – they have nothing to do with one another!

Third part

In the third part, Dorigo discusses three extra classes of effects and tricks that may lead to fake discoveries.
I agree with everything he writes but none of these things implies that the 5-sigma standard is bad or that it could be replaced by something better. The first effect is mismodeling (well, a systematic error on the theoretical side); the second effect is aposterioriness, the search for bumps in places where we originally didn't want to look (which is OK for discovering new physics but such unplanned situations may heavily and uncontrollably increase the number of discrepancies i.e. false positives we observe, and we shouldn't forget about it when we're getting excited about such a discrepancy); and dishonest manipulation of the data (there's no protection here, except to shoot the culprit; if someone wants to spit on the rules, he will be willing to spit on any rules).

Fourth, last part

In this fourth and final part, Dorigo continues in a discussion of a bump near $150\GeV$. At the very end, he proposes ad hoc modifications of the five-sigma rule – from 3 sigmas for the B decay to two muons to 8 sigmas for gravitational waves. One could assign ad hoc requirements that differ for different phenomena but it's not clear how they would be determined for the many other phenomena for which the holy oracle Dorigo hasn't specified his miraculous numbers. Moreover, the patterns in his numbers don't seem to make any sense. It is very bizarre why a certain exotic, not-guaranteed-to-exist decay of the B-mesons is OK with 3 sigmas while the gravitational waves that have to exist must pass 8 sigmas.

Moreover, some other observed signatures, like "SUSY", aren't really signatures but whole frameworks that may manifest themselves in many experiments and each of them clearly requires a different assignment of the confidence levels if we decide that confidence levels should be variable. There would be some path from being sure that there's new physics to being reasonably sure that it's a SUSY effect – Dorigo seems to confuse these totally different levels of knowledge.
If this table with the variable confidence levels were the goal of Dorigo's series, then I must say that the key thesis of his soap opera is crap.

#### snail feedback (22) :

The disagreement begins with the first word of the title ;-) that I would personally write as "demystifying" because what we're removing is mystery rather than mist (although the two are related words for "fog").

Once again demonstrating your superb grasp of the English language! ;-)

'While he says that it has "some merit", he thinks that it is "just a guess". ' But really it's just a Gauss!

Thanks for this big analysis, lucretius. But I didn't understand what financial quantities are non-normal and why one should expect the normal or any sensible distribution for them in the first place. Of course the change of log(stockprice) in 5 years has totally no reason to be normal. Growth by 2-3 orders of magnitude is the maximum but drops may be faster, when the price goes to zero, so there is an asymmetry. And there's no good reason why it should resemble the Gaussian in between, I think. It seems bizarre to describe it as a failure of the central limit theorem because the assumptions of the theorem are totally violated. The difference between your non-Gaussian examples and those in natural sciences is that the natural sciences actually do respect the assumptions of the central limit theorem in many cases so they must also respect and they do respect the conclusions of the theorem.

Dear Luboš, I am a big fan of these "middlebrow" articles that many of your readers will probably skip due to the subject matter being "old hat" for them, but for me it's new and I get more out of it than I would from reading a dry, dusty textbook or Wikipedia article, due to the lively presentation.
Well, the assumption of Gaussian distribution of both prices and returns was the "gold standard" in finance from when Bachelier (a student of Poincaré) first made it until about the 1970s – and several Nobel prizes in economics have been awarded for analyses that crucially rely on this assumption (including, of course, Merton and Scholes for the Black-Scholes formula). So it could not be that stupid. Normality actually holds pretty well over long periods. The scaling properties that Mandelbrot suggested have been shown empirically not to hold (on the other hand, the assumption of discontinuous jumps, typical of Lévy processes other than the Wiener process, has a strong statistical basis). The analysis of the departure from normality that I gave is now considered uncontroversial. The central limit theorem is only a very special case of much more general theorems (see Jacod and Shiryaev, Limit Theorems for Stochastic Processes, or http://en.wikipedia.org/wiki/Infinite_divisibility_(probability) for a simplified version).

Dear lucretius, tx. Surely the normal-distribution-based standards were tried because they were so helpful elsewhere. However, the very only disclaimer against the usage that I mentioned – quantities that are nonlinearly applied at the very end – applies to vastly changing prices because the price is really exp(Y) where Y is a more natural quantity. So if Y changes by more than 1 or so, it cannot be that both delta Y and delta exp(Y) are normally distributed. A special argument, such as the Markovian one, would be needed to suggest that one of them, namely Y, is normally distributed. But because the changes aren't really Markovian, this proof can't be made.

I agree with what you are saying in general, but you have significantly misstated the central limit theorem. It does not apply to *arbitrary* distributions, only to those with finite mean and variance.
Here is an example of a simplified physics experiment in which the exceptions can bite you: Suppose we have an isotropic light source (in two dimensions) and an infinite line of photon detectors. We wish to determine the location of the source in the direction parallel to the line by observing where the photons hit the line. The hitting position distribution will *not* have finite mean and variance, the sum over many photons will not have a normal distribution, and taking a sum over more photons will not give a better estimate. We would need to use the median instead of the mean.

Also, note that the central limit theorem does not apply to any quantity influenced by many *factors*, only to quantities that are sums of many *terms*; how the parts combine matters. For example, there is a *different* central limit theorem, with a different limit distribution (log-normal), for the product of many factors. Now, the noise in accelerator experiments is mostly a sum, so the Gaussian approximation is largely justified in that case, but that isn't true for *all* experiments, and it is important to know when it isn't.

No derivatives market assumes normality. The Black-Scholes model, which assumes that log returns are normal, is just used for quoting prices in units of volatility. It is not used by any dealers (investment banks). They use stochastic and local volatility models, with maybe jump diffusion processes added on. They use these models both to price and to risk-manage. The crisis was caused by shitty lending practices and cheap credit. Not models.

Well, I can tell you that I used to get paid (not my main job, of course) for helping to develop a computer model based on the normality assumption (for commercial mortgage pricing). I did not really like the normality assumption but it was what the clients wanted. So such things were certainly done before 2008 (the model is still up on the web in a demo version) but the company no longer exists (not surprisingly perhaps).
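The light-source example above is the standard Cauchy distribution (the tangent of a uniformly distributed angle), and its failure is easy to see numerically: the sample median of the hit positions settles on the true source position, while the sample mean never does. A sketch (the sample sizes are arbitrary choices of mine):

```python
import math
import random
import statistics

random.seed(1)

def hits(n):
    """n photon hit positions for a source at unit height above x = 0:
    the tangent of a uniform angle is a standard Cauchy variable."""
    return [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

samples = [hits(1000) for _ in range(200)]
medians = [statistics.median(s) for s in samples]
means = [statistics.fmean(s) for s in samples]

spread_of_medians = statistics.stdev(medians)  # small, shrinks like 1/sqrt(n)
spread_of_means = statistics.stdev(means)      # large: no finite variance to tame it
```

The mean of n Cauchy variables is itself standard Cauchy for any n, so averaging buys nothing; the medians cluster tightly around the true position.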
By the way, jump diffusion processes (I also used to work with these) and the others you mention are not fashionable these days (in academic finance) as they lack the property of self-decomposability. That means that their return distributions do not satisfy Sato's "class L property". A random variable has the class L property if it has the same distribution as the limit of some sequence of normalized sums of independent random variables – which is, of course, a generalization of the central limit theorem. Examples of processes that have this property are the Meixner, Variance Gamma, Generalized Hyperbolic and more general additive processes. I am not claiming that dealers know anything about this.

Yes, but the arguments were always applied to the distribution of returns, and of course the assumption that the process was Markovian was right at the heart. (That was essentially the "Efficient Market Hypothesis".) Actually, one can show that for option pricing it is not essential since the Markov property is not preserved by a change of measure. Thus even if the underlying price process is not Markovian it is quite reasonable to use a Markovian process for pricing derivatives.

About Dorigo's third point about non-Gaussian distributions: when the data fits a known non-Gaussian distribution, I have often seen physicists calculate the appropriate p-value for their results and then convert that into a number of standard deviations for convenience (as much as maybe I shouldn't be, I'm much more comfortable with "5 sigma" than "p=0.00000057").

Thanks, Eugene, for your kind words. My guess is that the people have held misconceptions about everything. ;-) But it's hard to imagine what the normal distribution for these quantities spanning many (dozens of) orders of magnitude would even mean. It makes sense to talk about Gaussians as a function of a "linear" variable X. But the strength of an earthquake S isn't naturally linear.
You don't even know whether you should express it by the amplitude or its second power or its logarithm, and such choices matter because the would-be Gaussian bell curve is wide – wider than the curvature radius. So there were some people who realized that one should talk about the logarithms of the magnitude because it's more natural, and others who figured out how the frequency of an earthquake etc. depends on this logarithm. There's no natural way to write a Gaussian here so I think that no *important* people have ever done so. ;-)

Fred, I think you're wrong to say that the models didn't cause the problem: the only reason the cheap credit was available (to the masses) was the trading of convoluted derivative instruments based on flawed models.

Hi Luboš, There are two problems I see with a heavy reliance on the normal distribution and its summary statistics. It is often necessary to make an inference or detection claim in the face of small-number statistics. Generic distributions like the Γ and β can have significant skew despite high statistics. In both of these cases, using the mean and variance to define confidence intervals, whether one-sided or two-sided, is wrong. Well, it probably is good enough for a first internal estimate, but not something that ought to go into a final results paper.

In addition, I cannot help but feel uneasy at the prominence of p-values as the arbiter of statistical significance. In the case of the Higgs, why use a test on the falsity of the null hypothesis and not one of the model selection methods from the Bayesian or Neyman/Pearson frameworks? What we ultimately care about is whether there is a Higgs boson, not whether the background is well modeled by the measured data. But since I don't work on a particle detector, I shall have to wait for Tommaso's 'part 2'. This is not to say I don't see the utility in null hypothesis tests, for I use them all the time in my work.
I just share the concern with many others that the banner headline, "5σ p-value == DETECTION", is prone to both misunderstanding and misuse. Cheers, hμν

Dear Hmunu, I know that what you write sounds logical to you but it really contradicts the whole logic of the scientific method, especially when it comes to your comment about the p-value. You ask why we falsify the null hypothesis instead of confirming the new hypothesis. Because it has to be so. Science may only proceed by falsification. The point is that the new hypothesis, as long as it is sufficiently accurately defined, is never *quite* correct. It will be falsified in the future. So if you ever confirm it as an absolute truth, you may be sure that you are doing something wrong. One may only confirm theories as temporary truths, which always really means that some simplified models without them, the null hypotheses, may be ruled out.

We don't really know that the 126 GeV Higgs boson is "exactly" the Standard Model Higgs boson. We don't know that it is the exact boson from any other specific theory. We only know that it has approximately some properties and it's there – it means that theories that wildly deviate from the SM with the Higgs are ruled out. The definition of what we know about precise theories describing Nature is always negative.

Concerning the skewed distributions etc., your mistake isn't that serious but I still think you're wrong. You may invent some complicated functions, call them distributions, and claim that they're equally relevant for probability distributions as the normal distribution. But they're not. In physics and science, we're ultimately interested in the sharp value of a quantity. As we're increasing our insights, we're converging towards the only right distribution for X which is delta(X-XR) where XR is the right value. All other functions are just temporary parametrizations of our incomplete knowledge.
It makes no sense to develop a superconvoluted science with very unusual functions called distributions here because they're not real in any sense. Whenever the deviations of a quantity seem to be very small so that the quantity may be linearized in the approximate interval where it probably sits according to the experiments, i.e. if this interval is much narrower than the "curvature radius" where things change qualitatively, it's always correct to measure the mean value and standard deviation from multiple measurements and assume that the quantity is normally distributed.

I forgot to add that one reason why Gaussian-based models dominated before 2008 was the difficulty of using any of the others in multi-factor situations. At that time there was no real tractable way to deal with dependence. So although more realistic models could be used in single-factor settings, almost all serious applications involve multiple dependent factors. Your claim that models had no role does not seem to be based on any actual experience of these things; in fact the situation as it was before 2008 is described fairly accurately at the end of this article: http://en.wikipedia.org/wiki/Copula_(probability_theory)

Hi Luboš, Let me first give an example of a situation where using a non-normal distribution is important for making logical and correct inferences. Suppose one wants to estimate the efficiency of an analysis code to recover a known type of event that is rare in nature. A large Monte Carlo simulation is run with 10^4 points and behold, all but 1 of the events are recovered. What then is the 99.8% confidence interval? If one quotes (0.9996, 1.0002), which is the mean minus/plus three standard deviations, you would be wrong on two counts: quoting illogical efficiencies and not correctly defining the area with 99.8% of the probability.
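The numbers in this example are easy to reproduce, together with an exact (Clopper-Pearson-style) lower limit that stays inside [0, 1]; a sketch assuming the stated 10^4 trials with one failure (the one-sided tail α = 0.001 is my choice, roughly matching the three-sigma spirit of the quote):

```python
import math

n, k = 10_000, 9_999  # trials and successes: one event missed
p_hat = k / n
sd = math.sqrt(p_hat * (1 - p_hat) / n)
naive = (p_hat - 3 * sd, p_hat + 3 * sd)  # ~(0.9996, 1.0002): upper edge exceeds 1!

def tail_ge(p, n, k):
    """P(X >= k) for X ~ Binomial(n, p); only n - k + 1 terms survive here."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def exact_lower(n, k, alpha):
    """Exact lower limit: the smallest p with P(X >= k) >= alpha (bisection)."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if tail_ge(mid, n, k) < alpha:
            lo = mid
        else:
            hi = mid
    return lo
```

The exact lower limit comes out near 0.9991 and the upper limit is simply 1: unlike the naive interval, it never leaves the physical range.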
In problems analysts regularly deal with, the β/binomial distribution (above) or the Γ/Poisson distribution are better models of the parameter in question than the normal distribution when faced with limited data. These get heavy use in particle physics within searches for dark matter, extra-galactic neutrinos and even the Higgs.

On the subject of hypothesis tests and decision making, it seems there is a bit of a misunderstanding here. As scientists, we ask questions like "which is better supported by experimental evidence, the SM or the MSSM" all the time. The methods of the Neyman-Pearson and Bayesian frameworks allow one to answer these sorts of question in a quantitative fashion. If the likelihood of the null (SM) is much larger than the alternative, one can safely say there is insufficient evidence the null is incorrect and that it therefore survives falsification. One is always free to go get more data or propose a better alternative model and try again. But again, "how likely is the null true given the data" is just as reasonable and scientific a question as "how unlikely is the data given the null?" Fisher's hypothesis tests are very useful tools, but not every situation requires a hammer. A wide variety of powerful methods have been developed in the hundred years since Neyman, Fisher and Jeffreys pondered these issues. It would be folly for a researcher to ignore them, particularly if the reason was based on the prejudices one gained as an undergraduate. Cheers, H\mu\nu

Dear H\mu\nu, if you only observe 1 event and you want to state what is the exact expected number of events, it's clear that the error margin is of order 100%. It could have been almost 0 (p below one with probability p or so), it could have been one, but it could have been many, too.
Obviously, the normal distribution isn't good for this situation because the calculable 1-sigma interval is "nonlinear" and contains numbers for which the nonlinearities (such as forbidden negative values of the number of events) and quantization (the number is an integer) are very important. If you know it's a Poisson process, it's straightforward to calculate a better distribution based on that but it won't really help you with the generic fact that when only 1 event exists, the statistics available for your empirical data are inadequate for most quantitative tests. Unless the event was predicted to occur with some "really impossible", tiny probability, you can't say much if you only observe one event. Talking about extremely complex new distributions seems like an excessive case of "scientism" in the pejorative sense, if I use another recent topic. The detection of one event means that the true number of events is pretty much unknown – one may only eliminate hypotheses that predict 0 almost certainly and one may exclude very large numbers of events, too – but otherwise whether the right number is 1 or 3 is pretty much unknown, regardless of your attempts to replace the normal distribution by a better one. Cheers LM

So, were the deviant cats for Schrodinger's birthday? Or are they not there?

An example of a discovery declared and accepted by the community with one event (1) is the discovery of the omega-minus. "At left is a sketch of the bubble chamber photograph in which the omega-minus baryon was discovered." Summarized in http://hyperphysics.phy-astr.gsu.edu/hbase/particles/omega.html . They went with the excellent probability of the chi-square fit to the hypothesis and the fact that it was expected to complete the baryon decuplet.
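The "only one observed event" situation discussed in the reply above can be made quantitative with an exact Poisson interval. A sketch, assuming a conventional central 68% interval (16% probability in each tail; the convention is my choice): the allowed range for the true mean stretches from roughly 0.17 to roughly 3.3, i.e. the error margin really is of order 100%.

```python
import math

# One event observed. Lower limit: P(N >= 1 | mu_lo) = 0.16,
# i.e. 1 - exp(-mu_lo) = 0.16, which has a closed form.
mu_lo = -math.log(1 - 0.16)  # about 0.17

# Upper limit: P(N <= 1 | mu_up) = 0.16, i.e. exp(-mu) * (1 + mu) = 0.16.
def p_at_most_one(mu):
    return math.exp(-mu) * (1 + mu)

lo, hi = 0.0, 20.0
for _ in range(60):  # bisection; p_at_most_one decreases with mu
    mid = (lo + hi) / 2
    if p_at_most_one(mid) > 0.16:
        lo = mid
    else:
        hi = mid
mu_up = (lo + hi) / 2  # about 3.3
```

So a single event excludes backgrounds predicted to be essentially zero and rates of many events, but says almost nothing in between – exactly the point made above.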
They did not calculate a statistical probability for it to be background; it was assumed very small, as for all bubble chamber photos: when one can see the interaction points, and non-interacting beam particles can be counted as they come in, it is not possible to imagine confusion with other events. Even in the dirty environment of the LHC, if something very unusual appeared in both experiments nobody would be waiting for five sigma. Example: suppose that one charged track enters the electromagnetic calorimeter and bursts into a shower of tracks, electromagnetic and hadron-like, with a mass of 100 GeV. Even ten of these in each experiment and a discovery would be in the news.

There is some logic to rethinking whether the statistical errors we calculate are relevant to the problem at hand. In the case of the omega-minus discovery it was not only the probability of the chi-square fit that made it indisputable. It was the very small probability that it could be background, because in a bubble chamber the background for individual events is minuscule. If one were to qualify the five-sigma rule, one should qualify it on the side of the background. When the background is 100 events and the signal is composed of ten events on top, the statistics are absolutely significant. If the background is zero +/- .0000001 events then even one event cannot be statistical background; one has to look for systematic errors. In everyday life, if an alien landed in Times Square, would there be any meaning in asking for a five-sigma statistical existence of the alien?
http://math.stackexchange.com/questions/41340/an-information-theory-inequality-which-relates-to-shannon-entropy
An information theory inequality which relates to Shannon Entropy For $a_1,...,a_n,b_1,...,b_n>0,\quad$ define $a:=\sum a_i,\ b:=\sum b_i,\ s:=\sum \sqrt{a_ib_i}$. Is the following inequality true? $${\frac{\Bigl(\prod a_i^{a_i}\Bigr)^\frac1a}a \cdot \frac{\left(\prod b_i^{b_i}\right)^\frac1b}b \le\left(\frac{\prod (a_ib_i)^{\sqrt{a_ib_i}}}{s^{2s}}\right)^\frac s{ab}}.$$ If true, how can I prove it? And if not true, is there a counter-example? An alternate formulation of this inequality, as suggested by cardinal, is $$\sum_i x_i^2 \log x_i^2 + \sum_i y_i^2 \log y_i^2 \leq 2\rho^2 \sum_i \frac{x_iy_i}{\rho} \log \frac{x_iy_i}{\rho} \; ,$$ in which $x_i,y_i>0$, $\sum_i x_i^2 = \sum_i y_i^2 = 1$ and $\sum_i x_i y_i = \rho$. It can be found by taking the logarithm of the original expression and defining $x_i=\sqrt{a_i/a}$ and $y_i=\sqrt{b_i/b}$. - My answer is wrong. I should have done the limit by hand... –  Beni Bogosel May 26 '11 at 22:00 Let $u_i = a_i / a$, $v_i = b_i / b$ and $w_i = \sqrt{a_i b_i} / s$. Then, the inequality in the question is equivalent to the inequality $\sum u_i \log u_i + \sum v_i \log v_i \leq \frac{2 s^2}{a b} \sum w_i \log w_i$. Note that $(u_i)$, $(v_i)$ and $(w_i)$ are all in the $(n-1)$-simplex and so correspond to discrete probability measures on $\{1,\ldots,n\}$. Probabilistically, we might rewrite this as $H(u) + H(v) \geq \frac{2 s^2}{a b} H(w)$ where $H(\cdot)$ denotes the Shannon entropy. –  cardinal May 27 '11 at 13:11 Note also that there is a, perhaps, interesting element of symmetry at play here. In both the OP's question and my previous remark, the left-hand side is permutation invariant with respect to the relative ordering of the coordinates of $(a_i)$ and $(b_i)$, whereas the right-hand side is not. In particular, let $s_\pi = \sum \sqrt{a_i b_{\pi(i)}}$, $w_{i,\pi} = \sqrt{a_i b_{\pi(i)}}/s_\pi$, and $w_\pi = (w_{i,\pi})$. Then $H(u) + H(v) \geq \frac{2}{n! 
ab} \sum_{\pi \in S_n} s_\pi^2 H(w_\pi)$ and $H(u) + H(v) \geq \frac{2}{ab} \max_{\pi \in S_n} s_\pi^2 H(w_\pi)$. –  cardinal May 28 '11 at 6:31 Yes, exactly... May we prove it for some small values of $n$? e.g. $n=3$ or $n=4$. –  Amir Hossein May 30 '11 at 5:26 @Amir Where did this problem come from? –  JavaMan May 30 '11 at 21:11 2 Answers This is slightly too long to be a comment, though this is not an answer: Edit: I am providing a few revisions, but I still have not found a proof. My previous comment was not quite correct. I "rearranged" the inequality by raising both sides of the inequality to certain powers, and I did not take into account that we do not know a priori whether any of the terms are larger or smaller than $1$. Depending on the size of each term (in particular, whether it is smaller or larger than $1$), exponentiating both sides could potentially reverse the inequality. I have returned the question to its original form as suggested by the OP, and I have added one more observation. Let me first say that I have been working on this question for a bit, and though I have not yet resolved it, I have been having fun trying! Now, to emphasize the dependence on $n$, let's set $$\alpha_n = \sum_{i=1}^n a_i \qquad \beta_n = \sum_{i=1}^n b_i, \qquad \sigma_n = \sum_{i=1}^n c_i,$$ where $c_i = \sqrt{a_i b_i}$. Further, let's put $$A_n = \prod_{i=1}^n (a_i)^{a_i}, \qquad B_n = \prod_{i=1}^n (b_i)^{b_i}, \qquad S_n = \prod_{i=1}^n (c_i)^{c_i}.$$ Our goal is to show: $$(1) \hspace{1in} \left(\frac{A_n}{(\alpha_n)^{\alpha_n}}\right)^{\frac{1}{\alpha_n}} \cdot \left(\frac{B_n}{(\beta_n)^{\beta_n}} \right)^{\frac{1}{\beta_n}} \leq \left(\frac{S_n}{(\sigma_n)^{\sigma_n}}\right)^{\frac{2\sigma_n}{\alpha_n \beta_n}}$$ A few pedestrian observations: • If $a_i = b_i$ for $i = 1, \dots , n$ (which forces $c_i = a_i = b_i$), then $A_n = B_n = S_n$, we also have $\alpha_n = \beta_n = \sigma_n$, and (1) holds in this case. 
• Note that $2c_i \leq a_i + b_i$ as $2xy \leq x^2 + y^2$ for all real numbers $x, y$. Hence, $2\sigma_n \leq \alpha_n + \beta_n$. Furthermore, Cauchy-Schwarz gives $\sigma_n^2 \leq \alpha_n \beta_n$. Both of these observations imply that $(\sigma_n + 1)^2 \leq (\alpha_n + 1)(\beta_n + 1)$. I would imagine that with enough creativity, one may find a proof of the inequality involving convexity or a simple application of the AM-GM inequality (which I suppose is much the same thing!). I have been unable to prove the inequality even in the case $n = 2$, when I assume $\alpha_n = \beta_n = 1$. I am not hopeful for a proof of the general case. - Thanks for the start! I'm still waiting for a solution to this problem... –  Amir Hossein May 28 '11 at 5:13 @Amir: And I'm still thinking about it (: –  JavaMan May 28 '11 at 15:10 BUMP this! Anyone? Just three days remaining... –  Amir Hossein Jun 2 '11 at 8:54 The bounty was about to expire, so I gave it to DJC's post. Problem hasn't been solved yet; if anyone got anything, please share it with us. –  Amir Hossein Jun 4 '11 at 2:46 @Amir: I did not contribute to a solution of the problem at hand despite my efforts, and while I'm flattered to get the points, I simply don't deserve them. If there is a way to undo the points, I'm fine if they get taken away. –  JavaMan Jun 4 '11 at 3:24 The inequality is true. I posted cardinal's reformulation on Math Overflow, as I was very interested in seeing the problem resolved. A complete solution was provided by fedja, and it can be found here. Using the inequality $$\sqrt{\frac{a_i b_i}{ab}}\leq \sum_{i=1}^n\sqrt{\frac{a_i b_i}{ab}}\leq \sqrt{\frac{a_i b_i}{ab}}+\sqrt{\frac{(a-a_i)(b- b_i)}{ab}},$$ which follows from Cauchy-Schwarz, fedja was able to consider the inequality entry-wise. 
Doing so, the problem was reduced to showing the key inequality: for $x,y,z>0$ and $xy=z^2$, we have $$(1+x)\log(1+y)+(1+y)\log(1+x)\geq 2(1+z)\log(1+z).$$ Fedja gave a proof of this inequality in his answer, and the user Babylon5 gave an alternate proof on AoPS.
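As a quick numerical sanity check (my own addition, not part of the thread), one can test cardinal's reformulation $\sum u_i \log u_i + \sum v_i \log v_i \leq \frac{2s^2}{ab}\sum w_i \log w_i$ on random positive inputs:

```python
import math
import random

def holds(a_vec, b_vec, tol=1e-9):
    """Check cardinal's reformulation of the inequality for one input pair."""
    a, b = sum(a_vec), sum(b_vec)
    c = [math.sqrt(x * y) for x, y in zip(a_vec, b_vec)]
    s = sum(c)
    u = [x / a for x in a_vec]
    v = [y / b for y in b_vec]
    w = [x / s for x in c]
    lhs = sum(x * math.log(x) for x in u) + sum(x * math.log(x) for x in v)
    rhs = (2.0 * s * s / (a * b)) * sum(x * math.log(x) for x in w)
    return lhs <= rhs + tol  # small tolerance for floating-point rounding

random.seed(0)
for _ in range(1000):
    n = random.randint(2, 8)
    a_vec = [random.uniform(0.01, 10.0) for _ in range(n)]
    b_vec = [random.uniform(0.01, 10.0) for _ in range(n)]
    assert holds(a_vec, b_vec)
```

This is of course no substitute for fedja's proof, but it is consistent with it; equality occurs when $a_i = b_i$ for all $i$, matching the observation in DJC's answer.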
http://www.machinedlearnings.com/2011/04/semi-supervised-lda-part-ii.html
## Sunday, April 17, 2011 ### Semi-Supervised LDA: Part II In my previous post I motivated the idea of semi-supervised LDA: constructing a topic model on a corpus where some of the documents have an associated label, but most do not. The goal is to extract topics that are informed by all the text in the corpus (like unsupervised LDA) but are also predictive of the observed label (like supervised LDA). I have some initial results that look promising. #### The Approach My approach is based upon DiscLDA (and its cousin Labeled LDA). DiscLDA modifies the original LDA such that a document class label $y_d$ can remap the original document topics $z$ into transformed topics $u$ which then control the word emission probabilities. Here's the plate representation from the DiscLDA paper, and for those who think algebraically, here's a description of the generative model. 1. Draw topic proportions $\theta | \alpha \sim \mbox{Dir} (\alpha)$. 2. Draw word emission vectors $\phi | \beta \sim \mbox{Dir} (\beta)$. 3. Draw class label $y | \pi \sim p (y | \pi)$ from some prior distribution. 4. For each word 1. Draw topic assignment $z_n | \theta \sim \mbox{Mult} (\theta)$. 2. Draw transformed topic assignment $u_n | z_n, T, y \sim \mbox{Mult} (T^y_{u_n, z_n})$. 3. Draw word $w_n | u_n, \phi_{1:K} \sim \mbox{Mult} (\phi_{u_n})$. In what follows the $T$ matrices are fixed to be a mixture of block zeroes and block diagonals which basically arranges for 1) some number of topics $k_1$ shared across all class labels and 2) some number $|Y| k_0$ of topics of which each class label gets $k_0$ topics reserved for use with that particular class label, $T^y = \begin{pmatrix} 0 & I_{k_0} 1_{y = 1} \\ \vdots & \vdots \\ 0 & I_{k_0} 1_{y = |Y|} \\ I_{k_1} & 0 \end{pmatrix}.$ Note that there are $(k_0 + k_1)$ possible $z$ and $(|Y| k_0 + k_1)$ possible $u$. 
Furthermore a single $z_i$ will either be associated with a single $u$ independent of $y$ or a set of $|Y|$ possible $u$ which are indexed by $y$. When implementing I found it convenient to think in terms of a mapping $u^y_z = \begin{cases} z & z < k_1 \\ z + k_0 (y - 1) & z \geq k_1 \end{cases},$ which mechanically suggests that changing $y$ causes a set of $k_0$ topics to be "switched out". Although online variational Bayes is currently king of the hill in terms of learning algorithm speed and scalability, I found Gibbs sampling relatively easy to reason about and implement so I went with that. Assuming the label $y_{d_i}$ for document $d_i$ is observed, the Gibbs sampler is essentially the same as for unsupervised LDA, $p (z_i = j | w, z_{-i}, y) \propto \frac{n^{(w_i)}_{-i,u^{y_{d_i}}_j} + \beta}{n^{(\cdot)}_{-i, u^{y_{d_i}}_j} + W \beta} \frac{n^{(d_i)}_{-i,u^{y_{d_i}}_j} + \alpha}{n^{(d_i)}_{-i,\cdot} + (k_0 + k_1) \alpha}.$ If sampling is conceptualized as done on the $u$ variables, this is the result from Labeled LDA in the case of a single label: the Gibbs sampler is the same as original LDA except that transitions are only allowed between feasible topics given the document label. If $y_{d_i}$ is unobserved, I could integrate it out. However for ease of implementation I decided to Gibbs sample it as well, $p (y_d = k | w, z, y_{-d}) \propto p (w_d | w_{-d}, y_d = k, z, y_{-d}) p (y_d = k | z, y_{-d}).$ The first term is the likelihood of the word sequence for document $d$ given all the topic and label assignments for all the documents and all the words in the other documents. 
This is given by $p (w_d | w_{-d}, y_d = k, z, y_{-d}) \propto \prod_{\{i | d_i = d, z_i \geq k_1\}} \frac{n^{(w_i)}_{-d, u^k_{z_i}} + \beta}{n^{(\cdot)}_{-d, u^k_{z_i}} + W \beta}.$ That reads: for each word in the document $d$, count how many times it was assigned to topic $u^k_{z_i}$ in all other documents $n^{(w_i)}_{-d, u^k_{z_i}}$, count how many times any word was assigned to topic $u^k_{z_i}$ in all other documents $n^{(\cdot)}_{-d, u^k_{z_i}}$, and apply Dirichlet-Multinomial conjugacy. Basically, in terms of the $u$ variables: how well does each set of per-label topics explain the words in the document? Note common topics contribute the same factor to the likelihood so as an optimization the product can be taken over label-specific topics only. In particular if the document only uses common topics there is no preference for any label in this term. The second term, $p (y_d = k | z, y_{-d}) = p (y_d = k | y_{-d})$ is the prior (to observing the document text) distribution of the document label. In an online setting having the prior distribution over document labels depend upon previously observed document labels would be reasonable, but in a batch Gibbs sampler it is sufficient to ask the user to specify a fixed multinomial distribution over the class labels, i.e., $p (y_{d_i} = k | y_{-d_i}) = p_k$. #### Does it Work? The original motivation was to extract features to build a better classifier, so the best way to formulate this question is, "does using semi-supervised LDA result in features that are better for classification than the features obtained either 1) by running unsupervised LDA on all the data or 2) by running supervised LDA on the labeled subset of the data?" I'm still working on answering that question. However there is another sense of the technique "working". Since unlabeled data vastly outnumbers labeled data, one fear I had was that the semi-supervised approach would essentially degenerate into the unsupervised approach. 
I have some data that suggests this is not the case. Here's the setup: I have circa 2 million "documents", where each document is the set of people that a particular Twitter profile is following. I have labeled roughly 1% of these documents as either male or female based upon Mechanical Turk results. I've got an initial Gibbs sampler implementation as outlined above which I've run while looking at two statistics: average log probability of each sampled topic, and average log probability of each observed document label. Generally I would expect the topic probability to improve over time as the system learns coherent sets of words. However, what happens to the label probability? Possibly, observed labels could become less likely over time if the pressure to explain the document text overwhelms the pressure to conform to the observed labels. I was especially worried about this since I took the short cut of Gibbs sampling the unobserved labels rather than integrating them out. Here is what a typical run looks like on my data. (Sadly it takes half a day on my laptop to generate this output.) On the first pass through the data, labels are assigned essentially randomly to unlabeled data and the documents seem to cancel each other out, consequently the observed labels are aggressively fit (average $\log(p) \approx -0.02$ which is like $p \approx 98\%$). The average label log probability is only computed on observed labels; so at this point the topics aren't very good at predicting the tokens in the documents, but they are very good at predicting the observed labels. As the sampler initially proceeds topic probability increases but label probability decreases. When I was watching the output of a large run for the first time I figured I was screwed and eventually the labels would get completely washed out. However eventually the sampler starts explaining both the tokens and the observed labels fairly well. 
After 100 passes here I ended up with observed labels still well explained with average $\log (p) \approx -0.08$ which is like $p \approx 92\%$. While the unsupervised data has degraded the model's explanatory power with respect to the observed labels, this might turn out to be a good thing in terms of generalization (i.e., "built-in regularization"). So the next step is to compare this semi-supervised LDA with unsupervised LDA and supervised LDA in the context of my original classification problem to see if there is any point to all of this beyond my own amusement.
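Mechanically, the remapping $u^y_z$ and the unnormalized Gibbs weight for $z_i$ given earlier in the post can be sketched like this (a minimal illustration; the count-array names are mine, not the author's actual implementation):

```python
def transformed_topic(z, y, k0, k1):
    """Map untransformed topic z and label y (1-based) to transformed topic u:
    the first k1 topics are shared, the rest shift into label y's block."""
    return z if z < k1 else z + k0 * (y - 1)

def gibbs_weight(j, y, word_counts, topic_counts, doc_counts, doc_total,
                 k0, k1, W, alpha, beta):
    """Unnormalized p(z_i = j | w, z_{-i}, y), with counts excluding position i.

    word_counts[u]  : assignments of the current word to transformed topic u
    topic_counts[u] : total assignments to transformed topic u
    doc_counts[u]   : assignments within the current document to u
    doc_total       : total assignments within the current document
    """
    u = transformed_topic(j, y, k0, k1)
    return ((word_counts[u] + beta) / (topic_counts[u] + W * beta)
            * (doc_counts[u] + alpha) / (doc_total + (k0 + k1) * alpha))
```

Normalizing these weights over $j = 0, \dots, k_0 + k_1 - 1$ gives the sampling distribution for $z_i$; the label-resampling step works analogously, scoring each candidate label by the product of word likelihoods over the label-specific topics only.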
https://openscience.fr/A-paraitre
# Vol 13 - Forthcoming ## Published articles [Forthcoming] Existence and multiplicity of solutions of a nonlocal elliptic problem with variable exponent and nonlinear boundary conditions Using the Nehari manifold approach and some variational techniques, we discuss the multiplicity of positive solutions for p(x)-Laplacian elliptic systems with nonlinear boundary conditions. [Forthcoming] A new generalization of the Genocchi numbers and a consequence for the Bernoulli polynomials This paper presents a new generalization of the Genocchi numbers and the Genocchi theorem. As consequences, we obtain some important families of integer-valued polynomials that are closely related to the Bernoulli polynomials. Denoting by ${(B_n)}_{n \in \mathbb{N}}$ the sequence of Bernoulli numbers and by ${(B_n(X))}_{n \in \mathbb{N}}$ the sequence of Bernoulli polynomials, we obtain in particular that for any natural number $n$, the reciprocal polynomial of the polynomial $\big(B_n(X) - B_n\big)$ is integer-valued. [Forthcoming] Continuous modulated shearlet transform We generalize well-known transforms such as the short-time Fourier transform, the wavelet transform and the shearlet transform, and refer to the result as the continuous modulated shearlet transform. Important properties such as the Plancherel formula and the inversion formula are investigated. Uncertainty inequalities associated with this transform are presented. [Forthcoming] Critical point approaches for discrete anisotropic Kirchhoff-type problems In this paper, we study a discrete anisotropic Kirchhoff-type problem using variational methods and critical point theory, and we discuss the existence of two solutions for the problem. An example is presented to demonstrate the application of our main results.
https://unapologetic.wordpress.com/2009/12/03/jordan-content/
# The Unapologetic Mathematician ## Jordan Content We’ve got Riemann’s condition, which is necessary and sufficient to say that a given function is integrable on a given interval. In one dimension, this really served us well, because it let us say that continuous functions were integrable. We could break a piecewise continuous function up into intervals on which it was continuous, and thus integrable, and this got us almost every function we ever cared about. But in higher dimensions it’s not quite so nice. The region on which a function is continuous may well be irregular, and it often is for many functions we’re going to be interested in. We need a more robust necessary and sufficient condition than Riemann’s. And towards this end we need to introduce a few concepts from measure theory. I want to say at the outset, though, that this will not be a remotely exhaustive coverage of measure theory (yet). Mostly we need to build up the concept of a set in a Euclidean space having Lebesgue measure zero, and the related notion of Jordan content. So let’s say we’ve got a bounded set $S\subseteq\mathbb{R}^n$. Put it inside an $n$-dimensional box — an interval $[a,b]$. Then partition this interval with some partition $P\in\mathcal{P}[a,b]$ like we did when we set up higher-dimensional Riemann sums. Then we just count up all the subintervals in the partition $P$ that are completely contained in the interior $\mathrm{int}(S)$ and add their volumes together. This is the volume of some collection of boxes which is completely contained within $S$, and we call this volume $\underline{J}(P,S)$. Any reasonable definition of “volume” would have to say that since $S$ contains this collection of boxes, the volume of $S$ must be greater than the volume $\underline{J}(P,S)$. 
Further, as we refine $P$ any box which was previously contained in $S$ will still have all of its subdivisions contained in $S$, but boxes which were only partially contained in $S$ may now have subdivisions completely contained in $S$. And so if $P'$ is a refinement of $P$, then $\underline{J}(P',S)\geq\underline{J}(P,S)$. That is, we have a monotonically increasing net. We then define the “lower Jordan content” of $S$ by taking the supremum $\displaystyle\underline{c}(S)=\sup\limits_{P\in\mathcal{P}[a,b]}\underline{J}(P,S)$ sort of like how the lower integral is the supremum of all the lower Riemann sums. In fact, this is the lower integral of the characteristic function $\chi_S$, which is defined to be ${1}$ on elements of $S$ and ${0}$ elsewhere. If a subinterval contains any points not in $S$ the lowest sample will be ${0}$, but otherwise the sample must be ${1}$, and the lower Riemann sum for the subdivision $P$ is $L_P(\chi_S)=\underline{J}(P,S)$. It should be clear, by the way, that this doesn’t really depend on the interval $[a,b]$ we start with. Indeed, if we started with a different interval $[a',b']$ that also contains $S$, then we can immediately partition each one so that one subinterval is the intersection with the other one, and this intersection contains all of $S$ anyway. Then we can throw away all the other subintervals in these initial partitions because they don’t touch $S$ at all, and the rest of the calculations proceed exactly as before, and we get the same answer in either case. Similarly, given a partition $P$ of an interval $[a,b]$ containing a set $S$ we could count up the volumes of all the subintervals that contain any point of the closure $\mathrm{Cl}(S)$ at all. This gives the volume of a collection of boxes that together contains all of $S$, and we call this total volume $\overline{J}(P,S)$. Again, since the whole of $S$ is contained in this collection of boxes, the volume of $S$ will be less than $\overline{J}(P,S)$. 
And this time as we refine the partition we may throw out subdivisions which no longer touch any point of $S$, so this net is monotonically decreasing. We define the “upper Jordan content” of $S$ by taking the infimum $\displaystyle\overline{c}(S)=\inf\limits_{P\in\mathcal{P}[a,b]}\overline{J}(P,S)$ Again, this doesn’t depend on the original interval $[a,b]$ containing $S$, and for the same reason. As before, we find that an upper Riemann sum for the characteristic function of $S$ is $U_P(\chi_S)=\overline{J}(P,S)$, so the upper Jordan content is the upper integral of $\chi_S$. If $S$ is well-behaved, we will find $\overline{c}(S)=\underline{c}(S)$. In this case we say that $S$ is “Jordan measurable”, and we define the Jordan content $c(S)$ to be this common value. By Riemann’s condition, we find that $\displaystyle c(S)=\int\limits_{[a,b]}\chi_S(x)\,dx$ December 3, 2009 - Posted by | Analysis, Measure Theory
I need to show that a ball of radius 1 centered at 0 is a Jordan region. Any suggestions? I know that I can just show that it can be covered by a rectangle and I need to show that the boundary has volume zero. But I don’t know how to do this! Thanks Comment by Brianne | March 24, 2010 | Reply
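As a numerical illustration (my own addition, and essentially the two-dimensional case of the ball asked about in the comments): lower and upper Jordan sums for the unit disk, computed over an $n\times n$ grid partition of the bounding square, bracket $\pi$, and their gap — the total area of the boxes meeting the boundary circle — shrinks like $1/n$.

```python
import math

def jordan_sums(n):
    """Lower/upper Jordan sums for the unit disk on an n-by-n grid of
    [-1, 1] x [-1, 1].  (Containment is tested against the closed disk;
    the boundary circle does not affect the limiting value.)"""
    h = 2.0 / n
    lower = upper = 0.0
    for i in range(n):
        for j in range(n):
            x0, y0 = -1.0 + i * h, -1.0 + j * h
            # the farthest point of the box from the origin is a corner
            fx = max(abs(x0), abs(x0 + h))
            fy = max(abs(y0), abs(y0 + h))
            if fx * fx + fy * fy <= 1.0:
                lower += h * h      # box entirely inside the disk
            # the nearest point of the box to the origin, by clamping
            nx = min(max(x0, 0.0), x0 + h)
            ny = min(max(y0, 0.0), y0 + h)
            if nx * nx + ny * ny <= 1.0:
                upper += h * h      # box meets the closed disk
    return lower, upper

lo, hi = jordan_sums(200)
# lo < pi < hi, and hi - lo shrinks as the partition is refined
```

The same idea in dimension three answers the last comment: the boundary sphere is covered by $O(n^2)$ boxes of volume $(2/n)^3$ each, so its outer content goes to zero.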
https://www.cuemath.com/simple-equations-class-7-maths-chapter-4-ncert-solutions/
# Simple Equations - NCERT Class 7 Maths Chapter 4 begins with an introduction to simple equations, explaining the basic concept of setting up an equation and recalling concepts learnt in earlier classes, such as variables. The notions of an equation and of equality are then discussed in detail. Next, the procedure for solving an equation and the steps involved in it are explained elaborately, followed by the reverse process, i.e. obtaining the equation from the solution. The last section of the chapter deals in detail with applications of simple equations to practical situations.
https://math.stackexchange.com/questions/805019/distributivity-of-tensor-products-over-direct-sums-for-group-representations
# Distributivity of tensor products over direct sums for group representations I'm sure that tensor products for group representations are defined such that the typical properties are satisfied, but it would be nice to have an explicit proof, for group representations, that: $$A \otimes (B \oplus C) = (A \otimes B) \oplus (A \otimes C)$$ where $A,B,C$ are group representations. I would accept: 1. A direct, explicit proof of this for group representations 2. Alternatively, point out which properties in the definition for tensor product and direct sum for group representations ensure that this property holds, and how. ## migrated from physics.stackexchange.com May 22 '14 at 6:26 This question came from our site for active researchers, academics and students of physics. • I would strongly recommend this short set of notes. – Hunter May 22 '14 at 1:41 • The tensor product is left adjoint to the internal hom, so it preserves not only finite direct sums but all colimits, in both variables (in particular, infinite direct sums and cokernels). – Qiaochu Yuan May 22 '14 at 16:47 • It might be easier to give a comprehensive answer if you include the definition you have been given of the tensor product. – Tobias Kildetoft May 23 '14 at 8:14 For the best argument, see the comment by Qiaochu Yuan. Here is an alternative argument: Let $R$ be a commutative ring. I assume that you already know that the tensor product of $R$-modules commutes with direct sums (see below for the isomorphism). Now let $G$ be a group and let $A,B,C$ be representations of $G$ over $R$, i.e. $R$-modules with an action of $G$. Then the isomorphism of $R$-modules $f : (A \otimes B) \oplus (A \otimes C) \to A \otimes (B \oplus C), ~ f(a \otimes b)=a \otimes (b,0),~ f(a \otimes c)=a \otimes (0,c)$ is also $G$-equivariant, because $f(g(a \otimes b))=f(ga \otimes gb)=ga \otimes (gb,0)=ga \otimes g(b,0) = g f(a \otimes b)$ and likewise $f(g(a \otimes c))=g f(a \otimes c)$. 
Actually, $f$ commutes with the $G$-actions simply because $f$ is a natural isomorphism. Even more abstractly, such a morphism $f$ exists in any monoidal category with coproducts, which is here the category ${}^G \mathsf{Mod}(R)$ of representations of $G$ over $R$. Therefore, actually no computation is needed at all.

Here is a generalization: Let $\mathcal{C}$ be a monoidal category and let $G$ be an arbitrary small category. Then the category ${}^G \mathcal{C}$ of functors $G \to \mathcal{C}$ is again monoidal; the tensor product is defined objectwise. If $\mathcal{C}$ has direct sums which commute with $\otimes$, then the same is true for ${}^G \mathcal{C}$, for trivial reasons: for $A,B,C \in {}^G \mathcal{C}$ there is a canonical morphism $(A \otimes B) \oplus (A \otimes C) \to A \otimes (B \oplus C)$ in ${}^G \mathcal{C}$. That it is an isomorphism may be checked objectwise, and thereby reduced to $\mathcal{C}$.

• Always good to see the generalizations (since it seems I will need to start learning about categorification and 2-representations soon, it is always good to see these categorical versions). – Tobias Kildetoft May 23 '14 at 8:43
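Not part of the original thread, but the isomorphism above can be sanity-checked numerically: on matrices (not necessarily group-representation matrices, since the identity is purely linear-algebraic), $\otimes$ is the Kronecker product, $\oplus$ is a block-diagonal sum, and the map $f$ amounts to a fixed permutation of basis vectors. A minimal sketch, with test matrices and helper names of my own choosing:

```python
import numpy as np

# Check that kron(gA, blockdiag(gB, gC)) is conjugate, by a fixed
# permutation matrix, to blockdiag(kron(gA, gB), kron(gA, gC)).
rng = np.random.default_rng(0)
dimA, dimB, dimC = 2, 3, 2
gA = rng.normal(size=(dimA, dimA))
gB = rng.normal(size=(dimB, dimB))
gC = rng.normal(size=(dimC, dimC))

def blockdiag(M, N):
    out = np.zeros((M.shape[0] + N.shape[0], M.shape[1] + N.shape[1]))
    out[:M.shape[0], :M.shape[1]] = M
    out[M.shape[0]:, M.shape[1]:] = N
    return out

lhs = np.kron(gA, blockdiag(gB, gC))                      # A ⊗ (B ⊕ C)
rhs = blockdiag(np.kron(gA, gB), np.kron(gA, gC))         # (A⊗B) ⊕ (A⊗C)

# Basis of A ⊗ (B ⊕ C) is a_i ⊗ (b_j or c_k); reorder so that all
# a_i ⊗ b_j come first -- this is exactly the map f from the answer.
perm = [i * (dimB + dimC) + j for i in range(dimA) for j in range(dimB)] + \
       [i * (dimB + dimC) + dimB + k for i in range(dimA) for k in range(dimC)]
P = np.eye(dimA * (dimB + dimC))[perm]

assert np.allclose(P @ lhs @ P.T, rhs)
```

Since the same permutation $P$ works for every group element simultaneously, the conjugation is an isomorphism of representations, which is the equivariance observed in the answer.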
https://soar.wichita.edu/handle/10057/119/browse?type=subject&value=Schoenberg-L%C3%A9vy+kernel
Now showing items 1-1 of 1

• #### Variogram matrix functions for vector random fields with second-order increments

(Springer, 2012-05) The variogram matrix function is an important measure for the dependence of a vector random field with second-order increments, and is a useful tool for linear prediction or cokriging. This paper proposes an efficient ...
http://www.thefullwiki.org/Branch_(graph_theory)
# Branch (graph theory)

In mathematics, especially mathematical logic and set theory, a branch of a tree $T$ is a map $f$ such that $\forall n : f(n) \in T$.

By a tree on a product set $\kappa_1 \times\cdots\times \kappa_p$ we mean here a subset of the union of $\kappa_1^i\times\cdots\times\kappa_p^i$ for all $i < \omega$,

$$T\subseteq\bigcup_{i<\omega} \kappa_1^i\times\cdots\times\kappa_p^i ~,$$

closed under initial segments, and the set of branches of such a tree is then more explicitly the set

$$[T]=\{ (f_1,...,f_p)\mid\forall n\in\omega: (f_1(n),...,f_p(n))\in T\} ~.$$

This is a closed set for the usual product topology (see AD plus).
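The definitions can be illustrated computationally; a small sketch of my own (sequences represented as Python tuples, and the "for all $n$" condition necessarily checked only up to a finite depth):

```python
from itertools import product

# A "tree" here is a set of finite tuples closed under initial segments.
# Example: the full binary tree up to depth 4.
T = {seq for n in range(5) for seq in product((0, 1), repeat=n)}

def closed_under_initial_segments(tree):
    # every prefix of every sequence in the tree is also in the tree
    return all(seq[:i] in tree for seq in tree for i in range(len(seq)))

def is_branch_prefix(f, tree, depth):
    # finite approximation of the branch condition: f|n in T for n <= depth
    return all(tuple(f(i) for i in range(n)) in tree for n in range(depth + 1))

assert closed_under_initial_segments(T)
assert is_branch_prefix(lambda i: i % 2, T, 4)       # 0,1,0,1,... stays in T
assert not is_branch_prefix(lambda i: 2, T, 1)       # 2 is not a valid label
```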
http://mathhelpforum.com/discrete-math/189356-explanation-given-result-permutations.html
# Math Help - explanation of the given result of permutations

1. ## explanation of the given result of permutations

The number of permutations of n different things taken r at a time, where k particular things always occur together in an assigned order, is given by ${}_{n-k}P_{r-k} \cdot (r-k+1)$. Could you kindly explain to me how this result is obtained?

2. ## Re: explanation of the given result of permutations

Originally Posted by anigeo
The number of permutations of n different things taken r at a time, where k particular things always occur together in an assigned order, is given by ${}_{n-k}P_{r-k} \cdot (r-k+1)$. Could you kindly explain to me how this result is obtained?

I am not really clear on the setup. Suppose we have $abcdefghijklm$, the first thirteen letters of the alphabet. It seems that you are asking for the number of permutations of eight of those letters BUT among those eight must be the block $def$ in that order. So we are going to permute $8-3=5$ of these $13-3=10$ letters $abcghijklm$. Those 5 letters and the block make 6 things to permute. ${}_{10}C_5 \cdot 6! = {}_{10}P_5 \cdot 6$. You can make the generalizations.
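The counting argument above can be checked by brute force for small parameters; a sketch of my own with $n=5$, $r=3$, $k=2$ and the block "ab", where the formula ${}_{n-k}P_{r-k}\cdot(r-k+1) = {}_3P_1 \cdot 2 = 6$:

```python
from itertools import permutations
from math import perm

# Brute-force check: permutations of 3 letters out of 'abcde' in which
# the two particular letters 'a','b' occur together in that order.
letters, block = 'abcde', 'ab'
r, k = 3, len(block)

# 'block in joined string' means the block appears contiguously, in order
count = sum(1 for p in permutations(letters, r) if block in ''.join(p))

assert count == perm(len(letters) - k, r - k) * (r - k + 1)  # both equal 6
```

The six permutations are abc, abd, abe, cab, dab, eab: the block "ab" plus one of the $ {}_3P_1 = 3$ remaining letters, placed in one of $r-k+1 = 2$ positions relative to the block.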
https://www.physicsforums.com/threads/general-parameterisation-of-the-geodesic-equation.743300/
# General parameterisation of the geodesic equation

1. Mar 14, 2014

### victorvmotti

Hello all,

In Carroll's book, on page 109, it is pointed out that for the derivation of the geodesic equation, 3.44, a "hidden" assumption is that we have used an affine parameter. A few lines below we see that "any other parametrization" could be used, called alpha, but in that case the general form of the geodesic equation will be 3.58 and not 3.44.

However, when plugging 3.59 into 3.58 and rearranging I can only recover the geodesic equation, 3.44, provided that alpha is itself a linear function of $\lambda$, and not "some general parameter alpha." So unlike what is said below 3.59, we cannot "always" find an affine parameter $\lambda$.

Also, related to this confusion is what we call "a null geodesic." To make it clear for myself I need to see an example of a null path which is NOT a geodesic. Can someone please introduce or describe such a null path?

2. Mar 14, 2014

### Bill_K

How about a null spiral? A particle travels endlessly in a circular path, with tangential velocity c.

x = R cos Ωt
y = R sin Ωt
z = 0

with RΩ = c

3. Mar 14, 2014

### stevendaryl

Staff Emeritus

I don't have Carroll's book, but unless I made a mistake, here's the general equation:

$g_{\mu \nu} A^\nu + \Gamma_{\mu \nu \lambda} U^\lambda U^\nu = g_{\mu \nu} U^\nu \dfrac{d}{ds} (\ln \mathcal{L})$

where
$s =$ the parameter,
$g_{\mu \nu} =$ the metric tensor,
$U^\mu = \dfrac{d}{ds} x^\mu(s)$
$A^\mu = \dfrac{d}{ds} U^\mu(s)$
$\mathcal{L} = \sqrt{g_{\mu \nu} U^\mu U^\nu}$
$\Gamma_{\mu \nu \lambda} =$ the connection coefficients, computed from derivatives of $g_{\mu \nu}$

If you choose the parameter $s$ so that $\mathcal{L}$ is constant, then the right-hand side is zero, and you get the usual form of the geodesic equation. To make $\mathcal{L}$ constant, you just choose $s$ to satisfy

$ds = \sqrt{g_{\mu \nu} dx^\mu dx^\nu}$

which is proper time (or any other linearly related parameter). This only works for non-null geodesics.
If we're talking about a null geodesic, then $\mathcal{L}$ is identically zero, so I'm not sure what the condition is on the parametrization.

4. Mar 14, 2014

### pervect

Staff Emeritus

I'm not quite seeing an equation # match to the online version of Carroll at http://arxiv.org/pdf/gr-qc/9712019v1.pdf or http://ned.ipac.caltech.edu/level5/March01/Carroll3/Carroll3.html

This online version seems much clearer than I remember it being; perhaps Carroll has revised it? Anyway, a quote to enable one to find the section in question, which might be helpful.

5. Mar 14, 2014

### stevendaryl

Staff Emeritus

My equation for a general parametrization is basically the same as Carroll's equation 3.59, except that the arbitrary function is explicitly written as $\dfrac{d}{ds} (\ln \mathcal{L})$. In that form, we can easily see that the right-hand side vanishes if you choose a linear function of proper time.

The function $\mathcal{L}$ is just $\dfrac{d \tau}{ds}$ where $\tau$ is proper time. So if $\tau$ is a linear function of $s$, then that's a constant, and so is $\ln (\dfrac{d \tau}{ds})$, and so its derivative is zero.

6. Mar 15, 2014

### victorvmotti

Hello Again,

Thanks for the feedback. What is still unclear for me is where we have made the implicit assumption of an affine parameter to derive the usual form of the geodesic equation. We start with parallel transport of the tangent vector to the path and demand that its directional covariant derivative vanishes. This gives the usual form of the geodesic equation. So the question is where the general form of the geodesic equation is derived from in the first place.

Also, Carroll says that for any parameter in the general form you can always find an affine parameter to satisfy the usual form of the geodesic equation. But how?

About null paths that are not geodesics: is it possible to illustrate that an endlessly circular path does not satisfy the geodesic equation?
But then what is the geodesic equation for a null path? Should we use the form based on the four-momentum that is derived and used for timelike paths?

7. Mar 15, 2014

### pervect

Staff Emeritus

I thought the section I quoted from the online version of Carroll was pretty clear. A geodesic can be regarded as a curve that parallel transports itself (and this appears to be the preferred definition for mathematicians). If you only require that a curve parallel transport itself in the same direction, allowing parallel transport to change the length of a vector, there is no requirement on how you parameterize the curve doing the transporting. When you require that parallel transport not change the length of a transported vector, there is an implied requirement as to how the transporting curve must be parameterized. This requirement of unchanging length of the tangent vector can be used for timelike, spacelike, or null geodesics. Parameterizing by proper time is a special case that applies only to timelike geodesics.

To go a bit beyond Carroll and provide a physical, intuitive reason for the affine parameterization of null geodesics (rather than a formal mathematical one): if you consider an actual plane electromagnetic wave following a null geodesic, the EM wave will have regularly spaced nulls where the electric and magnetic fields both vanish. I.e. for a plane EM wave the electric field is $E \sin(\omega t)$ and the magnetic field $B \sin(\omega t)$. At certain times E and B are both zero, creating a null. These nulls must be equally spaced if the geodesic is parameterized affinely.

8. Mar 15, 2014

### WannabeNewton

The geodesic equation under any parametrization reads $\xi^{\nu}\nabla_{\nu}\xi^{\mu} = \alpha \xi^{\mu}$ for some smooth scalar field $\alpha$ defined on the geodesic. So what the geodesic equation under any parametrization really says is that the tangent vector's direction is parallel transported along the geodesic but the length isn't.
This is easy to see as $\xi^{\nu}\nabla_{\nu}\frac{\xi^{\mu}}{(-\xi_{\gamma}\xi^{\gamma})^{1/2}} = \frac{\alpha \xi^{\mu}}{(-\xi_{\gamma}\xi^{\gamma})^{1/2}} + \frac{\xi^{\mu}\xi_{\gamma}\xi^{\nu}\nabla_{\nu}\xi^{\gamma}}{(-\xi_{\gamma}\xi^{\gamma})^{3/2}} = 0$. However it is also easy to show that, given the above, one can always reparametrize the geodesic so as to satisfy $\xi^{\nu}\nabla_{\nu}\xi^{\mu} = 0$. This is called an affine parametrization. See here: https://www.physicsforums.com/showpost.php?p=4381515&postcount=4

9. Mar 15, 2014

### stevendaryl

Staff Emeritus

There are two ways to go about deriving the geodesic equation. They are equivalent for the connection used in General Relativity. One approach is to maximize proper time:

$\tau = \int \mathcal{L} ds$

where $\mathcal{L} = \dfrac{d\tau}{ds} = \sqrt{g_{\mu \nu} U^\mu U^\nu}$ and where $U^\mu = \dfrac{dx^\mu}{ds}$

Using the calculus of variations, we maximize proper time for a path $x^\mu(s)$ satisfying:

$\dfrac{d}{ds} (\dfrac{\partial \mathcal{L}}{\partial U^\mu})= \dfrac{\partial}{\partial x^\mu} \mathcal{L}$

This doesn't assume anything about the parameter $s$ being affine, but the resulting equation is only the usual geodesic equation in the case where the parameter is affine.

The other approach to deriving the geodesic equation is to start with the vector equation

$\dfrac{D}{D s} U = \alpha U$

where $U$ is the velocity 4-vector, $\alpha$ is an arbitrary function of $s$, and $\dfrac{D}{Ds}$ is the operator defined by the following procedure (I can never remember what the name of it is--it's not a covariant derivative, but it's related to it):

$(\dfrac{D}{D s} A)^\mu = \dfrac{d A^\mu}{ds} + \Gamma^\mu_{\nu \lambda} A^\nu U^\lambda$

The two approaches give the same equation, except in the first approach, the function $\alpha$ is not arbitrary, but is equal to $\dfrac{d}{ds}(\ln \mathcal{L})$.
These are actually two different things: an extremal path, versus a path whose velocity vector is unrotated by parallel transport, but they turn out to be equal in GR.

10. Mar 15, 2014

### stevendaryl

Staff Emeritus

$\dfrac{d^2}{ds^2} x^\mu + \Gamma^\mu_{\nu \lambda} \dfrac{dx^\nu}{ds} \dfrac{dx^\lambda}{ds} = \alpha \dfrac{dx^\mu}{ds}$

Now, change parameters to $\tau$. Let $Q = \dfrac{d \tau}{d s}$. The equation becomes (after canceling factors of $Q$ from both sides):

$\dfrac{1}{Q} \dfrac{d Q}{d\tau} \dfrac{dx^\mu}{d \tau} + \dfrac{d^2}{d\tau^2} x^\mu + \Gamma^\mu_{\nu \lambda} \dfrac{dx^\nu}{d\tau} \dfrac{dx^\lambda}{d\tau} = \dfrac{\alpha}{Q} \dfrac{dx^\mu}{d\tau}$

So if you choose $Q$ to satisfy $\dfrac{d Q}{d \tau} = \alpha$ or, in terms of $s$,

$\dfrac{1}{Q} \dfrac{d Q}{d s} = \dfrac{d (\ln Q)}{ds} = \alpha$

then you get the usual form of the geodesic equation.

Sure, you just plug it into the geodesic equation, and show that it doesn't work.

The second approach to deriving a geodesic equation works exactly the same whether or not the path is null. So the same equation applies. The first approach (extremizing proper time) doesn't work for null geodesics, since every null path has the same proper time, zero.

11. Mar 15, 2014

### victorvmotti

Seems that Carroll calls it the directional covariant derivative.

12. Mar 15, 2014

### victorvmotti

Doesn't this imply that alpha should be zero, given how we defined Q above?

You seem to be so close to what Carroll has written in the book, that is, relating the general parameter to an affine parameter. Using your equation above I can recover the equation he suggests to derive the usual form of the geodesic from the general form, but a difference here is that he says alpha, used in your general form, should be a function of s.

Last edited: Mar 15, 2014

13. Mar 15, 2014

### victorvmotti

This was particularly helpful to make sense of parallel transport.
Because Carroll, when introducing parallel transport, demands that the norm of the vector, or in general "the components" of an arbitrary rank tensor, should not change, which means that its "directional covariant derivative" should vanish. When you say that fundamental to the notion of parallel transport is that the "velocity vector is unrotated", the general parametrization of the geodesic equation makes a lot more sense.

14. Mar 16, 2014

### victorvmotti

If we are in locally inertial coordinates then the connection coefficients vanish, and therefore both null and timelike geodesic paths should be described by coordinates that are exponential functions of the general parameter s. Am I correct?

If we are in general coordinates then on null paths the term involving connection coefficients in the general form of the geodesic equation vanishes, because the inner product of vectors on a null path is zero. So again the null geodesic path is described by coordinates that are exponential functions of the general parameter s. Am I correct?

15. Mar 16, 2014

### stevendaryl

Staff Emeritus

Oh, I didn't actually describe the procedure, I just gave the equation. The procedure is this, in a little more detail:

1. Start with a parametrized path $\mathcal{P}(s)$.
2. Let $A(s)$ be an arbitrary 4-vector that is defined along the path $\mathcal{P}(s)$
3. Let $\tilde{A}(s + \delta s)$ be the result of parallel-transporting $A(s + \delta s)$ back along the path $\mathcal{P}$ from the point $\mathcal{P}(s + \delta s)$ to the point $\mathcal{P}(s)$
4. Let $\delta A = \tilde{A}(s+\delta s) - A(s)$
5. Then $\dfrac{D}{Ds} A = \lim_{\delta s \rightarrow 0} \dfrac{\delta A}{\delta s}$

16. Mar 16, 2014

### stevendaryl

Staff Emeritus

The connection terms don't vanish for a null path.
Let's look at a specific simple example: cylindrical coordinates $(t, z, \rho, \phi)$.

The metric components are:

$g_{tt} = 1$
$g_{zz} = g_{\rho \rho} = -1$
$g_{\phi \phi} = - \rho^2$

The non-zero connection coefficients are

$\Gamma^\rho_{\phi \phi} = -\rho$
$\Gamma^\phi_{\rho \phi} = \Gamma^\phi_{\phi \rho} = \dfrac{1}{\rho}$

So the geodesic equation is (in components):

$\dfrac{d^2 \phi}{ds^2} + \dfrac{2}{\rho}\dfrac{d \phi}{ds} \dfrac{d \rho}{ds} = \alpha \dfrac{d \phi}{ds}$

$\dfrac{d^2 \rho}{ds^2} - \rho (\dfrac{d \phi}{ds})^2 = \alpha \dfrac{d \rho}{ds}$

$\dfrac{d^2 z}{ds^2} = \alpha \dfrac{d z}{ds}$

$\dfrac{d^2 t}{ds^2} = \alpha \dfrac{d t}{ds}$

Being a null geodesic means that

$(\dfrac{dt}{ds})^2 - (\dfrac{dz}{ds})^2 - (\dfrac{d\rho}{ds})^2 - \rho^2 (\dfrac{d\phi}{ds})^2 = 0$

17. Mar 16, 2014

### victorvmotti

Actually I meant this, not sure if I am correct:

$g_{\nu \lambda}\dfrac{d^2}{ds^2} x^\mu + \Gamma^\mu_{\nu \lambda}g_{\nu \lambda} \dfrac{dx^\nu}{ds} \dfrac{dx^\lambda}{ds} = \alpha g_{\nu \lambda}\dfrac{dx^\mu}{ds}$

Null path, then $g_{\nu \lambda} \dfrac{dx^\nu}{ds} \dfrac{dx^\lambda}{ds} = 0$

$g_{\nu \lambda}\dfrac{d^2}{ds^2} x^\mu = \alpha g_{\nu \lambda}\dfrac{dx^\mu}{ds}$

$\dfrac{d^2}{ds^2} x^\mu = \alpha\dfrac{dx^\mu}{ds}$

$x^\mu = \alpha\exp{(s\alpha)}$

18. Mar 16, 2014

### stevendaryl

Staff Emeritus

That is not the correct equation. The geodesic equation is:

$\dfrac{d^2}{ds^2} x^\mu + \Gamma^\mu_{\nu \lambda} \dfrac{dx^\nu}{ds} \dfrac{dx^\lambda}{ds} = \alpha \dfrac{d x^\mu}{ds}$

Typically in equations involving indices, there should never be repetitions of indices in a term except in the case of one raised index and the corresponding lowered index. In your equation, the indices $\nu$ and $\lambda$ on $\Gamma^\mu_{\nu \lambda} \dfrac{dx^\nu}{ds} \dfrac{dx^\lambda}{ds}$ are dummy indices. They should be replaced by $\nu'$ and $\lambda'$ to avoid being confused with the indices on $g_{\nu \lambda}$.
If you do that, you'll see that $g_{\nu \lambda}$ has no effect, and you can just divide through by it.

Last edited: Mar 16, 2014

19. Mar 16, 2014

### victorvmotti

Thanks. You know, I multiplied both sides of the general form of the geodesic equation by the metric to cancel the connection term. I learned here that it is not allowed if we do not change the indices of the metric tensor.

But still the general form of the geodesic equation, I mean when we are not using an affine parameter, makes me confused. Suppose we are in Riemann normal coordinates, or locally inertial coordinates. For both timelike and null geodesic paths we have

$\dfrac{d^2}{ds^2} x^\mu + \Gamma^\mu_{\nu \lambda} \dfrac{dx^\nu}{ds} \dfrac{dx^\lambda}{ds} = \alpha \dfrac{dx^\mu}{ds}$

Assuming Riemann normal coordinates, $\Gamma^\mu_{\nu \lambda}$ vanishes and then

$\dfrac{d^2}{ds^2} x^\mu = \alpha \dfrac{dx^\mu}{ds}$

$x^\mu = \alpha\exp{(s\alpha)}$

Right?

20. Mar 16, 2014

### stevendaryl

Staff Emeritus

Sort of. I think you mean:

$x^\mu = a^\mu \exp{(s\alpha)} + b^\mu$

where $a^\mu, b^\mu$ are constant vectors. That's the general solution to the equation $\dfrac{d^2}{ds^2} x^\mu = \alpha \dfrac{dx^\mu}{ds}$

But locally inertial coordinates are only good locally. So that solution is only valid for small values of $s$.
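Coming back to Bill_K's null spiral in post #2: one can check numerically, in flat Minkowski spacetime with Cartesian coordinates (so the Christoffel symbols vanish), that the curve is null everywhere yet its acceleration is never proportional to its tangent, so it satisfies no geodesic equation for any choice of $\alpha$. A sketch of my own, in units with $c = 1$:

```python
import numpy as np

# The null spiral x^mu(t) = (t, R cos(Omega t), R sin(Omega t), 0)
# with R * Omega = 1 (= c).  In flat spacetime the general geodesic
# equation reduces to d^2 x/ds^2 = alpha dx/ds, i.e. the acceleration
# must be parallel to the tangent -- which fails here.
R, Omega = 2.0, 0.5                       # R * Omega = 1
eta = np.diag([1.0, -1.0, -1.0, -1.0])    # Minkowski metric, signature (+,-,-,-)

for t in np.linspace(0.0, 10.0, 7):
    U = np.array([1.0, -R*Omega*np.sin(Omega*t), R*Omega*np.cos(Omega*t), 0.0])
    A = np.array([0.0, -R*Omega**2*np.cos(Omega*t), -R*Omega**2*np.sin(Omega*t), 0.0])
    # null: g(U, U) = 1 - (R*Omega)^2 = 0
    assert abs(U @ eta @ U) < 1e-12
    # A = alpha * U would force alpha = A[0] / U[0] = 0, but A is nonzero;
    # equivalently, U and A are linearly independent:
    assert np.linalg.matrix_rank(np.vstack([U, A])) == 2
```

So the curve is a null path but not a null geodesic, which is exactly the kind of example asked for in post #1.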
https://planetmath.org/mutualinformation
# mutual information

Let $(\Omega,\mathcal{F},\mu)$ be a discrete probability space, and let $X$ and $Y$ be discrete random variables on $\Omega$. The mutual information $I[X;Y]$, read as "the mutual information of $X$ and $Y$," is defined as

$$I[X;Y] = \sum_{x\in\Omega}\sum_{y\in\Omega}\mu(X=x,Y=y)\log\frac{\mu(X=x,Y=y)}{\mu(X=x)\mu(Y=y)} = D(\mu(x,y)\,||\,\mu(x)\mu(y)),$$

where $D$ denotes the relative entropy. Mutual information, or just information, is measured in bits if the logarithm is to the base 2, and in "nats" when using the natural logarithm.

## Discussion

The most obvious characteristic of mutual information is that it depends on both $X$ and $Y$. There is no information in a vacuum—information is always about something. In this case, $I[X;Y]$ is the information in $X$ about $Y$. As its name suggests, mutual information is symmetric, $I[X;Y]=I[Y;X]$, so any information $X$ carries about $Y$, $Y$ also carries about $X$.

The definition in terms of relative entropy gives a useful interpretation of $I[X;Y]$ as a kind of "distance" between the joint distribution $\mu(x,y)$ and the product distribution $\mu(x)\mu(y)$. Recall, however, that relative entropy is not a true distance, so this is just a conceptual tool. However, it does capture another intuitive notion of information. Remember that for $X,Y$ independent, $\mu(x,y)=\mu(x)\mu(y)$. Thus the relative entropy "distance" goes to zero, and we have $I[X;Y]=0$ as one would expect for independent random variables.

A number of useful expressions, most apparent from the definition, relate mutual information to the entropy $H$:

$$0\leq I[X;Y]\leq H[X] \qquad (1)$$
$$I[X;Y]=H[X]-H[X|Y] \qquad (2)$$
$$I[X;Y]=H[X]+H[Y]-H[X,Y] \qquad (3)$$
$$I[X;X]=H[X] \qquad (4)$$

Recall that the entropy $H[X]$ quantifies our uncertainty about $X$.
The last line justifies the description of entropy as "self-information."

## Historical Notes

Mutual information, or simply information, was introduced by Shannon in his landmark 1948 paper "A Mathematical Theory of Communication."

Title: mutual information
Canonical name: MutualInformation
Date of creation: 2013-03-22 12:37:35
Last modified on: 2013-03-22 12:37:35
Owner: drummond (72)
Last modified by: drummond (72)
Numerical id: 4
Author: drummond (72)
Entry type: Definition
Classification: msc 94A17
Synonym: information
Related topics: RelativeEntropy, Entropy, ShannonsTheoremEntropy, DynamicStream
Defines: information, mutual information
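A quick numerical sketch of the definition and of identity (3), using base-2 logarithms so the result is in bits (the toy joint distribution is an arbitrary choice of mine):

```python
import math

# Toy joint distribution of two binary random variables X and Y.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def H(dist):
    # Shannon entropy in bits
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# marginals mu(X = x) and mu(Y = y)
px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}

# I[X;Y] by the definition: relative entropy D(joint || product of marginals)
I = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in joint.items() if p > 0)

# identity (3): I[X;Y] = H[X] + H[Y] - H[X,Y]
assert abs(I - (H(px) + H(py) - H(joint))) < 1e-12
assert I >= 0                       # lower bound in (1)
assert I <= H(px) + 1e-12           # upper bound in (1)
```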
http://www.ipam.ucla.edu/abstract/?tid=11973&pcode=CCG2014
## An application of the upper semicontinuity theorem

#### Kaloyan Slavov (American University in Bulgaria, Mathematics)

Given an integral curve C of degree d in the projective space P^n over an algebraically closed field (for example, a plane conic), we discuss how one can give an upper bound on the dimension of the space of homogeneous polynomials F(x_0,...,x_n) of fixed degree which are singular along C. This is achieved through a degeneration technique and is based on the upper semicontinuity theorem. This discussion will serve as motivation for working with general schemes and not just with classical varieties. Only minimal prior knowledge of algebraic geometry will be assumed.

If time permits, we will discuss a second application of the upper semicontinuity theorem, which allows one to compare the dimension of an incidence correspondence in characteristic 0 and in positive characteristic.

Back to Algebraic Techniques for Combinatorial and Computational Geometry
https://comp.text.tex.narkive.com/nBFPtmTQ/package-framed-shaded-environment-can-not-keep-the-color-when-page-break-occurs
Discussion: package framed, shaded environment cannot keep the color when a page break occurs

Jinsong Zhao 2017-09-22 05:49:54 UTC

Hi there,

The minimal example is below. When a page break occurs in an alltt environment, the color is different on the two pages. How to fix it?

Best,
Jinsong

\documentclass{article}
\usepackage{color}
\definecolor{hlstr}{rgb}{0.192,0.494,0.8}
\usepackage{framed}
\usepackage{alltt}
\begin{document}
In the following example, color is different in the two pages.
If you typeset all the contents in one page, the color is same.
\vskip 400pt
\color{hlstr}
\begin{alltt}
textConnection
con,temp,times,content
31,a,I,B
54,b,I,A
38,c,I,C
53,a,II,A
49,b,II,C
42,c,II,B
57,a,III,C
62,b,III,B
64,c,III,A
\end{alltt}
\end{document}

jfbu 2017-09-22 07:40:46 UTC

Post by Jinsong Zhao
Hi there,
The minimal example is below. When a page break occurs in an alltt environment, the color is different on the two pages. How to fix it?

Hi Jinsong,

try

\begingroup
\color{hlstr}
\begin{alltt}
textConnection
con,temp,times,content
31,a,I,B
54,b,I,A
38,c,I,C
53,a,II,A
49,b,II,C
42,c,II,B
57,a,III,C
62,b,III,B
64,c,III,A
\end{alltt}
\endgroup

Jean-François

Ulrike Fischer 2017-09-22 08:01:11 UTC

Post by Jinsong Zhao
Hi there,
The minimal example is below. When a page break occurs in an alltt environment, the color is different on the two pages. How to fix it?

This is rather difficult (and not related to alltt).
If the complete text of your box should have this color, you can use the code of Jean-François (jfbu). Or e.g.

\usepackage[many]{tcolorbox}

and then

\begin{tcolorbox}[breakable,coltext=hlstr]

But if there should also be black text in the box, then you have a problem, as color commands often don't survive when a box is broken; see e.g. https://tex.stackexchange.com/questions/150780/color-bleeding-when-using-vsplit-for-a-box-with-colored-text

An alternative here is to use lualatex, where you can color the font; this would work. https://tex.stackexchange.com/questions/220118/maintaining-text-colour-change-in-a-breakable-tcolorbox

--
Ulrike Fischer
http://www.troubleshooting-tex.de/
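Putting Ulrike Fischer's tcolorbox suggestion into Jinsong's minimal example gives roughly the following sketch (not compiled here; it assumes the `breakable` library loaded via the `[many]` option and the `coltext` key she quotes, and reuses the original `hlstr` color):

```latex
\documentclass{article}
\usepackage{color}
\definecolor{hlstr}{rgb}{0.192,0.494,0.8}
% [many] loads, among others, the breakable library
\usepackage[many]{tcolorbox}
\usepackage{alltt}
\begin{document}
\vskip 400pt
% breakable lets the box split across the page break;
% coltext colors all text inside the box
\begin{tcolorbox}[breakable,coltext=hlstr]
\begin{alltt}
textConnection
con,temp,times,content
31,a,I,B
54,b,I,A
38,c,I,C
53,a,II,A
49,b,II,C
42,c,II,B
57,a,III,C
62,b,III,B
64,c,III,A
\end{alltt}
\end{tcolorbox}
\end{document}
```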
http://mathgardenblog.blogspot.com/2012/05/similar-triangles.html
### Similar triangles

Today, we are going to learn about similar triangles. We will use similar triangles to give a proof of the Pythagorean Theorem.

As suggested by the name, two similar triangles simply look similar, as in the figure below. Two similar triangles have the following two important properties:

Three pairs of equal angles $$\angle A = \angle A', ~~~ \angle B = \angle B', ~~~ \angle C = \angle C'$$

Three pairs of sides with equal ratios $$\frac{AB}{A'B'} = \frac{BC}{B'C'} = \frac{CA}{C'A'}$$

So how do we prove that two given triangles are similar? There are three common ways.

Angle-Angle Case: If two triangles have two pairs of equal angles then they are two similar triangles.

In the figure below, if it can be shown that $$\angle A = \angle A' ~~~\mathrm{ and }~~~ \angle B = \angle B'$$ then we can conclude that the two triangles $ABC$ and $A'B'C'$ are similar.

Side-Side-Side Case: If two triangles have three pairs of sides of equal ratios then they are two similar triangles.

In the figure below, if it can be shown that $$\frac{AB}{A'B'} = \frac{BC}{B'C'} = \frac{CA}{C'A'}$$ then we can conclude that the two triangles $ABC$ and $A'B'C'$ are similar.

Side-Angle-Side Case: If two triangles have two pairs of sides of equal ratios and the two angles between these two side pairs are equal then they are two similar triangles.

In the figure below, if it can be shown that $$\frac{AB}{A' B'} = \frac{BC}{B' C'} ~~~\mathrm{ and }~~~ \angle B = \angle B'$$ then we can conclude that the two triangles $ABC$ and $A'B'C'$ are similar.

If the two given triangles are right-angled triangles then it is even simpler to prove that they are similar. We have the following cases.

Acute Angle Case: If two right-angled triangles have one pair of equal acute angles then they are similar triangles.

In the figure below, if it can be shown that $$\angle A = \angle A'$$ then we can conclude that the two right-angled triangles $ABC$ and $A'B'C'$ are similar.
Side-Side Case: If two right-angled triangles have two pairs of sides of equal ratios then they are two similar triangles.

In the above figure, if it can be shown that either $$\frac{AB}{A' B'} = \frac{BC}{B' C'}, ~~~\mathrm{ or }~~~ \frac{BC}{B' C'} = \frac{CA}{C' A'}, ~~~\mathrm{ or }~~~ \frac{CA}{C' A'} = \frac{AB}{A' B'}$$ then we can conclude that the two right-angled triangles $ABC$ and $A'B'C'$ are similar.

Now we are going to use similar triangles to prove a basic geometry theorem -- the Pythagorean theorem.

Pythagorean theorem. Given a triangle $ABC$ with a right angle at $A$. Prove that $$BC^2 = AB^2 + AC^2.$$

Proof: Draw the altitude $AH$ onto the side $BC$.

Consider the two right-angled triangles $ABC$ and $HBA$. These two right-angled triangles have a pair of equal angles at the vertex $B$; thus, they are similar triangles. It follows that $$\frac{AB}{HB} = \frac{BC}{BA},$$ and hence, $$AB^2 = HB \times BC.$$

Similarly, consider the two right-angled triangles $ABC$ and $HAC$. These two right-angled triangles have a pair of equal angles at the vertex $C$; thus, they are similar triangles. It follows that $$\frac{AC}{HC} = \frac{BC}{AC},$$ and hence, $$AC^2 = HC \times BC.$$

Therefore, we have $$AB^2 + AC^2 = HB \times BC + HC \times BC = BC^2. \blacksquare$$

You can find a proof of Morley's theorem that uses similar triangles here: http://mathgardenblog.blogspot.com/2012/05/morley-theorem.html.

Homework: Prove the angle bisector theorem below.

Given a triangle $ABC$. Draw the bisector $AD$ of the angle $A$. Prove that $$\frac{AB}{AC} = \frac{DB}{DC}.$$
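The two intermediate identities in the proof, $AB^2 = HB \times BC$ and $AC^2 = HC \times BC$, can be checked numerically on a concrete right triangle. (The 3-4-5 triangle and the coordinates below are an editor's choice for illustration, not from the post.)

```python
import math

# Right angle at A: A = (0, 0), B = (3, 0), C = (0, 4).
A, B, C = (0.0, 0.0), (3.0, 0.0), (0.0, 4.0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

AB, AC, BC = dist(A, B), dist(A, C), dist(B, C)

# Foot H of the altitude from A, found by projecting A onto the line BC.
t = ((A[0] - B[0]) * (C[0] - B[0]) + (A[1] - B[1]) * (C[1] - B[1])) / BC**2
H = (B[0] + t * (C[0] - B[0]), B[1] + t * (C[1] - B[1]))
HB, HC = dist(H, B), dist(H, C)

print(AB**2, HB * BC)            # both equal 9 (up to rounding)
print(AC**2, HC * BC)            # both equal 16
print(AB**2 + AC**2, BC**2)      # Pythagoras: both equal 25
```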
https://www.physicsforums.com/threads/standard-gibbs-of-a-reaction.552150/
# Homework Help: Standard Gibbs of a reaction

1. Nov 19, 2011

### Puchinita5

1. The problem statement, all variables and given/known data

For the reaction N2(g) + 3H2(g) <---> 2NH3(g) I am supposed to determine the equilibrium constant at 298K and 1 bar. We have been given a table that states that for NH3(g): $\Delta_{r}G^{\Theta} = -16.45\ \text{kJ/mol}$

I know that $\Delta_{r}G^{\Theta} = \Sigma v G^{\Theta}_{product} - \Sigma v G^{\Theta}_{reactant}$ where v is the stoichiometric coefficient.

But what is confusing me is that you could also write the equation as 1/2 N2(g) + 3/2 H2(g) <----> NH3(g). So in the first case, the answer would be (2*-16.45 kJ/mol) - ((1*0)+(3*0)) = -32.90 kJ/mol. But in the second case, the answer would be (-16.45 kJ/mol) - (1/2*0 + 3/2*0) = -16.45 kJ/mol.

Since both equations should be valid, what answer is correct? I don't understand which one I should choose because they both look like they should be right.

2. Nov 20, 2011

### Ygggdrasil

In the two cases, the expression for the equilibrium constant will be different. In the first case K = [NH3]^2/([N2][H2]^3), whereas in the second case K = [NH3]/([N2]^(1/2)[H2]^(3/2)). This is why you always need to include a balanced reaction along with any equilibrium constant you provide.

3. Nov 20, 2011

### jtruth914

Hey, I had the same concern. They are both right; it depends on the stoichiometry. The professor would have to specify which one he is looking for like he did last time.

4. Nov 20, 2011

### Puchinita5

Thanks guys! Did he specify last time? I must not have understood him. This was really confusing me, good to know that both are right! I feel better now. :)
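Ygggdrasil's point — both answers are right but belong to different equilibrium-constant expressions — can be made concrete with $\Delta G^{\Theta} = -RT \ln K$: the K for the doubled reaction is exactly the square of the K for the halved one. (Numerical sketch by the editor, not part of the thread.)

```python
import math

R = 8.314        # J/(mol K)
T = 298.0        # K

dG_half = -16.45e3       # J/mol, for 1/2 N2 + 3/2 H2 <---> NH3
dG_full = 2 * dG_half    # J/mol, for N2 + 3 H2 <---> 2 NH3

# K = exp(-dG / (R T)) for each way of writing the reaction.
K_half = math.exp(-dG_half / (R * T))
K_full = math.exp(-dG_full / (R * T))

print(K_half)               # ≈ 765
print(K_full)               # ≈ 5.9e5
print(K_full / K_half**2)   # 1.0: the two constants are mutually consistent
```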
http://mathhelpforum.com/calculus/227277-help-trying-find-horizontal-asymptotes.html
# Math Help - Help trying to find horizontal Asymptotes

1. ## Help trying to find horizontal Asymptotes.

$y=\frac{x}{x^2+5x+4x}$

I am stuck on finding the horizontal asymptotes. I know for sure that there is one at y=0. I have graphed this on my calculator and it seems that there is another between 0 and 1. Am I on the right track? I did take the inverse of this function and solved it and it came up with 3 answers.

2. ## Re: Help trying to find horizontal Asymptotes.

Originally Posted by johnsy123

Is that x after the 4 in the denominator correct? Is this actually $y=\dfrac{x}{x^2+9x}$ or do you mean $y=\dfrac{x}{x^2+5x+4}$?

3. ## Re: Help trying to find horizontal Asymptotes.

Originally Posted by romsek

Hey, yes the one on the right is correct......x/(x^2+5x+4). Apologies for the poorly inputted text.

4. ## Re: Help trying to find horizontal Asymptotes.

Originally Posted by johnsy123

You find horizontal asymptotes by letting $x \to \pm \infty$. In this case 0 is the only horizontal asymptote. This is seen by noting that as $x \to \pm \infty$ the $x^2$ term in the denominator overwhelms the others and you are left with $\dfrac{x}{x^2}=\dfrac{1}{x}$.

Clearly $\dfrac{1}{x} \to 0$ as $x \to \pm \infty$.

Do you also need to find the vertical asymptotes?
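romsek's argument — the $x^2$ term dominates, so $y \to 0$ as $x \to \pm\infty$ — is easy to check numerically (editor's quick sketch, not from the thread):

```python
def f(x):
    return x / (x**2 + 5*x + 4)

# As |x| grows, f(x) behaves like 1/x and approaches 0 from either side:
# from above for large positive x, from below for large negative x.
for x in (1e2, 1e4, 1e6, -1e2, -1e4, -1e6):
    print(x, f(x))
```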
http://mathhelpforum.com/calculus/63337-calculus-work-integration-problems.html
## Calculus Work Integration Problems

Hey, I need some help with a few word problems. Any help on them would be great, thanks.

1) A tank in the shape of an inverted right circular cone has height meters and radius meters. It is filled with meters of hot chocolate. Find the work required to empty the tank by pumping the hot chocolate over the top of the tank. Note: the density of hot chocolate is

2) A trough is 8 meters long, 2.5 meters wide, and 3 meters deep. The vertical cross-section of the trough parallel to an end is shaped like an isosceles triangle (with height 3 meters, and base, on top, of length 2.5 meters). The trough is full of water (density ). Find the amount of work in joules required to empty the trough by pumping the water over the top. (Note: Use as the acceleration due to gravity.)

3) A circular swimming pool has a diameter of 10 m, the sides are 4 m high, and the depth of the water is 2.5 m. How much work (in Joules) is required to pump all of the water over the side? (The acceleration due to gravity is 9.8 and the density of water is 1000 .)

Thanks!
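Problem 3 has all of its numbers intact, so it can serve as a worked check. With y measured up from the pool floor, a slab at height y has volume π·5²·dy and must be lifted 4 − y metres over the rim. (This setup and the code are an editor's sketch, not part of the post.)

```python
import math

rho, g = 1000.0, 9.8                 # kg/m^3, m/s^2
radius, depth, rim = 5.0, 2.5, 4.0   # m

# Closed form: W = rho * g * pi * r^2 * integral_0^depth (rim - y) dy
W_exact = rho * g * math.pi * radius**2 * (rim * depth - depth**2 / 2)

# Midpoint Riemann-sum check of the same integral.
n = 100_000
dy = depth / n
W_num = sum(rho * g * math.pi * radius**2 * (rim - (i + 0.5) * dy) * dy
            for i in range(n))

print(W_exact)   # about 5.29e6 J
print(W_num)     # agrees with the closed form
```

The same slab-by-slab setup handles problems 1 and 2 once the missing dimensions and densities are restored from the original post.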
http://cs.stackexchange.com/questions/7393/k-clique-in-connected-graphs
# K-Clique in Connected Graphs

A question about the clique problem (specifically k-clique). Is there any algorithm that takes advantage of the properties of connected graphs to find cliques of a given size k, if such cliques exist?

- Is a graph without an island a connected graph? – Merbs Dec 14 '12 at 1:04
- @Merbs Yes, I edited the question. – Paul Manta Dec 14 '12 at 6:22
- Does this Wikipedia subsection on cliques of fixed size answer your question? – Merbs Dec 14 '12 at 6:33
- @Merbs Not really, but I suppose you mean to say that there's nothing about connected graphs that makes this problem easier? – Paul Manta Dec 14 '12 at 6:48

As pointed out in the previous answer, you cannot do very well if you simply assume that your input is a connected graph. However, you can do better if you make certain assumptions on your graph's structure. That is, one can ask: given a class of graphs $\mathcal G$, will the clique problem be solvable in polynomial time, assuming that all inputs are contained in $\mathcal G$?

The answer turns out to be positive for a nice class (or should I say classes) of graphs: graphs whose treewidth is bounded by a constant. Without getting into too much detail (see http://en.wikipedia.org/wiki/Tree_decomposition for more details), the tree-width of a graph is a measure of how much it "resembles" a tree. Trees have a treewidth of 1, whereas cliques have a treewidth of $n-1$ (where $n$ is the number of vertices). The clique problem is $\mathcal O(t^t n)$ on graphs whose treewidth is at most $t$.

Treewidth is believed to be the "right" measure for graphs problems (although more complex measures do exist): Courcelle's Theorem (see http://en.wikipedia.org/wiki/Courcelle's_theorem) states that any graph property that can be succinctly expressed via some logical parameters can be solved in polynomial time on graphs with treewidth bounded by a constant. This includes vertex cover, clique, independent set, and many others.
This means that many NP-hard graph problems are fixed parameter tractable, with treewidth being the parameter. Hope this helps. - In bounded tree-width graphs is not $n^t$, it's $O(t^t n)$. –  user742 Dec 14 '12 at 20:46 @SaeedAmiri, yes, you're of course correct. I'll correct it. However, it should be mentioned that deciding whether a graph has treewidth of $t$ is $\mathcal O(n^t)$. –  Yair Zick Dec 15 '12 at 9:48 This seems like a fair/good question to me, but I think the answer must be negative. Here's my argument. Suppose you had an algorithm $A$ for solving CLIQUE on connected graphs (somehow taking advantage of the connectivity property). Then I can construct an algorithm $B$ for solving CLIQUE on general graphs that is essentially no slower than $A$. I'll just find the connected components of the graph in linear time, then call $A$ on each connected component. The running time will be linear in the input plus the time of all the calls I make to $A$. In particular, if my input happens to be a connected graph already, then my general algorithm $B$ runs in time $O(|V| + |E| + Time(A))$. This argument tells us two things. First, the best possible algorithm $A$ for your problem gets no more than an additive linear speedup over the best possible algorithm $B$ for CLIQUE applied to your problem, so we've bounded the "advantage" of knowing the input is connected to at most additive linear. Second, your problem is NP-hard because we can reduce CLIQUE in polynomial time to a polynomial number of calls to your problem. This unfortunately means that additive linear speedup is probably not much of an advantage in the scheme of things. Edit - This only referred to CLIQUE and not $k$-CLIQUE (where we fix $k$ and only let the input vary with the size of the graph). The answer depends slightly on $k$ because for e.g. $k=2$ the connected-graph algorithm runs in constant time (the answer is yes). 
However, the answer is largely the same -- for moderately large values of $k$, your running time of algorithm $A$ will be $O(g(k)(|V|+|E|))$ for some very large constant $g(k)$ (unless P\poly=NP); and then it doesn't matter much if algorithm $B$ loses another $|V|+|E|$ for scanning the input first. - if there is an algorithm with running time $O(f(k)n^{O(1)})$, it doesn't solves general clique problem, but it's good for fixed k, but $k$-clique belongs to $W[1]$:en.wikipedia.org/wiki/Parameterized_complexity#W.5B1.5D. –  user742 Dec 14 '12 at 10:17 @Saeed Good point, my answer is only about clique and not $k$-clique. I think the general answer should be the same (a linear pass to identify connected components does not greatly affect the time complexity), but I'll edit my answer. –  usul Dec 14 '12 at 16:22 Good point. Do you know, are there any (nontrivial) types of graphs for which the clique problem, where k is an input variable, can be solved in polynomial time? –  Paul Manta Dec 14 '12 at 17:35 I don't know, but I'm not that well-versed in graph algorithms. I think that's a great question -- I'm very curious about the answer. Perhaps this is worth posting a separate question for? Intuitively, such a graph would have to rule out the things that make clique hard, which seems tough to do , so my gut guess would be there aren't any, but I don't know! –  usul Dec 14 '12 at 17:58
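For fixed k, the brute-force approach alluded to in the comments (the Wikipedia subsection on cliques of fixed size) is straightforward: try every k-subset of vertices, giving $O(n^k)$ subsets with $O(k^2)$ edge checks each. A small editor's sketch on a hypothetical connected graph:

```python
from itertools import combinations

def has_k_clique(vertices, edges, k):
    """Return some k-clique as a tuple of vertices, or None if none exists."""
    edge_set = {frozenset(e) for e in edges}
    for subset in combinations(vertices, k):
        # A subset is a clique iff every pair inside it is an edge.
        if all(frozenset(pair) in edge_set for pair in combinations(subset, 2)):
            return subset
    return None

# Hypothetical connected graph: a triangle {0,1,2} plus a pendant path 2-3-4.
V = [0, 1, 2, 3, 4]
E = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]
print(has_k_clique(V, E, 3))   # (0, 1, 2)
print(has_k_clique(V, E, 4))   # None
```

Connectivity plays no role in the search itself, which matches the argument above: at best it saves the linear-time pass that splits a general graph into components.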
http://www.lsv.fr/Publis/publis.php?filename=dahu&onlykey=CJLS-lics17
# Selected publications at LSV

Abstract: We introduce perfect half space games, in which the goal of Player 2 is to make the sums of encountered multi-dimensional weights diverge in a direction which is consistent with a chosen sequence of perfect half spaces (chosen dynamically by Player 2). We establish that the bounding games of Jurdzinski et al. (ICALP 2015) can be reduced to perfect half space games, which in turn can be translated to the lexicographic energy games of Colcombet and Niwinski, and are positionally determined in a strong sense (Player 2 can play without knowing the current perfect half space). We finally show how perfect half space games and bounding games can be employed to solve multi-dimensional energy parity games in pseudo-polynomial time when both the numbers of energy dimensions and of priorities are fixed, regardless of whether the initial credit is given as part of the input or existentially quantified. This also yields an optimal 2EXP complexity with given initial credit, where the best known upper bound was non-elementary.

@inproceedings{CJLS-lics17,
  address   = {Reykjavik, Iceland},
  author    = {Colcombet, {\relax Th}omas and Jurdzi{\'n}ski, Marcin and Lazi{\'c}, Ranko and Schmitz, Sylvain},
  booktitle = {{P}roceedings of the 32nd {A}nnual {ACM\slash IEEE} {S}ymposium on {L}ogic {I}n {C}omputer {S}cience ({LICS}'17)},
  doi       = {10.1109/LICS.2017.8005105},
  editor    = {Ouaknine, Jo{\"e}l},
  month     = jun,
  pages     = {1--11},
  publisher = {{IEEE} Press},
  title     = {Perfect Half Space Games},
  url       = {http://arxiv.org/abs/1704.05626},
  year      = {2017},
}
https://carlocomin.wordpress.com/2015/03/
# An Improved Upper Bound on Time Complexity of Value Problem and Optimal Strategy Synthesis in MPGs. We prove the existence of a pseudo-polynomial $O(|V|^2 |E|\, W)$ time algorithm for the Value Problem and Optimal Strategy Synthesis in Mean Payoff Games. This improves by a factor $log(|V|\, W)$ the best previously known pseudo-polynomial upper bound due to Brim et al. (2011). The improvement hinges on a suitable characterization of values and optimal strategies in terms of reweightings.
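A fact underlying value problems in mean payoff games: once both players fix positional (memoryless) strategies, the play from any start vertex is eventually periodic, and its value is the mean weight of the cycle it reaches. The sketch below illustrates only this fact on a hypothetical strategy-fixed graph (one chosen successor per vertex); it is not the paper's algorithm.

```python
def mean_payoff(succ, weight, start):
    """succ[v] = the successor chosen by the fixed strategies;
    weight[(u, v)] = weight of edge (u, v).
    Follow the play until a vertex repeats, then average the cycle weights."""
    seen = {}            # vertex -> position in the play
    play, v = [], start
    while v not in seen:
        seen[v] = len(play)
        play.append(v)
        v = succ[v]
    cycle = play[seen[v]:] + [v]     # the repeating part, loop closed
    total = sum(weight[(cycle[i], cycle[i + 1])] for i in range(len(cycle) - 1))
    return total / (len(cycle) - 1)

succ = {0: 1, 1: 2, 2: 1}                     # play: 0 -> 1 -> 2 -> 1 -> ...
weight = {(0, 1): 5, (1, 2): -1, (2, 1): 3}
print(mean_payoff(succ, weight, 0))           # cycle 1->2->1: (-1 + 3) / 2 = 1.0
```

The algorithmic difficulty addressed by the pseudo-polynomial bound above is, of course, finding optimal strategies, not evaluating fixed ones.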
http://mathhelpforum.com/differential-geometry/112276-bounded-variation-print.html
# bounded variation Show that if $f$ is of bounded variation on $[a, b]$ then $\int^b_a |f'| dm = T_a^b(f)$ if and only if $f$ is absolutely continuous. (where $T$ denotes the total variation)
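For reference, the two notions involved, with their standard definitions (an editor's addition, not part of the original post):

```latex
% Total variation of f over [a,b]:
T_a^b(f) \;=\; \sup_{a = x_0 < x_1 < \cdots < x_n = b}
    \; \sum_{i=1}^{n} \bigl| f(x_i) - f(x_{i-1}) \bigr|.

% f is absolutely continuous on [a,b] iff for every \epsilon > 0 there is
% \delta > 0 such that, for every finite collection of pairwise disjoint
% subintervals (a_i, b_i) of [a,b],
\sum_i (b_i - a_i) < \delta
    \quad\Longrightarrow\quad
\sum_i \bigl| f(b_i) - f(a_i) \bigr| < \epsilon.
```

A useful intermediate fact: for any $f$ of bounded variation, $f'$ exists a.e. and $\int_a^b |f'|\,dm \le T_a^b(f)$, so the problem is really about when equality holds.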
http://www.ck12.org/algebra/Algebra-Expressions-with-Exponents/lesson/Algebra-Expressions-with-Exponents-ALG-I/
# Algebra Expressions with Exponents

## Calculate values of numbers with exponents

### Exponents

Many formulas and equations in mathematics contain exponents. Exponents are used as a short-hand notation for repeated multiplication. For example:

\begin{align} 2 \cdot 2 & = 2^2\\ 2 \cdot 2 \cdot 2 & = 2^3\\ \end{align}

The exponent stands for how many times the number is used as a factor (multiplied). When dealing with integers, it's usually easiest to simplify the expression. For example, as shown here, we commonly just write 4 instead of \begin{align*}2^2,\end{align*} and 8 instead of \begin{align*}2^3.\end{align*}

\begin{align} 2^2 & = 4\\ 2^3 & = 8\\ \end{align}

However, we generally keep the exponents when we work with variables, because it is much easier to write \begin{align*}x^8\end{align*} than \begin{align*}x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x \cdot x.\end{align*}

To evaluate expressions with exponents, substitute the values you are given for each variable and simplify. It is especially important when working with exponents to substitute using parentheses in order to make sure that the simplification is done correctly.

#### Evaluating an Expression Containing Exponents

The area of a circle is given by the formula \begin{align*}A = \pi r^2.\end{align*} Find the area of a circle with radius \begin{align*}r = 17 \ inches.\end{align*}

Substitute values into the equation.
\begin{align}
A & = \pi r^2 \qquad \text{Substitute 17 for }r.\\
& = \pi (17)^2\\
& = \pi (289)\\
A & \approx 907.92 \text{ Rounded to 2 decimal places.}\\
\end{align}

The area of the circle is approximately 907.92 square inches.

#### Evaluating a Multi-Variable Expression Containing Exponents

\begin{align*}\text{Find the value of }\frac { x^2y^3 } { x^3 + y^2 }, \text{ when }x=2 \text{ and }y=-4.\end{align*}

Substitute the given values for \begin{align*}x\end{align*} and \begin{align*}y\end{align*}.

\begin{align}
\frac { x^2y^3 } { x^3 + y^2 } & = \frac { (2)^2 (-4)^3 } { (2)^3 + (-4)^2 } \qquad \text{Substitute} \ 2 \ \text{for} \ x \ \text{and} \ -4 \ \text{for} \ y.\\
& = \frac { 4(-64) } { 8 + 16 } \qquad \qquad (2)^2 = 4, \ (-4)^3 = -64, \ (2)^3 = 8, \ (-4)^2 = 16.\\
& = \frac { - 256 } { 24 }\\
& = \frac{-32}{3} \\
\end{align}

#### Real World Application

The height \begin{align*}(h)\end{align*} of a ball in flight is given by the formula \begin{align*}h = - 32t^2 + 60t + 20\end{align*}, where the height is given in feet and the time \begin{align*}(t)\end{align*} is given in seconds. Find the height of the ball at time \begin{align*}t = 2 \text{ seconds.}\end{align*}

\begin{align}
h & = -32t^2 + 60t + 20\\
& = -32(2)^2 + 60(2) + 20 \qquad \text{Substitute} \ 2 \ \text{for} \ t.\\
& = -32(4) + 60(2) + 20\\
& = 12\\
\end{align}

The height of the ball is 12 feet.

### Example

#### Example 1

Find the value of \begin{align*}\frac { a^2+b^2 } { a^2-b^2 }\end{align*}, for \begin{align*}a = -1\end{align*} and \begin{align*}b = 5\end{align*}.

Substitute the values of \begin{align*}a\end{align*} and \begin{align*}b\end{align*} in the following.
\begin{align*}
\frac { a^2+b^2 } { a^2-b^2 } = \frac { (-1)^2+(5)^2 } { (-1)^2-(5)^2 } \qquad \text{Substitute} \ -1 \ \text{for} \ a \ \text{and} \ 5 \ \text{for} \ b.\\
\frac { 1+25 } { 1-25 } = \frac { 26 } { -24 }=-\frac{13}{12} \qquad \qquad \text{Evaluate and simplify expressions.}
\end{align*}

### Review

Evaluate 1-8 using \begin{align*}x = -1, \ y = 2, \ z = -3,\end{align*} and \begin{align*}w = 4\end{align*}.

1. \begin{align*} 8x^3\end{align*}
2. \begin{align*}\frac { 5x^2 } { 6z^3 } \end{align*}
3. \begin{align*} 3z^2 - 5w^2\end{align*}
4. \begin{align*} x^2 - y^2 \end{align*}
5. \begin{align*} \frac { z^3 + w^3 } { z^3 - w^3 } \end{align*}
6. \begin{align*} 2x^3 - 3x^2 + 5x - 4\end{align*}
7. \begin{align*} 4w^3 + 3w^2 - w + 2 \end{align*}
8. \begin{align*}3 + \frac{ 1 } { z^2 }\end{align*}

For 9-10, use the fact that the volume of a box without a lid is given by the formula \begin{align*} V = 4x(10 - x)^2\end{align*}, where \begin{align*}x\end{align*} is a length in inches and \begin{align*}V\end{align*} is the volume in cubic inches.

9. What is the volume when \begin{align*}x = 2\end{align*}?
10. What is the volume when \begin{align*}x = 3\end{align*}?

### Vocabulary

Evaluate: To evaluate an expression or equation means to perform the included operations, commonly in order to find a specific value.

Exponent: Exponents are used to describe the number of times that a term is multiplied by itself.

Expression: An expression is a mathematical phrase containing variables, operations and/or numbers. Expressions do not include comparative operators such as equal signs or inequality symbols.

Integer: The integers consist of all natural numbers, their opposites, and zero. Integers are numbers in the list ..., -3, -2, -1, 0, 1, 2, 3...

Parentheses: Parentheses "(" and ")" are used in algebraic expressions as grouping symbols.
Substitute: In algebra, to substitute means to replace a variable or term with a specific value.

Volume: Volume is the amount of space inside the bounds of a three-dimensional object.
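The worked examples above can be double-checked in a few lines (editor's sketch; the variable values are the ones given in the text):

```python
import math

# Area of a circle with r = 17 inches (first example).
A = math.pi * 17**2
print(round(A, 2))                     # 907.92

# Multi-variable expression with x = 2, y = -4 (second example).
x, y = 2, -4
print((x**2 * y**3) / (x**3 + y**2))   # -10.666..., i.e. -32/3

# Ball height at t = 2 seconds (real world application).
t = 2
print(-32 * t**2 + 60 * t + 20)        # 12
```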
https://moodleucl.uclouvain.be/course/info.php?id=12419&lang=pt
### LPHYS1241 - Quantum Physics 1 This teaching unit consists of an introduction to the conceptual and physical bases of quantum physics, which governs the microscopic world.
https://www.gradesaver.com/the-wave/q-and-a/do-you-think-that-mr-ross-will-continue-the-experiment--why--132651
# Do you think that Mr. Ross will continue the experiment? Why?

At the beginning of the book (chapters 1-5), do you think that he will continue the experiment (without reading the rest of the book)? (hypothesis)
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=14&t=31439&p=99573
## Homework 1.15 $c=\lambda v$ alexagreco1A Posts: 34 Joined: Fri Apr 06, 2018 11:03 am ### Homework 1.15 In the ultraviolet spectrum of atomic hydrogen, a line is observed at 102.6 nm. Determine the values of n for the initial and final energy levels of the electron. I understand in this problem that hydrogen gives us n=1, but to find the other energy levels in the solution it converts the wavelength into frequency and then sets this value equal to the Rydberg equation. Why is the frequency substituted for En in this case? Chem_Mod Posts: 17839 Joined: Thu Aug 04, 2011 1:53 pm Has upvoted: 406 times ### Re: Homework 1.15 I am a bit confused by your question. Can you elaborate by what you mean by frequency is substituted for En since En does not have any frequency in its equation? Joshua Yang 1H Posts: 22 Joined: Mon Jan 08, 2018 8:19 am ### Re: Homework 1.15 alexagreco1A wrote:In the ultraviolet spectrum of atomic hydrogen, a line is observed at 102.6 nm. Determine the values of n for the initial and final energy levels of the electron. I understand in this problem that hydrogen gives us n=1, but to find the other energy levels in the solution it converts the wavelength into frequency and then sets this value equal to the Rydberg equation. Why is the frequency substituted for En in this case? I honestly think this is only because the Rydberg Equation is written as v = R( 1/n1^2 - 1/n2^2 ) and so we use the frequency instead of using En directly.
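The reasoning in the thread can be checked numerically: set 1/λ = R(1/n₁² − 1/n₂²) with n₁ = 1 (since 102.6 nm lies in the Lyman series) and solve for n₂. A small sketch (variable names are ours):

```python
import math

R = 1.097373e7   # Rydberg constant for hydrogen, m^-1
lam = 102.6e-9   # observed wavelength, m
n1 = 1           # final level: hydrogen ground state

# 1/lam = R * (1/n1**2 - 1/n2**2)  =>  solve for n2
inv_n2_sq = 1 / n1**2 - 1 / (R * lam)
n2 = math.sqrt(1 / inv_n2_sq)
print(round(n2))  # -> 3: the line is the n = 3 to n = 1 transition
```

This is why converting the wavelength to a frequency (or wavenumber) works: the Rydberg equation is stated in terms of ν, not Eₙ directly.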
http://en.wikibooks.org/wiki/Circuit_Theory/Transform_Domain
# Circuit Theory/Transform Domain

## Impedance

Let's recap: in the transform domain, the quantities of resistance, capacitance, and inductance can all be combined into a single complex value known as "impedance". Impedance is denoted with the letter Z, and can be a function of s or jω, depending on the transform used (Laplace or Fourier). This impedance is very similar to the phasor concept of impedance, except that we are in the complex domain (Laplace or Fourier), and not the phasor domain.

Impedance is a complex quantity, and is therefore composed of two components: the real component (resistance) and the imaginary component (reactance). Resistors, because they do not vary with time or frequency, have real impedances. Capacitors and inductors, however, have imaginary values of impedance. The resistance is denoted (as always) with a capital R, and the reactance is denoted with an X (this is common, although it is confusing because X is also the most common input designator). We therefore have the following relationship between resistance, reactance, and impedance:

[Complex Laplace Impedance] $Z = R + jX$

The inverse of resistance is a quantity called "conductance". Similarly, the inverse of reactance is called "susceptance". The inverse of impedance is called "admittance". Conductance, susceptance, and admittance are all denoted by the variables Y or G, and are given the units Siemens. This book will not use any of these terms again; they are included here only for completeness.

## Parallel Components

Once in the transform domain, all circuit components act like basic resistors. Components in parallel are related as follows:

$Z_1 || Z_2 = \frac{Z_1 Z_2}{Z_1 + Z_2}$

## Series Components

Components in series likewise combine as series resistors do.
If we have two impedances in series with each other, we can combine them as follows:

$Z_1 \mbox{ in series with } Z_2 = Z_1 + Z_2$

## Solving Circuits

This section of the Circuit Theory wikibook is a stub.
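Because impedances combine exactly like resistances, the series and parallel rules above are easy to exercise with complex arithmetic. A minimal sketch (the component values are arbitrary examples, not from the text):

```python
import math

def series(z1, z2):
    """Impedances in series simply add."""
    return z1 + z2

def parallel(z1, z2):
    """Product over sum, exactly as for two resistors."""
    return z1 * z2 / (z1 + z2)

# A 100-ohm resistor in series with a 10 mH inductor at f = 1 kHz:
w = 2 * math.pi * 1000           # angular frequency, rad/s
Z = series(100, 1j * w * 10e-3)  # Z = R + jwL
print(Z)                         # real part 100 (resistance), imag part ~62.8 (reactance)

# Two equal impedances in parallel halve:
print(parallel(100, 100))        # 50.0
```

Python's built-in complex type makes j-notation nearly literal: `1j * w * L` is the inductor's jωL.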
https://www.csplib.org/Problems/prob055/
Proposed by Peter Nightingale Informally, the problem is to find a set (optionally of maximal size) of codewords, such that any pair of codewords are Hamming distance $d$ apart. Each codeword (which may be considered as a sequence) is made up of symbols from the alphabet $\{1, \ldots, q\}$, with each symbol occurring a fixed number $\lambda$ of times per codeword. More precisely, the problem has parameters $v$, $q$, $\lambda$, $d$ and it is to find a set $E$ of size $v$, of sequences of length $q\lambda$, such that each sequence contains $\lambda$ of each symbol in the set $\{1, \ldots, q\}$. For each pair of sequences in $E$, the pair are Hamming distance $d$ apart (i.e. there are $d$ places where the sequences disagree). For the parameters $v=5$, $q=3$, $\lambda =2$, $d=4$, the following table shows a set $E=\{c_1, c_2, c_3, c_4, c_5 \}$.

| Codeword | Symbols |
|---|---|
| $c_1$ | 0 0 1 1 2 2 |
| $c_2$ | 0 1 0 2 1 2 |
| $c_3$ | 0 1 2 0 2 1 |
| $c_4$ | 0 2 1 2 0 1 |
| $c_5$ | 0 2 2 1 1 0 |
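The frequency and distance constraints on the example set are easy to verify mechanically. A sketch in Python (the encoding of codewords as digit strings is ours):

```python
from itertools import combinations

# The example set E for v=5, q=3, lambda=2, d=4
codewords = ["001122", "010212", "012021", "021201", "022110"]

def hamming(a, b):
    """Number of positions where two equal-length sequences disagree."""
    return sum(x != y for x, y in zip(a, b))

# Each of the q=3 symbols occurs exactly lambda=2 times per codeword
assert all(w.count(s) == 2 for w in codewords for s in "012")

# Every pair of codewords is Hamming distance d=4 apart
assert all(hamming(a, b) == 4 for a, b in combinations(codewords, 2))
print("all constraints satisfied")
```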
https://asmedigitalcollection.asme.org/turbomachinery/article-abstract/doi/10.1115/1.4044608/975250/Investigation-of-Flow-Field-at-the-Inlet-of-a?redirectedFrom=fulltext
## Abstract

The flow field at the inlet of a turbocharger compressor has been studied through Stereoscopic Particle Image Velocimetry (SPIV) experiments under different operating conditions. It is found that the flow field is quite uniform at high mass flow rates; but as mass flow rate is reduced, flow reversal from the impeller is observed as an annular ring at the periphery of the inlet duct. The inception of flow reversal is observed to occur in the mid-flow operating region, near peak efficiency, and corresponds to an incidence angle of about 15.5 degrees at the inducer blade tips at all tested speeds. This reversed flow region is marked with high tangential velocity and rapid fluctuations. It grows in strength with reducing mass flow rate and imparts some of its angular momentum to the forward flow due to mixing. The penetration depth of the reversed flow upstream from the inducer plane is found to increase quadratically with decreasing flow rate.
https://www.physicsforums.com/threads/what-is-the-magnitude-of-the-impulse.285759/
# What is the magnitude of the impulse

1. Jan 18, 2009

### magma_saber

1. The problem statement, all variables and given/known data

A proton has mass 1.7 x 10^-27 kg. What is the magnitude of the impulse required in the direction of motion to increase its speed from 0.995c to 0.998c?

2. Relevant equations

gamma = 1/sqrt(1 - (v/c)^2)
p = gamma * mass * velocity

3. The attempt at a solution

(1.7 x 10^-27 * 15.82 * 0.998 * 3 x 10^8) - (1.7 x 10^-27 * 15.82 * 0.995 * 3 x 10^8)

2. Jan 18, 2009

### Dick

Re: Impulse

Doesn't the gamma factor change quite a bit as well when you move from 0.995c to 0.998c?

3. Jan 19, 2009

### magma_saber

Re: Impulse

Ya, I accidentally typed 15.82 for the 0.995c but I meant to type 10.01. I tried that and it's still wrong.

edit: never mind, I just put the values in the calculator wrong lol.
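For reference, the calculation the thread converges on is J = Δp = γ₂mβ₂c − γ₁mβ₁c, with a different gamma at each speed. A sketch using c = 3 × 10⁸ m/s as in the thread:

```python
import math

m = 1.7e-27   # proton mass, kg
c = 3.0e8     # speed of light, m/s

def gamma(beta):
    """Lorentz factor for speed v = beta * c."""
    return 1 / math.sqrt(1 - beta**2)

p1 = gamma(0.995) * m * 0.995 * c   # gamma ~ 10.01, as noted in post #3
p2 = gamma(0.998) * m * 0.998 * c   # gamma ~ 15.82
J = p2 - p1
print(f"J = {J:.2e} kg m/s")        # roughly 3e-18 kg m/s
```

The point of Dick's reply is visible in the two gamma values: between 0.995c and 0.998c the Lorentz factor grows by more than 50%, so it cannot be held fixed.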
http://www.dpmms.cam.ac.uk/person/bgs25
Department of Pure Mathematics and Mathematical Statistics Research Interests: High-dimensional statistics and large-scale data analysis FL.03 01223 337857
http://math.stackexchange.com/users/61734/mwoua
mwoua

- Location: Brussels, Belgium
- Member for 1 year, 5 months; last seen yesterday; profile views: 29
- Reputation: 480

Top answers (votes):

- 4: Basic proof by Mathematical Induction (Towers of Hanoi)
- 3: One sided limits for $\displaystyle\lim_{x\to\frac{\pi}{4}} \frac{1}{\cot{x}-1}$
- 2: What is the most elementary proof that $\lim_{n \to \infty} (1+1/n)^n$ exists?
- 2: Math question partial derivatives
- 2: Where does this inequality come from for $e^{-2}$?

Recent reputation changes:

- +5: Calculation of the Inverse Laplace Transform of $\frac{1}{p}$ by contour integration.
- +10: Basic proof by Mathematical Induction (Towers of Hanoi)
- +5: What is the solution of this differential equation? / How to solve it?
- +5: Probability and correlation function, interpretation of a result

Top questions (votes, 9 total):

- 3: Calculate an integral involving Hermite polynomials
- 1: What is the solution of this differential equation? / How to solve it?
- 1: Probability and correlation function, interpretation of a result
- 1: Demonstration with inequalities
- 1: Calculation of the Inverse Laplace Transform of $\frac{1}{p}$ by contour integration.

Top tags (35 total): calculus (x6), induction, real-analysis (x3), integration (x5), limits (x2), trigonometry, proof-writing, inequality (x2), discrete-mathematics, sequences-and-series

Accounts (9 total): Mathematics 480 rep, TeX - LaTeX 258 rep, Stack Overflow 232 rep, Spanish Language 143 rep, Physics 130 rep
https://homework.zookal.com/questions-and-answers/pls-help-me-solve-these-two-questions--thank-you-716453056
# Question: pls help me solve these two questions, thank you

###### Question details

pls help me solve these two questions. thank you
http://book.caltech.edu/bookforum/showthread.php?s=bec9f9a9dcdbf88ced349954dda54c38&p=12486
LFD Book Forum Overfitting with Polynomials : deterministic noise

#1 08-17-2012, 12:38 PM
Kevin
Junior Member
Join Date: Jul 2012
Posts: 1

Overfitting with Polynomials : deterministic noise

I have some difficulty understanding the idea of deterministic noise. I think there is a disturbing contradiction with what we've seen in the bias-variance tradeoff, particularly with the 50th order noiseless target example. Chapter 4 states that:

- A 2nd order polynomial could be better than a 10th order polynomial for fitting a 50th order polynomial target, and this is due to the deterministic noise. --> So I conclude that there is more deterministic noise with the 10th order than with the 2nd order.
- Deterministic noise = bias

But we've seen with the bias-variance tradeoff that a more complex model has a lower bias than a simpler one. Obviously, I'm wrong somewhere, but where?

#2 08-17-2012, 02:26 PM
yaser
Caltech
Join Date: Aug 2009
Location: Pasadena, California, USA
Posts: 1,475

Re: Overfitting with Polynomials : deterministic noise

Quote:

Originally Posted by Kevin
Chapter 4 states that: A 2nd order polynomial could be better than a 10th order polynomial for fitting a 50th order polynomial target, and this is due to the deterministic noise. --> So I conclude that there is more deterministic noise with the 10th order than with the 2nd order.

The confusion is justified, since there are two opposing factors here (see Exercise 4.3). There is more deterministic noise with the 2nd order model than with the 10th order model (which would suggest more overfitting with the 2nd order), but the model itself is simpler, so that would suggest less overfitting. It turns out that the latter factor wins here. If you want to isolate the impact of deterministic noise on overfitting without interference from the model complexity, you can fix the model and change the complexity of the target function.
__________________
Where everyone thinks alike, no one thinks very much

#3 11-03-2016, 12:45 PM
CountVonCount
Member
Join Date: Oct 2016
Posts: 17

Re: Overfitting with Polynomials : deterministic noise

I think the confusion comes from Figure 4.4 compared to the figures for the stochastic noise. Here you write that the shading is the deterministic noise, since it is the difference between the best fit of the current model and the target function. Exactly this shading appears in the bias-variance analysis, so the value of the deterministic noise is directly related to the bias.

When you talk about stochastic noise, you say that the out-of-sample error will increase with the model complexity, and this is related to the area between the final hypothesis $g$ and the target $f$. Thus the reader might think the bias is increasing with the complexity of the model. However, the bias depends on the best fit $h^*$ and not on $g$. The reason why this area increases is the stochastic noise: if there isn't any noise, the final hypothesis will have a better chance to fit $f$ (depending on the position of the samples). In fact (and this is not really clear from the text, but from Exercise 4.3), on a noiseless target the shaded area in Figure 4.4 will decrease when the model complexity increases, and thus the bias decreases.

My suggestion is to make it clearer that in the case of stochastic noise you talk about the actual final hypothesis, and in the case of deterministic noise you talk about the best fitting hypothesis, which is related to the bias. From my understanding I would say: overfitting does not apply to the best fit of the model ($h^*$) but to the real hypothesis ($g$). In the bias-variance analysis we saw that the variance will increase together with the model complexity (at the same number of samples). So I think overfitting is a major part of the variance, either due to the stochastic noise or due to the deterministic noise.
https://numericalshadow.org/doku.php?id=numerical-shadow:examples:6x6
The web resource on numerical range and numerical shadow

# Hermitian matrices

In the case of Hermitian matrices the numerical shadow is a one dimensional distribution.

## Diagonal matrices

### Example 1

The matrix is $\mathrm{diag}(1, 2, 3, 4 ,5, 6)$. The dashed lines mark the eigenvalues. The upper plot shows the derivative of the distribution. Standard numerical shadow with respect to complex states. Numerical shadow with respect to real states.

### Example 2

The matrix is $\mathrm{diag}(1, 2, 2, 4 ,5, 6)$. The dashed lines mark the eigenvalues. The upper plot shows the derivative of the distribution. Standard numerical shadow with respect to complex states. Numerical shadow with respect to real states.

### Example 3

The matrix is $\mathrm{diag}(1, 2, 2, 4 ,4, 6)$. The dashed lines mark the eigenvalues. The upper plot shows the derivative of the distribution. Standard numerical shadow with respect to complex states. Numerical shadow with respect to real states.

### Example 4

The matrix is $\mathrm{diag}(1, 2, 2, 2 ,5, 6)$. The dashed lines mark the eigenvalues. The upper plot shows the derivative of the distribution. Standard numerical shadow with respect to complex states. Numerical shadow with respect to real states.

### Example 5

The matrix is $\mathrm{diag}(1, 2, 2, 2 ,2, 6)$. The dashed lines mark the eigenvalues. The upper plot shows the derivative of the distribution. Standard numerical shadow with respect to complex states. Numerical shadow with respect to real states.

### Example 6

The matrix is $\mathrm{diag}(2, 2, 2, 4 ,4, 4)$. The dashed lines mark the eigenvalues. The upper plot shows the derivative of the distribution. Standard numerical shadow with respect to complex states. Numerical shadow with respect to real states.