http://math.stackexchange.com/questions/283397/the-image-of-a-morphism-between-affine-algebraic-varieties/283436
# The image of a morphism between affine algebraic varieties

Suppose $F$ is a morphism between algebraic varieties $V$ and $W$. Prove that the pullback $F^\#$ from the coordinate ring $\mathbb{C}[W]$ to $\mathbb{C}[V]$ is surjective if and only if $F$ is an isomorphism between $V$ and some algebraic subvariety of $W$. This is an exercise from Karen E. Smith's 'An Invitation to Algebraic Geometry'. I can show without difficulty that if $F^\#$ is surjective, then $F$ is injective. But how do I prove that the image of $F$ is a subvariety of $W$? In general it is not true that the image of an injective morphism is an algebraic variety.

## 1 Answer

I will assume varieties are irreducible. Let $I$ be the kernel of $f^\sharp : \mathbb{C}[W] \to \mathbb{C}[V]$. Since $f^\sharp$ is surjective, $I$ must be a prime ideal, so it corresponds to some closed subvariety $Y \subseteq W$. Thus the morphism $f : V \to W$ must factor through the inclusion $Y \hookrightarrow W$. We may assume without loss of generality that $Y = W$. But then $f^\sharp$ is an isomorphism, say with two-sided inverse $g^\sharp : \mathbb{C}[V] \to \mathbb{C}[W]$, and the fundamental theorem regarding morphisms of varieties implies that the morphism $g : W \to V$ corresponding to $g^\sharp$ is a two-sided inverse for $f : V \to W$.

You are right that the image of a morphism of affine varieties need not be closed: for example, if $V = \{ (x, y) \in \mathbb{C}^2 : x y = 1 \}$, $W = \mathbb{C}$, and $f(x, y) = x$, then the image of $f$ is not closed in $W$. But what about $f^\sharp$? Well, $\mathbb{C}[V] = \mathbb{C}[x, y] / (x y - 1)$ and $\mathbb{C}[W] = \mathbb{C}[x]$, so $f^\sharp : \mathbb{C}[W] \to \mathbb{C}[V]$ is not surjective. (In fact, it is injective!) Thus the argument in the previous paragraph does not apply.
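To spell out the last claim: the pullback sends $x \mapsto x$, so its image is the subalgebra generated by $x$,

$$\operatorname{im} f^\sharp = \mathbb{C}[x] \subseteq \mathbb{C}[x,y]/(xy-1),$$

which does not contain $y = x^{-1}$, so $f^\sharp$ is not surjective; and it is injective because no nonzero polynomial in $x$ alone lies in the ideal $(xy-1)$.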
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9376686215400696, "perplexity_flag": "head"}
http://www.nag.com/numeric/cl/nagdoc_cl23/html/G05/g05intro.html
# NAG Library Chapter Introduction: g05 – Random Number Generators

## 1 Scope of the Chapter

This chapter is concerned with the generation of sequences of independent pseudorandom and quasi-random numbers from various distributions, and the generation of pseudorandom time series from specified time series models.

## 2 Background to the Problems

### 2.1 Pseudorandom Numbers

A sequence of pseudorandom numbers is a sequence of numbers generated in some systematic way such that they are independent and statistically indistinguishable from a truly random sequence. A pseudorandom number generator (PRNG) is a mathematical algorithm that, given an initial state, produces a sequence of pseudorandom numbers. A PRNG has several advantages over a true random number generator in that the generated sequence is repeatable, has known mathematical properties and can be implemented without needing any specialist hardware. Many books on statistics and computer science have good introductions to PRNGs, for example Knuth (1981) or Banks (1998).

PRNGs can be split into base generators and distributional generators. Within the context of this document a base generator is defined as a PRNG that produces a sequence (or stream) of variates (or values) uniformly distributed over the interval $(0,1)$. Depending on the algorithm being considered, this interval may be open, closed or half-closed. A distributional generator is a function that takes variates generated from a base generator and transforms them into variates from a specified distribution, for example a uniform, Gaussian (Normal) or gamma distribution.

The period (or cycle length) of a base generator is defined as the maximum number of values that can be generated before the sequence starts to repeat. The initial state of the base generator is often called the seed.

There are six base generators currently available in the NAG C Library; these are: a basic linear congruential generator (LCG) (referred to as the NAG basic generator) (see Knuth (1981)), two sets of Wichmann–Hill generators (see Maclaren (1989) and Wichmann and Hill (2006)), the Mersenne Twister (see Matsumoto and Nishimura (1998)), the ACORN generator (see Wikramaratna (1989)) and the L'Ecuyer generator (see L'Ecuyer and Simard (2002)).

#### 2.1.1 NAG Basic Generator

The NAG basic generator is a linear congruential generator (LCG) and, like all linear congruential generators, has the form

$$x_i = a_1 x_{i-1} \bmod m_1, \qquad u_i = \frac{x_i}{m_1},$$

where the $u_i$, for $i=1,2,\dots$, form the required sequence. The NAG basic generator uses $a_1 = 13^{13}$ and $m_1 = 2^{59}$, which gives a period of approximately $2^{57}$.

This generator has been part of the NAG Library since Mark 6 and as such has been widely used. It suffers from no known problems, other than those due to the lattice structure inherent in all linear congruential generators, and, even though the period is relatively short compared to many of the newer generators, it is sufficiently large for many practical problems. The performance of the NAG basic generator has been analysed by the Spectral Test, see Section 3.3.4 of Knuth (1981), yielding the following results in the notation of Knuth (1981).
| $n$ | $\nu_n$ | Upper bound for $\nu_n$ |
|---|---|---|
| $2$ | $3.44\times 10^{8}$ | $4.08\times 10^{8}$ |
| $3$ | $4.29\times 10^{5}$ | $5.88\times 10^{5}$ |
| $4$ | $1.72\times 10^{4}$ | $2.32\times 10^{4}$ |
| $5$ | $1.92\times 10^{3}$ | $3.33\times 10^{3}$ |
| $6$ | $593$ | $939$ |
| $7$ | $198$ | $380$ |
| $8$ | $108$ | $197$ |
| $9$ | $67$ | $120$ |

The right-hand column gives an upper bound for the values of $\nu_n$ attainable by any multiplicative congruential generator working modulo $2^{59}$. An informal interpretation of the quantities $\nu_n$ is that consecutive $n$-tuples are statistically uncorrelated to an accuracy of $1/\nu_n$. This is a theoretical result; in practice the degree of randomness is usually much greater than the above figures might support. More details are given in Knuth (1981), and in the references cited therein.

Note that the achievable accuracy drops rapidly as the number of dimensions increases. This is a property of all multiplicative congruential generators and is the reason why very long periods are needed even for samples of only a few random numbers.

#### 2.1.2 Wichmann–Hill I Generator

This series of Wichmann–Hill base generators (see Maclaren (1989)) uses a combination of four linear congruential generators and has the form

$$w_i = a_1 w_{i-1} \bmod m_1, \quad x_i = a_2 x_{i-1} \bmod m_2, \quad y_i = a_3 y_{i-1} \bmod m_3, \quad z_i = a_4 z_{i-1} \bmod m_4,$$
$$u_i = \left( \frac{w_i}{m_1} + \frac{x_i}{m_2} + \frac{y_i}{m_3} + \frac{z_i}{m_4} \right) \bmod 1, \qquad (1)$$

where the $u_i$, for $i=1,2,\dots$, form the required sequence. The NAG C Library implementation includes 273 sets of parameters, $a_j, m_j$, for $j=1,2,3,4$, to choose from. The constants $a_j$ are in the range 112 to 127 and the constants $m_j$ are prime numbers in the range $16718909$ to $16776971$, which are close to $2^{24} = 16777216$. These constants have been chosen so that each of the resulting 273 generators is essentially independent, all calculations can be carried out in 32-bit integer arithmetic and the generators give good results with the spectral test, see Knuth (1981) and Maclaren (1989). The period of each of these generators would be at least $2^{92}$ if it were not for common factors between $(m_1-1)$, $(m_2-1)$, $(m_3-1)$ and $(m_4-1)$. However, each generator should still have a period of at least $2^{80}$. Further discussion of the properties of these generators is given in Maclaren (1989).

#### 2.1.3 Wichmann–Hill II Generator

This Wichmann–Hill base generator (see Wichmann and Hill (2006)) is of the same form as that described in Section 2.1.2, i.e., a combination of four linear congruential generators. In this case $a_1 = 11600$, $m_1 = 2147483579$, $a_2 = 47003$, $m_2 = 2147483543$, $a_3 = 23000$, $m_3 = 2147483423$, $a_4 = 33000$, $m_4 = 2147483123$.

Unlike in the original Wichmann–Hill generator, these values are too large for the calculations detailed in (1) to be carried out directly in 32-bit integer arithmetic. However, if the product $a_1 w_{i-1}$ is split into parts small enough to avoid overflow (the factorization used is given in Wichmann and Hill (2006)), yielding an intermediate value $W_i$, then setting

$$w_i = \begin{cases} W_i & \text{if } W_i \ge 0, \\ 2147483579 + W_i & \text{otherwise,} \end{cases}$$

gives the required value, and $W_i$ can be calculated in 32-bit integer arithmetic. Similar expressions exist for $x_i$, $y_i$ and $z_i$. The period of this generator is approximately $2^{121}$. Further details of implementing this algorithm and its properties are given in Wichmann and Hill (2006).
This paper also gives some useful guidelines on testing PRNGs.

#### 2.1.4 Mersenne Twister Generator

The Mersenne Twister (see Matsumoto and Nishimura (1998)) is a twisted generalized feedback shift register generator. The algorithm underlying the Mersenne Twister is as follows:

(i) Set some arbitrary initial values $x_1, x_2, \dots, x_r$, each consisting of $w$ bits.

(ii) Letting

$$A = \begin{pmatrix} 0 & I_{w-1} \\ a_w & a_{w-1} \cdots a_1 \end{pmatrix},$$

where $I_{w-1}$ is the $(w-1)\times(w-1)$ identity matrix and each of the $a_i$, $i=1$ to $w$, takes a value of either $0$ or $1$ (i.e., they can be represented as bits), define

$$x_{i+r} = x_{i+s} \oplus \left( x_i^{(w:l+1)} \,|\, x_{i+1}^{(l:1)} \right) A,$$

where $x_i^{(w:l+1)} | x_{i+1}^{(l:1)}$ indicates the concatenation of the most significant (upper) $w-l$ bits of $x_i$ and the least significant (lower) $l$ bits of $x_{i+1}$.

(iii) Perform the following operations sequentially:

$$\begin{aligned}
z &= x_{i+r} \oplus \left( x_{i+r} \gg t_1 \right) \\
z &= z \oplus \left( \left( z \ll t_2 \right) \text{ AND } m_1 \right) \\
z &= z \oplus \left( \left( z \ll t_3 \right) \text{ AND } m_2 \right) \\
z &= z \oplus \left( z \gg t_4 \right) \\
u_{i+r} &= z / \left( 2^w - 1 \right),
\end{aligned}$$

where $t_1$, $t_2$, $t_3$ and $t_4$ are integers, $m_1$ and $m_2$ are bit-masks, '$\gg t$' and '$\ll t$' represent a $t$-bit shift right and left respectively, $\oplus$ is the bit-wise exclusive or (xor) operation and 'AND' is the bit-wise and operation.

The $u_{i+r}$, for $i=1,2,\dots$, form the required sequence. The supplied implementation of the Mersenne Twister uses the following values for the algorithmic constants: $w = 32$, $a = \text{0x9908b0df}$, $l = 31$, $r = 624$, $s = 397$, $t_1 = 11$, $t_2 = 7$, $t_3 = 15$, $t_4 = 18$, $m_1 = \text{0x9d2c5680}$, $m_2 = \text{0xefc60000}$, where the notation 0xDD… indicates the bit pattern of the integer whose hexadecimal representation is DD….

This algorithm has a period length of approximately $2^{19937} - 1$ and has been shown to be uniformly distributed in 623 dimensions (see Matsumoto and Nishimura (1998)).

#### 2.1.5 ACORN Generator

The ACORN generator is a special case of a multiple recursive generator (see Wikramaratna (1989) and Wikramaratna (2007)). The algorithm underlying ACORN is as follows:

(i) Choose an integer value $k \ge 1$.

(ii) Choose an integer value $M$, and an integer seed $Y_0^{(0)}$, such that $0 < Y_0^{(0)} < M$ and $Y_0^{(0)}$ and $M$ are relatively prime.

(iii) Choose an arbitrary set of $k$ initial integer values, $Y_0^{(1)}, Y_0^{(2)}, \dots, Y_0^{(k)}$, such that $0 \le Y_0^{(m)} < M$, for all $m = 1,2,\dots,k$.

(iv) Perform the following sequentially:

$$Y_i^{(m)} = \left( Y_i^{(m-1)} + Y_{i-1}^{(m)} \right) \bmod M$$

for $m = 1,2,\dots,k$, where $Y_i^{(0)} = Y_0^{(0)}$ for all $i$.

(v) Set $u_i = Y_i^{(k)} / M$.

The $u_i$, for $i=1,2,\dots$, then form a pseudorandom sequence, with $u_i \in [0,1)$, for all $i$. Although you can choose any value for $k$, $M$, $Y_0^{(0)}$ and the $Y_0^{(m)}$, within the constraints mentioned in (i) to (iii) above, it is recommended that $k \ge 10$, $M$ is chosen to be a large power of two with $M \ge 2^{60}$ and $Y_0^{(0)}$ is chosen to be odd.

The period of the ACORN generator, with the modulus $M$ equal to a power of two and an odd value for $Y_0^{(0)}$, has been shown to be an integer multiple of $M$ (see Wikramaratna (1992)). Therefore, increasing $M$ will give a series with a longer period.
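The recurrence in step (iv) is simple enough to prototype directly. Below is a minimal Python sketch of the ACORN scheme exactly as described above, included purely as an illustration; it is not the NAG implementation, and the function name, seed and default parameters are invented for the example (the defaults follow the recommendations $k \ge 10$, $M = 2^{60}$, odd seed).

```python
# Minimal ACORN sketch following steps (i)-(v) above.
# Illustration only: not the NAG library implementation.

def acorn(n, k=10, M=2**60, seed=123456789, initial=None):
    """Return n ACORN variates u_i in [0, 1)."""
    if seed % 2 == 0 or not 0 < seed < M:
        raise ValueError("the seed Y_0^(0) should be odd and lie in (0, M)")
    # Y[m] holds the current Y^(m); Y[0] is fixed at the seed for all i.
    Y = [seed] + list(initial if initial is not None else [0] * k)
    u = []
    for _ in range(n):
        for m in range(1, k + 1):
            # Step (iv): Y_i^(m) = (Y_i^(m-1) + Y_{i-1}^(m)) mod M
            Y[m] = (Y[m - 1] + Y[m]) % M
        u.append(Y[k] / M)   # step (v)
    return u

print(acorn(5))
```

Because Python integers have arbitrary precision, the modulus $M = 2^{60}$ causes no overflow in this sketch; the point is only to make the recurrence concrete.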
#### 2.1.6 L'Ecuyer MRG32k3a Combined Recursive Generator

The base generator L'Ecuyer MRG32k3a (see L'Ecuyer and Simard (2002)) combines two multiple recursive generators:

$$\begin{aligned}
x_i &= \left( a_{11} x_{i-1} + a_{12} x_{i-2} + a_{13} x_{i-3} \right) \bmod m_1, \\
y_i &= \left( a_{21} y_{i-1} + a_{22} y_{i-2} + a_{23} y_{i-3} \right) \bmod m_2, \\
z_i &= \left( x_i - y_i \right) \bmod m_1, \\
u_i &= z_i / d,
\end{aligned}$$

where $a_{11} = 0$, $a_{12} = 1403580$, $a_{13} = -810728$, $m_1 = 2^{32} - 209$, $a_{21} = 527612$, $a_{22} = 0$, $a_{23} = -1370589$, $m_2 = 2^{32} - 22853$, and the $u_i$, $i = 1,2,\dots$, form the required sequence. If $d = m_1$ then $u_i \in (0,1]$; else if $d = m_1 + 1$ then $u_i \in (0,1)$.

Combining the two multiple recursive generators (MRG) results in sequences with better statistical properties in high dimensions and longer periods compared with those generated from a single MRG. The combined generator described above has a period length of approximately $2^{191}$.

### 2.2 Quasi-random Numbers

Low discrepancy (quasi-random) sequences are used in numerical integration, simulation and optimization. Like pseudorandom numbers they are uniformly distributed, but they are not statistically independent; rather, they are designed to give a more even distribution in multidimensional space (uniformity). Therefore they are often more efficient than pseudorandom numbers in multidimensional Monte–Carlo methods.

The quasi-random number generators implemented in this chapter generate a set of points $x^1, x^2, \dots, x^N$ with high uniformity in the $S$-dimensional unit cube $I^S = [0,1]^S$. One measure of the uniformity is the discrepancy, which is defined as follows:

• Given a set of points $x^1, x^2, \dots, x^N \in I^S$ and a subset $G \subset I^S$, define the counting function $S_N(G)$ as the number of points $x^i \in G$. For each $x = (x_1, x_2, \dots, x_S) \in I^S$, let $G_x$ be the rectangular $S$-dimensional region

$$G_x = [0, x_1) \times [0, x_2) \times \cdots \times [0, x_S)$$

with volume $x_1 x_2 \cdots x_S$. Then the discrepancy of the points $x^1, x^2, \dots, x^N$ is

$$D_N^*(x^1, x^2, \dots, x^N) = \sup_{x \in I^S} \left| S_N(G_x) - N \prod_{k=1}^{S} x_k \right|.$$

The discrepancy of the first $N$ terms of such a sequence has the form

$$D_N^*(x^1, x^2, \dots, x^N) \le C_S (\log N)^S + O\left( (\log N)^{S-1} \right) \quad \text{for all } N \ge 2.$$

The principal aim in the construction of low-discrepancy sequences is to find sequences of points in $I^S$ with a bound of this form where the constant $C_S$ is as small as possible.

Three types of low-discrepancy sequences are supplied in this library; these are due to Sobol, Faure and Niederreiter. Two sets of Sobol sequences are supplied: the first is based on the work of Joe and Kuo (2008) and the second on the work of Bratley and Fox (1988). More information on quasi-random number generation, and on the Sobol, Faure and Niederreiter sequences in particular, can be found in Bratley and Fox (1988) and Fox (1986).

### 2.3 Scrambled Quasi-random Numbers

Scrambled quasi-random sequences are an extension of standard quasi-random sequences that attempt to eliminate the bias inherent in a quasi-random sequence whilst retaining the low-discrepancy properties. The use of a scrambled sequence allows error estimation of Monte–Carlo results by performing a number of iterates and computing the variance of the results.

This implementation of scrambled quasi-random sequences is based on TOMS algorithm 823 and details can be found in the accompanying paper, Hong and Hickernell (2003). Three methods of scrambling are supplied: the first is a restricted form of Owen's scrambling (Owen (1995)), the second is based on the method of Faure and Tezuka (2000), and the last combines the first two.
Scrambled versions of both Sobol sequences and the Niederreiter sequence can be obtained.

The efficiency of a simulation exercise may often be increased by the use of variance reduction methods (see Morgan (1984)). It is also worth considering whether a simulation is the best approach to solving the problem. For example, low-dimensional integrals are usually more efficiently calculated by functions in Chapter d01 rather than by Monte–Carlo integration.

### 2.4 Non-uniform Random Numbers

Random numbers from other distributions may be obtained from the uniform random numbers by the use of transformations and rejection techniques, and, for discrete distributions, by table based methods.

(a) Transformation Methods

For a continuous random variable, if the cumulative distribution function (CDF) is $F(x)$ then for a uniform $(0,1)$ random variate $u$, $y = F^{-1}(u)$ will have CDF $F(x)$. This method is only efficient in a few simple cases such as the exponential distribution with mean $\mu$, in which case $F^{-1}(u) = -\mu \log u$. Other transformations are based on the joint distribution of several random variables. In the bivariate case, if $v$ and $w$ are random variates there may be a function $g$ such that $y = g(v,w)$ has the required distribution; for example, the Student's $t$-distribution with $n$ degrees of freedom, in which $v$ has a Normal distribution, $w$ has a gamma distribution and $g(v,w) = v\sqrt{n/w}$.

(b) Rejection Methods

Rejection techniques are based on the ability to easily generate random numbers from a distribution (called the envelope) similar to the distribution required. The value from the envelope distribution is then accepted as a random number from the required distribution with a certain probability; otherwise, it is rejected and a new number is generated from the envelope distribution.

(c) Table Search Methods

For discrete distributions, if the cumulative probabilities, $P_i = \mathrm{Prob}(x \le i)$, are stored in a table then, given $u$ from a uniform $(0,1)$ distribution, the table is searched for $i$ such that $P_{i-1} < u \le P_i$. The returned value $i$ will have the required distribution. The table searching can be made faster by means of an index, see Ripley (1987). The effort required to set up the table and its index may be considerable, but the methods are very efficient when many values are needed from the same distribution.

### 2.5 Copulas

A copula is a function that links the univariate marginal distributions with their multivariate distribution. Sklar's theorem (see Sklar (1973)) states that if $f$ is an $m$-dimensional distribution function with continuous margins $f_1, f_2, \dots, f_m$, then $f$ has a unique copula representation, $c$, such that

$$f(x_1, x_2, \dots, x_m) = c\left( f_1(x_1), f_2(x_2), \dots, f_m(x_m) \right).$$

The copula, $c$, is a multivariate uniform distribution whose dependence structure is defined by the dependence structure of the multivariate distribution $f$, with

$$c(u_1, u_2, \dots, u_m) = f\left( f_1^{-1}(u_1), f_2^{-1}(u_2), \dots, f_m^{-1}(u_m) \right),$$

where $u_i \in [0,1]$. This relationship can be used to simulate variates from distributions defined by the dependence structure of one distribution and each of the marginal distributions given by another. For additional information see Nelsen (1998) or Boye (Unpublished manuscript) and the references therein.
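To illustrate how the relationship above is used in simulation, here is a small self-contained Python sketch (not NAG code) that draws from a Gaussian copula and then imposes two different margins; the correlation value, sample size and the choice of gamma and exponential margins are invented for the illustration.

```python
# Sketch only: simulate from a Gaussian copula and impose chosen margins,
# mirroring c(u_1,...,u_m) = f(f_1^{-1}(u_1),...,f_m^{-1}(u_m)) above.
# Not NAG code; rho and the margins are assumptions made for the example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1. Draw from the multivariate Normal that defines the dependence structure.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10000)

# 2. Map each margin to uniform (0,1) with the standard Normal CDF;
#    the rows of u are draws from the Gaussian copula.
u = stats.norm.cdf(z)

# 3. Apply the inverse CDFs of the desired marginal distributions.
x1 = stats.gamma.ppf(u[:, 0], a=2.0, scale=1.5)
x2 = stats.expon.ppf(u[:, 1], scale=3.0)

# The margins are gamma and exponential, but the dependence comes from rho.
print(np.corrcoef(x1, x2)[0, 1])
```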
### 2.6 Other Random Structures

In addition to random numbers from various distributions, random compound structures can be generated. These include random time series, random matrices and random samples.

### 2.7 Multiple Streams of Pseudorandom Numbers

It is often advantageous to be able to generate variates from multiple, independent, streams (or sequences) of random variates, for example when running a simulation in parallel on several processors. There are four ways of generating multiple streams using the functions available in this chapter:

(i) using different initial values (seeds);
(ii) using different generators;
(iii) skip-ahead (also called block-splitting);
(iv) leap-frogging.

#### 2.7.1 Multiple Streams via Different Initial Values (Seeds)

A different sequence of variates can be generated from the same base generator by initializing the generator using a different set of seeds. The statistical properties of the base generators are only guaranteed within, not between, sequences. For example, two sequences generated from two different starting points may overlap if these initial values are not far enough apart. The potential for overlapping sequences is reduced if the period of the generator being used is large. In general, of the four methods for creating multiple streams described here, this is the least satisfactory.

The one exception to this is the Wichmann–Hill II generator. The Wichmann and Hill (2006) paper describes a method of generating blocks of variates, with lengths up to $2^{90}$, by fixing the first three seed values of the generator ($w_0$, $x_0$ and $y_0$) and setting $z_0$ to a different value for each stream required. This is similar to the skip-ahead method described in Section 2.7.3, in that the full sequence of the Wichmann–Hill II generator is split into a number of different blocks, in this case with a fixed length of $2^{90}$, but without the computationally intensive initialization usually required for the skip-ahead method.

#### 2.7.2 Multiple Streams via Different Generators

Independent sequences of variates can be generated using a different base generator for each sequence. For example, sequence 1 can be generated using the NAG basic generator, sequence 2 using the Mersenne Twister, sequence 3 the ACORN generator and sequence 4 using the L'Ecuyer generator. The Wichmann–Hill I generator implemented in this chapter is, in fact, a series of 273 independent generators. The particular sub-generator to use is selected using the subid variable. Therefore, in total, 278 independent streams can be generated, each using a different generator (273 Wichmann–Hill I generators and 5 additional base generators).

#### 2.7.3 Multiple Streams via Skip-ahead

Independent sequences of variates can be generated from a single base generator through the use of block-splitting, or skipping ahead. This method consists of splitting the sequence into $k$ non-overlapping blocks, each of length $n$, where $n$ is no smaller than the maximum number of variates required from any of the sequences. For example,

$$\underbrace{x_1, x_2, \dots, x_n}_{\text{block 1}}, \; \underbrace{x_{n+1}, x_{n+2}, \dots, x_{2n}}_{\text{block 2}}, \; \underbrace{x_{2n+1}, x_{2n+2}, \dots, x_{3n}}_{\text{block 3}}, \; \text{etc.},$$

where $x_1, x_2, \dots$ is the sequence produced by the generator of interest. Each of the $k$ blocks provides an independent sequence.
The skip-ahead algorithm therefore requires the sequence to be advanced a large number of places: to generate values from, say, block $b$, you must skip over the $(b-1)n$ values in the first $b-1$ blocks. Due to their form this can be done efficiently for linear congruential generators and multiple congruential generators. A skip-ahead algorithm is also provided for the Mersenne Twister generator.

Although skip-ahead requires some additional computation at the initialization stage (to 'fast forward' the sequence), no additional computation is required at the generation stage.

This method of producing multiple streams can also be used for the Sobol and Niederreiter quasi-random number generators via the argument iskip in nag_quasi_init (g05ylc).

#### 2.7.4 Multiple Streams via Leap-frog

Independent sequences of variates can also be generated from a single base generator through the use of leap-frogging. This method involves splitting the sequence from a single generator into $k$ disjoint subsequences. For example:

$$\begin{aligned}
\text{Subsequence 1:} \quad & x_1, x_{k+1}, x_{2k+1}, \dots \\
\text{Subsequence 2:} \quad & x_2, x_{k+2}, x_{2k+2}, \dots \\
& \vdots \\
\text{Subsequence } k\text{:} \quad & x_k, x_{2k}, x_{3k}, \dots,
\end{aligned}$$

where $x_1, x_2, \dots$ is the sequence produced by the generator of interest. Each of the $k$ subsequences then provides an independent stream of variates.

The leap-frog algorithm therefore requires the generation of every $k$th variate from the base generator. Due to their form this can be done efficiently for linear congruential generators and multiple congruential generators. A leap-frog algorithm is provided for the NAG Basic generator, both the Wichmann–Hill I and Wichmann–Hill II generators, and the L'Ecuyer generator.

It is known that, dependent on the number of streams required, leap-frogging can lead to sequences with poor statistical properties, especially when applied to linear congruential generators. In addition, leap-frogging can increase the time required to generate each variate. Therefore leap-frogging should be avoided unless absolutely necessary.

#### 2.7.5 Skip-ahead and Leap-frog for a Linear Congruential Generator (LCG): An Example

As an illustrative example, a brief description of the algebra behind the implementation of the leap-frog and skip-ahead algorithms for a linear congruential generator is given. A linear congruential generator has the form $x_{i+1} = a_1 x_i \bmod m_1$. The recursive nature of a linear congruential generator means that

$$x_{i+v} = a_1^v x_i \bmod m_1.$$

The sequence can therefore be quickly advanced $v$ places by multiplying the current state ($x_i$) by $a_1^v \bmod m_1$, hence skipping the sequence ahead. Leap-frogging can be implemented by using $a_1^k$, where $k$ is the number of streams required, in place of $a_1$ in the standard linear congruential generator recursive formula, in order to advance $k$ places, rather than one, at each iteration.

In a linear congruential generator the multiplier $a_1$ is constructed so that the generator has good statistical properties in, for example, the spectral test. When using leap-frogging to construct multiple streams this multiplier is replaced with $a_1^k$, and there is no guarantee that this new multiplier will have suitable properties, especially as the value of $k$ depends on the number of streams required and so is likely to change depending on the application. This problem can be emphasized by the lattice structure of linear congruential generators. Similarly, the value of $a_1$ is often chosen such that the computation $a_1 x_i \bmod m_1$ can be performed efficiently.
When $a_1$ is replaced by $a_1^k$, this is often no longer the case.

Note that, due to rounding, when using a distributional generator, a sequence generated using leap-frogging and a sequence constructed by taking every $k$th value from a set of variates generated without leap-frogging may differ slightly. These differences should only affect the least significant digit.

#### 2.7.6 Skip-ahead and Leap-frog for the Mersenne Twister: An Example

Skipping ahead with the Mersenne Twister generator is based on the definition of a $k \times k$ (where $k = 19937$) transition matrix, $A$, over the finite field $\mathbb{F}_2$ (with elements 0 and 1). Multiplying $A$ by the current state $x_n$, represented as a vector of bits, produces the next state vector $x_{n+1}$:

$$x_{n+1} = A x_n.$$

Thus, skipping ahead $v$ places in a sequence is equivalent to multiplying by $A^v$:

$$x_{n+v} = A^v x_n.$$

Since calculating $A^v$ by a standard square and multiply algorithm is $O(k^3 \log v)$ and requires over 47MB of memory (see Haramoto et al. (2008)), an indirect calculation is performed which relies on a property of the characteristic polynomial $p(z)$ of $A$, namely that $p(A) = 0$. We then define

$$g(z) = z^v \bmod p(z) = a_{k-1} z^{k-1} + \dots + a_1 z + a_0$$

and observe that

$$g(z) = z^v + q(z) p(z)$$

for a polynomial $q(z)$. Since $p(A) = 0$, we have that $g(A) = A^v$ and

$$A^v x_n = \left( a_{k-1} A^{k-1} + \dots + a_1 A + a_0 I \right) x_n.$$

This polynomial evaluation can be performed using Horner's method:

$$A^v x_n = A \left( \cdots A \left( A \left( A \, a_{k-1} x_n + a_{k-2} x_n \right) + a_{k-3} x_n \right) + \cdots + a_1 x_n \right) + a_0 x_n,$$

which reduces the problem to advancing the generator $k-1$ places from state $x_n$ and adding (where addition is as defined over $\mathbb{F}_2$) the intermediate states for which $a_i$ is nonzero.

There are therefore two stages to skipping the Mersenne Twister ahead $v$ places:

(i) calculate the coefficients of the polynomial $g(z)$;
(ii) advance the sequence $k-1$ places from the starting state and add the intermediate states that correspond to nonzero coefficients in the polynomial calculated in the first step.

The resulting state is that for position $v$ in the sequence. The cost of calculating the polynomial is $O(k^2 \log v)$ and the cost of applying it to state is constant.

Skip-ahead functionality is typically used in order to generate $n$ independent pseudorandom number streams (e.g., for separate threads of computation). There are two options for generating the $n$ states:

(i) on the master thread, calculate the polynomial for a skip-ahead distance of $v$ and apply this polynomial to state $n$ times, after each iteration $j$ saving the current state for later usage by thread $j$;
(ii) have each thread $j$, independently and in parallel with other threads, calculate the polynomial for a distance of $(j+1)v$ and apply it to the original state.

Since $\lim_{v \to \infty} \log(nv)/\log(v) = 1$, for large $v$ the cost of generating the polynomial for a skip-ahead distance of $nv$ (i.e., the calculation performed by thread $n-1$ in option (ii) above) is approximately the same as generating that for a distance of $v$ (i.e., the calculation performed by thread $0$). However, only one application to state need be made per thread, and if $n$ is sufficiently large the cost of applying the polynomial to state becomes the dominant cost in option (i), in which case it is desirable to use option (ii).
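Returning to the linear congruential case, the skip-ahead and leap-frog algebra of Section 2.7.5 can be checked directly in a few lines of Python. This is an illustration of the algebra only, using the NAG basic generator constants quoted in Section 2.1.1; it is not the library's implementation, and the seed, skip distance and number of streams are arbitrary values chosen for the example.

```python
# Sketch of the Section 2.7.5 algebra: skip-ahead and leap-frog for an LCG.
# a1, m1 are the NAG basic generator constants quoted in Section 2.1.1;
# everything else (seed, v, k) is an assumption made for the example.
a1, m1 = 13 ** 13, 2 ** 59

def lcg_step(x, a=a1, m=m1):
    """One step of x_{i+1} = a * x_i mod m."""
    return (a * x) % m

x0 = 123456789          # arbitrary seed

# Skip-ahead: advancing v places is a single multiplication by a1^v mod m1.
v = 10 ** 5
x_skip = (pow(a1, v, m1) * x0) % m1

x_brute = x0            # the same state reached one step at a time
for _ in range(v):
    x_brute = lcg_step(x_brute)
assert x_brute == x_skip

# Leap-frog: stream j (of k) starts at x_{j+1} and uses multiplier a1^k mod m1,
# so each iteration advances k places of the underlying sequence.
k = 4
a_leap = pow(a1, k, m1)
x = x0
starts = []
for _ in range(k):
    x = lcg_step(x)
    starts.append(x)
stream1 = [starts[0]]
for _ in range(3):
    stream1.append(lcg_step(stream1[-1], a=a_leap))
print(stream1)          # x_1, x_{k+1}, x_{2k+1}, x_{3k+1} of the original sequence
```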
Tests have shown that as a guideline it becomes worthwhile to switch from option (i) to option (ii) for approximately $n>30$. Leap frog calculations with the Mersenne Twister are performed by computing the sequence fully up to the required size and discarding the redundant numbers for a given stream. ## 3  Recommendations on Choice and Use of Available Functions ### 3.1  Pseudorandom Numbers Prior to generating any pseudorandom variates the base generator being used must be initialized. Once initialized, a distributional generator can be called to obtain the variates required. No interfaces have been supplied for direct access to the base generators. If a sequence of random variates from a uniform distribution on the open interval $\left(0,1\right)$, is required, then the uniform distribution function (nag_rand_basic (g05sac)) should be called. #### 3.1.1  Initialization Prior to generating any variates the base generator must be initialized. Two utility functions are provided for this, nag_rand_init_repeatable (g05kfc) and nag_rand_init_nonrepeatable (g05kgc), both of which allow any of the base generators to be chosen. nag_rand_init_repeatable (g05kfc) selects and initializes a base generator to a repeatable (when executed serially) state: two calls of nag_rand_init_repeatable (g05kfc) with the same argument-values will result in the same subsequent sequences of random numbers (when both generated serially). nag_rand_init_nonrepeatable (g05kgc) selects and initializes a base generator to a non-repeatable state in such a way that different calls of nag_rand_init_nonrepeatable (g05kgc), either in the same run or different runs of the program, will almost certainly result in different subsequent sequences of random numbers. No utilities for saving, retrieving or copying the current state of a generator have been provided. All of the information on the current state of a generator (or stream, if multiple streams are being used) is stored in the integer array state and as such this array can be treated as any other integer array, allowing for easy copying, restoring, etc. #### 3.1.2  Repeated initialization As mentioned in Section 2.7.1, it is important to note that the statistical properties of pseudorandom numbers are only guaranteed within sequences and not between sequences produced by the same generator. Repeated initialization will thus render the numbers obtained less rather than more independent. In a simple case there should be only one call to nag_rand_init_repeatable (g05kfc) or nag_rand_init_nonrepeatable (g05kgc) and this call should be before any call to an actual generation function. #### 3.1.3  Choice of Base Generator If a single sequence is required then it is recommended that the Mersenne Twister is used as the base generator (${\mathbf{genid}}=3$). This generator is fast, has an extremely long period and has been shown to perform well on various test suites, see Matsumoto and Nishimura (1998), L'Ecuyer and Simard (2002) and Wichmann and Hill (2006) for example. When choosing a base generator, the period of the chosen generator should be borne in mind. A good rule of thumb is never to use more numbers than the square root of the period in any one experiment as the statistical properties are impaired. For closely related reasons, breaking numbers down into their bit patterns and using individual bits may also cause trouble. 
#### 3.1.4  Choice of Method for Generating Multiple Streams If the Wichmann–Hill II base generator is being used, and a period of ${2}^{90}$ is sufficient, then the method described in Section 2.7.1 can be used. If a different generator is used, or a longer period length is required then generating multiple streams by altering the initial values should be avoided. Using a different generator works well if less than 277 streams are required. Of the remaining two methods, both skip-ahead and leap-frogging use the sequence from a single generator, both guarantee that the different sequences will not overlap and both can be scaled to an arbitrary number of streams. Leap-frogging requires no a-priori knowledge about the number of variates being generated, whereas skip-ahead requires you to know (approximately) the maximum number of variates required from each stream. Skip-ahead requires no a-priori information on the number of streams required. In contrast leap-frogging requires you to know the maximum number of streams required, prior to generating the first value. Of these two, if possible, skip-ahead should be used in preference to leap-frogging. Both methods required additional computation compared with generating a single sequence, but for skip-ahead this computation occurs only at initialization. For leap-frogging additional computation is required both at initialization and during the generation of the variates. In addition, as mentioned in Section 2.7.4, using leap-frogging can, in some instances, change the statistical properties of the sequences being generated. #### 3.1.5  Copulas After calling any of the copula functions the g01t functions in Chapter g01 can be used to convert the uniform marginal distributors into a different form as required. ### 3.2  Quasi-random Numbers Prior to generating any quasi-random variates the generator being used must be initialized via nag_quasi_init (g05ylc) or nag_quasi_init_scrambled (g05ync). Of these, nag_quasi_init (g05ylc) can be used to initialize a standard Sobol, Faure or Niederreiter sequence and nag_quasi_init_scrambled (g05ync) can be used to initialize a scrambled Sobol or Niederreiter sequence. Due to the random nature of the scrambling, prior to calling the initialization function nag_quasi_init_scrambled (g05ync) one of the pseudorandom initialization functions, nag_rand_init_repeatable (g05kfc) or nag_rand_init_nonrepeatable (g05kgc), must be called. Once a quasi-random generator has been initialized, using either nag_quasi_init (g05ylc) or nag_quasi_init_scrambled (g05ync), one of three generation functions can be called to generate uniformly distributed sequences (nag_quasi_rand_uniform (g05ymc)), Normally distributed sequences (nag_quasi_rand_normal (g05yjc)) or sequences with a log-normal distribution (nag_quasi_rand_lognormal (g05ykc)). For example, for a repeatable sequence of scrambled quasi-random variates from the Normal distribution, nag_rand_init_repeatable (g05kfc) must be called first (to initialize a pseudorandom generator), followed by nag_quasi_init_scrambled (g05ync) (to initialize a scrambled quasi-random generator) and then nag_quasi_rand_normal (g05yjc) can be called to generate the sequence from the required distribution. Sequences from other distributions can be obtained by calling the ‘deviate’ functions supplied in Chapter g01 on the results from nag_quasi_rand_uniform (g05ymc). 
However, care should be taken when doing this as some of these ‘deviate’ functions are only accurate up to a limited number of significant figures which may effect the statistical properties of the resulting sequence of variates. ### 3.3  Programming Advice Take care when programming calls to those functions in this chapter which are functions. The reason is that different calls with the same arguments are intended to give different results. For example, if you wish to assign to z the difference between two successive random numbers generated by nag_rngs_basic (g05kac), beware of writing ```z = g05kac(igen,iseed) - g05kac(igen,iseed) ``` It is quite legitimate for a C compiler to compile zero, one or two calls to nag_rngs_basic (g05kac); if two calls, they may be in either order (if zero or one calls are compiled, z would be set to zero). A safe method to program this would be ```x = g05kac(igen,iseed); y = g05kac(igen,iseed); z = x-y ``` ## 4  Functionality Index Generating samples, matrices and tables, random correlation matrix nag_rand_corr_matrix (g05pyc) random orthogonal matrix nag_rand_orthog_matrix (g05pxc) random permutation of an integer vector nag_rand_permute (g05ncc) random sample from an integer vector unequal weights, without replacement nag_rand_sample_unequal (g05nec) unweighted, without replacement nag_rand_sample (g05ndc) random table nag_rand_2_way_table (g05pzc) Generation of time series, asymmetric GARCH Type II nag_rand_agarchII (g05pec) asymmetric GJR GARCH nag_rand_garchGJR (g05pfc) EGARCH nag_rand_egarch (g05pgc) exponential smoothing nag_rand_exp_smooth (g05pmc) type I AGARCH nag_rand_agarchI (g05pdc) univariate ARMA nag_rand_arma (g05phc) vector ARMA nag_rand_varma (g05pjc) Pseudorandom numbers, array of variates from multivariate distributions, Dirichlet distribution nag_rand_dirichlet (g05sec) multinomial distribution nag_rand_gen_multinomial (g05tgc) Normal distribution nag_rand_matrix_multi_students_t (g05ryc) Student's t distribution nag_rand_matrix_multi_normal (g05rzc) copulas Clayton/Cook–Johnson copula (bivariate) nag_rand_bivariate_copula_clayton (g05rec) Clayton/Cook–Johnson copula (multivariate) nag_rand_copula_clayton (g05rhc) Frank copula (bivariate) nag_rand_bivariate_copula_frank (g05rfc) Frank copula (multivariate) nag_rand_copula_frank (g05rjc) Gaussian copula nag_rand_copula_normal (g05rdc) Gumbel–Hougaard copula nag_rand_copula_gumbel (g05rkc) Plackett copula nag_rand_bivariate_copula_plackett (g05rgc) Student's t copula nag_rand_copula_students_t (g05rcc) initialize generator, multiple streams, leap-frog nag_rand_leap_frog (g05khc) skip-ahead nag_rand_skip_ahead (g05kjc) skip-ahead (power of 2) nag_rand_skip_ahead_power2 (g05kkc) nonrepeatable sequence nag_rand_init_nonrepeatable (g05kgc) repeatable sequence nag_rand_init_repeatable (g05kfc) vector of variates from discrete univariate distributions, binomial distribution nag_rand_binomial (g05tac) geometric distribution nag_rand_geom (g05tcc) hypergeometric distribution nag_rand_hypergeometric (g05tec) logarithmic distribution nag_rand_logarithmic (g05tfc) logical value Nag_TRUE or Nag_FALSE nag_rand_logical (g05tbc) negative binomial distribution nag_rand_neg_bin (g05thc) Poisson distribution nag_rand_poisson (g05tjc) uniform distribution nag_rand_discrete_uniform (g05tlc) user-supplied distribution nag_rand_gen_discrete (g05tdc) variate array from discrete distributions with array of parameters, Poisson distribution with varying mean nag_rand_compd_poisson (g05tkc) vectors of variates from 
continuous univariate distributions, beta distribution nag_rand_beta (g05sbc) Cauchy distribution nag_rand_cauchy (g05scc) exponential mix distribution nag_rand_exp_mix (g05sgc) F-distribution nag_rand_f (g05shc) gamma distribution nag_rand_gamma (g05sjc) logistic distribution nag_rand_logistic (g05slc) log-normal distribution nag_rand_lognormal (g05smc) negative exponential distribution nag_rand_exp (g05sfc) Normal distribution nag_rand_normal (g05skc) real number from the continuous uniform distribution nag_rand_basic (g05sac) Student's t-distribution nag_rand_students_t (g05snc) triangular distribution nag_rand_triangular (g05spc) uniform distribution nag_rand_uniform (g05sqc) von Mises distribution nag_rand_von_mises (g05src) Weibull distribution nag_rand_weibull (g05ssc) χ2 square distribution nag_rand_chi_sq (g05sdc) Quasi-random numbers, array of variates from univariate distributions, log-normal distribution nag_quasi_rand_lognormal (g05ykc) Normal distribution nag_quasi_rand_normal (g05yjc) uniform distribution nag_quasi_rand_uniform (g05ymc) initialize generator, scrambled Sobol or Niederreiter nag_quasi_init_scrambled (g05ync) Sobol, Niederreiter or Faure nag_quasi_init (g05ylc) ## 6  References Banks J (1998) Handbook on Simulation Wiley Boye E (Unpublished manuscript) Copulas for finance: a reading guide and some applications Financial Econometrics Research Centre, City University Business School, London Bratley P and Fox B L (1988) Algorithm 659: implementing Sobol's quasirandom sequence generator ACM Trans. Math. Software 14(1) 88–100 Faure H and Tezuka S (2000) Another random scrambling of digital (t,s)-sequences Monte Carlo and Quasi-Monte Carlo Methods Springer-Verlag, Berlin, Germany (eds K T Fang, F J Hickernell and H Niederreiter) Fox B L (1986) Algorithm 647: implementation and relative efficiency of quasirandom sequence generators ACM Trans. Math. Software 12(4) 362–376 Haramoto H, Matsumoto M, Nishimura T, Panneton F and L'Ecuyer P (2008) Efficient jump ahead for F2-linear random number generators INFORMS J. on Computing 20(3) 385–390 Hong H S and Hickernell F J (2003) Algorithm 823: implementing scrambled digital sequences ACM Trans. Math. Software 29:2 95–109 Joe S and Kuo F Y (2008) Constructing Sobol sequences with better two-dimensional projections SIAM J. Sci. Comput. 30 2635–2654 Knuth D E (1981) The Art of Computer Programming (Volume 2) (2nd Edition) Addison–Wesley L'Ecuyer P and Simard R (2002) TestU01: a software library in ANSI C for empirical testing of random number generators Departement d'Informatique et de Recherche Operationnelle, Universite de Montreal http://www.iro.umontreal.ca/~lecuyer Maclaren N M (1989) The generation of multiple independent sequences of pseudorandom numbers Appl. Statist. 38 351–359 Matsumoto M and Nishimura T (1998) Mersenne twister: a 623-dimensionally equidistributed uniform pseudorandom number generator ACM Transactions on Modelling and Computer Simulations Morgan B J T (1984) Elements of Simulation Chapman and Hall Nelsen R B (1998) An Introduction to Copulas. 
Lecture Notes in Statistics 139 Springer Owen A B (1995) Randomly permuted (t,m,s)-nets and (t,s)-sequences Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, Lecture Notes in Statistics 106 Springer-Verlag, New York, NY 299–317 (eds H Niederreiter and P J-S Shiue) Ripley B D (1987) Stochastic Simulation Wiley Sklar A (1973) Random variables: joint distribution functions and copulas Kybernetika 9 449–460 Wichmann B A and Hill I D (2006) Generating good pseudo-random numbers Computational Statistics and Data Analysis 51 1614–1622 Wikramaratna R S (1989) ACORN - a new method for generating sequences of uniformly distributed pseudo-random numbers Journal of Computational Physics 83 16–31 Wikramaratna R S (1992) Theoretical background for the ACORN random number generator Report AEA-APS-0244 AEA Technology, Winfrith, Dorset, UK Wikramaratna R S (2007) The additive congruential random number generator: a special case of a multiple recursive generator Journal of Computational and Applied Mathematics
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 273, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7968294620513916, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/33887/how-to-generate-nice-summary-table
# How to generate nice summary table?

I want to have R display the data it gives me from the `summary()` function in a table so I can easily share this. I am currently just doing `summary()` in the console and then taking a screenshot, but I would rather have this generated as a nice table just like all of my graphs are. Any ideas?

You may want to read this awesome post on tables: Some notes on making effective tables by CV contributor @AndyW. Much of it is general information about tables (albeit awesome general information), but there is some specific to making tables in R w/ $\LaTeX$ as well. – gung Aug 8 '12 at 3:52

## 1 Answer

If you have the R package `Hmisc` and a working $\LaTeX$ installation you can do:

```
x=rnorm(1000)
y=rnorm(1000)
lm1=lm(y~x)
slm1=summary(lm1)
latex(slm1)
```

It works the same with datasets:

```
latex(summary(cars))
```

An even better solution is to directly use `summary.formula`, e.g. `print(summary(~ x + y), prmsd=TRUE, digits=1)`. The `latex` command assumes the OP has a working $\TeX$ installation. It is often convenient to just output plain text, like this: `capture.output(print(summary(~ x + y), prmsd=TRUE, digits=1), file="out.txt")`. – chl♦ Aug 8 '12 at 1:01

Another approach to outputting plain text that works well is to use `?sink`. Eg, `sink(file=<name>, type="output"); <functions>; sink()`, then a text file will exist in the working directory with whatever would have been the output of the functions used. – gung Aug 8 '12 at 3:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9061439633369446, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/waves
# Tagged Questions Waves are disturbances that propagate throush space and time. Classically, they travelled through a medium, disturbing the particles but not changing their mean position. Electromagnetic waves/particle-waves need no medium; they are disturbances in their respective fields. 1answer 46 views ### Boundary conditions on wave equation I am having trouble understanding the boundary conditions. From the solutions, the first is that $D_1(0, t) = D_2(0, t)$ because the rope can't break at the junction. The second is that ... 1answer 41 views ### How large of a solar sail would be needed to travel to mars in under a year? I'm attempting to approach this using the identity $$F/A = I/c$$ I can solve for Area easily enough $$A = F(c/I)$$ and I know the distance $d$ is $$d=1/2(at^2)$$ But I'm having difficulty trying to ... 0answers 35 views ### Standing Waves in Flutes How do the waves produced in flutes have a wave characteristic while maintaining a velocity that allows them to travel to out ear? If it were simply a standing wave I'd imagine that they would ... 1answer 28 views ### Relation of color and frequency for the visible spectrum In this question the OP is looking for a way to see light that is outside of the visible spectrum without using electronic sensors. This got me wondering about the visible spectrum itself. Typically ... 1answer 53 views ### The second resonance of string? What is the relationship between "the second resonance " and string and the wavelength. Like in this question: if the length of the string is 2cm with second resonance, then what is wavelength? 1answer 75 views ### Why do sound waves travel at the same speed moleculewise? (Same medium) I don't understand what happens in reality (outside of wave theories). If I clap my hands I invest energy in the nearby air molecules, which move and transfer their energy to nearby molecules which ... 1answer 50 views ### Will changing amplitude change the frequency? Will changing the amplitude change the frequency of a wave, or is it possible for a specific frequency (50 Hz. for example) to generate from shifting amplitude patterns? 0answers 61 views ### Slinky reverb: the origin of the iconic Star Wars blaster sound This is a fun problem that I came across recently, which I'm posting here for your delectation. We all love a good slinky: they can be used for all sorts of fun demos in physics. One example is the ... 2answers 81 views ### Is there a way to create a flickering frequency to be dependent on speed of the person looking at it? Is there a way to make a screen or a road sign flash at different rates, depending on the velocity of the observer looking at it? I would like to achieve a state where two observers going at ... 0answers 13 views ### Minimum thickness of bubble ensuring max reflectance A soap bubble has index of refraction of 1.33. What minimum thickness of this bubble will ensure max reflectance of normally incident 530 nm light? Ans is 99.6, but how do I get that? I am ... 1answer 27 views ### Phasor representation of voltage in frequency domain In a text on application of electromagnetism in transmission line, there introduces a phasor for the voltage (in frequency domain) $$\tilde{V}(x) = V^+e^{-i\beta x} + V^-e^{i\beta x.}$$ Here $V^+$ ... 3answers 71 views ### How to determine the direction of medium's displacement vectors of a standing wave? Consider the following problem taken from a problem booklet. My questions are: What is displacement vector? 
- Lethality of sounds and extreme “loudness”
- EM Waves Energy Loss
- Implementing Explicit formulation of 1D wave equation in Matlab #FiniteElements #FiniteDifferences [migrated]
- Longitudinal EMAG wave?
- Standing Waves Energy transfer
- De Broglie wavelength, frequency and velocity - interpretation
- Double Slit Problem Involving Superposition of Wave Equation [closed]
- Why frequency and tension doesn't change in the two medium?
- In terms of the Doppler effect, what happens when the source is moving faster than the wave?
- Nodes and Antinodes for standing wave
- Could anyone help me to interpret the wave geometrically? [closed]
- Fourier Transform of ribbon's beam Electric Field
- Waveguides (in the ocean?)
- Power radiated by the sun at different locations
- Standing Waves: finding the number of antinodes
- Calculating phase difference of sound waves
- Can you “fold” EM or light waves? (i.e. a long wave that is reflected by a mirror in fragments - like in the game “Snake”)
- Phase shift of resonance
- The interference of waves and factors that affect cancellation?
- How does one find the wave velocity and the phase speed?
- Are there “gaps” in light, or will it hit everywhere?
- Eddy current losses in electric steel by harmonics of a magnetic field
- Validity of naively computing the de Broglie wavelength of a macroscopic object
- Can someone explain how water from a garden hose can propagate in a sine/cosine wave?
- Mechanical Waves
- Are two waves being in phase the same as saying that the two waves are coherent?
- How to determine the direction of a wave propagation?
- Can light waves cause beats?
- Wave propagation in an incompressible flow
- Acoustic wave equation for a closed sphere
- What is the difference between constant and changing magnetic and electric fields? How do they occur? How do they form an electromagnetic wave?
- Cylindrical wave
- Eikonal approximation for wave optics: why follow the unit vector parallel to the Poynting vector?
- What is light, and how can it travel in a vacuum forever in all directions at once without a medium?
- De Broglie equation
- What is the meaning of phase difference?
- References on wave solutions in continuum mechanics [closed]
- How to calculate the intensity of the interference of two waves in a given point?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9270045161247253, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2009/07/30/the-determinant-of-the-adjoint/?like=1&source=post_flair&_wpnonce=55e0e6033e
# The Unapologetic Mathematician

## The Determinant of the Adjoint

It will be useful to know what happens to the determinant of a transformation when we pass to its adjoint. Since the determinant doesn’t depend on any particular choice of basis, we can just pick one arbitrarily and do our computations on matrices. And as we saw yesterday, adjoints are rather simple in terms of matrices: over real inner product spaces we take the transpose of our matrix, while over complex inner product spaces we take the conjugate transpose. So it will be convenient for us to just think in terms of matrices over any field for the moment, and see what happens to the determinant of a matrix when we take its transpose.

Okay, so let’s take a linear transformation $T:V\rightarrow V$ and pick a basis $\left\{\lvert i\rangle\right\}_{i=1}^n$ to get the matrix

$\displaystyle t_i^j=\langle j\rvert T\lvert i\rangle$

and the formula for the determinant reads

$\displaystyle\det(T)=\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{k=1}^nt_k^{\pi(k)}=\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{k=1}^n\langle\pi(k)\rvert T\lvert k\rangle$

and the determinant of the adjoint is

$\displaystyle\det(T^*)=\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{k=1}^n\langle\pi(k)\rvert T^*\lvert k\rangle=\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{k=1}^n\langle k\rvert T\lvert\pi(k)\rangle$

where we’ve now taken the transpose. Now for the term corresponding to the permutation $\pi$ we can rearrange the multiplication. Instead of multiplying from ${1}$ to $n$, we multiply from $\pi^{-1}(1)$ to $\pi^{-1}(n)$. All we’re doing is rearranging factors, and our field multiplication is commutative, so this doesn’t change the result at all:

$\displaystyle\det(T^*)=\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{k=1}^n\langle\pi^{-1}(k)\rvert T\lvert\pi(\pi^{-1}(k))\rangle=\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{k=1}^n\langle\pi^{-1}(k)\rvert T\lvert k\rangle$

But as $\pi$ ranges over the symmetric group $S_n$, so does its inverse $\pi^{-1}$. So we relabel to find

$\displaystyle\det(T^*)=\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{k=1}^n\langle\pi(k)\rvert T\lvert k\rangle$

and we’re back to our formula for the determinant of $T$ itself! That is, when we take the transpose of a matrix we don’t change its determinant at all. And since the transpose of a real matrix corresponds to the adjoint of the transformation on a real inner product space, taking the adjoint doesn’t change the determinant of the transformation.

What about over complex inner product spaces, where adjoints correspond to conjugate transposes? Well, all we have to do is take the complex conjugate of each term in our calculation when we take the transpose of our matrix. Then carrying all these through to the end as we juggle indices around we’re left with

$\displaystyle\det(T^*)=\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{k=1}^n\overline{\langle\pi(k)\rvert T\lvert k\rangle}=\overline{\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{k=1}^n\langle\pi(k)\rvert T\lvert k\rangle}=\overline{\det(T)}$

The determinant of the adjoint is, in this case, the complex conjugate of the determinant.

Posted by John Armstrong | Algebra, Linear Algebra
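For readers who like to see such identities checked numerically, here is a minimal NumPy sketch; the random complex matrix simply stands in for the matrix of $T$ in some arbitrary basis:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random complex matrix, standing in for the matrix of T in some basis.
T = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# Transposing does not change the determinant ...
print(np.allclose(np.linalg.det(T.T), np.linalg.det(T)))                      # True

# ... while the conjugate transpose (the adjoint on a complex inner
# product space) conjugates it: det(T*) = conjugate(det T).
print(np.allclose(np.linalg.det(T.conj().T), np.linalg.det(T).conjugate()))   # True
```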
## 7 Comments

1. I’ve always liked the definition of determinants via taking alternating products - can we prove this fact using that? For instance, do we know that taking adjoints commutes with exterior products? (Comment, July 31, 2009)

2. I’m not sure offhand what you mean by “by taking alternating products”. You mean the way I defined it originally? It should be possible to prove that in a basis-free manner, but this is the quick and dirty method that I’m going to need before tomorrow’s post. (Comment, July 31, 2009)

3. Yes. I was thus curious if $\bigwedge^n T^* = (\bigwedge^n T)^*$ or something like that? The problem is that although we can get a canonical basis of $\bigwedge^n V$, given one of $V$, we don’t necessarily have a canonical bilinear form on $\bigwedge^n V$ (as far as I know, at least). Still, a functorial/basis-free proof would be nice. (Comment, July 31, 2009)

4. [...] of matrices over more general fields). So now that we’ve got information about how the determinant and the adjoint interact, we can see what happens when we restrict the determinant homomorphism to these subgroups [...] (Pingback, July 31, 2009)

5. I think it’s enough to know that the sign representation of $S_n$ is isomorphic to its dual representation. First, for any finite-dimensional space $V$ over a ground field $k$, we have a natural isomorphism $\Lambda^n V \cong V^{\otimes n} \otimes_{k S_n} sgn$ (which we could also write as $\Lambda^n V \cong sgn \otimes_{k S_n} V^{\otimes n}$ by taking advantage of the anti-automorphism on the group algebra $k S_n$). Generally speaking, if $G$ is a finite group and $U, W$ are $G$-reps with $W$ isomorphic to its contragredient dual representation, then there are canonical isomorphisms $U^* \otimes_{k G} W \cong \hom_{k G}(U, W) \cong \hom(W^* \otimes_{k G} U, k) \cong \hom(W \otimes_{k G} U, k) = (W \otimes_{k G} U)^*$ natural in $U$. Now take $G = S_n$, $W$ equal to the sign representation, and $U = V^{\otimes n}$ as an $S_n$-representation. This gives the second natural isomorphism in $(V^*)^{\otimes n} \otimes_{k S_n} sgn \cong (V^{\otimes n})^* \otimes_{k S_n} sgn \cong (sgn \otimes_{k S_n} V^{\otimes n})^*$ where the first isomorphism is easily seen to be natural in $V$. In other words, we have an isomorphism $\Lambda^n (V^*) \cong (\Lambda^n V)^*$ natural in $V$. This naturality is I believe the statement you are after (it applies to $V$ of any dimension, even though the determinant identity refers to the case where dim$(V) = n$). (Comment, August 1, 2009)

6. Yes, that’s the type of functoriality I was looking for. Thanks for the explanation. (Comment, August 1, 2009)

7. [...] we know that the determinant of the adjoint of a transformation is the complex conjugate of the determinant of the original transformation (or [...] (Pingback, August 3, 2009)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 43, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9042780995368958, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/21206/how-does-blochs-theorem-generalize-to-a-finite-sized-crystal
# How does Bloch's theorem generalize to a finite sized crystal?

I would be fine with a one dimensional lattice for the purpose of answering this question. I am trying to figure out what more general theorem (if any) gives Bloch's theorem as the number of unit cells goes to infinity.

- 1 Welcome to Physics.SE! For the benefit of users who read a question just because the title sounded interesting it is useful to link to a source for, in this case, Bloch's theorem. I'm going to add a Wikipedia link for you. – dmckee♦ Feb 19 '12 at 20:53

## 2 Answers

Bloch's theorem generalizes nicely to a finite size crystal if we take periodic boundary conditions (PBC). If we have PBC then a translation by one unit cell is still a symmetry of the system, and so Bloch's theorem will apply. The only difference will be that the quasimomentum $q$ will only be allowed to take certain discrete values, since the wavefunction must be periodic over the entire system: specifically $qL = 2\pi n$ for integer $n$, where $L$ is the length of the system. This is all exactly analogous to taking a free particle and putting it in a periodic box. Periodic boundary conditions are not physical of course (except for rings of atoms) but as with the particle in a box they give the essential structure. In this case the point is that the quasimomentum is still a good quantum number locally, but the energy spectrum is now discrete with energy spacing like $1/L$.

- 2 That is what I thought at first, but a colleague brought up the fact that periodic boundary conditions are not a good approximation near edges. So I was wondering whether there was a further generalization that naturally treated edges differently (or the states changed as you approach the edges). – Feynman Feb 20 '12 at 15:05

- To understand the detailed behavior near the edges you're going to have to solve the Schrödinger equation near the edges. No way around it that I know of, nor do I suspect there can be any. This is done all the time in the literature. You can still apply Bloch's theorem in the directions perpendicular to the surface of course. – BebopButUnsteady Feb 20 '12 at 15:37

- 1 Then at what point does Bloch's theorem stop applying? How many unit cells is "enough" for Bloch's theorem to apply? – Feynman Feb 20 '12 at 21:33

You may think of it this way: take a perfect infinite crystal, where Bloch's theorem works perfectly, and add a potential which makes the real crystal finite. The next question you may ask is how this potential is "seen" by the quasiparticles obtained from the infinite-crystal treatment. This procedure is perfectly self-consistent and is applicable in all cases. Also, using this approach you may get some idea of when Bloch's theorem stops working. However, there is no definite answer to this question: it depends on which quantity you are interested in. If you are interested in momentum conservation, the Fourier spectrum of this confining potential should be small at the wavevectors you consider. If you consider energy, the level spacing due to quantum confinement should be small compared to some characteristic energy, etc.

- I suppose I am mainly interested in what kind of quasimomentum would be allowed in such a system. Would it vary with position on the lattice? – Feynman Feb 23 '12 at 1:00

- What do you mean by "how allowed"? Following the procedure I outlined you may define this value in any case. The next question is whether it will have any meaning/be useful in calculations. The answer to this question depends on the system you consider.
– Misha Feb 23 '12 at 4:20 Ok then, I guess that's all there is to it. I checked the first answer only because it was more directed to my question (rather than the comment.) I appreciate your answer too and voted it up. – Feynman Feb 23 '12 at 11:16
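To make the discreteness of the allowed quasimomenta concrete, here is a small illustration of the periodic-boundary-condition picture from the accepted answer. The model is my own choice (a nearest-neighbour tight-binding ring of $N$ identical sites with hopping $t$, lattice constant $a$), not something from the thread; with PBC the Bloch eigenstates have $q = 2\pi n/(Na)$ and energies $E(q) = -2t\cos(qa)$, and diagonalizing the Hamiltonian reproduces exactly that discrete spectrum:

```python
import numpy as np

N, t, a = 8, 1.0, 1.0

# Hamiltonian of the ring: hopping between neighbouring sites; site N-1 wraps to 0.
H = np.zeros((N, N))
for j in range(N):
    H[j, (j + 1) % N] = -t
    H[(j + 1) % N, j] = -t

num_eigs = np.sort(np.linalg.eigvalsh(H))

# Bloch prediction for the allowed (discrete) quasimomenta q = 2*pi*n/(N*a).
q = 2 * np.pi * np.arange(N) / (N * a)
bloch_eigs = np.sort(-2 * t * np.cos(q * a))

print(np.allclose(num_eigs, bloch_eigs))   # True: discrete spectrum, level spacing ~ 1/N
```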
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9499922394752502, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-applied-math/51020-physics.html
# Thread: physics...

1. The diagram shows a small ball of mass $m = 0.280$ kg that is attached to a rotating vertical bar by means of two massless strings of equal length $L = 1.10$ m. The distance between the points where the strings are attached to the bar is $D = 1.56$ m. The rotational speed of the bar is such that both strings are taut and the ball moves in a horizontal circular path of radius $R$ at constant speed $v = 3.65$ m/s.

(a) Determine the magnitude and direction of the net force acting on the ball when it is in the position shown in the diagram.
(b) Determine the magnitude of the tension $T_1$ in the upper string and the magnitude of the tension $T_2$ in the lower string.
(c) Determine the speed of the ball at which the tension $T_2$ becomes zero.

I'm not sure how to go about solving this problem; I've never done anything like it. Help?

2. (a) The net force is the centripetal force: $F_{net} = F_c = \frac{mv^2}{R}$

(b) Two equations: as stated in part (a), $F_{net} = F_c$, with
$F_c = T_1 \sin{\theta} + T_2 \sin{\theta}$
Forces in the vertical direction are in equilibrium:
$T_1 \cos{\theta} = T_2 \cos{\theta} + mg$
Solve the system for $T_1$ and $T_2$.

(c) The system becomes a conical pendulum: $T_1 \sin{\theta}$ will be providing $F_c$; also, $T_1 \cos{\theta} = mg$.

3. I'm not quite sure what you mean for part (c)? The rest was really helpful, though.

4. (c) When $T_2 = 0$, only $T_1$ and the force of gravity ($mg$) act on the mass. The horizontal component of $T_1$ provides the centripetal force and the vertical component of $T_1$ counteracts the weight of the mass.
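If it helps to see the numbers, here is a small sketch that plugs the given values into the equations from post 2. It assumes $g = 9.81\ \mathrm{m/s^2}$ and measures $\theta$ from the vertical bar, so that $\cos\theta = (D/2)/L$; these conventions are mine, not stated in the thread:

```python
import numpy as np

# Numbers from the problem statement; g assumed to be 9.81 m/s^2.
m, L, D, v, g = 0.280, 1.10, 1.56, 3.65, 9.81

cos_t = (D / 2) / L                 # angle between each string and the vertical bar
sin_t = np.sqrt(1 - cos_t**2)
R = L * sin_t                       # radius of the circular path

Fc = m * v**2 / R                   # (a) net force = centripetal force, directed towards the bar

# (b) solve  T1*sin + T2*sin = Fc  and  T1*cos = T2*cos + m*g
T_sum = Fc / sin_t                  # T1 + T2
T_dif = m * g / cos_t               # T1 - T2
T1, T2 = (T_sum + T_dif) / 2, (T_sum - T_dif) / 2

# (c) with T2 = 0: T1*sin supplies Fc and T1*cos balances m*g,
#     so tan(theta) = v^2 / (R*g), i.e. v = sqrt(R*g*tan(theta)).
v_zero = np.sqrt(R * g * sin_t / cos_t)

print(f"Fc = {Fc:.2f} N, T1 = {T1:.2f} N, T2 = {T2:.2f} N, v(T2=0) = {v_zero:.2f} m/s")
```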
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.915423572063446, "perplexity_flag": "head"}
http://peadarcoyle.wordpress.com/
# Energy firms and climate change: Unburnable fuel | The Economist Posted on May 11, 2013 by Public policies and markets do not currently reflect the risks of a warmer world. Posted in Uncategorized | # On the Bangladeshi tragedy Posted on May 1, 2013 by Clearly the Bangladeshi people deserve better enforcement of their laws. And clearly this is an incredible human tragedy. But boycotting products from Bangladesh (as some on the left argue) would be horrendous for Bangladesh. Prosperity takes time, and Bangladesh is a very poor country. I provide the following link. http://ipeatunc.blogspot.pt/2013/04/understanding-bangladesh-tragedy-with.html This is fundamentally a problem about the news, rural poverty (about 125k children under 5 died last year from poverty in Bangladesh) does not make the news. Whereas a horrendous garment factory disaster does. This was fundamentally not a problem about multinationals but a problem about the practice of law. This is further evidence of why corporations are not always the enemy, in this case it was corrupt governments and the interests of local owners. http://stumblingandmumbling.typepad.com/stumbling_and_mumbling/2013/04/biased-news.html It poses a question, is the best thing to help poor regions to invest in companies that will be working or expanding there? As opposed to charity say? Posted in Uncategorized | # On Climate Change Posted on March 24, 2013 by ¨The right has a point Climate Change has taken on quasi-religious characteristics : self –denying life style changes , self-imposed thriftiness , asceticism and moral superiority¨ – Benjamin Harrison My friend Ben, pretty much nailed it I think. Posted in Uncategorized | # It is for your own good – in defence of paternalism Posted on March 12, 2013 by Classical liberals like myself, often feel that freedom is a virtue. While it is logically sound that we need good reasons for coercion, what about the cognitive-bias program which shows we are really bad at making decisions and dealing with probabilities? The famous research of Tversky and Kahneman, highlighted that when faced with complicated probabilities – to save on computational power – we use heuristics. One famous example is the ´availability heuristic´. So what does a classical liberal, and one who adored J.S. Mill while studying Political Philosophy, do when faced with the extensive empirical research presented by the social sciences. It is certainly true that some things are above introspection, and beyond our scopes of reason. The world – as many a physicist who has encountered the social sciences knows – is far too complex! We are frankly, not as competent as Mill would have us believe. This reminds me of a quote by Franzen: “It’s all circling around the same problem of personal liberties,” Walter said. “People came to this country for either money or freedom. If you don’t have money, you cling to your freedoms all the more angrily. Even if smoking kills you, even if you can’t afford to feed your kids, even if your kids are getting shot down by maniacs with assault rifles. You may be poor, but the one thing nobody can take away from you is the freedom to fuck up your life whatever way you want to.” ― Jonathan Franzen, Freedom We regularly see adults make horrendous financial decisions, the Irish Housing Bubble is testament to that, and many of us have lovers who smoke or consume too much sugar. How are we to align the notion of freedom, and allow people – in light of social science research – to make optimal choices? 
I cannot do better than Cass Sunstein, so I shall quote him and include the book review below.

Until now, we have lacked a serious philosophical discussion of whether and how recent behavioral findings undermine Mill's harm principle and thus open the way toward paternalism. Sarah Conly's illuminating book Against Autonomy provides such a discussion. Her starting point is that in light of the recent findings, we should be able to agree that Mill was quite wrong about the competence of human beings as choosers. "We are too fat, we are too much in debt, and we save too little for the future." With that claim in mind, Conly insists that coercion should not be ruled out of bounds. She wants to go far beyond nudges. In her view, the appropriate government response to human errors depends not on high-level abstractions about the value of choice, but on pragmatic judgments about the costs and benefits of paternalistic interventions. Even when there is only harm to self, she thinks that government may and indeed must act paternalistically so long as the benefits justify the costs.

http://www.nybooks.com/articles/archives/2013/mar/07/its-your-own-good/?pagination=false

The bottom line is: freedom isn't always what it is cracked up to be, and Mayor Bloomberg was right with his proposed soda ban to try to protect New Yorkers from themselves.

Posted in Uncategorized

# On Aaron Swartz

Posted on January 13, 2013

I unfortunately have exams to study for at the moment. But the sad news of the tragic departure of Aaron Swartz, internet activist, defender of civil liberties and coder, compelled me to write. This is horrendously sad; I had an intense admiration for Aaron Swartz, especially his recent work on elites. He was brilliant, erudite and a leading light of the internet age. And what is incredibly important is that people realize that we are all part of the internet age. The web and technology are politically and socially very important. The world is a poorer place without Aaron, and he certainly made the world a better place, which is admirable and commendable for only 26 years. I'm sure many things will not be done without his presence, and he is a heroic reminder to us all. Now I'm going to get back to work on my own intellectual pursuits. Thank you, Aaron, for everything. Your sad departure is evidence that we need to take mental illness more seriously, but we also need to take the tech community more seriously, and the over-zealous persecution by the DOJ in the US over Aaron's hacking of "Academic Documents" was wrong. Aaron, RIP; you shall be missed.

Posted in Uncategorized

# Markov Chains and Monte Carlo Algorithms

Posted on August 15, 2012

## 1. General state space Markov chains

Most applications of Markov Chain Monte Carlo algorithms (MCMC) are concerned with continuous random variables, i.e. the corresponding Markov chain has a continuous state space S. In this section we will give a brief overview of the theory underlying Markov chains with general state spaces. Although the basic principles are not much different from the discrete case, the study of general state space Markov chains involves many more technicalities and subtleties. Though this section is concerned with general state spaces, we will notationally assume that the state space is $S = \mathbb{R}^{d}$. Recall that a Markov chain is, informally, a stochastic process in which, conditionally on the present, the past and the future are independent.
In the discrete case we formalised this idea using the conditional probability of $\{X_t = j\}$ given different collections of past events. In a general state space it can be that all events of the type $\{X_t = j\}$ have probability 0, as is the case for a process with a continuous state space. A process with a continuous state space spreads the probability so thinly that the probability of hitting one given state is 0 for all states. Thus we have to work with conditional probabilities of sets of states, rather than individual states.

**Definition 1 (Markov chain).** Let X be a stochastic process in discrete time with general state space S. X is called a Markov chain if X satisfies the Markov property

$$\mathbb{P}(X_{t+1} \in A|X_0 = x_0, \cdots , X_t = x_t) = \mathbb{P}(X_{t+1} \in A| X_t = x_t) \qquad (1)$$

for all measurable sets $A \subset S$.

In the following we will assume that the Markov chain is homogeneous, i.e. the probabilities $\mathbb{P}(X_{t+1} \in A| X_t = x_t)$ are independent of t. For the remainder of this section we shall also assume that we can express the probability from Definition 1 using a transition kernel $K: S \times S \rightarrow \mathbb{R}^{+}_{0}$:

$$\mathbb{P}(X_{t+1} \in A| X_t = x_t) = \int_A K(x_t,x_{t+1})\,dx_{t+1} \qquad (2)$$

where the integration is with respect to a suitable dominating measure, for example the Lebesgue measure if $S = \mathbb{R}^d$. The transition kernel K(x,y) is thus just the conditional probability density of $X_{t+1}$ given $X_t = x_t$. For a discrete state space we obtain the following special case of the definition of a transition kernel.

**Definition 2.** The matrix $\mathbf{K} = (k_{ij})_{ij}$ with $k_{ij} = \mathbb{P}(X_{t+1} = j|X_{t} = i)$ is called the transition kernel (or transition matrix) of the homogeneous Markov chain X.

We will see that, together with the initial distribution, which we might write as a vector $\mathbf{\lambda_{0}} = (\mathbb{P}(X_{0} = i))_{i \in S}$, the transition kernel $\mathbf{K}$ fully specifies the distribution of a homogeneous Markov chain. However, we start by stating two basic properties of the transition kernel K:

- The entries of the transition kernel are non-negative (they are probabilities).
- Each row of the kernel sums to 1, as

$$\sum_{j} k_{ij} = \sum_{j} \mathbb{P}(X_{t+1} = j|X_{t} = i) = \mathbb{P}(X_{t+1} \in S|X_{t} = i) = 1 \qquad (3)$$

We recover the general definition by setting $K(i,j) = k_{ij}$, where $k_{ij}$ is the (i,j)-th element of the transition matrix $\mathbf{K}$. For a discrete state space the dominating measure is the counting measure, so integration just corresponds to summation, i.e. equation (2) is equivalent to

$$\mathbb{P}(X_{t+1} \in A|X_{t} = x_t) = \sum_{x_{t+1} \in A} k_{x_{t},x_{t+1}}$$

We have for any measurable set $A \subset S$ that

$$\mathbb{P} (X_{t+m} \in A|X_t = x_t) = \int_A \int_S \cdots \int_S K(x_t,x_{t+1})K(x_{t+1},x_{t+2}) \cdots K(x_{t+m-1},x_{t+m})\,dx_{t+1} \cdots dx_{t+m-1}dx_{t+m},$$

thus the m-step transition kernel is

$$K^{(m)}(x_0,x_m) = \int_S \cdots \int_S K(x_0,x_1)\cdots K(x_{m-1},x_{m})\,dx_{m-1}\cdots dx_1$$

The m-step transition kernel allows for expressing the m-step transition probabilities more conveniently:

$$\mathbb{P} (X_{t+m} \in A|X_t = x_t) = \int_A K^{(m)}(x_t,x_{t+m})\,dx_{t+m}$$

Let us consider an example.
**Example 1 (Gaussian random walk on $\mathbb{R}$).** Consider the random walk on $\mathbb{R}$ defined by $X_{t+1} = X_{t} + E_{t}$, where $E_{t}\sim N(0,1)$, i.e. the probability density function of $E_t$ is $\phi(z) = \frac{1}{\sqrt{2\pi}}\exp(-\frac{z^{2}}{2})$. This is equivalent to assuming that $X_{t+1}|X_t = x_t \sim N(x_t, 1)$. We also assume that $E_t$ is independent of $X_0,E_1,\cdots,E_{t-1}$. Suppose that $X_0 \sim N(0,1)$. In contrast to the random walk on $\mathbb{Z}$, the state space of the Gaussian random walk is $\mathbb{R}$. We have that

$$\mathbb{P}(X_{t+1} \in A|X_t = x_t,\cdots, X_0 = x_0) = \mathbb{P}(E_t \in A - x_t| X_t =x_t, \cdots , X_0 = x_0) = \mathbb{P}(E_t \in A - x_t) = \mathbb{P}(X_{t+1} \in A| X_t = x_t),$$

where $A - x_t = \{a - x_t : a \in A\}$. Thus X is indeed a Markov chain. Furthermore we have that

$$\mathbb{P}(X_{t+1} \in A| X_t = x_t) = \mathbb{P}(E_t \in A - x_t) = \int_A \phi(x_{t+1} - x_{t})\,dx_{t+1}$$

Thus the transition kernel (which is nothing other than the conditional density of $X_{t+1}| X_t = x_t$) is

$$K(x_t,x_{t+1}) = \phi(x_{t+1} -x_{t})$$

To find the m-step transition kernel we could use equation (2). However, the resulting integral is difficult to compute. Rather we exploit the fact that

$$X_{t+m} = X_{t} + \underbrace{E_{t} + \cdots + E_{t+m-1}}_{\sim\, N(0,m)},$$

thus we can write $X_{t+m}|X_{t} = x_t \sim N(x_{t},m)$, and so

$$\mathbb{P}(X_{t+m} \in A|X_t = x_t) = \mathbb{P}(X_{t+m} - X_{t}\in A - x_{t}) = \int_A \frac{1}{\sqrt{m}}\phi \left(\frac{x_{t+m} -x_{t}}{\sqrt{m}} \right)dx_{t+m}$$

Comparing this with (2) we can identify

$$K^{(m)}(x_t, x_{t+m}) = \frac{1}{\sqrt{m}}\phi \left(\frac{x_{t+m} -x_{t}}{\sqrt{m}}\right)$$

as the m-step transition kernel. ∎
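As a quick numerical illustration of Example 1 (a minimal simulation sketch, using NumPy; the starting point, number of steps and number of paths are arbitrary choices), one can simulate many copies of the walk started at the same point and check that after $m$ steps the empirical mean and variance match the $N(x_0, m)$ law given by the $m$-step kernel:

```python
import numpy as np

rng = np.random.default_rng(1)

# Many independent copies of the Gaussian random walk, run for m steps from x0.
# Example 1 says X_{t+m} | X_t = x0  ~  N(x0, m).
x0, m, n_paths = 2.0, 5, 100_000
steps = rng.normal(size=(n_paths, m))    # the innovations E_t, ..., E_{t+m-1}
X_m = x0 + steps.sum(axis=1)

print(X_m.mean())   # approximately x0
print(X_m.var())    # approximately m, as predicted by the m-step kernel K^(m)
```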
We need the powerful probabilistic notion of irreducibility.

**Definition 3 (Irreducibility).** Given a distribution $\mu$ on the states S, a Markov chain is said to be $\mu$-irreducible if for all sets A with $\mu(A)> 0$ and for all $x \in S$, there exists an $m \in \mathbb{N}_{0}$ such that

$$\mathbb{P}(X_{t+m} \in A| X_t = x) = \int_A K^{(m)} (x,y)\,dy > 0$$

If $m = 1$ works for all such A (and all x), then the chain is said to be strongly $\mu$-irreducible.

**Example 2.** In the example above we had that $X_{t+1} | X_t = x_t \sim N(x_{t},1)$. As the range of the Gaussian distribution is $\mathbb{R}$, we have that $\mathbb{P}(X_{t+1} \in A | X_t =x_t) > 0$ for all sets A of non-zero Lebesgue measure. Thus the chain is strongly irreducible with respect to any continuous distribution. ∎

We can extend the concepts of periodicity, recurrence, and transience from the discrete case to the general case. However, this requires additional technical concepts like atoms and small sets; see Robert and Casella (2004) for a rigorous treatment of these concepts. Let us first recall what recurrence means for a discrete Markov chain.

**Definition 4.** A discrete Markov chain is recurrent if all states (on average) are visited infinitely often.

For more general state spaces, we need to consider the number of visits to a set of states rather than single states. Let $V_A = \sum^{+\infty}_{t=0}\mathbb{1}_{\{X_t \in A\}}$ be the number of visits the chain makes to states in the set $A \subset S$. We then define the expected number of visits in $A \subset S$, when we start the chain in $x \in S$:

$$\mathbb{E}(V_{A}|X_0 = x) = \mathbb{E}\left(\sum^{+\infty}_{t=0} \mathbb{1}_{\{X_t \in A\}}\,\Big|\,X_0 = x\right) = \sum_{t=0}^{+\infty} \int_A K^{(t)}(x,y)\,dy$$

This allows us to define recurrence for general state spaces. We start with defining recurrence of sets before extending the definition to recurrence of an entire Markov chain.

**Definition 5.**
(a) A set $A \subset S$ is said to be recurrent for a Markov chain X if for all $x \in A$

$$\mathbb{E}(V_A|X_0 = x) = +\infty.$$

(b) A Markov chain is said to be recurrent if
- The chain is $\mu$-irreducible for some distribution $\mu$.
- Every measurable set $A \subset S$ with $\mu(A) > 0$ is recurrent.

According to the definition a set is recurrent if on average it is visited infinitely often. This is already the case if there is a non-zero probability of visiting the set infinitely often. A stronger concept of recurrence can be obtained if we require that the set is visited infinitely often with probability 1. This type of recurrence is referred to as Harris recurrence.

**Definition 6 (Harris recurrence).**
(a) A set $A \subset S$ is said to be Harris-recurrent for a Markov chain X if for all $x \in A$

$$\mathbb{P} \left(V_{A} = +\infty | X_0 = x \right) = 1.$$

(b) A Markov chain is said to be Harris-recurrent if
- The chain is $\mu$-irreducible for some distribution $\mu$.
- Every measurable set $A \subset S$ with $\mu(A) > 0$ is Harris-recurrent.

It is easy to see that Harris recurrence implies recurrence. For discrete state spaces the two concepts are equivalent.

Checking recurrence or Harris recurrence can be very difficult. We will state (without proof) a proposition which establishes that if a Markov chain is irreducible and has a unique invariant distribution, then the chain is also recurrent. However, before we can state this proposition, we need to define invariant distributions for general state spaces.

**Definition 7 (Invariant distribution).** A distribution $\mu$ with density function $f_{\mu}$ is said to be the invariant distribution of a Markov chain X with transition kernel K if

$$f_{\mu} (y) = \int_{S} f_{\mu} (x) K(x,y)\,dx$$

for almost all $y \in S$.

**Proposition 8.** Suppose that X is a $\mu$-irreducible Markov chain having $\mu$ as its unique invariant distribution. Then X is also recurrent.

Checking the invariance condition of Definition 7 requires computing an integral, which can be cumbersome, so an alternative condition is the simpler (sufficient but not necessary) condition of detailed balance.

**Definition 9 (Detailed balance).** A transition kernel K is said to be in detailed balance with a distribution $\mu$ with density $f_{\mu}$ if for almost all $x,y \in S$

$$f_{\mu}(x)K(x,y) = f_{\mu}(y)K(y,x).$$

In complete analogy with the discrete case, one can also show in the general case that if the transition kernel of a Markov chain is in detailed balance with a distribution $\mu$, then the chain is time-reversible and has $\mu$ as its invariant distribution.
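For a concrete spot-check of Definition 9, consider the Gaussian autoregressive kernel $K(x,y) = N(y;\,\rho x,\,1-\rho^2)$, which is in detailed balance with the $N(0,1)$ density. The kernel is my choice of example rather than one from the post, and the sketch below (using SciPy) only verifies the identity at a few points rather than proving it:

```python
import numpy as np
from scipy.stats import norm

# Autoregressive kernel K(x, y) = N(y; rho*x, 1 - rho^2); its invariant
# distribution is N(0, 1), and the kernel is in detailed balance with it.
rho = 0.7
f = norm(0, 1).pdf                                          # invariant density
K = lambda x, y: norm(rho * x, np.sqrt(1 - rho**2)).pdf(y)  # transition density

for x, y in [(-1.3, 0.4), (0.0, 2.1), (1.7, -0.5)]:
    lhs, rhs = f(x) * K(x, y), f(y) * K(y, x)
    print(np.isclose(lhs, rhs))   # True for every pair: f(x)K(x,y) = f(y)K(y,x)
```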
### 1.1 Ergodic theorems

In this section we will study the question of whether we can use observations from a Markov chain to make inferences about its invariant distribution. We will see that under some regularity conditions it is even enough to follow a single sample path of the Markov chain. For independently identically distributed data the Law of Large Numbers is used to justify estimating the expected value of a functional using empirical averages. A similar result can be obtained for Markov chains. This result is the reason why MCMC methods work: it allows us to set up simulation algorithms to generate a Markov chain, whose sample path we can then use for estimating various quantities of interest.

**Theorem 10 (Ergodic Theorem).** Let X be a $\mu$-irreducible, recurrent $\mathbb{R}^{d}$-valued Markov chain with invariant distribution $\mu$. Then we have for any integrable function $g: \mathbb{R}^{d} \rightarrow \mathbb{R}$ that with probability 1

$$\lim_{t \rightarrow \infty} \frac{1}{t} \sum^{t}_{i=1} g(X_{i}) = \mathbb{E}_{\mu}(g(X)) = \int_S g(x)f_{\mu}(x)\,dx$$

for almost every starting value $X_0 = x$. If X is Harris-recurrent this holds for every starting value x.

*Proof.* See Roberts and Rosenthal (2004), fact 5. $\Box$

We conclude with an example that illustrates that the conditions of irreducibility and recurrence are necessary in Theorem 10. These conditions ensure that the chain is permanently exploring the entire state space, which is a necessary condition for the convergence of ergodic averages.

**Example 3.** Consider a discrete chain with two states $S = \left\lbrace 1,2 \right\rbrace$ and transition matrix $\mathbf{K} = \mathbf{I} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$. Any distribution $\mu$ on $\{1,2\}$ is an invariant distribution, as

$$\mathbf{\mu' K = \mu' I = \mu'}$$

for all $\mu$. However the chain is not irreducible (or recurrent): we cannot get from state 1 to state 2 and vice versa. If the initial distribution is $\mu = (\alpha, 1 - \alpha)'$ with $\alpha \in [0,1]$ then for every $t \in \mathbb{N}_{0}$ we have that

$$\mathbb{P}(X_t = 1) = \alpha \qquad\qquad \mathbb{P}(X_t = 2) = 1 - \alpha$$

By observing one sample path (which is either 1,1,1,… or 2,2,2,…) we can make no inference about the distribution of $X_t$ or the parameter $\alpha$. The reason for this is that the chain fails to explore the whole state space: it fails to switch between the states 1 and 2. In order to estimate the parameter $\alpha$ we would need to look at more than one sample path. ∎
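A small simulation sketch of this contrast (again only an illustration, with my own parameter choices): for an ergodic chain the time average along a single path approaches the stationary expectation, while for the reducible chain of Example 3 a single path reveals nothing about $\alpha$. The ergodic chain below is the autoregressive chain used above as a detailed-balance example:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200_000

# Ergodic chain: the autoregressive chain with invariant distribution N(0, 1).
rho = 0.7
x = np.empty(T)
x[0] = 5.0                                  # deliberately far from stationarity
for t in range(T - 1):
    x[t + 1] = rho * x[t] + np.sqrt(1 - rho**2) * rng.normal()
print(np.mean(x**2))   # close to 1 = E_mu[X^2], as the ergodic theorem predicts

# Example 3: transition matrix K = I. One sample path never leaves its start,
# so the time average tells us nothing about alpha.
start = 1
path = np.full(T, start)
print(path.mean())     # equals the starting state, whatever alpha was
```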
## 2. Monte Carlo Methods

### 2.1 What are Monte Carlo Methods?

This collection of lectures is concerned with Monte Carlo methods, which are sometimes referred to as stochastic simulation. Examples of Monte Carlo methods include stochastic integration, where we use a simulation-based method to evaluate an integral; Monte Carlo tests, where we resort to simulation in order to compute the p-value; and Markov Chain Monte Carlo (MCMC), where we construct a Markov chain which (hopefully) converges to the distribution of interest. A formal definition of Monte Carlo methods was given (amongst others) by Halton (1970) (Halton, J.H., "A retrospective and prospective survey of the Monte Carlo method", SIAM Review, 12, 1–63). He defined a Monte Carlo method as "representing the solution of a problem as a parameter of a hypothetical population, and using a random sequence of numbers to construct a sample of the population, from which statistical estimates of the parameter can be obtained."

### 2.2 Introductory examples

**Example 4 (A raindrop experiment for computing $\pi$).** Assume we want to compute a Monte Carlo estimate of $\pi$ using a simple experiment. Assume that we could produce "uniform rain" on the square $[-1,1] \times [-1,1]$, such that the probability of a raindrop falling into a region $\mathcal{R} \subset [-1,1]^{2}$ is proportional to the area of $\mathcal{R}$, but independent of the position of $\mathcal{R}$. It is easy to see that this is the case iff the two coordinates X, Y are i.i.d. realisations of the uniform distribution on the interval $[-1,1]$ (in short $X,Y \sim U[-1,1]$ i.i.d.). Now consider the probability that a raindrop falls into the unit circle. It is

$$\mathbb{P}(\text{drop within the circle}) = \frac{\text{area of the unit circle}}{\text{area of the square}} = \frac{\int\int_{\{x^2 + y^2 \leq 1\}} 1\, dx\,dy}{\int\int_{\{-1 \leq x,y \leq 1\}} 1\, dx\,dy} = \frac{\pi}{2\cdot 2} = \frac{\pi}{4}$$

In other words,

$$\pi = 4\cdot\mathbb{P}(\text{drop within circle}),$$

i.e. we found a way of expressing the desired quantity $\pi$ as a function of a probability. We can estimate the probability using our raindrop experiment. If we observe n raindrops, then the number of raindrops Z that fall inside the circle is a binomial random variable:

$$Z \sim B(n,p) \qquad \text{with } p = \mathbb{P}(\text{drop within circle}).$$

Thus we can estimate p by its maximum-likelihood estimate

$$\hat{p} = \frac{Z}{n}$$

and we can estimate $\pi$ by

$$\hat{\pi} = 4\hat{p} = 4\cdot\frac{Z}{n}.$$

Assume we have observed that 77 of the 100 raindrops were inside the circle. In this case our estimate of $\pi$ is

$$\hat{\pi} = \frac{4 \cdot 77}{100} = 3.08,$$

which is relatively poor. However the law of large numbers guarantees that our estimate $\hat{\pi}$ converges almost surely to $\pi$. As n increases, our estimate improves. We can assess the quality of our estimate by computing a confidence interval for $\pi$. As we have $Z \sim B(100,p)$ and $\hat{p} = \frac{Z}{n}$, we use the approximation that $Z \sim N(100p,\,100p(1-p))$. Hence $\hat{p} \sim N(p,\,p(1-p)/100)$, and we can obtain a 95% confidence interval for p using this normal approximation:

$$\left[ 0.77 - 1.96\cdot \sqrt{\frac{0.77\cdot(1-0.77)}{100}},\ 0.77 + 1.96\cdot \sqrt{\frac{0.77\cdot(1-0.77)}{100}}\right] = [0.6875,\ 0.8525].$$

As our estimate of $\pi$ is four times the estimate of $p$, we now also have a confidence interval for $\pi$:

$$[2.750,\ 3.410]$$

Historically, the main drawback of Monte Carlo methods was that they used to be expensive to carry out. Physically random experiments (for example an experiment to examine "Buffon's needle") were difficult to perform, and so was the numerical processing of their results. This changed fundamentally with the advent of the digital computer. Amongst the first to realize this potential were John von Neumann and Stanislaw Ulam. For any Monte Carlo simulation we need to be able to reproduce randomness by a deterministic computer algorithm. Clearly this is a philosophical paradox, but lots of work has been done on this, and the statistical language R has a lot of random number generators; see `?RNGkind` in GNU R for further details.
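A deterministic pseudorandom number generator makes the raindrop experiment easy to reproduce in software. Here is a minimal sketch (using NumPy rather than R, purely for illustration; the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# "Uniform rain" on [-1, 1] x [-1, 1]; count the drops inside the unit circle.
x = rng.uniform(-1, 1, size=n)
y = rng.uniform(-1, 1, size=n)
z = np.count_nonzero(x**2 + y**2 <= 1)

p_hat = z / n
pi_hat = 4 * p_hat
se = 4 * np.sqrt(p_hat * (1 - p_hat) / n)      # normal approximation, as in the text

print(f"pi_hat = {pi_hat:.4f}, rough 95% CI "
      f"[{pi_hat - 1.96*se:.4f}, {pi_hat + 1.96*se:.4f}]")
```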
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 142, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9194304347038269, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/137628/evaluating-int-infty-inftyx-exp-left-b2-leftx-c-right2-right
# Evaluating $\int_{-\infty}^{\infty}x\exp\left(-b^{2}\left(x-c\right)^{2}\right)\mathrm{erf}^{2}\left(a\left(x-d\right)\right)\,\mathrm{d}x$ I have big difficulties solving the following integral: $$\int_{-\infty}^{\infty}x\exp\left(-b^{2}\left(x-c\right)^{2}\right)\mathrm{erf}^{2}\left(a\left(x-d\right)\right)\,\mathrm{d}x$$ I tried to use integration by parts, and also tried to apply the technique called “differentiation under the integration sign” but with no results. I’m not very good at calculus so my question is if anyone could give me any hint of how to approach this integral. I would be ultimately thankful. If it could help at all, I know that $$\int_{-\infty}^{\infty}x\exp\left(-b^{2}\left(x-c\right)^{2}\right)\mathrm{erf}\left(a\left(x-d\right)\right)\,\mathrm{d}x=\frac{a}{b^{2}\sqrt{a^{2}+b^{2}}}\exp\left(-\frac{a^{2}b^{2}\left(c-d\right)^{2}}{a^{2}+b^{2}}\right)+\frac{\sqrt{\pi}c}{b}\mathrm{erf}\left(\frac{ab\left(c-d\right)}{\sqrt{a^{2}+b^{2}}}\right),$$ for $b>0$. - Do you mean erf((a(x-d))^2), or [erf(a(x-d))]^2 (in the top expression)? – Adam Rubinson Apr 27 '12 at 12:06 @AdamRubinson I meant $[erf(a(x-d))]^2$. I edited the original post to avoid confusion. – petru Apr 27 '12 at 13:25 In that case, I am guessing that they gave you the integral at the bottom of your post as a hint. Note that, assuming your hint at the bottom is correct, that integral is just a constant (even though it looks incredibly messy). You can use IBP where one of your functions is the integrand of the integral at the bottom of your post, and the other function is erf(a(x-d)). The "hint" at the bottom of your post tells you how to integrate the longer one, and all you need to do now is to differentiate erf(a(x-d)) (which you can do) – Adam Rubinson Apr 27 '12 at 16:39 @AdamRubinson Well, after the initial excitement I realized that it’s not that simple. I cannot use the bottom integral because what I need is the antiderivative of $x\exp\left(-b^{2}\left(x-c\right)^{2}\right)\mathrm{erf}\left(a\left(x-d\right)‌​\right)$, and not the value of the integral. – petru Apr 27 '12 at 17:33 yes you are correct. My method fails. This is not an easy problem, and I don't know the answer. – Adam Rubinson Apr 27 '12 at 18:09 ## 3 Answers A suggestion instead of a complete answer: taking $c=0$ for ease of typing and integrating by parts $$\begin{align*} \int_{-\infty}^\infty xe^{-b^2x^2}\text{erf}^2(a(x-d))\ \mathrm dx &= -\text{erf}^2(a(x-d))\left.\frac{e^{-b^2x^2}}{2b^2}\right|_{-\infty}^\infty\\ &\qquad\qquad +\int_{-\infty}^\infty2\text{erf}(a(x-d))\frac{2}{\sqrt{\pi}} e^{-a^2(x-d)^2}\frac{e^{-b^2x^2}}{2b^2}\ \mathrm dx\\ &=\frac{2}{b^2\sqrt{\pi}}\int_{-\infty}^\infty\text{erf}(a(x-d)) e^{-a^2(x-d)^2-b^2x^2}\ \mathrm dx\\ &=\frac{2}{b^2\sqrt{\pi}}\int_{-\infty}^\infty\text{erf}(a(x-d)) e^{-(a^2+b^2)x^2 + 2a^2dx-a^2x^2}\ \mathrm dx \end{align*}$$ to which, after completing the square in the exponent, we can apply the OP's given integral formula $$\int_{-\infty}^{\infty}\exp\left(-b^{2}\left(x-c\right)^{2}\right)\mathrm{erf}\left(a\left(x-d\right)\right)\,\mathrm{d}x= {\frac{\sqrt\pi}{b}}\mathrm{erf}\left(\frac{ab\left(c-d\right)}{\sqrt{a^{2}+b^{2}}}\right)\,.$$ - Thanks Dilip. I did calculations you suggested but unfortunately I can’t see how this result is supposed to help me. With $c\neq 0$ I still have integrand $\exp\left(-a^2 (-d + x)^2\right) \mathrm{erf}\left(b (x-c)\right) \mathrm{erf}\left(a (x-d)\right)$ and I don’t know how to proceed with it. 
– petru May 3 '12 at 22:05 What I managed to achieve now is a better approximation of that complicated integral, but even though I tried from various angles, I can’t find the closed form. So basically I’m stuck with integral $$\int_{-\infty}^{\infty}\exp\left(-a^{2}\left(x-d\right)^{2}\right)\mathrm{erf}\left(b\left(x-c\right)\right)\mathrm{erf}\left(a\left(x-d\right)\right)\,\mathrm{d}x, \tag{1}$$ which after integrating by parts leaves me with $$\int_{-\infty}^{\infty}\exp\left(-a^{2}(x-d)^{2}\right)\mathrm{erf^{2}}\left(b\left(x-c\right)\right)\,\mathrm{d}x\,. \tag{2}$$ I’m writing this post in hope that somebody knows a trick or technique to deal with integral $(1)$ or $(2)$. Or maybe none of these integrals are “doable”? Any suggestions will be deeply appreciated. Notes In the original post I mentioned one helpful integral. Another integral that is useful here is $$\int_{-\infty}^{\infty}\exp\left(-b^{2}\left(x-c\right)^{2}\right)\mathrm{erf}\left(a\left(x-d\right)\right)\,\mathrm{d}x= {\frac{\sqrt\pi}{b}}\mathrm{erf}\left(\frac{ab\left(c-d\right)}{\sqrt{a^{2}+b^{2}}}\right)\,.$$ - thanks for all your comments and suggestions. I still haven’t found the closed form of the integral, or maby it cannot be done… What I have for this moment is a (very?) good approximation. I haven’t tested it carefully but it seems to be good enough for my purposes. $$\int_{-\infty}^{\infty}x\exp\left(-b^{2}\left(x-c\right)^{2}\right)\mathrm{erf}^{2}\left(a\left(x-d\right)\right)\,\mathrm{d}x\approx$$ $$\frac{c\sqrt{\pi}}{b}+\frac{2a}{b^{2}\sqrt{a^{2}+b^{2}}}\exp\left(-\frac{a^{2}b^{2}(c-d)^{2}}{a^{2}+b^{2}}\right)\mathrm{erf}\left(\frac{ab^{2}(c-d)}{\sqrt{a^{2}+b^{2}}\sqrt{2a^{2}+b^{2}}}\right)-\frac{c}{\sqrt{\frac{b^{2}}{\pi}+\frac{a^{2}\pi}{8}}}\exp\left(-\frac{a^{2}b^{2}(c-d)^{2}\pi^{2}}{8b^{2}+a^{2}\pi^{2}}\right)$$ for $a,\,b>0$. -
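Lacking a closed form, the proposed approximation can at least be compared against direct numerical quadrature for a few parameter values. The sketch below does this with SciPy; the parameter choices are arbitrary and carry no special meaning, and it only prints the two values side by side rather than asserting agreement:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def lhs(a, b, c, d):
    """Direct quadrature of  int x exp(-b^2 (x-c)^2) erf(a(x-d))^2 dx."""
    f = lambda x: x * np.exp(-b**2 * (x - c)**2) * erf(a * (x - d))**2
    val, _ = quad(f, -np.inf, np.inf)
    return val

def rhs(a, b, c, d):
    """The closed-form approximation proposed above."""
    s = np.sqrt(a**2 + b**2)
    return (c * np.sqrt(np.pi) / b
            + 2 * a / (b**2 * s) * np.exp(-a**2 * b**2 * (c - d)**2 / s**2)
              * erf(a * b**2 * (c - d) / (s * np.sqrt(2 * a**2 + b**2)))
            - c / np.sqrt(b**2 / np.pi + a**2 * np.pi / 8)
              * np.exp(-a**2 * b**2 * (c - d)**2 * np.pi**2 / (8 * b**2 + a**2 * np.pi**2)))

for a, b, c, d in [(1.0, 1.0, 0.5, -0.3), (2.0, 0.7, 1.0, 0.2)]:
    print(lhs(a, b, c, d), rhs(a, b, c, d))
```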
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.942190945148468, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/9194/what-is-the-meaning-of-commutation/9196
# What is the meaning of Commutation?

I understand the mathematics of commutation relations and anti-commutation relations, but what does it mean for something to commute with another in quantum mechanics? i.e. an operator with the Hamiltonian.

- When you say you understand the math, do you also understand the relation to representation theory and symmetries? – Marek Apr 27 '11 at 16:09
- It means the commutator is zero, i.e. AH = HA – Mark Eichenlaub Apr 27 '11 at 16:14
- 1 Are you asking about the physical consequences and implications (rather than the mathematical definition, which you already know) when two operators commute? – Qmechanic♦ Apr 27 '11 at 16:41
- – user346 Apr 27 '11 at 16:43
- @Qmechanic yes. – adustduke Apr 28 '11 at 10:18

## 4 Answers

Let us first restate the mathematical statement that two operators $\hat A$ and $\hat B$ commute with each other. It means that $$\hat A \hat B - \hat B \hat A = 0,$$ which you can rearrange to $$\hat A \hat B = \hat B \hat A.$$ If you recall that operators act on quantum mechanical states and give you a new state in return, then this means that with $\hat A$ and $\hat B$ commuting, the state you obtain from letting first $\hat A$ and then $\hat B$ act on some initial state is the same as if you let first $\hat B$ and then $\hat A$ act on that state: $$\hat A \hat B | \psi \rangle = \hat B \hat A | \psi \rangle.$$ This is not a trivial statement. Many operations, such as rotations around different axes, do not commute, and hence the end result depends on how you have ordered the operations.

So, what are the important implications? Recall that when you perform a quantum mechanical measurement, you will always measure an eigenvalue of your operator, and after the measurement your state is left in the corresponding eigenstate. The eigenstates of the operator are precisely those states for which there is no uncertainty in the measurement: you will always measure the eigenvalue, with probability $1$. An example is given by the energy eigenstates: if you are in a state $|n\rangle$ with eigenenergy $E_n$, you know that $H|n\rangle = E_n |n \rangle$ and you will always measure this energy $E_n$.

Now what if we want to measure two different observables, $\hat A$ and $\hat B$? If we first measure $\hat A$, we know that the system is left in an eigenstate of $\hat A$. This might alter the measurement outcome of $\hat B$, so, in general, the order of your measurements is important. Not so with commuting variables! It is shown in every textbook that if $\hat A$ and $\hat B$ commute, then you can come up with a set of basis states $| a_n b_n\rangle$ that are eigenstates of both $\hat A$ and $\hat B$. If that is the case, then any state can be written as a linear combination of the form $$| \Psi \rangle = \sum_n \alpha_n | a_n b_n \rangle$$ where $|a_n b_n\rangle$ has $\hat A$-eigenvalue $a_n$ and $\hat B$-eigenvalue $b_n$.

Now if you measure $\hat A$, you will get result $a_n$ with probability $|\alpha_n|^2$ (assuming no degeneracy; if eigenvalues are degenerate, the argument still remains true but just gets a bit cumbersome to write down). What if we measure $\hat B$ first? Then we get result $b_n$ with probability $|\alpha_n|^2$ and the system is left in the corresponding eigenstate $|a_n b_n \rangle$. If we now measure $\hat A$, we will always get result $a_n$. The overall probability of getting result $a_n$, therefore, is again $|\alpha_n|^2$. So it didn't matter that we measured $\hat B$ first: it did not change the outcome of the measurement for $\hat A$.
EDIT: Now let me expand even a bit more. So far, we have talked about some operators $\hat A$ and $\hat B$. We now ask: What does it mean when some observable $\hat A$ commutes with the Hamiltonian $H$? First, we get all the results from above: there is a simultaneous eigenbasis of the energy eigenstates and the eigenstates of $\hat A$. This can yield a tremendous simplification of the task of diagonalizing $H$. For example, the Hamiltonian of the hydrogen atom commutes with $\hat L$, the angular momentum operator, and with $\hat L_z$, its $z$-component. This tells you that you can classify the eigenstates by an angular and magnetic quantum number $l$ and $m$, and you can diagonalize $H$ for each set of $l$ and $m$ independently. There are more examples of this.

Another consequence is that of time dependence. If your observable $\hat A$ has no explicit time dependency introduced in its definition, then if $\hat A$ commutes with $\hat H$, you immediately know that $\hat A$ is a constant of motion. This is due to the Ehrenfest Theorem $$\frac{d}{dt} \langle \hat A \rangle = \frac{-i}{\hbar} \langle [\hat A, \hat H] \rangle + \underbrace{\langle \frac{\partial \hat A}{\partial t} \rangle}_{=0\,\text{by assumption}}$$

- Cheers, that helped – adustduke Apr 28 '11 at 10:21

It means you can (in principle) measure both quantities to arbitrary precision at the same time. If they didn't commute then this would be impossible by the uncertainty principle.

- "Precision" depends on the state. There are QM states where commuting variables are still uncertain. – Vladimir Kalitvianski Apr 27 '11 at 16:35

Physically, it means it does not matter in which temporal order you measure the two commuting observables.

You may consider commutativity of different variables as physical independence, something like separated independent variables: $\frac{\partial}{\partial x} \frac{\partial}{\partial y} = \frac{\partial}{\partial y} \frac{\partial}{\partial x}$.
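A compact finite-dimensional illustration of these statements (my own example, using spin-1/2 matrices with $\hbar = 1$): $S^2$ commutes with $S_z$, so they share an eigenbasis and the order of measuring them does not matter, whereas $S_x$ and $S_y$ do not commute:

```python
import numpy as np

# Pauli matrices; spin operators S_i = (hbar/2) * sigma_i, here with hbar = 1.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
s2 = sx @ sx + sy @ sy + sz @ sz           # total spin S^2 = (3/4) * identity here

comm = lambda A, B: A @ B - B @ A

print(np.allclose(comm(s2, sz), 0))        # True:  S^2 and S_z commute
print(np.allclose(comm(sx, sy), 0))        # False: S_x and S_y do not commute
print(np.allclose(comm(sx, sy), 1j * sz))  # True:  [S_x, S_y] = i S_z
```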
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 60, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9250264763832092, "perplexity_flag": "head"}
http://mathoverflow.net/questions/99546?sort=newest
## Proof for which primes H*G has torsion

In 1960 Borel proved a beautiful result: Theorem. Let G be a simple, simply connected Lie group. Suppose that p is a prime that does not divide any of the coefficients of the highest root (expressed as a linear combination of the simple roots). Then $H^*G$ has no p-torsion. Here $H^*G$ refers to the integral cohomology of $G$ as a space. (Interestingly, the converse of this statement does not hold. For example $Sp(n)$ does not have $2$-torsion, though the coefficients of the highest weight are (1,2). This counterexample is given by Borel). You can check out his proof here if you're interested, but basically it seems like what he does is this: he computes the cohomology of each of the simple, simply connected Lie groups with coefficients that have the relevant primes inverted and proves the result by observation. Since the statement of the theorem itself has little to do with the actual classification of simple Lie groups, and morally should only rely on the fact that we can recover a simple, simply-connected Lie algebra from its root system, it seems natural to ask: Question: In the intervening 50+ years since Borel proved this theorem, do we know of a direct proof of the theorem that does not use the classification of simple Lie groups? I really have no idea how to go about doing this... could one use Lie algebra cohomology? I can only seem to find a link between Lie algebra cohomology and deRham cohomology of Lie groups, which is of no help when asking questions of torsion. Maybe there's an integral version? - The link is broken :( – Igor Rivin Jun 14 at 4:40 Whoops- should be fixed now – Dylan Wilson Jun 14 at 4:53 2 You may want to look at math.ku.dk/~jg/papers/classificationpodd.pdf where we summarize what is known and unknown about this and related questions, and give references to the literature. See in particular Rem 10.9 and the onwards references from there. – Jesper Grodal Jun 14 at 12:54 @Jesper: This was actually really helpful, if you feel up to taking the relevant bits and writing them as an answer I would accept it (and I'm sure others would appreciate it.) If not, I still found the paper quite helpful- and interesting! – Dylan Wilson Jun 15 at 1:34

## 1 Answer

I can't answer this completely, but I can point out some important follow-ups in the literature by Borel and others which need to be taken into account. Borel's Tohoku paper is reprinted in the second volume of his collected papers (Springer, 1983). At the end of this volume is a full two page commentary by Borel on this paper, which reiterates his disappointment that he was unable to prove his result on torsion without going through the classification. Here he assembles more detailed results on "torsion primes" which were afterwards proved by Demazure (1973 Inventiones paper) and especially by Steinberg (1975 Advances paper). Eventually all of this work yields a clearer picture with less reliance on the classification. Further closely related work then appeared in an AMS Memoir which Steinberg reviewed at length in Mathematical Reviews: Borel, Armand; Friedman, Robert; Morgan, John W. Almost commuting elements in compact Lie groups. Mem. Amer. Math. Soc. 157 (2002), no. 747, x+136 pp ADDED: It's a natural problem in Lie theory to find classification-free proofs of results which are first observed case-by-case.
But I'm not aware of any substantial progress beyond Borel's careful commentary (in the 1983 volume), where he notes the equivalence of five statements about cohomology and the equivalence of three statements about primes related to root systems and Weyl groups (with reference to Demazure and Steinberg). As he observes, the first five imply the last three, but only case-by-case study shows the converse as well. Along the way Borel also points out an improvement over his original Nagoya formulation: while the "bad" primes (possibly 2, 3, 5) are those dividing a coefficient of the highest root, the "torsion" primes are the more limited ones dividing a coefficient of the coroot of this highest root. For instance in type `$G_2$`, with respective short and long simple roots `$\alpha, \beta$`, the highest root is `$3\alpha + 2\beta$` whereas its coroot is `$\alpha^\vee + 2\beta^\vee$`. There is a long history of study of the topology of a semisimple Lie group (equivalent to the topology of a maximal compact subgroup), in which a reduction is made to study of the root system and its Weyl group: for example, the determination of Betti numbers in terms of exponents or degrees for the Weyl group (Chevalley, ICM 1950). But this kind of transition is rather subtle. In the study of torsion primes, a key role is played by subgroups of maximal rank in a compact Lie group, these being correlated with certain subdiagrams of the extended Dynkin diagram. Much of the technology recurs in the study of p-compact groups, as Jesper observes. But not everything is well understood conceptually. - Thank you; I managed to find the articles you mentioned and it seems like what Steinberg can do is connect up a lot of different definitions for what a "torsion prime" could be, with the exception (of course) of giving a new proof that all of these notions agree with torsion in cohomology. It's really intriguing that this result really seems to rely on the classification... do you know if there is even a heuristic or general idea of why this theorem should be true, independent of the classification? Are there directions of proof that we know don't work? This is all really neat- Thank you! – Dylan Wilson Jun 15 at 1:38 This answer is wonderful, thank you! The addition spelled things out a little more clearly, and I think I roughly understand what the current state of affairs is. Cheers, – Dylan Wilson Jun 15 at 21:01
http://math.stackexchange.com/questions/33670/lipschitz-contradiction
# Lipschitz contradiction

Assume that $\phi:\mathbb{R}^n\rightarrow \mathbb{R}^n$ is a smooth vector field, and assume that we can find vectors $y_k,x_k$ ($k$ positive integer) such that $(\phi(x_k)-\phi(y_k),x_k-y_k)\geq k \mid y_k-x_k\mid^2$, where $(,)$ is the usual scalar product. Why does this condition contradict the fact that $\phi$ is Lipschitz? - Hint: Cauchy–Schwarz. Caution: if you do not assume that $y_k\ne x_k$, your condition becomes empty. Comment: even with the (homework) tag, not sure your question would be in the scope of this site. – Did Apr 18 '11 at 17:43

## 1 Answer

Recall Cauchy–Schwarz, which says $(x,y)\leq \|x\| \|y\|$. Then $$(\phi(x_k)-\phi(y_k),x_k-y_k)\leq \|\phi(x_k)-\phi(y_k)\| \|x_k-y_k\|$$ Since $\phi$ is Lipschitz, there exists a constant $C$ such that $$\|\phi(x_k)-\phi(y_k)\|\leq C\|x_k-y_k\|$$ and hence $$(\phi(x_k)-\phi(y_k),x_k-y_k)\leq C\|x_k-y_k\|^2.$$ But if we had two sequences, $x_k$, $y_k$ with $$(\phi(x_k)-\phi(y_k),x_k-y_k)\geq k\|y_k-x_k\|^2,$$ we would have to have $C>k$ for every integer $k$, which is impossible. Hope that helps, Note: I assume that your statement is meant to be interpreted as "for every integer $k$ we can find $x_k$, $y_k$ ..." rather than "there exists $k$ with..." since setting $\phi$ equal to $k$ times the identity deals with the second case. - Did you read the Caution part of my comment? The reason why your (accepted) proof is (technically) false is instructive. – Did Apr 20 '11 at 6:22
http://mathoverflow.net/questions/53670/cartesian-factorization-of-a-finite-set-of-n-tuples
## Cartesian factorization of a finite set of n-tuples ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I'm interested in factoring a finite set of $n$-tuples as the Cartesian product of two "factor sets", of which the first factor is itself the Cartesian product of some of the set's projection images, and the second factor is an unfactorizable "remainder". To describe the problem more precisely, let $X$ be a non-empty, finite set of $n$-tuples, and let `$A_1,...,A_n$` be the images of $X$ upon projection onto its various coordinates. Hence, `$X \subset \prod_{i=1}^{n} A_i $`. Furthermore, assume that this inclusion is strict; that is, `$X \neq \prod_{i=1}^{n} A_i $`. (From this it follows that $n > 1$.) If $\sigma$ is some permutation of `$\{1,...,n\}$`, let `$X_\sigma$` denote the set of $n$-tuples derived from $X$ by permuting the coordinates of its elements according to $\sigma$: `$X_\sigma = \{(x_{\sigma(1)},...,x_{\sigma(n)})|(x_1,...,x_n)\in X \}$`. For any $n$-permutation $\sigma$, I'm interested in factorizations of `$X_\sigma$` having the form `$X_\sigma \cong (\prod_{i=1}^{k} A_{\sigma(i)}) \times Z$`, where $0 \leq k \lt n$ and `$Z \subsetneq \prod_{i=k+1}^{n} A_{\sigma(i)}$`. I'm using the $\cong$ here as shorthand to indicates that every $n$-tuple in `$X_\sigma$` can be written (obviously uniquely) as the concatenation of one $k$-tuple from `$\prod_{i=1}^{k} A_{\sigma(i)}$` and one $(n-k)$-tuple from `$Z \subsetneq \prod_{i=k+1}^{n} A_{\sigma(i)}$`, and, conversely, every concatenation of such tuples represents some element of `$X_\sigma$`. (If $k = 0$, then the $\cong$ just means that `$Z = X_\sigma$`.) I am primarily interested in factorizations for which the factor $\prod_{i=1}^{k} A_{\sigma(i)}$ has maximal cardinality, over all possible choices of $\sigma$ and $k$. I am also interested in factorizations for which $k$ is maximal, over all possible choices of $\sigma$. I'm looking for keywords I may use to search for algorithms to compute such maximal factorizations, given some concrete set of $n$-tuples $X$. Do such factorizations, or the problem of computing them, have a name? Any pointers to the relevant literature would be appreciated! ~kj - I suspect that this is related to NP hard problems such as exact cover or 3 dimensional matching. What I find interesting is that you need few witnesses to "block" such factorizations. For example, if for all appropriate partial tuples y the tuple corresponding to a_1,a_2,y is not in X, then X will not be factorizable as A1x A2 x whatever. It reminds me of logic minimization of an almost boolean function (some inputs may receive a don't care value as output) and its (almost) complement. Gerhard "Anyone Remember Programmable Logic Devices?" Paseman, 2011.01.28 – Gerhard Paseman Jan 29 2011 at 3:07 -
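I don't know a standard name for this either, but as a concrete starting point, here is a naive brute-force test (my own sketch; the function and variable names are hypothetical) of whether a given ordering $\sigma$ and split point $k$ give a factorization of the kind described above:

```python
from itertools import product

def factors_at(X, sigma, k):
    """Check whether X_sigma decomposes as (A_sigma(1) x ... x A_sigma(k)) x Z,
    where Z is the set of length-(n-k) suffixes, by exhaustive recombination."""
    Xs = {tuple(x[i] for i in sigma) for x in X}
    A = [{x[i] for x in Xs} for i in range(k)]   # first k projection images
    Z = {x[k:] for x in Xs}                      # candidate remainder factor
    recombined = {head + tail for head in product(*A) for tail in Z}
    return recombined == Xs, Z

# Hypothetical toy example: X factors as {0,1} x {(0,0), (1,1)}.
X = {(0, 0, 0), (0, 1, 1), (1, 0, 0), (1, 1, 1)}
ok, Z = factors_at(X, sigma=(0, 1, 2), k=1)
print(ok, Z)   # True {(0, 0), (1, 1)}
```

A maximal factorization could then be searched for by looping over permutations and split points, though that brute force is exponential in $n$ and says nothing about the smarter algorithms or keywords being asked for.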
http://physics.stackexchange.com/questions/tagged/definition?sort=votes&pagesize=15
Tagged Questions The definition tag is used in situations where the question is either about how some term or concept is define or where the validity of an answer depends on a subtle definition of some term or concept used in the question. 1answer 1k views Differentiating Propagator, Greens function, Correlation function, etc For the following quantities respectively, could someone write down the common definitions, their meaning, the field of study in which one would typically find these under their actual name, and most ... 9answers 1k views What is the difference between weight and mass? My science teacher is always saying the words "weight of an object" and "mass of an object," but then my physics book (that I read on my own) tells me completely different definitions from the way ... 4answers 2k views Are matrices and second rank tensors the same thing? Tensors are mathematical objects that are needed in physics to define certain quantities. I have a couple of questions regarding them that need to be clarified: 1-Are matrices and second rank tensors ... 7answers 1k views What is a general definition of impedance? Impedance is a concept that shows up in any area of physics concerning waves. In transmission lines, impedance is the ratio of voltage to current. In optics, index of refraction plays a role similar ... 7answers 644 views What is the current status of Pluto? Pluto has been designated a planet in our solar system for years (ever since it was discovered in the last century), but a couple of years ago it was demoted. What caused this decision? And is there ... 3answers 878 views Why are quark types known as flavors? There are six types of quarks, known as flavors. Why where these types called flavors? Why do the flavors have such odd names (up, down, charm, strange, top, and bottom)? 5answers 2k views What is an asterism compared to a constellation? I'm doing an astronomy exam tomorrow and in the practice paper it asks for the difference between constellation and asterism. It seems asterism is a group of recognizable stars; however I thought that ... 1answer 1k views What is the definition of how to count degrees of freedom? This question resulted, rather as by-product, the discussion on how to count degrees of freedom (DOF). I extend that question here: Are necessary1 derivatives such as velocities counted as ... 1answer 366 views What accounts for the discrepancies in my calculations of year lengths? A common exercise in many introductory astronomy texts is to use the lengths of various kinds days to calculate the approximate length of the corresponding year. For example, ratio $k$ of the length ... 4answers 223 views Two planets in same orbit - not planets? Let us pretend for a moment that there are two identical planets that are exactly opposite their star from each other and are the same distance from said star. (This would make them, at all times, ... 2answers 910 views 273 + degree Celsius = Kelvin. Why 273? Temperature conversion: 273 + degree Celsius = Kelvin Actually why is that 273? How does one come up with this? My teacher mentioned Gann's law (not sure if this is the one) but I couldn't find ... 7answers 5k views Simple explanation of quantum mechanics Can you please describe quantum mechanics in simple words? When ever I read this word (quantum computers, quantum mechanics, quantum physics, quantum gravity etc) I feel like fantasy, myth and ... 1answer 789 views Are all central forces conservative? 
Wikipedia must be wrong It might be just a simple definition problem but I learned in class that a central force does not necessarily need to be conservative and the German Wikipedia says so too. However, the English ... 6answers 848 views What is a tensor? I have a pretty good knowledge of physics but couldn't understand what a tensor is. I just couldn't understand it, and the wiki page is very hard to understand as well. Can someone refer me to a good ... 2answers 63 views What distinguishes a moon from orbiting space debris? Or in other words, when is a satellite “too small” to be a moon? The Wikipedia article on Natural Satellites doesn't really give an adequate distinction as to what distinguishes a moon from other orbiting bodies. What I am looking for is a classification that ... 4answers 3k views What is sound and how is it produced? I've been using the term "sound" all my life, but I really have no clue as to what sound exactly is or how it is created. What is sound? How is it produced? Can it be measured? 2answers 155 views Is surface of a solid a streamline? In fluid dynamics, streamlines are defined as line where at each point flow velocity is tangential to the line. Is it correct to say surface of a solid a streamline? On the surface the velocity vector ... 3answers 767 views Definitions and usage of Covariant, Form-invariant, Invariant? Just wondering about the definitions and usage of these three terms. To my understanding so far, "covariant" and "form-invariant" are used when referring to physical laws, and these words are ... 1answer 169 views Operator Ordering Ambiguities I have been told that $$[\hat x^2,\hat p^2]=2i\hbar (\hat x\hat p+\hat p\hat x)$$ illustrates operator ordering ambiguity. What does that mean? I tried googling but to no avail. 1answer 497 views What is a general definition of the spin of a particle? In quantum field theory, one defines a particle as a unitary irreducible representations of the Poincaré group. The study of these representations allows to define the mass and the spin of the ... 1answer 179 views Introduction to Gauge Symmetries: Good, Bad or Ugly? I'm trying to come up with a good (as in intuitive and not 'too wrong') definition of a gauge symmetry. This is what I have right now: A dynamical symmetry is a (differentiable) group of ... 1answer 6k views Is the moon a planet? Can our moon qualify as a planet? With regard or without regard to the exact definition of the planet, can the moon be considered as planet as Mercury, Venus and Earth etc. not as the satellite of the ... 3answers 444 views Can temperature be defined as propensity to transmit thermal energy? I was recently surprised to learn that defining temperature isn't easy. For a long time, it was defined operationally: how much does a thermometer expand. Also surprising, temperature isn't a ... 1answer 125 views Definition of Fine-Tuning I've looked in and out the forum, and found no precise definition of the meaning of fine-tuning in physics. QUESTION Is it possible to give a precise definition of fine-tuning? Of course, I guess ... 8answers 2k views Definition of “direction” Is there an actual definition of "direction" (that is, spatial direction) in physics, or is it just one of those terms that's left undefined? In physics textbooks it's always just taken for granted ... 3answers 470 views Why is 1 AU the distance between the Sun and the Earth? Why 1 AU is defined as the distance between the Sun and the Earth? 
(approximately if you like to be precise) An astronomical unit (abbreviated as AU, au, a.u., or ua) is a unit of length ... 3answers 399 views Entanglement spectrum What does it mean by the entanglement spectrum of a quantum system? A brief introduction and a few key references would be appreciated. 2answers 275 views What does the phrase “limb of the earth” or “atmospheric limb” mean? What does the term limb of the earth (see this question, for example) or atmospheric limb mean? The phrase strikes me as very odd, since earth is nearly spherical. Do other planets with atmospheres ... 3answers 324 views Why do we still not have an exact definition for a kilogram? I read that there is an effort to define a kilogram in terms that can exactly be reproduced in a lab. Why has it taken so long to get this done? It seems this would be fairly important. Edit Today I ... 2answers 324 views Meaning of the phrase “dipole moment of the combination” Here is a question I came across in a book: Three point charges $-q$,$-q$ and $2q$ are placed on the vertices of an equilateral triangle of side length $d$ units.What is the dipole moment of the ... 1answer 44 views Vesta dwarf planet status Now that we have close-up photos of Vesta, which the IAU had previously said was a candidate dwarf planet, when is the IAU going to decide the issue? Personally, Vesta doesn't look round enough to me. ... 1answer 97 views Does the Kelvin have a rigorous definition? From Wikipedia: The kelvin is defined as the fraction 1⁄273.16 of the thermodynamic temperature of the triple point of water. That presupposes that we can take a fraction of temperature. Now, ... 3answers 436 views Is temperature an extensive property, like density? I was thinking about it some time ago, and now that I've discovered this site I would like to ask it here because I couldn't work it out then. I know that the higher temperature the air in my room ... 2answers 119 views Observationally indistinguishable quantum states What does it mean for 2 quantum states to be "observationally indistinguishable"? If I may venture a guess: Does that mean that the set of possible measured values are the same though the ... 2answers 193 views Higgs boson and quasiparticles Do we know exactly the difference between particles and quasiparticles? Is Higgs boson a particle or a quasiparticle? I ask this because if I understood well, Higgs boson created by a spontaneaous ... 3answers 190 views What is a “Center Of Mass” issue of a Gorillapod? I read somewhere that a Gorillapod may have "Center Of Mass" issues when used with the long lenses. So, I wish to understand what is a "Center Of Mass" issue? I have to clarify that I am NOT a ... 2answers 163 views Definition of “Quantizing” Could anyone explain to me what "quantize" means in the following context? Quantize the 1-D harmonic oscillator for which $$H~=~{p^2\over 2m}+{1\over 2} m\omega^2 x^2.$$ I understand that the ... 4answers 112 views What do people actually mean by “rolling without slipping”? I have never understood what's the meaning of the sentence "rolling without slipping". Let me explain. I'll give an example. Yesterday my mechanics professor introduced some concepts of rotational ... 2answers 652 views What is a non linear $\sigma$ model? What exactly is a non linear $\sigma$ model? In many books one can view many different types of non linear $\sigma$ models but I don't understand what is the link between all of them and why it is ... 1answer 216 views What is “charge discreteness”? 
I assume it is some kind of quantity. Google only made things more confusing. I get that it has something to do with circuits. I also get what a discrete charge is. In fact, I thought charges ... 1answer 77 views For how long must a molecule remain stable to be considered “stable”? In the Star Trek: Voyager episode The Omega Directive, Seven of Nine says that the Borg synthesized a molecule which was "kept [] stable for one trillionth of a nanosecond before it destabilized". ... 1answer 218 views Mathematical definitions in string theory Does anyone know of a book that has mathematical definitions of a string, a $p$-brane, a $D$-brane and other related topics. All the books I have looked at don't have a precise definition and this is ... 3answers 320 views How do Temperature Scales work? How exactly do temperature scales work? If my understanding is correct, the Celsius scale has two fixed points: (definitions of temperature irrespective of scale) 1. The freezing point of pure water ... 1answer 49 views Does a reference or classification standard for altitude classifications of geocentric orbits exist? I'm looking for a primary reference of the altitude classifications of geocentric orbits (LEO, MEO, GEO, HEO), but I was not able to find something so far. I noticed that there is very different ... 3answers 821 views What's the difference between “boundary value problems” and “initial value problems”? Mathematically speaking, is there any essential difference between initial value problems and boundary value problems? The specification of the values of a function $f$ and the "velocities" ... 1answer 164 views What's a pseudo-rotation? I'm sorry for this lexical, probably extremely elementary, question. But what is a pseudo-rotation? I just read this term for the first time, in the beginning of the 4th chapter book of CFT by Di ... 2answers 341 views Strict general mathematical definition of drag Is there a formal definition of drag, say, as some surface integral of normal and shear forces? There seem to be a lot of formulas for specific cases, but is there a general one? I need to accurately ... 4answers 1k views What is the difference between center of mass and center of gravity? What is the difference between center of mass and center of gravity? These terms seem to be used interchangeably. Is there a difference between them for non-moving object on Earth, or moving objects ... 1answer 181 views What are Low-lying energy levels? I am reading about some canonical transformations of the Hamiltonian (of a system consisting of an electron interacting with an ionic lattice) due to Tomanaga and Lee, Low and Pines. One of the ... 0answers 93 views Holonomy twisting There is Witten's topological twist of standard SUSY QFTs with enough SUSY into Witten-type TQFTs. What is a holonomy twist?
http://mathhelpforum.com/calculus/69160-volume-solids-revolution-help.html
# Thread: 1. ## Volume of solids of revolution help.. I need help with volumes of solids of revolution.. I am given region R bounded by curves $y = x + 2$, and $y = x^{2}$. I tried setting it up to figure it out using the disk method, but I am not sure I got the right answer, so I just would like some help setting it up. O and BTW my answer I got was Pi * 16 and 1/3 2. Originally Posted by Manizzle I need help with volumes of solids of revolution.. I am given region R bounded by curves $y = x + 2$, and $y = x^{2}$. I tried setting it up to figure it out using the disk method, but I am not sure I got the right answer, so I just would like some help setting it up. O and BTW my answer I got was Pi * 16 and 1/3 The curves intersect at $x = -1,\; 2$ $V = \pi \int_{-1}^2 (x+2)^2 - x^4\; dx = \frac{72 \pi}{5}$
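For what it's worth, a quick symbolic check of the reply's washer-method setup (this assumes, as the reply does, that the region is revolved about the x-axis):

```python
import sympy as sp

x = sp.symbols('x')
outer, inner = x + 2, x**2                       # outer and inner radii

print(sp.solve(sp.Eq(outer, inner), x))          # intersections at x = -1 and x = 2

V = sp.pi * sp.integrate(outer**2 - inner**2, (x, -1, 2))
print(V)                                         # 72*pi/5
```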
http://mathoverflow.net/questions/79051/theorems-proved-with-ad-whose-proof-is-also-known-in-the-zf-world
Theorems proved with AD whose proof is also known in the ZF world

This question arises from discussions with my professor and from Todd Eisworth's comments in this question http://mathoverflow.net/questions/78863/large-cardinal-axioms-and-the-perfect-set-property In $L(\mathbb{R})$ we have $AD$ and it is a powerful tool to prove theorems. Almost all of the theorems proved with $AD$ come in a very natural way: we use games and determinacy as in the First/Second/Third Periodicity Theorems. However no "$ZF$+Large Cardinal" proof is known for the Periodicity Theorems. Another example is that of the Perfect Set Property: Using $AD$ all sets of reals have the Perfect Set Property, but is a proof of the statement "Assuming infinitely many Woodin cardinals with a measurable above then every set of reals has the perfect set property" known? So my question is: which theorems proved with $AD$ also have a known proof in the $ZF$+large cardinals world? - Of course, AD has large cardinal consistency consequences, which are also directly provable in ZFC + large cardinals. – Joel David Hamkins Oct 25 2011 at 12:38 You probably need to sharpen this question up a bit. For example, ZFC implies that there is a set of reals without the perfect set property, so of course we can't prove (in ZFC+large cardinals) that every set of reals has the perfect set property. Do you mean to ask about ZF? – Todd Eisworth Oct 25 2011 at 14:32 @Todd, Thx for the correction. Of course I can't say ZFC. – alephomega Oct 25 2011 at 22:12

2 Answers

Perhaps this is along the lines of what you're looking for. This thesis gives a proof of the strong partition relation on $\omega_1$ from AD, and then "relativizes" the proof to $V$ to show, assuming the existence of Woodin cardinals, a collapsing result, namely, that some regular cardinal $<\aleph_{\omega_2}$ in $L[\mathbb{R}]$ must collapse in $V$. - 1 Perhaps as a full disclosure you should add that the thesis is actually your thesis. – Asaf Karagila Oct 25 2011 at 17:51 1 @Asaf Kargila: I thought about doing that, but I think it's fairer to say that all the major ideas there were from my advisor, Steve Jackson. – Russell May Oct 25 2011 at 18:07

I don't know if this is what you are looking for, but many important examples of statements that are consequences both of AD and of large cardinals are themselves phrased in terms of determinacy or large cardinals. For example, AD and "there is a measurable cardinal" both imply "every analytic set is determined" and "every real has a sharp". (In fact, these two conclusions are equivalent to one another.) There are more such phenomena at other levels of consistency strength. -
http://mathoverflow.net/questions/21257?sort=newest
## What is state of the art for the Shooting Method? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I am interested in examples where the Shooting Method has been used to find solutions to systems of ordinary differential equations that are either • reasonably large systems, or • the search algorithm in the shooting parameters is somewhat prohibitive because of the nature of the solutions, or • both of the above. Any references, descriptions, recent progress, folklore, in the ballpark would be of interest. Feel free to interpret "reasonably large" subjectively if necessary. - Thanks for two good answers! – Q.Q.J. Jun 15 2010 at 14:42 ## 2 Answers I don't know about state-of-the-art and I'm not sure if this is the kind of thing you were looking for...however in my two first papers I've used the shooting method in a parameter space that was originally too big (4 and 6 dimensions if I recall correctly) and the problem was that with randomly chosen parameters the numerical solver would not reach the other end of the domain and so I could not use a root-finding algorithm to search for the correct initial conditions. The problem was there were unstable directions in the ODE and thus even with the correct initial conditions, the numerical noise would grow so large that you would not reach the other side. My solution was to find more natural variables to use (using an algebraic similarity solution that satisfies the boundary conditions) and to rewrite the system in terms of the new variables. In the new variables the similarity solution is a fixed point and one can reach this fixed point only via its stable manifold, which had a lower dimension than the original space (in my case...). This allowed the root-finding algorithm to kick in and find a solution. OK, This was a little vague. Here are the two papers (shameless plug): http://arxiv.org/abs/0711.0730 http://arxiv.org/abs/0711.0734 (Added later:) Recently I've been working on another problem that has highly unstable directions and there I use the collocation method, which (AFAIK) basically amounts so splitting the domain into many smaller part, doing shooting on each part, and trying to get the pieces to match up. If the problem is linear, this is a simple linear problem, if the problem isn't linear you need a non-linear root finder. I didn't write the code for the collocation, Matlab does it for me...look up BVP4C or BVP5C. In writing this answer, I looked for "collocation method" online and found very little that seemed relevant. So I can only refer you to the Matlab function. perhaps someone else can find a reference that is relevant here. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Another shameless plug ... Coworkers and I used the Evans function formalism, which is a variant of the shooting method to deal with unstable directions (probably the same problem as mentioned by yfarjoun), on a boundary value problem of the form $y'(t) = (\lambda A_1 + A_2(t)) y(t)$ with $y$ specified as $t \to \pm \infty$. This is very similar to a Sturm-Liouville problem except that the differential operator is not self-adjoint. The application we're interested in is to do stability analysis of a travelling wave of a 2d reaction-diffusion equation. The main problem is that $y(t)$ is a fairly big vector with up to about 200 entries. 
For details, please ask or see the paper at http://arxiv.org/abs/0805.1706 and references therein. -
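For readers who land here wanting a minimal concrete example of plain single shooting (my own sketch, unrelated to the papers above): the basic idea is just an ODE integrator wrapped in a scalar root-finder on the unknown initial slope.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy two-point BVP: y'' = -y on [0, 1] with y(0) = 0 and y(1) = 1.
# Shoot on the unknown initial slope s = y'(0).
def endpoint_mismatch(s):
    sol = solve_ivp(lambda t, u: [u[1], -u[0]], (0.0, 1.0), [0.0, s],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0                # y(1) minus the target value

s_star = brentq(endpoint_mismatch, 0.0, 5.0)
print(s_star, 1.0 / np.sin(1.0))             # exact answer: y = sin(t)/sin(1)
```

The unstable-direction problem described in both answers shows up precisely when the integration in `endpoint_mismatch` blows up before reaching the far boundary, which is what multiple shooting and collocation (e.g. BVP4C/BVP5C) are designed to avoid.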
http://mathoverflow.net/questions/85481/the-first-eigenvalue-of-the-laplacian-for-complex-projective-space
## The first eigenvalue of the laplacian for complex projective space ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) What is the exact value of the first eigenvalue of the laplacian for complex projective space viewed as $SU(n+1)/S(U(1)\times U(n))$? - 1 Hmmm, I guess answer is - zero. Constant function is zero eigen for Laplace is not it ? Otherwise you might want to fix some normalization of metric ? Like volume = 1 ? – Alexander Chervov Jan 12 2012 at 13:57 4 I guess the OP means the first NONZERO eigenvalue. – Alain Valette Jan 12 2012 at 16:35 ## 3 Answers The spectrum of the Laplacian of $\mathbb C P^n$ with the Fubini-Study metric is $$Spec(\Delta_{\mathbb C P^n})=\{4k(n+k):k\in\mathbb N\} \quad\quad(*)$$ So, the first non-zero eigenvalue of $\mathbb C P^n$ is $\lambda_1=4n+4$. Note this matches with the fact that $\mathbb C P^1$, with the FS metric, is isometric to the $2$-sphere of radius $1/2$, whose first non-zero eigenvalue is $\lambda_1=8$. Let me quote a brief justification of (*) that I had written here: Using the classic spherical harmonics theory, one obtains the $k$-th eigenvalue of the $n$-dimensional round sphere $S^n$ to be $k(k+n-1)$, and its multiplicity is $\binom{n+k}{k}-\binom{n+k-1}{k-1}$, see e.g. [Berger, Gauduchon,Mazet, "Le spectre d'une variété riemannienne", Lecture Notes in Mathematics, Vol. 194 Springer-Verlag]. By looking at eigenfunctions of the Laplacian on $S^n$,$S^{2n+1}$ and $S^{4n+3}$ (note they are the unit spheres of $\mathbb R^{n+1}$, $\mathbb C^{n+1}$ and $\mathbb H^{n+1}$) that are respectively invariant under the natural actions of $\mathbb Z_2$, $S^1$ and $S^3$, one can obtain the eigenfunctions hence the $k$-th eigenvalue of the projective spaces $\mathbb R P^n$, $\mathbb C P^n$ and $\mathbb H P^n$, respectively. These are, respectively, $2k(n+2k-1)$, $4k(n+k)$ and $4k(k+2n+1)$. If you can understand some French, you will find a thorough explanation of the above in the book by Berger, Gauduchon, Mazet, "Le spectre d'une variete Riemannienne", Lecture Notes in Math, Springer, vol 194. - What is decomposition of $L^2(CP^n)$ in irreps of su(n). From the "general theory(??)" there should be only tautological representations in $C^n$ and its symmetric powers. Is it true? what is their multiplicity ? (I guess should be 1 - by analogy with Borel-Weil). I mean by "General theory" the following argument which is not formal - one can look on "functional dimension" of irrep - which corresponds to 1/2 of dimension of corresponding orbit - the point ALL other irreps correspond to higher-dimensional orbits - so they cannot be realized by differential operators on $CP^n$. – Alexander Chervov Jan 20 2012 at 6:59 @Alexander: The way I indicate how the spectrum of $CP^n$ can be computed does not use (directly) the rep theory arguments you mention. Assuming one knows the eigenfunctions of the Laplacian on spheres (by using spherical harmonics), the invariant ones will be eigenfunctions of the Laplacian on projective spaces. Now, following your suggestions, one can also compute the spectrum using irreducible representations of $SU(n)$. From the Peter-Weyl Theorem one gets a decomposition of $L^2(SU(n))$ and then of $L^2(CP^n)$ by picking only invariant rep's ... – Renato G Bettiol Jan 20 2012 at 17:39 1 @Alexander: ... 
The Casimir element acts on each representation that appears in this decomposition of $L^2(CP^n)$ as multiplication by a certain scalar, that depends on half the sum of the positive roots of the representation and its highest weight. The formula can be found e.g. in N. Wallach's book. Thus, since the irred rep's of $SU(n)$ are well-known, one can look at the ones that appear in the decomposition of $L^2(CP^n)$ and compute its highest weight and half sum of positive roots to obtain the corresponding eigenvalue. Doing this for all such representations gives the entire spectrum. – Renato G Bettiol Jan 20 2012 at 17:42 @Renato thank you for your comments ! But let me formulate in the other words my remark. What is (or is there) the easiest (1-sentence) way to get spectrum for $L^2(CP^n)$ ??? What I advocated that it should be $S^k(C^n)$. May be argument is unclear (I can try to explain if so) - but it is quite short. – Alexander Chervov Jan 21 2012 at 14:59 @Alexander: I'm afraid there's no "1-sentence" way to get that... If I understood correctly what you suggest, the point is to use the Peter-Weyl theorem to decompose $L^2(G/K)=\bigoplus o_\rho(G/K)$ where we sum over equivalence classes of representations $\rho$ of $G$ for which $K$ has some non-trivial fixed vector. [for details, see Thm 1.3, p. 17 of Takeuchi's book "Modern spherical functions", AMS] Then, applying this to $G/K=SU(n+1)/S(U(1)\times U(n)$ we get a decomposition of $L^2(CP^n)$ and we know how the Casimir element acts on each factor (by mult. by the correspondent eigenvalue). – Renato G Bettiol Jan 21 2012 at 17:32 show 5 more comments ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. See SPECTRA AND EIGENFORMS OF THE LAPLACIAN ON $S^n$ AND $P^(C)$ (osaka j math 1977, you can skip to page 529 or if really lazy look at Theorem 5.2 - Not an answer but advise. Laplace probably(?) comes(should I explain this "comes"???) from quadratic Casimir in U(g) up to scalar factor which depends on volume. Casimir can be written as \sum_i e^ie_i, where e^i and e_i are dual basises in g, with respect to Cartan-Killing form. So the question is what is the minimal eigen of quadratic Casimirs in finite-dim representations which enter decomposition of L^2(CP^n). I guess(?) standard vector representation of su(n) in C^n enters this decomposition. I guess(?) minimal eigen of quadratic Casimir in ALL irreps corresponds to this C^n. If all guesses are correct you need just to calculate e^ie_i value in C^n and also care about the scalar which is related to volume normalization. -
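As a sanity check on the formula in the accepted answer, written out for $n=1$ (my own addition, using only facts already quoted there): $\mathbb{C}P^1$ with the Fubini-Study metric is isometric to the sphere of radius $1/2$, whose spectrum is $k(k+1)/r^2$, so $$\lambda_k\left(S^2(\tfrac{1}{2})\right)=\frac{k(k+1)}{(1/2)^2}=4k(k+1),$$ which is exactly $4k(n+k)$ at $n=1$; in particular $\lambda_1=8$, as stated.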
http://quant.stackexchange.com/questions/7452/comparing-cash-equivalent-of-risky-portfolios/7457
# Comparing Cash Equivalent of risky portfolios

To compare two risky portfolios, Mean-Variance (M-V) portfolios for example, many compare their Cash Equivalent ($CE$). $CE$ is defined as the amount of cash that provides the same utility as the risky portfolio: $$U\left(CE\right)= W\left(w\right)= w'\mu-\frac{1}{2}\lambda w'\Sigma w$$ where $W(x)$ is the investor's expected utility of wealth, and basically the function to be maximized in the M-V portfolio problem. My question is: why not just limit the comparison to the investor's expected utilities $W(w)$? What are the advantages of comparing $CE$? Thank you. -

## 2 Answers

There is a simple reason to prefer $CE$ to pure utility: $CE$ is independent of utility units. Thus it allows direct comparison. The cash equivalent of a risky portfolio is the certain amount of cash that provides the same utility as that portfolio. So for portfolio $w$ we can define $CE$ via $U(CE)=E[U(w)]$ or $CE=U^{-1}(E[U(w)])$. Note that for a risk-free portfolio the $CE$ equals the certain return. So there is a one-to-one correspondence between expected utility and $CE$. $CE$ is not a new concept but a convenient way to express utility in different units. Also $CE$ is used in research papers when the risk premium of a lottery must be calculated, i.e. then you can just subtract $CE$ from the price of the lottery. This answer heavily borrows from the book "The Kelly Capital Growth Investment Criterion: Theory and Practice", page 251. - care to back up your claim? How do you get from a risky portfolio to Cash Equivalent? To be honest I do not even know what CE is supposed to be (in the context of representing a risky portfolio), but what I know is that a single variable cannot describe nor properly represent a non-trivial risk/return construct. – Freddy Mar 7 at 14:37 @Freddy sure. updated the answer. – Alexey Kalmykov Mar 7 at 14:59 this only works if the portfolio is a risk free portfolio. As soon as you introduce risk the CE can be identical for two portfolios with entirely different risk reward profiles. Hence my criticism of the simplifying assumptions made in the application of CE. By the way could you please cite the exact paper and page you reference as your book just contains a bunch of academic papers. – Freddy Mar 7 at 15:36 I found your reference and I stand by my claim that the assumptions made are entirely unrealistic: First of all you need as input a risk tolerance metric, as I said there is no way to map a MV portfolio to CE without assumptions, in this case a risk tolerance level. More importantly, this risk tolerance level is different for every person on this planet. Thus the authors further use as input expected utility. For them this comes down to a logarithmic function. Now, how many more simplifying assumptions you want to apply to rape this poor portfolio just to squeeze out a single number? – Freddy Mar 7 at 15:56 Think about it this way: you apply for a job. You have or don't have many properties that may qualify or disqualify you from the job. Now a service company sells a new software to all hiring managers claiming a single number can be attached to each human being to rank them relative to their peers. Great idea? Well not really cause a great education may be completely worthless if the job description is about shoveling soil or picking apples on a plantation. – Freddy Mar 7 at 16:01

I do not see any advantage in this approach whatsoever, nor would I believe, as you suggested, that "many" use this kind of approach.
In fact I find it horribly wrong. Using a single variable (CE in this case) to represent a non-trivial risk-return construct implies the ability to map such relationship to one variable representations. Everybody values risk differently, everybody looks for a different risk/reward relationship for million different reasons. The necessary simplifying assumption that is applied here is that risk/reward utility means the same to everyone. I am not saying that utility is identical for all risk/reward mappings, but that a 90% expected return with portfolio variance of 50% has the same utility to all, and a 10% expected return with portfolio variance of 5% has another utility albeit the assumption is that everyone measures this utility equally. I find such assumption plain wrong. It is almost as if there are no greeks in derivatives trading, only a price, buy and sell and that is it. Convenient, because now we can compare prices only, but all the fine-grained detail that makes one chose a 3 month fly over a 2 months spread gets lost because there are no more risk profiles, 2nd and higher order greeks. Sometimes mathematicians (and economists) go overboard simply because they spend too much time in an artificially lit room instead of dwelling among mortals. I suggest this is one of those cases. - Great answer. Sometimes it is surprising to see how that the academic community agreed on stuff without apparent reasons!! – Omar Mar 7 at 9:08 @Freddy "CE is suggesting we all set our utility equal" not really. – Alexey Kalmykov Mar 7 at 13:48 @AlexeyKalmykov, care to elaborate? How would you define CE as functional output of a risky portfolio? – Freddy Mar 7 at 14:21
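To make the object under discussion concrete, here is a tiny numerical illustration of the mean-variance cash equivalent from the question (entirely made-up numbers, and it presupposes the known risk-aversion coefficient $\lambda$ that the second answer objects to):

```python
import numpy as np

def mv_cash_equivalent(w, mu, Sigma, lam):
    """CE of a mean-variance investor: w'mu - (lam/2) * w' Sigma w."""
    w, mu = np.asarray(w), np.asarray(mu)
    return float(w @ mu - 0.5 * lam * w @ Sigma @ w)

# Hypothetical two-asset example
mu = [0.08, 0.12]                          # expected returns
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])           # return covariance matrix
lam = 3.0                                  # risk-aversion coefficient

for w in ([0.7, 0.3], [0.3, 0.7]):
    print(w, mv_cash_equivalent(w, mu, Sigma, lam))
# The portfolio with the higher CE is preferred by *this* investor; a
# different lambda can reverse the ranking, which is essentially the
# objection raised in the answer above.
```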
http://www.reference.com/browse/wiki/Least-squares_spectral_analysis
Definitions Least-squares spectral analysis Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum, based on a least squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems. LSSA is also known as the Vaníček method after Petr Vaníček, and as the Lomb method (or the Lomb periodogram) and the Lomb–Scargle method (or Lomb–Scargle periodogram), based on the contributions of Nicholas R. Lomb and, independently, Jeffrey D. Scargle. Closely related methods have been developed by Michael Korenberg and by Scott Chen and David Donoho. Historical background The close connections between Fourier analysis, the periodogram, and least-squares fitting of sinusoids have long been known. Most developments, however, are restricted to complete data sets of equally spaced samples. In 1963, J. F. M. Barning of Mathematisch Centrum, Amsterdam, handled unequally spaced data by similar techniques, including both a periodogram analysis equivalent to what is now referred to the Lomb method, and least-squares fitting of selected frequencies of sinusoids determined from such periodograms, connected by a procedure that is now known as matching pursuit with post-backfitting or orthogonal matching pursuit. Petr Vaníček, a Canadian geodesist of the University of New Brunswick, also proposed the matching-pursuit approach, which he called "successive spectral analysis", but with equally spaced data, in 1969. He further developed this method, and analyzed the treatment of unequally spaced samples, in 1971. The Vaníček method was then simplified in 1976 by Nicholas R. Lomb of the University of Sydney, who pointed out its close connection to periodogram analysis. The definition of a periodogram of unequally spaced data was subsequently further modified and analyzed by Jeffrey D. Scargle of NASA Ames Research Center, who showed that with minor changes it could be made identical to Lomb's least-squares formula for fitting individual sinusoid frequencies. Scargle states that his paper "does not introduce a new detection technique, but instead studies the reliability and efficiency of detection with the most commonly used technique, the periodogram, in the case where the observation times are unevenly spaced," and further points out in reference to least-squares fitting of sinusoids compared to periodogram analysis, that his paper "establishes, apparently for the first time, that (with the proposed modifications) these two methods are exactly equivalent." Press summarizes the development this way: Michael Korenberg of Queen's University in 1989 developed the "fast orthogonal search" method of more quickly finding a near-optimal decomposition of spectra or other problems, similar to the technique that later became known as orthogonal matching pursuit. In 1994, Scott Chen and David Donoho of Stanford University have developed the "basis pursuit" method using miminization of the L1 norm of coefficients to cast the problem as a linear programming problem, for which efficient solutions are available. The Vaníček method In the Vaníček method, a discrete data set is approximated by a weighted sum of sinusoids of progressively determined frequencies, using a standard linear regression, or least-squares fit. 
The frequencies are chosen using a method similar to Barning's, but going further in optimizing the choice of each successive new frequency by picking the frequency that minimizes the residual after least-squares fitting (equivalent to the fitting technique now known as matching pursuit with pre-backfitting). The number of sinusoids must be less than or equal to the number of data samples (counting sines and cosines of the same frequency as separate sinusoids).

A data vector Φ is represented as a weighted sum of sinusoidal basis functions, tabulated in a matrix A by evaluating each function at the sample times, with weight vector x:

$\phi \approx \textbf{A}x$

where the weight vector x is chosen to minimize the sum of squared errors in approximating Φ. The solution for x is closed-form, using standard linear regression:

$x = (\textbf{A}^{\mathrm{T}}\textbf{A})^{-1}\textbf{A}^{\mathrm{T}}\phi$.

Here the matrix A can be based on any set of functions that are mutually independent (not necessarily orthogonal) when evaluated at the sample times; for spectral analysis, the functions used are typically sines and cosines evenly distributed over the frequency range of interest. If too many frequencies are chosen in a too-narrow frequency range, the functions will not be sufficiently independent, the matrix will be badly conditioned, and the resulting spectrum will not be meaningful.

When the basis functions in A are orthogonal (that is, not correlated, meaning the columns have zero pair-wise dot products), the matrix $\textbf{A}^{\mathrm{T}}\textbf{A}$ is a diagonal matrix; when the columns all have the same power (sum of squares of elements), then that matrix is an identity matrix times a constant, so the inversion is trivial. The latter is the case when the sample times are equally spaced and the sinusoids are chosen to be sines and cosines equally spaced in pairs on the frequency interval 0 to a half cycle per sample (spaced by 1/N cycle per sample, omitting the sine phases at 0 and maximum frequency where they are identically zero). This particular case is known as the discrete Fourier transform, slightly rewritten in terms of real data and coefficients.

$x = \textbf{A}^{\mathrm{T}}\phi$     (DFT case for N equally spaced samples and frequencies, within a scalar factor)

Lomb proposed using this simplification in general, except for pair-wise correlations between sine and cosine bases of the same frequency, since the correlations between pairs of sinusoids are often small, at least when they are not too closely spaced. This is essentially the traditional periodogram formulation, but now adopted for use with unevenly spaced samples. The vector x is a good estimate of an underlying spectrum, but since correlations are ignored, $\textbf{A}x$ is no longer a good approximation to the signal, and the method is no longer a least-squares method – yet it has continued to be referred to as such.
The Lomb–Scargle periodogram

Rather than just taking dot products of the data with sine and cosine waveforms directly, Scargle modified the standard periodogram formula to first find a time delay τ such that this pair of sinusoids would be mutually orthogonal at sample times tj, and also adjusted for the potentially unequal powers of these two basis functions, to obtain a better estimate of the power at a frequency, which made his modified periodogram method exactly equivalent to Lomb's least-squares method. The time delay τ is defined by

$\tan 2\omega\tau = \frac{\sum_j \sin 2\omega t_j}{\sum_j \cos 2\omega t_j}.$

The periodogram at frequency ω is then estimated as:

$P_x(\omega) = \frac{1}{2} \left( \frac{\left[\sum_j X_j \cos \omega (t_j - \tau) \right]^2}{\sum_j \cos^2 \omega (t_j - \tau)} + \frac{\left[\sum_j X_j \sin \omega (t_j - \tau) \right]^2}{\sum_j \sin^2 \omega (t_j - \tau)} \right)$

which Scargle reports then has the same statistical distribution as the periodogram in the evenly-sampled case. At any individual frequency ω, this method gives the same power as does a least-squares fit to sinusoids of that frequency, of the form

$\phi(t) \approx A \sin \omega t + B \cos \omega t.$

Korenberg's "fast orthogonal search" method

Michael Korenberg of Queen's University in Kingston, Ontario, developed a method for choosing a sparse set of components from an over-complete set, such as sinusoidal components for spectral analysis, called fast orthogonal search (FOS). Mathematically, FOS uses a slightly modified Cholesky decomposition in a mean-square error reduction (MSER) process, implemented as a sparse matrix inversion. As with the other LSSA methods, FOS avoids the major shortcoming of discrete Fourier analysis, can achieve highly accurate identifications of embedded periodicities, and excels with unequally spaced data; the fast orthogonal search method has also been applied to other problems such as nonlinear system identification.

Chen and Donoho's "basis pursuit" method

Chen and Donoho have developed a procedure called "basis pursuit" for fitting a sparse set of sinusoids or other functions from an over-complete set. The method defines an optimal solution as the one that minimizes the L1 norm of the coefficients, so that the problem can be cast as a linear programming problem, for which efficient solution methods are available.

Applications

The most useful feature of the LSSA method is enabling incomplete records to be spectrally analyzed, without the need to manipulate the record or to invent otherwise non-existent data. Magnitudes in the LSSA spectrum depict the contribution of a frequency or period to the variance of the time series. Spectral magnitudes defined in this way make it straightforward to attach significance levels to the output. Alternatively, magnitudes in the Vaníček spectrum can also be expressed in dB. Note that magnitudes in the Vaníček spectrum follow a β-distribution.

Inverse transformation of Vaníček's LSSA is possible, as is most easily seen by writing the forward transform as a matrix; the matrix inverse (when the matrix is not singular) or pseudo-inverse will then be an inverse transformation; the inverse will exactly match the original data if the chosen sinusoids are mutually independent at the sample points and their number is equal to the number of data points. No such inverse procedure is known for the periodogram method.

Implementation

The LSSA can be implemented in less than a page of MATLAB code.
For each frequency in a desired set of frequencies, sine and cosine functions are evaluated at the times corresponding to the data samples, and dot products of the data vector with the sinusoid vectors are taken and appropriately normalized; following the method known as the Lomb/Scargle periodogram, a time shift is calculated for each frequency to orthogonalize the sine and cosine components before the dot product, as described by Craymer; finally, a power is computed from those two amplitude components. This same process implements a discrete Fourier transform when the data are uniformly spaced in time and the frequencies chosen correspond to integer numbers of cycles over the finite data record.

As Craymer explains, this method treats each sinusoidal component independently, or out of context, even though they may not be orthogonal on the data points, whereas Vaníček's original method does a full simultaneous least-squares fit by solving a matrix equation, partitioning the total data variance between the specified sinusoid frequencies. Such a matrix least-squares solution is natively available in MATLAB as the backslash operator.

Craymer explains that the least-squares method, as opposed to the independent or periodogram version due to Lomb, cannot fit more components (sines and cosines) than there are data samples, and further that:

Lomb's periodogram method, on the other hand, can use an arbitrarily high number of, or density of, frequency components, as in a standard periodogram; that is, the frequency domain can be over-sampled by an arbitrary factor.

In Fourier analysis, such as the Fourier transform or the discrete Fourier transform, the sinusoids being fitted to the data are all mutually orthogonal, so there is no distinction between the simple out-of-context dot-product-based projection onto basis functions versus a least-squares fit; that is, no matrix inversion is required to least-squares partition the variance between orthogonal sinusoids of different frequencies. This method is usually preferred for its efficient fast Fourier transform implementation, when complete data records with equally spaced samples are available.

External links

• LSSA software freeware download (via ftp), FORTRAN, Vaníček's method, from Natural Resources Canada.
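As a rough illustration of the per-frequency procedure described under Implementation above, here is a sketch of my own in Python rather than MATLAB (the sampling pattern, test signal and frequency grid are invented): the time shift τ from the Lomb–Scargle section is computed for each trial frequency, and the two normalized dot products are combined into a power.

```python
# Sketch of the Lomb-Scargle periodogram as described above: each frequency is
# treated independently, with a time shift tau that orthogonalizes the sine and
# cosine components at the (possibly uneven) sample times.
import numpy as np

def lomb_scargle(t, x, omegas):
    x = x - x.mean()                      # common practice: remove the mean first
    power = np.empty_like(omegas)
    for k, w in enumerate(omegas):
        tau = np.arctan2(np.sum(np.sin(2 * w * t)), np.sum(np.cos(2 * w * t))) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[k] = 0.5 * ((x @ c) ** 2 / (c @ c) + (x @ s) ** 2 / (s @ s))
    return power

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 200))                 # uneven sampling
x = np.sin(2 * np.pi * 0.12 * t) + 0.3 * rng.normal(size=t.size)
omegas = 2 * np.pi * np.linspace(0.01, 0.5, 500)      # over-sampled frequency grid
P = lomb_scargle(t, x, omegas)
print("peak at frequency (cycles/unit):", omegas[np.argmax(P)] / (2 * np.pi))
```

Note that the frequency grid here is denser than the number of samples, which the periodogram treatment permits but a simultaneous least-squares fit would not.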
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9144030809402466, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/50037/passing-through-the-earths-magnetic-field/50042
# Passing through the earth's magnetic field

When a projectile such as a jet plane passes through the earth's magnetic field, does an electric current get generated in the projectile? I don't sense it when I fly commercially. Why not? - 1 Faraday cage enclosure – Michael Luciuk Jan 12 at 16:47 You mean an induced current due to the inhomogeneity of Earth's magnetic field? How do you suppose you would "sense" that when you fly? Anyway, it will probably be extremely small since the magnetic field variations are on very large scales. – jkej Jan 12 at 17:09 Thank you. I forgot about Faraday's cage. – Tony Vignone Jan 12 at 17:09

## 1 Answer

Yes indeed. We do observe an induced emf when moving perpendicular to the magnetic field. Let's assume that our plane travels at some $50\ \mathrm{m\,s^{-1}}$ and that the length of its wings is $30\ \mathrm{m}$. The vertical component of Earth's magnetic field is around some $4\times 10^{-5}\ \mathrm{T}$. Roughly, the induced emf is given by $$e=-B\ l\ v =-50\times 30\times 4\times 10^{-5}$$ $$\implies e=-0.06\ \mathrm{V}$$ You must be an amazingly sensitive being in order to detect that very small voltage. If you cling to the wings somehow and want to feel the current, we could make use of Ohm's law. But the amperes would be even worse. Our body resistance is about $10^5\ \Omega$ $$I=\frac{0.06}{10^5}=0.6\ \mu A$$ What do you think of this current? As Michael says, Faraday's cage is a great example... -
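Spelling out the arithmetic in the answer (the speed, wingspan, field strength and body resistance are the answer's own rough assumptions):

```python
# Back-of-envelope numbers from the answer above.
v = 50.0        # speed in m/s (the answer's assumption; airliners cruise much faster)
l = 30.0        # wingspan in m
B = 4e-5        # vertical component of Earth's field in tesla
R = 1e5         # rough human body resistance in ohms

emf = B * l * v             # motional emf magnitude, |e| = B l v
current = emf / R           # Ohm's law
print(f"emf = {emf:.3f} V, current = {current*1e6:.2f} microamp")
```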
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9319099187850952, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/31678/central-limit-theorem?answertab=votes
Central Limit Theorem

If N is a Poisson random variable, why is the following true? It is from "Probability and Stochastic Processes" by Yates, page 301, equation 8.2 to 8.3 $$P\left(\left|\frac{N-E(N)}{\sigma_N}\right| \geq \frac{c}{\sigma_N}\right) = P(|Z| \geq \frac{c}{\sigma_N})$$ Z is the standard normal Gaussian random variable. The explanation in the text is: "Since E[N] is large", the CLT can be used. But I am familiar with the CLT being used with sums of random variables. Thanks. - The context here is telephone calls entering a system. The total number of calls is a Poisson random variable, N. For example, if the hypothesis were that the number of calls is 100, then I could express the observed random variable as $n_1 + n_2 + \cdots + n_{100}$. I think here, we are assuming the number of observed random variables is E[N]? And each random variable has expected value = variance = 1? Then, in general, is it true that each observation is Poisson with expected value = variance = 1? – jrand Apr 8 '11 at 4:00

2 Answers

Several points. 1) CLT only gives approximation to normal, not equality. 2) While the standard CLT can be easily applied to the case where the parameter $\alpha$ is an integer tending to $\infty$, you have a slight problem if $\alpha_n \to \infty$ with $\alpha_n$ real; consider Douglas Zare's comment: the sequence of parameters $\alpha _n /\left\lfloor {\alpha _n } \right\rfloor$ is not fixed (though tends to $1$). 3) This problem is essentially a special case of this recent one. Indeed, if $N$ has parameter $\alpha=t$, then it is equal in distribution to $X_t$, where $X = \{X_t: t \geq 0\}$ is a Poisson process with rate $1$. But $X$ is just a special case of a compound Poisson process, where the jump distribution is the $\delta_1$-distribution (this corresponds to the $Y_i$ being equal to $1$ in the linked post). So, instead of considering $\frac{{N - E(N)}}{{\sigma (N)}}$, you can consider $\frac{{X_t - E(X_t )}}{{\sigma (X_t )}}$ (which has been done in the linked post). Remark. Note that in the linked post the $\frac{{X_t - E(X_t )}}{{\sigma (X_t )\sqrt {N_t } }} \to {\rm N}(0,1)$ appearing in question 1 should have been replaced with $\frac{{X_t - E(X_t )}}{{\sigma (X_t ) }} \to {\rm N}(0,1)$. - N is a Poisson($\alpha$) random variable. Then, it can be expressed as the sum of $\alpha$ Poisson(1) random variables. If $\alpha$ is large, then the central limit theorem can be used. The justification for splitting up the Poisson distribution is on page 253 of the text; the derivation involves the moment generating function. $$P\left(\left|\frac{N-E(N)}{\sigma_N}\right| \geq \frac{c}{\sigma_N}\right)$$ $$= P\left(\left|\frac{\sum_{i=1}^{\alpha}n_i-\alpha}{\sqrt{\alpha}}\right| \geq \frac{c}{\sigma_N}\right)$$ Since $E(n_i) = \sigma_{n_i}^2 = 1$, set $Z_\alpha = \frac{\sum_{i=1}^{\alpha}n_i-\alpha}{\sqrt{\alpha}}$ and $Z_\alpha$ is approximately standard normal, by the CLT. - What if $\alpha$ is not an integer? – Mike Spivey Apr 8 '11 at 4:11 2 Then use a sum of $\lfloor \alpha \rfloor$ IID Poisson distributions with parameter $\alpha/\lfloor \alpha \rfloor$. – Douglas Zare Apr 8 '11 at 4:45 +1, thanks partly to the comment by Douglas Zare. – Mike Spivey Apr 8 '11 at 5:12
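As a numerical aside (not part of the original thread), one can check the approximation being discussed directly: for large α the standardized Poisson variable behaves like a standard normal, so the two tail probabilities in the question nearly agree. The values of α and c below are arbitrary.

```python
# Compare P(|N - E[N]|/sigma_N >= c/sigma_N) with the standard normal tail,
# for N ~ Poisson(alpha) with alpha large. Values of alpha and c are arbitrary.
import numpy as np
from scipy import stats

alpha, c = 400.0, 30.0
sigma = np.sqrt(alpha)                      # for a Poisson variable, Var(N) = E(N) = alpha

rng = np.random.default_rng(0)
N = rng.poisson(alpha, size=1_000_000)
empirical = np.mean(np.abs(N - alpha) / sigma >= c / sigma)
normal_approx = 2 * stats.norm.sf(c / sigma)   # P(|Z| >= c/sigma)

print(f"empirical      : {empirical:.4f}")
print(f"normal approx  : {normal_approx:.4f}")
```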
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8777701258659363, "perplexity_flag": "head"}
http://mathoverflow.net/questions/77520?sort=newest
## A competitive root finding game

Inspired by a question about bisection I wondered about the following: There are two players X and Y and a moderator Z who knows two (random, independent, uniformly chosen) hidden reals $x$ and $y$ from (0,1). The game has $t$ turns. At the beginning of each turn player X knows an open interval containing $x$; she names an interior point and learns which sub-interval contains $x$. Simultaneously, Y does the same for $y$. The player with the shorter interval wins one dollar from the other player (in case of a tie, no one wins). What is the optimal strategy? For $t=1$ turn the optimal strategy is obviously to choose the midpoint $0.5$. But for $t=2$ the following strategy beats a player who has decided on bisection: on turn 1 choose $0.45$ and on turn 2 choose either $0.225$ or $0.675$. Then the probabilities of coming out ahead ($+$), even ($0$), and behind ($-$) are $0.45$, $0.225$ and $0.325$ respectively. Also: Consider the variation where X wins only if her interval is $90\%$ or less of that of player $Y$. What are the (respective) optimal strategies in this case? It is still not optimal for $Y$ to use bisection. The exact same strategy as above has the exact same payoff odds for X. - There is certainly not a pure strategy equilibrium, owing to the discrete jumps in payoffs for small changes in interval size. Do X and Y know the length of the other one's interval? If not, then you could just have both players choose a partition into 2^k. If so, then one would want to define an optimal strategy dependent on the time left and the ratio between the partitions. Doing so would handily answer your second question. However, it sounds quite difficult. – Will Sawin Oct 8 2011 at 6:11 I suppose one could discretize and say that the selected x and y are irrational but the guesses must be $\frac{n}{2^{100}}$ or something like that. – Aaron Meyerowitz Oct 8 2011 at 7:39

## 2 Answers

Possibly nothing simple, even when there is only one turn left. Suppose that there is just one turn left, the intervals are of length 4 and 6 (just to scale up) and the moves must create intervals of length $\frac{j}{4}$ for an integer $j$. Then by my calculations, the player with the shorter interval should guess $$\frac54, \frac64 \text{ or }\frac74 \text{ with probabilities } \frac{4462}{6005}, \frac{53}{6005}, \frac{1490}{6005}$$ while the other player should choose $$\frac54, \frac74 \text{ or } \frac84 \text{ with probabilities } \frac{392}{1201}, \frac{200}{1201}, \frac{609}{1201}.$$ - This is not a complete answer, but I lack the reputation to post as a comment. The strategy must account for the winning/losing streak because the advantage of a smaller interval compounds. If both players choose 0.5 for the first turn, by the rules neither one wins. This highlights an ambiguity in the rules as stated. In the case of a tie, do the players know that they tied? Suppose after some number of moves, player X has a losing streak of N moves and first lost with an interval of length I. Then player X knows that player Y will have an interval no larger than $I\,2^{-(N-1)}$ if player Y is following a bisection strategy. Player X must make a guess with an interval smaller than this number to have any chance of winning, unless player Y is following some other strategy.
Suppose instead that player X has a winning streak of N moves and first won with an interval of length I. Then player X knows that player Y must be guessing smaller and smaller intervals with each loss. As a result, player X can attempt to safeguard the streak by choosing non-bisecting moves. This requires also knowing how much time is left (as Will Sawin says) because the more turns left the more important it is not to start losing. Fascinating question. Have you tried any Monte Carlo simulations on it? - I assume that at the start of each turn one knows the length of both intervals. – Aaron Meyerowitz Oct 9 2011 at 5:19 That makes it a much different problem than I was considering. I'll give it some more thought. – Chad Musick Oct 9 2011 at 5:24
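Picking up the Monte Carlo suggestion, here is a small simulation sketch (mine, not any of the posters') of the two-turn game from the question, with X playing 0.45 and then 0.225 or 0.675 against a bisecting Y, reading the rules as a dollar changing hands after each of the two turns. The probabilities 0.45 / 0.225 / 0.325 quoted in the question then show up as the chances that X ends up ahead, even, or behind.

```python
# Monte Carlo for the 2-turn game: X guesses 0.45, then 0.225 or 0.675 depending on
# which side x fell; Y bisects (0.5, then the midpoint of her half). After each turn
# the player holding the shorter interval wins a dollar from the other.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.uniform(size=n)

# Turn 1: X's interval is (0, 0.45) or (0.45, 1); Y's interval has length 0.5.
x_len1 = np.where(x < 0.45, 0.45, 0.55)
pay1 = np.sign(0.5 - x_len1)

# Turn 2: X guesses 0.225 if x < 0.45, else 0.675; Y's interval has length 0.25.
x_len2 = np.where(x < 0.45, 0.225, np.where(x < 0.675, 0.225, 0.325))
pay2 = np.sign(0.25 - x_len2)

net = pay1 + pay2
for outcome in (2, 0, -2):
    print(f"P(net = {outcome:+d}) = {np.mean(net == outcome):.3f}")
```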
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9581806063652039, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/43483/rotating-sphere-and-circular-trajectory-minimum-speed
# Rotating sphere and circular trajectory: minimum speed

I have a sphere (mass = 3 kg), constrained to a rope of fixed length, rotating (radius = 5 m) in a vertical plane. My textbook asks me for the minimum speed at the highest point needed to keep the circular trajectory. Now, I know that at the highest point (v = speed): $$F_c=mg+T$$ $$\frac{mv^2}{r}=mg+T$$ $$\frac{3}{5}v^2=29.43+T$$ I know that with a low speed I have a low tension, so let's put T=0 and go on: $$v^2=49.05 \rightarrow v=7$$ The result is correct, but I have a little doubt: if the tension is 0, why doesn't the sphere go along the tangent or start falling down? Exactly what is forcing the sphere to preserve the circular trajectory? In my ignorant opinion, the minimum speed is a little more than 7.00 m/s, because the tension mustn't be 0; it could be a very small number, but not zero. - The tension is only zero for an infinitely short time. Immediately before and immediately after the apex of the swing the tension is non-zero. – John Rennie Nov 5 '12 at 14:09 The tension may be nearly zero at the highest point, but its momentum keeps it going towards its tangent. – prash Nov 5 '12 at 14:32 @JohnRennie but why does it follow the trajectory and not go along the tangent? – Surfer on the fall Nov 5 '12 at 14:36 @prash Exactly, towards its tangent! So it doesn't move along the circular trajectory: it's starting to go away! Am I right? – Surfer on the fall Nov 5 '12 at 14:37 How can it fly away, if it is attached by a rope? It always "wants" to fly tangentially, but the rope stops it from doing so; that's what causes the tension. At the top, if T=0, the rope is not needed for an infinitesimally short time; after that T increases again, and the rope is needed again. If T<0 then yes, the ball will fall down. – hdhondt Nov 6 '12 at 3:12

## 1 Answer

Here the tension is zero only for a very short span of time (an infinitesimally short time), or for an instant only. Whenever it moves away from the vertically topmost position it will feel tension again. So it will move in a circular trajectory. And if we consider the instantaneous velocity, it is tangential. I think this clarifies your doubt. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9196053147315979, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/220284/does-sint-have-the-same-frequency-as-sin-sint
# Does $\sin(t)$ have the same frequency as $\sin(\sin(t))$?

I plotted $\sin(t)$ and below it $\sin(\sin(t))$ on my computer and it looks as if they have the same frequency. That led me to wonder about the following statement: $\sin(t)$ has the same frequency as $\sin(\sin(t))$ Is this statement true or false, and how to prove it? Many thanks -

## 5 Answers

For every function $f$, $f(\sin(t))$ is going to be $2\pi$-periodic, because $\sin(t)$ is $2\pi$-periodic. At every $2\pi$-interval, $\sin(t)$ is simply ranging over the values $[-1,1]$, so $f(\sin(t))$ is simply $f$ being evaluated over and over in the domain $[-1,1]$. - 4 The frequency requires the prime period, though. Just because a function has period x doesn't mean it doesn't also have period x/2 or x/3. In an extreme example, what if f is arcsin? – ex0du5 Oct 24 '12 at 19:28 4 @ex0du5: With $f=\arcsin$, you get a triangular sawtooth function - of period $2\pi$. But with $f(x)=x^2$, you are right: This would produce period $\pi$ and not just $2\pi$. Not to mention $f(x)=$const. – Hagen von Eitzen Oct 24 '12 at 19:50 1 Sure, the function could also be $T$ periodic for some other $T$, depending on the $f$. But it should be easy to check for nice $f$ whether there would be smaller periods. – Christopher A. Wong Oct 24 '12 at 19:51 @ChristopherA.Wong: Right. In this special case, $x\in[-1,1]$ and $\sin x=0$ implies $x=0$, therefore $\sin(\sin(x))=0$ iff $x=k\pi$. So apart from $2\pi$, only $\pi$ would be a possible period. But $\sin(\sin(\pi/2))>0$, $\sin(\sin(-\pi/2))<0$ rules this out. – Hagen von Eitzen Oct 24 '12 at 19:55 $\color{#C00000}{\sin}(x)$ is injective on $[-1,1]$ which is the range of $\color{#00A000}{\sin}(x)$. Thus, $$\color{#C00000}{\sin}(\color{#00A000}{\sin}(x))=\color{#C00000}{\sin}(\color{#00A000}{\sin}(y))\Leftrightarrow\color{#00A000}{\sin}(x)=\color{#00A000}{\sin}(y)$$ Therefore, the period of $\color{#C00000}{\sin}(\color{#00A000}{\sin}(t))$ is the same as that of $\color{#00A000}{\sin}(t)$. - In general, it is possible to express trigonometric functions of trigonometric functions via the Jacobi-Anger expansion. In the case of $\sin(\sin(t))$, we have: $\sin(\sin(t)) = 2 \sum_{n=1}^{\infty} J_{2n-1}(1) \sin\left[\left(2n-1\right) t\right]$, where $J_{2n-1}$ is the Bessel function of the first kind of order $2n-1$. It is clear from this expansion that the zeroes of $\sin(\sin(t))$ are the same as those of $\sin(t)$, since any integer multiple of $\pi$ for the argument $t$ will also lead to an integer multiple of $\pi$ for the argument of each $\sin[(2n-1)t]$ term in the expansion. As MrMas mentioned, though the functions have the same period, their spectral content is different. The expansion can be viewed as a Fourier series for the spectral components of $\sin(\sin(t))$, the amplitudes of which are governed by the amplitude of the $2n-1$-th Bessel function. Here is a plot of $|J_n(1)|$ for $n$ from $1$ to $10$: For z = 1 in $\sin(z\sin(t))$, there is very little harmonic content, and in the time domain $\sin(\sin(t))$ doesn't look terribly different from an ordinary sine wave. - In short, the answer is no if you look at instantaneous frequency. The instantaneous frequency of a sinusoid is the derivative of the argument. That is, the frequency of $\sin(f(t))$ is $\frac{d}{dt}f(t)$. Thus the frequency of $\sin(\sin(t))$ is $\frac{d}{dt}\sin(t)=\cos(t)$. On the other hand, the frequency of $\sin(t)$ is $\frac{d}{dt}t = 1$. So they do have the same period, but their spectral content is different.
- You also need to argue that the period is not shorter than $2\pi$ to be able to conclude that it is exactly two pi, even though it follows more or less directly from considering the graph of $\sin(t)$. - I do not understand "the zeros of $\sin(\sin(t))$ is not evenly spaced in the interval $[−1,1]$". The zeros of $\sin(\sin(t))$ are the integer multiples of $\pi$. – robjohn♦ Oct 24 '12 at 21:09
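A quick numerical check (not part of the thread) of the Jacobi–Anger expansion quoted above, using SciPy's Bessel functions: a handful of odd harmonics already reproduce sin(sin(t)) to high accuracy, and the n = 1 term dominates, which is why the two curves in the question look so alike.

```python
# Verify sin(sin(t)) = 2 * sum_{n>=1} J_{2n-1}(1) * sin((2n-1) t) numerically.
import numpy as np
from scipy.special import jv

t = np.linspace(0.0, 2 * np.pi, 1000)
exact = np.sin(np.sin(t))

approx = np.zeros_like(t)
for n in range(1, 6):                       # first five odd harmonics: 1, 3, 5, 7, 9
    k = 2 * n - 1
    approx += 2 * jv(k, 1.0) * np.sin(k * t)

print("max |error| with 5 terms:", np.max(np.abs(exact - approx)))
print("J_1(1), J_3(1), J_5(1):", jv(1, 1.0), jv(3, 1.0), jv(5, 1.0))
```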
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 69, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9459747076034546, "perplexity_flag": "head"}
http://mathoverflow.net/questions/31426?sort=newest
## When is a surface in a threefold contractible to a curve?

Given a threefold $Y$ containing a surface $S$, under which conditions can I contract $S$ so that I still end up with a smooth variety? In other words, what are the conditions for the existence of a smooth variety $X$ and a morphism $Y\rightarrow X$ such that the image of $S$ under the morphism is a curve and the morphism is an isomorphism away from $S$? What are the conditions when $Y$ is a fourfold and $S$ is still a surface? -

## 3 Answers

You want a divisorial contraction $Y \to X$ on a smooth 3-fold Y. Such contractions have been classified by Mori in his paper "3-folds whose canonical bundles are not numerically effective", thm 3.3. There are exactly the following possibilities:

1. the smooth blow-up of a point; in this case $S$ is isomorphic to $\mathbb{P}^2$ with normal bundle $\mathcal{O}(-1)$;
2. the smooth blow-up of a curve; in this case $S$ is a ruled surface whose normal bundle restricted to the ruling has degree $(-1)$; this is the situation described by Donu Arapura in his answer;
3. the contraction of a plane $S$ with normal bundle $\mathcal{O}(-2)$; in this case the threefold $X$ has an isolated singularity isomorphic to the quotient of $\mathbb{A}^3$ by the involution $(x,y,z) \to (-x, -y, -z)$;
4. the contraction of a smooth quadric $S$ whose rulings are numerically equivalent; in this case the image of $S$ is a single point, which is a singular point for $X$;
5. the contraction of a singular quadric $S$; again, the image of $S$ is a point in $X$.

Summing up, if you want $X$ to be smooth and the image of $S$ to be a curve, the only possibility is 2. In the case where $Y$ is a smooth 4-fold and $S$ is a smooth surface, the answer can be found in the paper of Kawamata "Small contractions of four-dimensional algebraic manifolds": in this case the only possibility is that $S$ is the disjoint union of copies of $\mathbb{P}^2$, with normal bundle $\mathcal{O}(-1) \oplus \mathcal{O}(-1)$. - @ Francesco. Thank you very much for your answer. In the case of a smooth 4-fold, Kawamata's paper assumed that the contraction is "a small elementary contraction". It seems that this is a pretty strong restriction since there are no small elementary contractions in the case of algebraic 3-folds. Do you know if there are any results with weaker assumptions? – JME Aug 14 2010 at 18:52 Unfortunately I'm not aware of any results with weaker assumptions. Maybe they exist, but I'm not really a specialist in Mori theory... – Francesco Polizzi Aug 18 2010 at 9:43 I think there are birational geometers lurking around who would do a better job. But let me make a small attempt for now. If you blow up a smooth curve on a smooth threefold, you would get a ruled surface with normal bundle restricting to $O(-1)$ along the rulings. I think you can do the converse if you allow yourself to work in the category of smooth algebraic spaces [see Artin, Cor. 6.11, Algebraization of formal moduli..., Annals 1970], but such a statement is generally false for varieties. I remember learning this from Moishezon long ago.
I think that if, in addition to the above conditions, the fibre of the ruling is an extremal ray in Mori's sense, you can probably use his stuff to get a projective contraction. Take a look at Kenji Matsuki's book on the Intro. to the Mori program for more about that. - I hadn't realized there was already an answer, but I'll let mine stand for now. – Donu Arapura Jul 11 2010 at 22:07 Your reference is very useful! Thank you! – JME Jul 11 2010 at 23:19 You want a divisorial contraction. This paper may be the answer. 1. MR2041612 (2005c:14019) Tziolas, Nikolaos. Terminal 3-fold divisorial contractions of a surface to a curve. I. Compositio Math. 139 (2003), no. 3, 239--261. from the paper "This paper studies divisorial contractions of a surface to a curve, i.e. when dim $\Gamma = 1$ and X has only index 1 terminal singularities along $\Gamma$. It is not always true that given $\Gamma \subset X$, there is a terminal contraction of a surface to $\Gamma$. We investigate when there is one, give criteria for existence or not and in the case that there is a terminal contraction we also describe the singularities of Y." -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9436323046684265, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/265296/two-equivalent-definitions-of-a-s-convergence-of-random-variables?answertab=votes
Two equivalent definitions of a.s. convergence of random variables. Could someone prove that $\mathbb{P}[\omega:\lim_{n\to\infty}X_n(\omega) = X(\omega)] = 1$ iff $\lim_{n\to\infty}\mathbb{P}[\omega:\sup_{k>n}|X_k(\omega) - X(\omega)|>\epsilon] = 0$ ? Here $\{X_n\}_{n=1,2,\cdots}$ is a sequence of random variables. Those two are equivalent definitions of a.s. convergence of random variables. -

3 Answers

Well, let's denote by $Y_n$ the variable $\sup_{k>n} |X_k - X|$. Observe that 1. $(Y_n)$ is a decreasing sequence of nonnegative random variables 2. $Y_n \to 0$ iff $X_n \to X$ 3. "$\mathsf{P}\{Y_n > \epsilon\} \to 0$ for all $\epsilon$" means that $Y_n \to 0$ in probability Now everything follows from the simple but useful fact: if $(Y_n)$ is a monotone sequence, then its convergence in probability implies almost sure convergence (that the converse is true for any sequence is an extremely standard fact). This can be proved by different means, but the easiest proof that I know of is as follows: Any monotone sequence converges (almost surely) to something (this is just standard calculus, and "almost surely" is actually irrelevant). Hence it also converges in probability to the same limit, hence if $Y_n \to Y$ in probability, then $Y$ must coincide with the almost sure limit. - The former implies $$\begin{eqnarray} 0 &=& P(\lim_{i\to\infty}X_i \neq X\ \text{ or }\ \lim_{i\to\infty}X_i \ \text{ does not exist}) \\ &=& P(\omega:\exists n\in\mathbf{N}, \forall m\in\mathbf{N}, \exists i>m \,\,\, \text{s.t.} \,\, |X_i(\omega) - X(\omega)| \ge 1/n)\\ &=& P(\bigcup_n \bigcap_m \bigcup_{i>m} \{\omega:|X_i(\omega) - X(\omega)| \ge 1/n\})\\ &\overset{\forall n}{\ge}& P(\bigcap_m \bigcup_{i>m} \{\omega:|X_i(\omega) - X(\omega)| \ge 1/n\})\\ &=&P(\limsup_{i\to\infty} \{\omega:|X_i(\omega) - X(\omega)| \ge 1/n\}) \\ &=&P(\lim_{i\to\infty} \bigcup_{j > i}\{\omega:|X_j(\omega) - X(\omega)| \ge 1/n\})\\ &=&\lim_{i\to\infty} P(\bigcup_{j > i}\{\omega:|X_j(\omega) - X(\omega)| \ge 1/n\}) \ge 0.\\ \end{eqnarray}$$ The last line is justified by continuity of measures from above (see here). Hence $\lim_{i\to\infty} P(\bigcup_{j > i}\{\omega:|X_j(\omega) - X(\omega)| \ge 1/n\}) = 0$ for every $n$; given $\epsilon > 0$, choosing $n$ with $1/n \le \epsilon$ gives the second statement. Now let's prove the converse. For each $n\in\mathbf{N}$, $$\begin{eqnarray} 0&=&\lim_{i\to\infty} P(\bigcup_{j > i}\{\omega:|X_j(\omega) - X(\omega)| \ge 1/n\})\\ &=&P(\lim_{i\to\infty} \bigcup_{j > i}\{\omega:|X_j(\omega) - X(\omega)| \ge 1/n\})\\ &=&P(\limsup_{j\to\infty} \{\omega:|X_j(\omega) - X(\omega)| \ge 1/n\})\\ &=&P(\bigcap_i\bigcup_{j>i} \{\omega:|X_j(\omega) - X(\omega)| \ge 1/n\})\\ &=&P(\omega:\forall i, \exists j>i, \text{s.t.}\, |X_j(\omega) - X(\omega)| \ge 1/n) \end{eqnarray}$$ This implies, for each $n$, $$1 = P(\omega: \exists i, \forall j>i, \text{s.t.}\, |X_j(\omega) - X(\omega)| < 1/n).$$ Since a countable intersection of events of probability $1$ still has probability $1$, it follows that $$\begin{eqnarray} 1 &=& P(\bigcap_n\{\omega: \exists i, \forall j>i, \text{s.t.}\, |X_j(\omega) - X(\omega)| < 1/n\})\\ &=&P(\omega:\lim_{j\to\infty}|X_j(\omega) - X(\omega)| = 0)\\ &=&P(\omega:\lim_{j\to\infty}X_j(\omega) = X(\omega)). \end{eqnarray}$$ Actually, I think this proof may need to be polished. But I hope it is not wrong in a bird's eye view at least. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9311185479164124, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables
# Sum of normally distributed random variables In probability theory, calculation of the sum of normally distributed random variables is an instance of the arithmetic of random variables, which can be quite complex based on the probability distributions of the random variables involved and their relationships. ## Independent random variables If X and Y are independent random variables that are normally distributed (not necessarily jointly so), then their sum is also normally distributed. i.e., if $X \sim N(\mu_X, \sigma_X^2)$ $Y \sim N(\mu_Y, \sigma_Y^2)$ $Z=X+Y$ and X and Y are independent, then $Z \sim N(\mu_X + \mu_Y, \sigma_X^2 + \sigma_Y^2).$ This means that the sum of two independent normally distributed random variables is normal, with its mean being the sum of the two means, and its variance being the sum of the two variances (i.e., the square of the standard deviation is the sum of the squares of the standard deviations). Note that the result that the sum is normally distributed requires the assumption of independence, not just uncorrelatedness; two separately (not jointly) normally distributed random variables can be uncorrelated without being independent, in which case their sum can be non-normally distributed (see Normally distributed and uncorrelated does not imply independent#A symmetric example). The result about the mean holds in all cases, while the result for the variance requires uncorrelatedness, but not independence. ### Proofs #### Proof using characteristic functions [citation needed] The characteristic function $\varphi_{X+Y}(t) = \operatorname{E}\left(e^{it(X+Y)}\right)$ of the sum of two independent random variables X and Y is just the product of the two separate characteristic functions: $\varphi_X (t) = \operatorname{E}\left(e^{itX}\right), \qquad \varphi_Y(t) = \operatorname{E}\left(e^{itY}\right)$ of X and Y. The characteristic function of the normal distribution with expected value μ and variance σ2 is $\varphi_X(t) = \exp\left(it\mu - {\sigma^2 t^2 \over 2}\right).$ So $\varphi_{X+Y}(t)=\varphi_X(t) \varphi_Y(t) =\exp\left(it\mu_X - {\sigma_X^2 t^2 \over 2}\right) \exp\left(it\mu_Y - {\sigma_Y^2 t^2 \over 2}\right) = \exp \left( it (\mu_X +\mu_Y) - {(\sigma_X^2 + \sigma_Y^2) t^2 \over 2}\right).$ This is the characteristic function of the normal distribution with expected value $\mu_X + \mu_Y$ and variance $\sigma_X^2+\sigma_Y^2$ Finally, recall that no two distinct distributions can both have the same characteristic function, so the distribution of X+Y must be just this normal distribution. 
#### Proof using convolutions [citation needed] For random variables X and Y, the distribution fZ of Z = X+Y equals the convolution of fX and fY: $f_Z(z) = \int_{-\infty}^\infty f_Y(z-x) f_X(x) dx$ Given that fX and fY are normal densities, $f_X(x) = \frac{1}{\sqrt{2\pi}\sigma_X} e^{-{(x-\mu_X)^2 \over 2\sigma_X^2}}$ $f_Y(y) = \frac{1}{\sqrt{2\pi}\sigma_Y} e^{-{(y-\mu_Y)^2 \over 2\sigma_Y^2}}$ Substituting into the convolution: $\begin{align} f_Z(z) &= \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_Y} e^{-{(z-x-\mu_Y)^2 \over 2\sigma_Y^2}} \frac{1}{\sqrt{2\pi}\sigma_X} e^{-{(x-\mu_X)^2 \over 2\sigma_X^2}} dx \\ &= \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sqrt{\sigma_X^2+\sigma_Y^2}} \exp \left[ - { (z-(\mu_X+\mu_Y))^2 \over 2(\sigma_X^2+\sigma_Y^2) } \right] \frac{1}{\sqrt{2\pi}\frac{\sigma_X\sigma_Y}{\sqrt{\sigma_X^2+\sigma_Y^2}}} \exp \left[ - \frac{\left(x-\frac{\sigma_X^2(z-\mu_Y)+\sigma_Y^2\mu_X}{\sigma_X^2+\sigma_Y^2}\right)^2}{2\left(\frac{\sigma_X\sigma_Y}{\sqrt{\sigma_X^2+\sigma_Y^2}}\right)^2} \right] dx \\ &= \frac{1}{\sqrt{2\pi(\sigma_X^2+\sigma_Y^2)}} \exp \left[ - { (z-(\mu_X+\mu_Y))^2 \over 2(\sigma_X^2+\sigma_Y^2) } \right] \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\frac{\sigma_X\sigma_Y}{\sqrt{\sigma_X^2+\sigma_Y^2}}} \exp \left[ - \frac{\left(x-\frac{\sigma_X^2(z-\mu_Y)+\sigma_Y^2\mu_X}{\sigma_X^2+\sigma_Y^2}\right)^2}{2\left(\frac{\sigma_X\sigma_Y}{\sqrt{\sigma_X^2+\sigma_Y^2}}\right)^2} \right] dx \end{align}$ The expression in the integral is a normal density distribution on x, and so the integral evaluates to 1. The desired result follows: $f_Z(z) = \frac{1}{\sqrt{2\pi(\sigma_X^2+\sigma_Y^2)}} \exp \left[ - { (z-(\mu_X+\mu_Y))^2 \over 2(\sigma_X^2+\sigma_Y^2) } \right]$ #### Geometric proof [citation needed] First consider the normalized case when X, Y ~ N(0, 1), so that their PDFs are $f(x) = \sqrt{1/2\pi \,} e^{-x^2/2}$ and $g(y) = \sqrt{1/2\pi\,} e^{-y^2/2}.$ Let Z = X+Y. Then the CDF for Z will be $z \mapsto \int_{x+y \leq z} f(x)g(y) \, dx \, dy.$ This integral is over the half-plane which lies under the line x+y = z. The key observation is that the function $f(x)g(y) = (1/2\pi)e^{-(x^2 + y^2)/2}\,$ is radially symmetric. So we rotate the coordinate plane about the origin, choosing new coordinates $x',y'$ such that the line x+y = z is described by the equation $x' = c$ where $c = c(z)$ is determined geometrically. Because of the radial symmetry, we have $f(x)g(y) = f(x')g(y')$, and the CDF for Z is $\int_{x'\leq c, y' \in \reals} f(x')g(y') \, dx' \, dy'.$ This is easy to integrate; we find that the CDF for Z is $\int_{-\infty}^{c(z)} f(x') \, dx' = \Phi(c(z)).$ To determine the value $c(z)$, note that we rotated the plane so that the line x+y = z now runs vertically with x-intercept equal to c. So c is just the distance from the origin to the line x+y = z along the perpendicular bisector, which meets the line at its nearest point to the origin, in this case $(z/2,z/2)\,$. So the distance is $c = \sqrt{ (z/2)^2 + (z/2)^2 } = z/\sqrt{2}\,$, and the CDF for Z is $\Phi(z/\sqrt{2})$, i.e., $Z = X+Y \sim N(0, 2).$ Now, if a, b are any real constants (not both zero!) then the probability that $aX+bY \leq z$ is found by the same integral as above, but with the bounding line $ax+by =z$. 
The same rotation method works, and in this more general case we find that the closest point on the line to the origin is located a (signed) distance $\frac{z}{\sqrt{a^2 + b^2}}$ away, so that $aX + bY \sim N(0, a^2 + b^2).$ The same argument in higher dimensions shows that if $X_i \sim N(0,\sigma_i^2), \qquad i=1, \dots, n,$ then $X_1+ \cdots + X_n \sim N(0, \sigma_1^2 + \cdots + \sigma_n^2).$ Now we are essentially done, because $X \sim N(\mu,\sigma^2) \Leftrightarrow \frac{1}{\sigma} (X - \mu) \sim N(0,1).$ So in general, if $X_i \sim N(\mu_i, \sigma_i^2), \qquad i=1, \dots, n,$ then $\sum_{i=1}^n a_i X_i \sim N\left(\sum_{i=1}^n a_i \mu_i, \sum_{i=1}^n (a_i \sigma_i)^2 \right).$ ## Correlated random variables In the event that the variables X and Y are jointly normally distributed random variables, then X + Y is still normally distributed (see Multivariate normal distribution) and the mean is the sum of the means. However, the variances are not additive due to the correlation. Indeed, $\sigma_{X+Y} = \sqrt{\sigma_X^2+\sigma_Y^2+2\rho\sigma_X \sigma_Y},$ where ρ is the correlation. In particular, whenever ρ < 0, then the variance is less than the sum of the variances of X and Y. This is perhaps the simplest demonstration of the principle of diversification. Extensions of this result can be made for more than two random variables, using the covariance matrix. ### Proof In this case, one needs to consider $\frac{1}{2 \pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \iint_{x\,y} \exp \left[ -\frac{1}{2(1-\rho^2)} \left(\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2} - \frac{2 \rho x y}{\sigma_x\sigma_y}\right)\right] \delta(z - (x+y))\, \operatorname{d}x\,\operatorname{d}y.$ As above, one makes the substitution $y\rightarrow z-x$. This integral is more complicated to simplify analytically, but can be done easily using a symbolic mathematics program. The probability distribution fZ(z) is given in this case by $f_Z(z)=\frac{1}{\sqrt{2 \pi}\sigma_+ }\exp\left(-\frac{z^2}{2\sigma_+^2}\right)$ where $\sigma_+ = \sqrt{\sigma_x^2+\sigma_y^2+2\rho\sigma_x \sigma_y}.$ If one considers instead Z = X − Y, then one obtains $f_Z(z)=\frac{1}{\sqrt{2\pi(\sigma_x^2+\sigma_y^2-2\rho\sigma_x \sigma_y)}}\exp\left(-\frac{z^2}{2(\sigma_x^2+\sigma_y^2-2\rho\sigma_x \sigma_y)}\right)$ which also can be rewritten with $\sigma_-=\sqrt{\sigma_x^2+\sigma_y^2-2\rho\sigma_x \sigma_y}.$ The standard deviations of each distribution are obvious by comparison with the standard normal distribution.
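As a sanity check on the correlated case (my own illustration, not part of the article), one can sample jointly normal X and Y with correlation ρ and compare the sample standard deviation of X + Y with $\sigma_+$; the parameter values below are arbitrary.

```python
# Numerical check that Var(X+Y) = sigma_x^2 + sigma_y^2 + 2 rho sigma_x sigma_y
# for jointly normal X, Y. The parameter values are arbitrary.
import numpy as np

sigma_x, sigma_y, rho = 1.5, 0.8, -0.4
cov = [[sigma_x**2, rho * sigma_x * sigma_y],
       [rho * sigma_x * sigma_y, sigma_y**2]]

rng = np.random.default_rng(0)
X, Y = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000).T

sigma_plus = np.sqrt(sigma_x**2 + sigma_y**2 + 2 * rho * sigma_x * sigma_y)
print("sample std of X+Y :", np.std(X + Y))
print("predicted sigma_+ :", sigma_plus)
```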
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 46, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9184503555297852, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/09/24/decomposability/?like=1&source=post_flair&_wpnonce=fe7f791e1e
# The Unapologetic Mathematician

## Decomposability

Today I'd like to cover a stronger condition than reducibility: decomposability. We say that a module $V$ is "decomposable" if we can write it as the direct sum of two nontrivial submodules $U$ and $W$. The direct sum gives us inclusion morphisms from $U$ and $W$ into $V$, and so any decomposable module is reducible. What does this look like in terms of matrices? Well, saying that $V=U\oplus W$ means that we can write any vector $v\in V$ uniquely as a sum $v=u+w$ with $u\in U$ and $w\in W$. Then if we have a basis $\{u_i\}_{i=1}^m$ of $U$ and a basis $\{w_j\}_{j=1}^n$ of $W$, then we can write $u$ and $w$ uniquely in terms of these basis vectors. Thus we can write any vector $v\in V$ uniquely in terms of the $\{u_1,\dots,u_m,w_1,\dots,w_n\}$, and so these constitute a basis of $V$. If we write the matrices $\rho(g)$ in terms of this basis, we find that the image of any $u_i$ can be written in terms of the $u_i$ alone, because $U$ is $G$-invariant. Similarly, the $G$-invariance of $W$ tells us that the image of each $w_j$ can be written in terms of the $w_j$ alone. The same reasoning as last time now allows us to conclude that the matrices of the $\rho(g)$ all have the form $\displaystyle\left(\begin{array}{c|c}\alpha(g)&0\\\hline0&\gamma(g)\end{array}\right)$ Conversely, if we can write each of the $\rho(g)$ in this form, then this gives us a decomposition of $V$ as the direct sum of two $G$-invariant subspaces, and the representation is decomposable. Now, I said above that decomposability is stronger than reducibility. Indeed, in general there do exist modules which are reducible, but not decomposable. Indeed, in categorical terms this is the statement that for some groups $G$ there are short exact sequences which do not split. To chase this down a little further, our work yesterday showed that even in the reducible case we have the equation $\gamma(g)\gamma(h)=\gamma(gh)$. This $\gamma$ is the representation of $G$ on the quotient space, which gives our short exact sequence $\displaystyle\mathbf{0}\to W\to V\to V/W\to\mathbf{0}$ But in general this sequence may not split; we may not be able to write $V\cong W\oplus V/W$ as $G$-modules. Indeed, we've seen that the representation of the group of integers $\displaystyle n\mapsto\begin{pmatrix}1&n\\{0}&1\end{pmatrix}$ is indecomposable.

## 5 Comments »

1. sorry, just a question: what software do you use to prepare mathematical formulas in your blog? Comment by Ng Foo Keong | September 26, 2010 | Reply • WordPress.com-hosted weblogs have native support for $\LaTeX$. Before the formula, you put `$latex`, and after it you close with another `$`. Comment by | September 26, 2010 | Reply 2. [...] form is this: if we have an invariant form on our space , then any reducible representation is decomposable. That is, if is a submodule, we can find another submodule so that as [...] Pingback by | September 27, 2010 | Reply 3. [...] saw last time that in the presence of an invariant form, any reducible representation is decomposable, and so any representation with an invariant form is completely reducible. Maschke's theorem [...] Pingback by | September 28, 2010 | Reply 4. [...] into a direct sum of irreducible submodules, we say that is "completely reducible", or "decomposable", as we did when we were working with groups. Any module where any nontrivial proper submodule has a [...]
Pingback by | September 16, 2012 | Reply
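To make the closing example concrete, here is a small numerical sketch (my own, not from the post) of why the representation n ↦ [[1, n], [0, 1]] of the integers is reducible but indecomposable: the span of e₁ is an invariant line, yet ρ(1) has only a one-dimensional eigenspace, so no complementary invariant line exists.

```python
# The representation n -> [[1, n], [0, 1]] of the group of integers:
# reducible (e1 spans an invariant line) but indecomposable (no invariant complement).
import numpy as np

def rho(n):
    return np.array([[1, n], [0, 1]], dtype=float)

# Homomorphism check: rho(n) @ rho(m) == rho(n + m) on a sample.
assert np.allclose(rho(3) @ rho(4), rho(7))

# e1 spans an invariant subspace: rho(n) e1 = e1 for all n.
e1 = np.array([1.0, 0.0])
assert np.allclose(rho(5) @ e1, e1)

# Any invariant complement would be an eigenline of rho(1); but rho(1) has the single
# eigenvalue 1 with a one-dimensional eigenspace, so no complementary invariant line exists.
vals, vecs = np.linalg.eig(rho(1))
print("eigenvalues of rho(1):", vals)
print("rank of (rho(1) - I):", np.linalg.matrix_rank(rho(1) - np.eye(2)))  # 1, so the eigenspace is 1-dimensional
```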
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 41, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9216116070747375, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/137333-find-basis-orthogonal-given-u.html
# Thread:

1. ## Find Basis orthogonal to a given u

Let u = (1, 0, 2, 1). Find a basis for the subspace of vectors orthogonal to u in R^4. Please help me get started. The ones I did before used the Gram–Schmidt process; however, I only have one vector, so I am unsure of how to begin. Thanks.

2. Those vectors $(x_1, x_2, x_3, x_4)$ which are orthogonal to $(1,0,2,1)$ satisfy the equation $x_1+2x_3+x_4=0$. Now you can let, say, $x_1=u$, $x_2=v$, $x_3=w$ and then we have $x_4=-u-2w$. Thus your subspace consists of all vectors of the form $(u, v, w, -u-2w)$. It's a 3-dimensional vector space (3 parameters) so you'll need 3 vectors in your basis; by picking values of $u,v,w$ it's easy to find 3 vectors that do the job. Take it from there!

3. Ok, I think I got it now. For example, if I were to let u = 1 and v = w = 0, then x4 would be -1 and that would be one of the vectors in the basis. So a solution would be (1 0 0 -1) (0 1 0 0) (0 0 1 -2), am I right? I am unsure if the second one is correct, but I think it makes sense to me.

4. That's good!

5. Thank you so much
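A quick verification sketch (my own) of the basis proposed in the thread: each vector is orthogonal to u and the three are linearly independent, so they span the 3-dimensional orthogonal complement.

```python
# Verify the basis found above for the subspace of R^4 orthogonal to u = (1, 0, 2, 1).
import numpy as np

u = np.array([1, 0, 2, 1])
B = np.array([[1, 0, 0, -1],
              [0, 1, 0,  0],
              [0, 0, 1, -2]])

print("dot products with u:", B @ u)                      # all zero => each vector is orthogonal to u
print("rank of B          :", np.linalg.matrix_rank(B))   # 3 => linearly independent
```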
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9585776329040527, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/85917/coin-toss-bet-and-gamblers-that-bet-everything/85929
# Coin toss bet and gamblers that bet everything

I have come across a problem that I don't know the solution to. Let's suppose there is a casino where you can bet on a coin toss, and you get double if you bet right. Let's also suppose that the casino always has money to pay everyone. It is a fair coin, so the gamblers have a 50% chance of being right. Now, imagine that all the gamblers always bet everything they have, including whatever they have already won, on every bet. New gamblers enter the casino from time to time. Gamblers have a finite amount of money. The gamblers have a high probability of losing everything very soon. So, most of the gamblers will come out empty-handed after a while. However, from the casino's point of view, they have zero expected income, because you have zero expected income at this type of game. So, the players lose everything, while the casino shouldn't have any income. How is that possible? edit: There was a question about whether the casino has an infinite amount of money. I am not sure about that, actually. Does it change the problem? If it does, how? - Can you please tell me the odds of being correct? Are we assuming the casino has an infinite amount of money? Are we assuming all the players have an infinite amount of money? Also, if you only had a 10% chance of being correct on each bet, the casino wouldn't have zero expected income. – simplicity Nov 27 '11 at 1:14 I clarified it. It is a fair coin and they have a 50% chance of being right. – Karel Bílek Nov 27 '11 at 1:16 I don't see the problem here. Also, are we assuming the casino has an infinite amount of money and there is an infinite number of people playing it? – simplicity Nov 27 '11 at 1:23 There is a finite number of people playing it at a given time. Casino ... I am not sure about that. OK, yes, suppose it has an infinite amount of money. – Karel Bílek Nov 27 '11 at 1:29 My "problem" is: the people end up at a loss, but the casino shouldn't end up with a gain. – Karel Bílek Nov 27 '11 at 1:35

## 2 Answers

The casino only has zero expected income at every finite time step, because at every stage there is a small chance that one of the gamblers has won an exponentially large amount of money. However, if there are a finite number of gamblers, then after some finite time they will all have run out of money, and the casino will be in profit. But a priori you don't know when this time is, so there is always a zero expectation for the casino. A nice way to see this is by looking at some random realizations. I wrote a quick Matlab script to simulate this situation. I took 1024 gamblers who each came into the casino with a dollar, and ran the simulation for 20 time steps (so that there's only a very small probability that any of the gamblers have any money remaining by the 20th step). I then repeated this 100 times, and plotted the casino's net gain/loss after each time step for each of the 100 runs. As you can see, although all the casinos end up in profit, some of them suffer extreme losses before they enter profitability. It is this possibility for extreme loss that means that they always have zero expectation of profit. Note that I had to truncate the bottom of the plot for this to be anything other than a mess of blue lines - the worst-performing casino in my sample incurred losses of \$65,000 before returning to profitability! Here's another way to look at it. After $t$ timesteps, the total player winnings are $$\sum_{i=1}^n X_i$$ where each $X_i$ is a random variable taking the values $2^t$ with probability $2^{-t}$ or $0$ with probability $1-2^{-t}$.
For each player, their expected winnings at time $t$ is 1, but the variance of their winnings is $2^t-1$. Therefore the variance of the total winnings for $n$ players is $n(2^t-1)$. And the variance of casino winnings is the same, by symmetry. This explains why the result is so unintuitive - for most problems, the variance stabilises as the number of timesteps increases. For this problem the variance keeps increasing, so talking about the limit of infinitely many timesteps doesn't make a lot of sense. - Extremely helpful answer, thanks! – Karel Bílek Nov 27 '11 at 2:13 There is a possibility that one of the gamblers will keep winning forever and become infinitely rich. The probability of this happening is $0$, but since the loss to the casino in this case is infinite, it is not clear that it is sound simply to disregard it. In particular, it is not clear that we can validly use expected values to analyze the casino's long-term position here -- the situation is somewhat reminiscent of the St. Petersburg paradox. See also Gambler's ruin. -
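A sketch along the lines of the Matlab experiment described in the first answer (my own reconstruction with the same parameters, not the answerer's script):

```python
# 1024 gamblers each arrive with $1 and bet their whole stake on a fair coin each round.
# Track the casino's cumulative net gain over 20 rounds, repeated for several runs.
import numpy as np

rng = np.random.default_rng(0)
n_gamblers, n_steps, n_runs = 1024, 20, 100

final_profits = []
for _ in range(n_runs):
    stakes = np.ones(n_gamblers)
    casino = 0.0
    for _ in range(n_steps):
        wins = rng.random(n_gamblers) < 0.5
        casino += stakes[~wins].sum() - stakes[wins].sum()  # casino pockets losses, pays out wins
        stakes = np.where(wins, 2 * stakes, 0.0)            # winners double, losers are wiped out
    final_profits.append(casino)

final_profits = np.array(final_profits)
print("fraction of runs ending in profit:", np.mean(final_profits > 0))
print("mean / min final casino gain     :", final_profits.mean(), final_profits.min())
```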
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Mathematical_formulation_of_quantum_mechanics
# Mathematical formulation of quantum mechanics

One of the remarkable characteristics of the mathematical formulation of quantum mechanics, which distinguishes it from mathematical formulations of theories developed prior to the early 1900s, is its use of abstract mathematical structures, such as Hilbert spaces and operators on these spaces. Many of these structures had not even been considered before the twentieth century. In a general sense they are drawn from functional analysis, a subject within pure mathematics that developed in parallel with, and was influenced by, the needs of quantum mechanics. In brief, physical quantities such as energy and momentum were no longer considered as functions on some phase space, but as operators on such functions. This formulation of quantum mechanics continues to be used today, and still forms the basis of ab-initio calculations in atomic, molecular and solid-state physics.

At the heart of the description is the idea of a quantum state which, for systems of atomic scale, is radically different from the previous models of physical reality. While the mathematics is a complete description and permits calculation of many quantities that can be measured experimentally, there is a definite limit to access for an observer with macroscopic instruments. This limitation was first elucidated by Heisenberg through a thought experiment, and is represented mathematically by the non-commutativity of quantum observables.

Prior to the emergence of quantum mechanics as a separate theory, the mathematics used in physics consisted mainly of differential geometry and partial differential equations; probability theory was used in statistical mechanics. Geometric intuition clearly played a strong role in the first two and, accordingly, theories of relativity were formulated entirely in terms of geometric concepts. The phenomenology of quantum physics arose roughly between 1895 and 1915, and for the 10 to 15 years before the emergence of quantum theory (around 1925) physicists continued to think of quantum theory within the confines of what is now called classical physics, and in particular within the same mathematical structures. The most sophisticated example of this is the Sommerfeld-Wilson-Ishiwara quantization rule, which was formulated entirely on the classical phase space.

## History of the formalism

### The "old quantum theory" and the need for new mathematics

Main article: Old quantum theory

In the 1890s, Planck was able to derive the blackbody spectrum and resolve the classical ultraviolet catastrophe by making the unorthodox assumption that, in the interaction of radiation with matter, energy could only be exchanged in discrete units which he called quanta. Planck postulated a direct proportionality between the frequency of radiation and the quantum of energy at that frequency. The proportionality constant, h, is now called Planck's constant in his honour. In 1905, Einstein explained certain features of the photoelectric effect by assuming that Planck's light quanta were actual particles, which he called photons.
In 1913, Bohr calculated the spectrum of the hydrogen atom with the help of a new model of the atom in which the electron could orbit the proton only on a discrete set of classical orbits, determined by the condition that angular momentum was an integer multiple of Planck's constant. Electrons could make quantum leaps from one orbit to another, emitting or absorbing single quanta of light at the right frequency. All of these developments were phenomenological and flew in the face of the theoretical physics of the time. Bohr and Sommerfeld went on to modify classical mechanics in an attempt to deduce the Bohr model from first principles. They proposed that, of all closed classical orbits traced by a mechanical system in its phase space, only the ones that enclosed an area which was a multiple of Planck's constant were actually allowed. The most sophisticated version of this formalism was the so-called Sommerfeld-Wilson-Ishiwara quantization . Although the Bohr model of the hydrogen atom could be explained in this way, the spectrum of the helium atom (classically an unsolvable 3-body problem) could not be predicted. The mathematical status of quantum theory remained uncertain for some time. In 1923 de Broglie proposed that wave-particle duality applied not only to photons but to electrons and every other physical system. The situation changed rapidly in the years 1925-1930, when working mathematical foundations were found through the groundbreaking work of Schrödinger and Heisenberg and the foundational work of von Neumann, Weyl and Paul Dirac, and it became possible to unify several different approaches in terms of a fresh set of ideas. ### The "new quantum theory" Schrödinger's wave mechanics originally was the first successful attempt at replicating the observed quantization of atomic spectra with the help of a precise mathematical realization of de Broglie's wave-particle duality. Schrödinger proposed an equation (now bearing his name) for the wave associated to an electron in an atom according to de Broglie, and explained energy quantization by the well-known fact that differential operators of the kind appearing in his equation had a discrete spectrum. However, Schrödinger himself initially did not understand the fundamental probabilistic nature of quantum mechanics, as he thought that the (amplitude squared) wavefunction of an electron must be interpreted as the charge density of an object smeared out over an extended, possibly infinite, volume of space. It was Max Born who introduced the probabilistic interpretation of the (amplitude squared) wave function as the probability distribution of the position of a pointlike object. With hindsight, Schrödinger's wave function can be seen to be closely related to the classical Hamilton-Jacobi equation. Heisenberg's matrix mechanics formulation, introduced contemporaneously to Schrödinger's wave mechanics and based on algebras of infinite matrices, was certainly very radical in light of the mathematics of classical physics. In fact, at the time linear algebra was not generally known to physicists in its present form. The reconciliation of the two approaches is generally associated to Paul Dirac, who wrote a lucid account in his 1930 classic Principles of Quantum mechanics. In it, he introduced the bra-ket notation, together with an abstract formulation in terms of the Hilbert space used in functional analysis, and showed that Schödinger's and Heisenberg's approaches were two different representations of the same theory. 
The first complete mathematical formulation of this approach is generally credited to von Neumann's 1932 book Mathematical Foundations of Quantum Mechanics, although Weyl had already referred to Hilbert spaces (which he called unitary spaces) in his 1927 classic book. It was developed in parallel with a new approach to the mathematical spectral theory based on linear operators rather than the quadratic forms that were David Hilbert's approach a generation earlier. Though theories of quantum mechanics continue to evolve to this day, there is a basic framework for the mathematical formulation of quantum mechanics which underlies most approaches and can be traced back to the mathematical work of von Neumann. In other words, discussions about interpretation of the theory, and extensions to it, are now mostly conducted on the basis of shared assumptions about the mathematical foundations. ### Later developments The application of the new quantum theory to electromagnetism resulted in quantum field theory, which was developed starting around 1930. Quantum field theory has driven the development of more sophisticated formulations of quantum mechanics, of which the one presented here is a simple special case. In fact, the difficulties involved in implementing any of the following formulations cannot be said yet to have been solved in a satisfactory fashion except for ordinary quantum mechanics. On a different front, von Neumann originally dispatched quantum measurement with his infamous postulate on the collapse of the wavefunction, raising a host of philosophical problems. Over the intervening 70 years, the problem of measurement became an active research area and itself spawned some new formulations of quantum mechanics. • Relative state/Many-worlds interpretation of quantum mechanics • Decoherence • Consistent histories formulation of quantum mechanics • Quantum logic formulation of quantum mechanics A related topic is the relationship to classical mechanics. Any new physical theory is supposed to reduce to successful old theories in some approximation. For quantum mechanics, this translates into the need to study the so-called classical limit of quantum mechanics. Also, as Bohr emphasized, human cognitive abilities and language are inextricably linked to the classical realm, and so classical descriptions are intuitively more accessible than quantum ones. In particular, quantization, namely the construction of a quantum theory whose classical limit is a given and known classical theory, becomes an important area of quantum physics in itself. Finally, some of the originators of quantum theory (notably Einstein and Schrödinger) were unhappy with what they thought were the philosophical implications of quantum mechanics. In particular, Einstein took the position that quantum mechanics must be incomplete, which motivated research into so-called hidden-variable theories. The issue of hidden variables has become in part an experimental issue with the help of quantum optics. • de Broglie-Bohm-Bell pilot wave formulation of quantum mechanics • Bell's inequalities • Kochen-Specker theorem ## Mathematical structure of quantum mechanics A physical system is generally described by three basic ingredients: states; observables; and dynamics (or law of time evolution) or, more generally, a group of physical symmetries . 
A classical description can be given in a fairly direct way by a phase space model of mechanics: states are points in a symplectic phase space, observables are real-valued functions on it, time evolution is given by a one-parameter group of symplectic transformations of the phase space, and physical symmetries are realized by symplectic transformations. A quantum description consists of a Hilbert space of states, observables are self-adjoint operators on the space of states, time evolution is given by a one-parameter group of unitary transformations on the Hilbert space of states, and physical symmetries are realized by unitary transformations. ### Postulates of quantum mechanics The following summary of the mathematical framework of quantum mechanics can be partly traced back to von Neumann's postulates. • Each physical system is associated with a separable complex Hilbert space H with inner product $\langle\phi\mid\psi\rangle$. Rays (one-dimensional subspaces) in H are associated with states of the system. In other words, physical states can be identified with equivalence classes of vectors of length 1 in H, where two vectors represent the same state if they differ only by a phase factor. Separability is a mathematically convenient hypothesis, with the physical interpretation that countably many observations are enough to uniquely determine the state. • The Hilbert space of a composite system is the Hilbert space tensor product of the state spaces associated with the component systems. For a non-relativistic system consisting of a finite number of distinguishable particles, the component systems are the individual particles. • Physical symmetries act on the Hilbert space of quantum states unitarily or antiunitarily (supersymmetry is another matter entirely). • Physical observables are represented by densely-defined self-adjoint operators on H. The expected value (in the sense of probability theory) of the observable A for the system in state represented by the unit vector $\left|\psi\right\rangle\in H$ is $\langle\psi\mid A\mid\psi\rangle$ By spectral theory, we can associate a probability distribution to the values of A in any state ψ. We can also show that the possible values of the observable A in any state must belong to the spectrum of A. In the special case A has discrete spectrum, the possible values of A in any state are its eigenvalues. More generally, a state can be represented by a so-called density operator, which is a trace class, nonnegative self-adjoint operator ρ normalized to be of trace 1. The expected value of A in the state ρ is $\operatorname{tr}(A\rho)$ If ρψ is the orthogonal projector onto the one-dimensional subspace of H spanned by $\left|\psi\right\rangle$, then $\operatorname{tr}(A\rho_\psi)=\left\langle\psi\mid A\mid\psi\right\rangle$ Density operators are those that are in the closure of the convex hull of the one-dimensional orthogonal projectors. Conversely, one-dimensional orthogonal projectors are extreme points of the set of density operators. Physicists also call one-dimensional orthogonal projectors pure states and density operators mixed states. One can in this formalism state Heisenberg's uncertainty principle and prove it as a theorem, although the exact historical sequence of events, concerning who derived what and under which framework, is the subject of historical investigations outside the scope of this article. Superselection sectors. 
The correspondence between states a rays needs to be refined somewhat to take into account so-called superselection sectors . States in different superselection sectors cannot influence each other, and the relative phases between them are unobservable. ### Pictures of dynamics In the so-called Schrödinger picture of quantum mechanics, the dynamics is given as follows: The state is given by a differentiable map (with respect to the Hilbert space norm topology) from time, which is an infinite one dimensional manifold parameterized by t, to the Hilbert space of states. If $\left|\psi\left(t\right)\right\rangle$ denotes the state of the system at any one time t, the following Schrödinger equation holds: $i\hbar\frac{d}{dt}\left|\psi(t)\right\rangle=H\left|\psi(t)\right\rangle$ where H is a densely-defined self-adjoint operator, called the system Hamiltonian, i is the imaginary unit and $\hbar$ is the reduced Planck constant. As an observable, H corresponds to the total energy of the system. Alternatively, one can state that there is a continuous one-parameter unitary group U(t): H → H such that $\left|\psi(t+s)\right\rangle=U(t)\left|\psi(s)\right\rangle$ for all times s, t. The existence of a self-adjoint Hamiltonian H such that $U(t)=e^{-(i/\hbar)t H}$ is a consequence of Stone's theorem on one-parameter unitary groups. The Heisenberg picture of quantum mechanics focuses on observables and instead of considering states as varying in time, it regards the states as fixed and the observables as changing. To go from the Schrödinger to the Heisenberg picture one needs to define time-independent states and time-dependent operators thus: $\left|\psi\right\rangle = \left|\psi(0)\right\rangle$ $A(t) = U(-t)AU(t) \quad$ It is then easily checked that the expected values of all observables are the same in both pictures $\langle\psi\mid A(t)\mid\psi\rangle=\langle\psi(t)\mid A\mid\psi(t)\rangle$ and that the time-dependent Heisenberg operators satisfy $i\hbar{d\over dt}A(t) = [A(t),H]$ This assumes A is not time dependent in the Schrödinger picture. The so-called Dirac picture or interaction picture has time-dependent states and observables, evolving with respect to different Hamiltonians. This picture is most useful when the evolution of the states can be solved exactly, confining any complications to the evolution of the operators. For this reason, the Hamiltonian for states is called "free Hamiltonian" and the Hamiltonian for observables is called "interaction Hamiltonian". In symbols: $i\hbar\frac{\partial}{\partial t}\left|\psi(t)\right\rangle=\operatorname{H_0}\left|\psi(t)\right\rangle$ $i\hbar{\partial\over\partial t}A(t) = [A(t),H_{\rm int}]$ The interaction picture does not always exist, though. In interacting quantum field theories, Haag's theorem states that the interaction picture does not exist. This is because the Hamiltonian cannot be split into a free and an interacting part within a superselection sector. The Heisenberg picture is the closest to classical mechanics, but the Schrödinger picture is considered easiest to understand by most people, to judge from pedagogical accounts of quantum mechanics. The Dirac picture is the one used in perturbation theory, and is specially associated to quantum field theory. Similar equations can be written for any one-parameter unitary group of symmetries of the physical system. 
Time would be replaced by a suitable coordinate parameterizing the unitary group (for instance, a rotation angle, or a translation distance) and the Hamiltonian would be replaced by the conserved quantity associated to the symmetry (for instance, angular or linear momentum). ### Representations The original form of the Schrödinger equation depends on choosing a particular representation of Heisenberg's canonical commutation relations. The Stone-von Neumann theorem states all irreducible representations of the finite-dimensional Heisenberg commutation relations are unitarily equivalent. This is related to quantization and the correspondence between classical and quantum mechanics, and is therefore not strictly part of the general mathematical framework. The quantum harmonic oscillator is an exactly-solvable system where the possibility of choosing among more than one representation can be seen in all its glory. There, apart from the Schrödinger (position or momentum) representation one encounters the Fock (number) representation and the Bargmann-Segal (phase space or coherent state) representation. All three are unitarily equivalent. ### Time as an operator The framework presented so far singles out time as the parameter that everything depends on. It is possible to formulate mechanics in such a way that time becomes itself an observable associated to a self-adjoint operator. At the classical level, it is possible to arbitrarily parameterize the trajectories of particles in terms of an unphysical parameter s, and in that case the time t becomes an additional generalized coordinate of the physical system. At the quantum level, translations in s would be generated by a "Hamiltonian" H-E, where E is the energy operator and H is the "ordinary" Hamiltonian. However, since s is an unphysical parameter, physical states must be left invariant by "s-evolution", and so the physical state space is the kernel of H-E (this requires the use of a rigged Hilbert space and a renormalization of the norm). This is related to quantization of constrained systems and quantization of gauge theories. ## The problem of measurement The picture given in the preceding paragraphs is sufficient for description of a completely isolated system. However, it fails to account for one of the main differences between quantum mechanics and classical mechanics, that is the effects of measurement. ### Wavefunction collapse When von Neumann proposes his mathematical postulate for quantum mechanics he included the following "measurement postulate". • Carrying out a measurement of an observable A with discrete spectrum on a system in the state represented by $\left|\psi\right\rangle$ will cause the system state to collapse into an eigenstate (i.e. eigenvector), $\left|\psi_a\right\rangle$ of the operator; the observed value corresponds to the eigenvalue a of the eigenstate: $A \left|\psi_a\right\rangle=a\left|\psi_a\right\rangle$ Mathematically, the collapse corresponds to orthogonal projection of $\left|\psi\right\rangle$ onto the eigenspace of A corresponding to the observed value a Von Neumann's postulate regarding the effects of measurement has always been a source of confusion and speculation. Fortunately, there is a general mathematical theory of such irreversible operations (see quantum operation) and various physical interpretations of the mathematics. 
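For a finite-dimensional toy system, the postulates and the two pictures above can be checked directly. The following Python/NumPy sketch is purely illustrative (the 2×2 Hamiltonian, the observable and the state are arbitrary choices, and $\hbar$ is set to 1): it verifies that $\langle\psi\mid A\mid\psi\rangle = \operatorname{tr}(A\rho_\psi)$, that the Schrödinger and Heisenberg pictures give the same expectation values, and it computes the Born probabilities that enter the measurement postulate.

```python
# Toy 2-level system, hbar = 1. Everything below is an illustrative assumption,
# not part of the article: H and A are arbitrary Hermitian 2x2 matrices.
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.3], [0.3, -1.0]])      # "Hamiltonian" (self-adjoint)
A = np.array([[0.0, 1.0], [1.0, 0.0]])       # an observable
psi0 = np.array([1.0, 0.0], dtype=complex)   # a unit vector (pure state)

# Expectation value <psi|A|psi> versus tr(A rho), rho the projector onto psi
rho = np.outer(psi0, psi0.conj())
assert np.isclose(psi0.conj() @ A @ psi0, np.trace(A @ rho))

t = 0.7
U = expm(-1j * H * t)                        # U(t) = exp(-i H t / hbar)

# Schroedinger picture: the state evolves, the observable is fixed
psi_t = U @ psi0
schroedinger = psi_t.conj() @ A @ psi_t

# Heisenberg picture: the state is fixed, the observable evolves
A_t = U.conj().T @ A @ U                     # A(t) = U(t)^dagger A U(t)
heisenberg = psi0.conj() @ A_t @ psi0
assert np.isclose(schroedinger, heisenberg)

# Measurement of A in state psi_t: possible outcomes are eigenvalues of A,
# with Born probabilities |<a|psi_t>|^2; collapse projects onto the eigenspace.
eigvals, eigvecs = np.linalg.eigh(A)
probs = np.abs(eigvecs.conj().T @ psi_t) ** 2
print("outcomes", eigvals, "with probabilities", probs.round(3))
```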
### The relative state interpretation

An alternative interpretation of measurement is Everett's relative state interpretation, which was later dubbed the "many-worlds interpretation" of quantum mechanics.

## List of mathematical tools

Part of the folklore of the subject concerns the mathematical physics textbook Courant-Hilbert, put together by Richard Courant from David Hilbert's Göttingen University courses. The story is told (by mathematicians) that physicists had dismissed the material as not interesting in the current research areas, until the advent of Schrödinger's equation. At that point it was realised that the mathematics of the new quantum mechanics was already laid out in it. It is also said that Heisenberg had consulted Hilbert about his matrix mechanics, and Hilbert observed that his own experience with infinite-dimensional matrices had derived from differential equations, advice which Heisenberg ignored, missing the opportunity to unify the theory as Weyl and Dirac did a few years later. Whatever the basis of the anecdotes, the mathematics of the theory was conventional at the time, whereas the physics was radically new.

The main tools include:

See also: list of mathematical topics in quantum theory.
http://mathoverflow.net/questions/8178?sort=newest
## Examples of divisors on an analytical manifold ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I am trying to understand divisors reading through Griffith and Harris but it is difficult to come up with any particular interesting example. I have browsed through Hartshone's book but everything is expressed in terms of schemes, and I believe it is still possible to find some toy example to carry with me without having to learn what a scheme is first. Does anyone know any reference or book which has some exercise or example on this? In particular I would like to see examples of linear systems of divisors and how given a linear system of dimension $n$ I can choose a pencil inside it. Apologies if this is not the place to write this, it is my first post. I have searched through the questions and have not find anything similar. - 1 Thanks to all of those who replied. Your answers have been very useful – Jesus Martinez Garcia Dec 14 2009 at 19:26 ## 4 Answers Hartshorne is the reference where you can find the following example which might be useful. I what follow everything is with multiplicity. Now Alberto pointed out above the case of the divisor over $\mathbb{P}^1$ associated to its "tangent bundle": Two points over the sphere counted with multiplicity (from here though, it is not hard to believe that the Chern class of such a bundle is going to be 2). Notice that these two points are given by zeros of polynomials of degree two defined over the sphere. I think nothing stops you taking now polynomial of degree 3, 4 and so on. Then what we get are nothing but 3, 4 points over the sphere: Divisors of degree 3, 4 and so on. We can do something similar over all the curves (Riemann Surfaces) and what we get are divisors: points with labels. Such labels are the multiplicities. Chapter IV Hartshorne. or Klaus-Hulek: Elementary Algebraic Geometry. Now, let's take a look at divisors over the surface $\mathbb{P}^2$: they are algebraic curves (Riemann Surfaces). Do not get confused please by the name Surface here. Applying the same argument as before, a divisor of degree two is going to be the zero locus of polynomials of degree 2: conics. Same for degree three (cubics), four (quartics), and so on and so forth. For instance, in degree two we might have the divisor $C=([x:y:z]\in \mathbb{P}^2|\ \ x^2+y^2=z^2)$. Deshomogenizing with $H=[z=1]$ you get a perfect polynomial $x^2+y^2=1$ which defines the intersection $H\cap C$. This is how your global divisor $C$ looks like locally. Now taking a family of divisors of degree two, the conics, it is well known that the space of embeddings of conics in $\mathbb{P}^2$ is (the linear system) $\mathbb{P}^5$. We get this by considering the coefficients in the equation $ax^2+by^2+cz^2+dxy+exz+fyz=0$ as coordinates in $\mathbb{P}^5$. Notice that we get the following map out of the previous considerations, $$\phi:\mathbb{P}^2\rightarrow \mathbb{P}^5$$ given by $[x:y:z]\mapsto [x^2:y^2:z^2:xy:xz:yz]$. Here pencils are a subfamily of conics in the complete linear system given above with a certain property (find out which one). However, we can consider the following subfamily of conics: all those conics passing through a fixed point in $\mathbb{P}^2$. This is nothing but a hyperplane $H$ in $\mathbb{P}^5$. We can even consider $\phi(\mathbb{P}^2)\cap H$. This is going to be a divisor on $\mathbb{P}^2\cong \phi(\mathbb{P}^2)$. Guess which one?. Hartshorne II section 7. 
One can apply the the ideas with zero locus of polynomials of degree three: Divisors of degree 3 in $\mathbb{P}^2$. These were given the name of elliptic curves. (did someone say that in considering such curves, we find the divisor associated to the canonical bundle of $\mathbb{P}^2$?). We can go on with the degree and getting divisors on the projective plane of higher degree. These were only examples of divisors on $\mathbb{P}^2$. Notice that all of them have a nontrivial topology and geometry. This fact is not a coincidence and the book of HG argues in this direction in Chapter zero. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. For divisors on curves, as recommended by Alberto above, I enjoyed Otto Forster's book Lectures on Riemann surfaces. Section 16 of the book has a nice treatment on the way to the proof of Riemann-Roch. - Here's a basic (and often used!) example: the zero locus of a homogeneous polynomial of degree $d$ in $\mathbb{P}^n$. For concreteness, let me spell out the case $n = 1$, $d = 2$. Cover $\mathbb{P}^1$ with the two open subsets `$$\mathcal{U}_0 = \lbrace [x_0 : x_1] \; | x_0 \neq 0 \rbrace, \quad\text{and}\quad \mathcal{U}_1 = \lbrace [x_0 : x_1] \; | x_1 \neq 0 \rbrace$$` Local coordinates on these patches are $z:= x_1 / x_0$ and $w := x_0 / x_1$, respectively; on `$\mathcal{U}_0 \cap \mathcal{U}_1$`, they are related by $w = z^{-1}$. If your quadratic polynomial is given by $F = a_0 x_0^2 + a_1 x_0 x_1 + a_2 x_1^2$, you have the local expressions $$f_0 = a_0 z^2 + a_1 z + a_2, \quad\text{and}\quad f_1 = a_0 + a_1 w + a_2 w^2$$ respectively. You can check that they are related over `$\mathcal{U}_0 \cap \mathcal{U}_1$` by multiplication by a unit, and hence define a divisor as a global section of the sheaf $\mathcal{M}^\ast / \mathcal{O}^\ast$. Notice that this divisor just consists of two points (counting multiplicities, of course). In these terms, a pencil is given by the vanishing loci of linear combinations of the form $\lambda F + \mu G$, where $\lambda, \mu \in \mathbb{C}$ not both zero and $G$ another homogeneous polynomial of degree 2. As Charlie pointed out, you will want to learn more about curves, where divisors are formal integral linear combinations of points. Other cases with beautiful geometry: • Divisors on $\mathbb{P}^2$ give you plane curves, where you can see Bézout's theorem at work. • Pencils of homogeneous polynomials of degree 2 in $\mathbb{P}^n$ (i.e., pencils of quadric hypersurfaces) also constitute a very rich case: Griffiths and Harris' last chapter talks about it (although it is a nice case to play with by yourself). - You want concrete? Then you want curves and surfaces! Check out Chapter V of Miranda's "Algebraic Curves and Riemann Surfaces" to see lots of stuff about divisors, how they're made up of functions, how they can give maps to projective space, etc. As for how to choose a pencil, that's just choosing two divisors that are linearly equivalent, and taking linear combinations. For surfaces, you'll want Beauville's book "Complex Algebraic Surfaces" which does surface theory without schemes, over $\mathbb{C}$. -
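The conic example above can be spelled out symbolically. The short SymPy sketch below is only an illustration (the fixed point $[1:2:1]$ is an arbitrary choice, not taken from any of the references): it dehomogenizes the divisor $C=\{x^2+y^2=z^2\}$ on the chart $z=1$ and checks that asking a general conic to pass through a fixed point imposes one linear condition on its six coefficients — a hyperplane in the $\mathbb{P}^5$ of conics.

```python
# Illustrative SymPy sketch for the discussion above; nothing here is taken
# from Hartshorne or Griffiths-Harris, it just spells the example out.
import sympy as sp

x, y, z = sp.symbols('x y z')
a, b, c, d, e, f = sp.symbols('a b c d e f')

# The divisor C = {x^2 + y^2 = z^2} in P^2, restricted to the chart z = 1
C = x**2 + y**2 - z**2
print(C.subs(z, 1))                 # -> x**2 + y**2 - 1, i.e. x^2 + y^2 = 1

# A general conic a x^2 + b y^2 + c z^2 + d xy + e xz + f yz = 0;
# its coefficients (a : b : c : d : e : f) are coordinates on P^5.
conic = a*x**2 + b*y**2 + c*z**2 + d*x*y + e*x*z + f*y*z

# Conics through the fixed point [1:2:1]: substituting the point gives a
# single linear equation in (a,...,f), i.e. a hyperplane H in P^5.
condition = sp.expand(conic.subs({x: 1, y: 2, z: 1}))
print(condition)                    # -> a + 4*b + c + 2*d + e + 2*f
print(sp.Poly(condition, a, b, c, d, e, f).total_degree())   # -> 1 (linear)
```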
http://math.stackexchange.com/questions/136518/a-question-about-the-proof-of-seifert-van-kampen/136893
# A question about the proof of Seifert - van Kampen I'm studying Algebraic Topology using Hatcher's textbook as my main reference, and there is a detail in the proof of the Seifert - van Kampen Theorem (page 43) which is still unclear to me. The heart of the proof of the second part of the statement (the one regarding relations) is where it says (page 45): If we can show that any two factorizations of $[f]$ are equivalent, this will say that the map $Q \rightarrow \pi_1(X)$ induced by $\phi$ is injective, hence the kernel of $\phi$ is exactly $N$, and the proof will be complete. I'm not exactly sure which result Hatcher is referring to here, but the way I see it, this statement is a sort of a converse of the classical homomorphism theorem (if $\phi: G \rightarrow G'$ is a group homomorphism then $\phi$ canonically defines an injective map ${G}/{\ker \phi} \rightarrow G'$). To make things rigorous I stated the following Lemma Let $\phi: G \rightarrow G'$ be a group homomorphism and $N \leq \ker \phi$ ($N$ not necessarily normal in $G$). Let $G/N^r$ be the set of right cosets of $N$. Then we can define \begin{align} \tilde{\phi}: G/N^r &\rightarrow G' \\ Nx &\mapsto \phi(x) \end{align} and if $\tilde{\phi}$ is injective then $\ker \phi = N$. I managed to prove this lemma (if necessary I will edit the question to include such proof), but the problem now is that this seems to be too strong a result! What the Theorem says is that the kernel of $\Phi$ is the normal subgroup generated by all the $i_{\alpha \beta}(\omega) i_{\beta \alpha}(\omega)^{-1}$'s; it doesn't say that the kernel is the subgroup generated by all such elements, which also happens to be normal. By applying the above lemma to Hatcher's proof, with $G = \ast_{\alpha} \pi_1({A_\alpha)}, N = \langle \{ i_{\alpha \beta}(\omega) i_{\beta \alpha }(\omega)^{-1}: \omega \in \pi_1 (A_{\alpha} \cap A_{\beta}) \}\rangle, \phi = \Phi$ (note that $N \leq \ker \phi$ is remarked at the beginning of page 43), and by subsituting every instance of $Q = \ast_{\alpha} \pi_1(A_{\alpha})$ with the set of right cosets, one would thus be able to prove a stronger version of the theorem (taking $\ker \Phi$ to be the subgroup generated by the $i_{\alpha \beta}(\omega) i_{\beta \alpha }(\omega)^{-1}$'s , and not its normal closure; the normality of such a subgroup would then be a consequence of the fact that it is the kernel of a group homomorphism). This makes me suspicious that I might be getting something wrong... So, to summarize, here are my questions: • What am I getting wrong? Have I misunderstood the statement of the theorem, is the lemma wrong or is the normality of the subgroup used in some other part of the proof? Or is everything right? • Can someone provide an explicit example in which the "stronger version" of the theorem doesn't hold: i.e. in which $\langle \{ i_{\alpha \beta}(\omega) i_{\beta \alpha }(\omega)^{-1}: \omega \in \pi_1 (A_{\alpha} \cap A_{\beta}) \}\rangle$ is different from $\langle \{ i_{\alpha \beta}(\omega) i_{\beta \alpha }(\omega)^{-1}: \omega \in \pi_1 (A_{\alpha} \cap A_{\beta}) \}^{\ast_{\alpha} \pi_1(A_{\alpha})} \rangle$. Thank you very much for any help. - The right cosets are often denoted by $N\backslash G$, by the way. – Dylan Moreland Apr 25 '12 at 13:46 Thanks, I looked into the standard notations for the set of right/left cosets, and changed it to $G/N^r$. – Emilio Ferrucci Apr 25 '12 at 14:11 Oddly enough, I've never seen that notation. 
– Dylan Moreland Apr 25 '12 at 14:14 – Emilio Ferrucci Apr 25 '12 at 14:21 ## 3 Answers I think the flaw in your reasoning comes earlier in the proof. In the previous paragraph, Hatcher defines two moves that can be performed on a factorization of $[f]$. The second move is Regard the term $[f_i]\in\pi_1(A_\alpha)$ as lying in the group $\pi_1(A_\beta)$ rather than $\pi_1(A_\alpha)$ if $f_i$ is a loop in $A_\alpha\cap A_\beta$. Regarding this move, Hatcher asserts that [This move] does not change the image of this element in the quotient group $Q=\ast_\alpha\, \pi_1(A_\alpha)/N$, by the definition of $N$ This is the step at which Hatcher is using the hypothesis that $N$ is normal. In particular, if $N$ were simply the subgroup generated by the elements $i_{\alpha\beta}(\omega)i_{\beta\alpha}(\omega)^{-1}$ (instead of the normal subgroup generated by these elements), this move would not necessarily preserve the image of this element in $G/N$. - Thank you for your answer. I still don't understand one thing though: why does simply substituting $Q$ with the set of left cosets not work? No group structure on $Q$ is used to complete the proof, and a group can still be partitioned in cosets modulo a nononrmal subgroup. Doesn't the second remark you quoted just use the fact that two such factorizations represent the same coset? – Emilio Ferrucci Apr 25 '12 at 10:00 2 It is not true in general that two such factorizations represent the same coset. If $N\leq G$ is not normal and $aN=bN$ and $cN=dN$, it does not follow that $acN = bdN$ – Jim Belk Apr 25 '12 at 14:25 Ah, I see.. the well-definedness of the coset multiplication is used after all. My reasoning only held for words of length $1$. Thanks a lot for clearing this up for me. – Emilio Ferrucci Apr 25 '12 at 14:47 I think it should be pointed out that another style of proof is available, and is used to give a proof of the version using many base points, and so groupoids, at http://www.bangor.ac.uk/r.brown/pdffiles/vKT-proof.pdf With the result for many base points, you can compute the fundamental group of the circle, which is, after all, THE basic example in algebraic topology. The proof given there does only the union of 2 open sets, but it gives the proof by verification of the universal property, which is a general procedure of great use in mathematics. For example this method is used to prove higher dimensional versions of the van Kampen Theorem. This method also avoids description of the result by generators and relations. - Thank you for your interest. I can't open the website you linked though.. – Emilio Ferrucci Apr 25 '12 at 18:20 1 Thanks for pointing this out - my mistake, now corrected! – Ronnie Brown Apr 29 '12 at 20:11 Ok, I can at least show why if $N$ is a normal subgroup of $*_{\alpha}\pi_{1}(A_{\alpha})$, $N\subseteq \ker\Phi \subseteq *_{\alpha}(A_{\alpha})$, and $\Phi:*_{\alpha}\pi_{1}(A_{\alpha})\rightarrow\pi_{1}(X)$ is injective, then $\ker\Phi=N$. 
The reason is as follows: we have the following diagram $$\newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{\ \ #1\ \ }\phantom{}\kern-1.5ex} \newcommand{\ras}[1]{\kern-1.5ex\xrightarrow{\ \ \smash{#1}\ \ }\phantom{}\kern-1.5ex} \newcommand{\da}[1]{\bigg\downarrow\raise.5ex\rlap{\scriptstyle#1}} \begin{array}{c} *_{\alpha}\pi_{1}(A_{\alpha})/N & \ra{\Phi} & \pi_{1}(X) \\ \da{f} & & \da{id} \\ *_{\alpha}\pi_{1}(A_{\alpha})/\ker\Phi & \ras{\Phi} & \pi_{1}(X) & \\ \end{array}$$ and the mapping $f$ that makes the diagram commutes is the following: $$\begin{eqnarray} f:&*_{\alpha}\pi_{1}(A_{\alpha})/N\rightarrow *_{\alpha}\pi_{1}(A_{\alpha})/\ker\Phi\\ & gN\rightarrow g\ker\Phi \end{eqnarray}$$ By the fact that $\Phi:*_{\alpha}\pi_{1}(A_{\alpha})/N\rightarrow \pi_{1}(X)$ is injective, this makes $f$ an injective homomorphism. On the other hand, $f$ is also surjective. By a theorem in group theory (3rd isomorphism theorem), $\ker f=(\ker\Phi)/N$. But $f$ is also injective, so $(\ker\Phi)/N=\{N\}$. Hence if $x\in\ker\Phi$, then $xN=N$ means that $x\in N$. So $\ker\Phi=N$. I don't understand is why Regard the term $[f_{i}]\in \pi_{1}(A_{\alpha})$ as lying in the group $\pi_{1}(A_{\beta})$ rather than $\pi_{1}(A_{\alpha})$ if $f_{i}$ is a loop in $A_{\alpha}\cap A_{\beta}$. would imply that [This move] does not change the image of the element in $Q$... -
http://math.stackexchange.com/questions/259321/solve-lim-n-to-infty-left-frac32n-sqrt3-sqrt2n-i-n-right/259326
# solve $\lim_{n \to \infty}\left( \frac{3+2n}{\sqrt{3}+\sqrt{2}n} + i\,n\right)$ I am checking this sequence for convergence but i am not sure whether i am on the right path in calculations, these steps are what i am doing now. $\infty + n$ will go to infinity, right? $$\lim_{n \to \infty} \frac{3+2n}{\sqrt{3}+\sqrt{2}n} + i\,n = \lim_{n \to \infty} \frac{3+2n}{\sqrt{3}+\sqrt{2}n} + \lim_{n \to \infty}i\,n = \sqrt{2} + \infty = \infty$$ am i okay? can someone please correct me if i am wrong. many thanks for any guidance - ## 1 Answer If $i$ the imaginary unit then the sequece diverges in modulus. That sequence will diverge in any case if $i\not = -\sqrt{2}$ is a constant and does not depend on $n$ - why it doesnot depend on $n$, i dont get the point – doniyor Dec 15 '12 at 15:26 you're right, I should start avoiding this kind of words, they do not carry any positive contribution and can lead to awkward moments! – Moritzplatz Dec 15 '12 at 15:27 @doniyor You did not give any information about what $i$ is. In principle it could be something like $-n$ or $-e^n$! – Moritzplatz Dec 15 '12 at 15:29 ok then if $i$ is the imaginary unit the sequence diverges in modulus as you said! – Moritzplatz Dec 15 '12 at 15:33 it means that the sequence of the absolute values of the terms of your sequence goes to +infinity! – Moritzplatz Dec 15 '12 at 15:40 show 1 more comment
http://physics.stackexchange.com/questions/tagged/energy
# Tagged Questions Energy is a quantity which gives an overview of the amount of work doable by the system. 4answers 2k views ### Why doesn't light kill me? I was attending my philosophy class and in the middle of student presentations, I found myself mentally wondering off and thinking about light. After a few minutes of trying to piece together how the ... 2answers 56 views ### Energy dependent Maxwell-Boltzmann distribution I'm having a bit of a problem figuring out the energy dependent Maxwell-Boltzmann distribution. According to my book (Ashcroft & Mermin) they write the velocity dependent distribution as: ... 2answers 56 views ### What is the derivation for the exponential energy relation and where does it apply? Very often when people state a relaxation time $\tau_\text{kin-kin}, \tau_\text{rot-kin}$,, etc. they think of a context where the energy relaxation goes as $\propto\text e^{-t/\tau}$. Related is an ... 2answers 157 views ### First law of thermodynamics? The first law says that the change in internal energy is equal to the work done on the system (W) minus the work done by the system (Q). However, can $Q$ be any kind of work, such as mechanical work? ... 1answer 110 views ### Why do wind power plants have just 3 blades? [duplicate] Why do wind power plants have just 3 blades? It seems that adding more blades would increase the area that interacts with the wind and gather more energy. (Image from Wikipedia.) 1answer 60 views ### Do electrons need specific energies to excite electrons Photons need specific energy levels, equal to the difference between two energy levels to excite an electron in an atom. Is this the same case with electrons that collide with atoms? 2answers 58 views ### Why is potential energy negative when orbiting in a gravitational field? I had to do a problem, and part of it was to find the mechanical energy of satellite orbiting around mars, and I had all of the information I needed. I thought the total mechanical energy would be the ... 0answers 45 views ### Work And Energy Question [closed] $H = 3\text{ m}$,$m=2\text{ kg}$ The right side is rough. I want to figure: what is the coefficient of friction $\mu$? How high and exceed the maximum return on the plane right body? I know ... 4answers 83 views ### The Preference for Low Energy States The idea that systems will achieve the lowest energy state they can because they are more "stable" is clear enough. My question is, what causes this tendency? I've researched the question and been ... 2answers 100 views ### Negative potential energy of gravity Does the negative potential energy in the gravitational field have to be considered in calculating the total mass of the system in question (because of $E=mc^2$)? If so it seems to me that the ... 3answers 69 views ### How do you determine the heat transfer from a P-V diagram? I doubt this question has been addressed properly before, but if there are similar answers, do direct them to me. I am currently studying the First Law of Thermodynamics, which includes the p-V ... 2answers 78 views ### What happens to the energy not absorbed by a radio? If a radio tunes to a specific frequency, where does the excess energy go? If one continues to hit the resonant frequency, shouldn't the wire begin to melt at some point from too much energy? 
0answers 36 views ### A space vehicle travelling at a velocity of 1296 (km/h) separates by a controlled explosion into two sections of mass 7276 (kg) and 236 (kg) [closed] A space vehicle traveling at a velocity of 1296 (km/h) separated by a controlled explosion into two sections of mass 7276 (kg) and 236 (kg). the two parts continue in the same direction with the ... 0answers 27 views ### Energy or Work done to pull an Iron cyclinder into a Solenoid RadiI have been following the calcuations from these lecture slides here (slide 11). Where the slides attempt to approximate how much a solenoid pulls, by working out the energy required to pull an ... 1answer 58 views ### Why does compressing a piston increase the internal energy? When we compress a piston, its total internal energy increases, however I don't understand why. As the piston compresses, the temperature should change, as the total energy density increases. As a ... 2answers 129 views ### What does it mean if a body has kinetic energy? What does it mean if a body has kinetic energy? Does it mean that the momentum vectors of each particle of that body has the same direction? What about angular momentum? 1answer 34 views ### What lifting mechanism is likely to have the best energy recovery ratio? Suppose I was designing an apparatus which needed to lift 250kg 5cm high, hold it there for a few seconds, and then lower the object back to the original height. Such a process would need to be ... 1answer 60 views ### Why are electrons in one atom attracted to negative electrons in another atom? [closed] Why or for what purpose is this attraction purposeful or what is the reasoning behind why these energy particles are drawn to each other? Any proven results, or proposed theories of your own, or ... 0answers 35 views ### What is the total work done in this problem? [closed] A 1800 kg trick airplane is 450 m in the air. At this point the plane takes a dive with an initial speed of 42 m/s and accelerates to 64 m/s, dropping a total distance of 120 m. (a) Using the ground ... 2answers 141 views ### Perpendicular Elastic Collision (different masses, different velocities) I'm stuck on a mechanics problem and I can't make any headway past momentum and kinetic energy being conserved. Here is the problem: Two hover cars are approaching an intersection from ... 1answer 36 views ### Pendulum system: how is derived the output as Energy? Good day to everyone, I want to understand in which way the "Energy equation" is been implemented to this pendulum system. $x_1(t)$: The angular position of the mass $x_2(t)$: The angular velocity ... 1answer 51 views ### Maximum separation when 2 unlike charges shot apart I want to compute the maximum separation when 2 unlike charges are shot apart from each other ... 3answers 130 views ### Integrating factor $1/T$ in 2nd Law of Thermodynamics How would you prove that $1/T$ is the most suitable integrating factor to transform $\delta Q$ to an exact differential in the second law of thermodynamics: $$dS = \frac{\delta Q}{T}$$ Where $dS$ is ... 2answers 648 views ### Why can't Humans run any faster? If you wanted to at least semi-realistically model the key components of Human running, what are the factors that determine the top running speed of an individual? The primary things to consider would ... 3answers 83 views ### Why doesn't a stationary electron lose energy by radiating electric field (as per coulomb's law)? If an electron in a universe constantly generates an electric field why does it not get annihilated ? 
I am confused because I read that an accelerating charge radiates and loses energy. So, why won't ... 1answer 37 views ### Composition of solar spectrum I read some where that there are three types of UV and infrared rays namely UV-A, UV-B, UV-C and near infrared, mid infrared and far infrared. Which is the most abundant among the the three in ... 1answer 45 views ### Total energy is extremal for the static solutions of equation of motions In physics total energy is extremal for the static solutions of equation of motions. Can anyone explain this sentence to me? 1answer 80 views ### Energy stored in fields How to intuitively think of energy stored in a magnetic/electric field? Kindly answer in a bit simple terms without referring to mathematics. 1answer 88 views ### Energy Functional I am a graduate student in pure mathematics, during my study on Ricci Flow I faced some functional known as energy functional. For example Einstein-Hilbert functional is called an energy functional, ... 1answer 79 views ### Energy needed to lift and bring down an object A mass of 0.5 Kg needs to be moved from point A to another point (B) which is 1 meters above point A. The time for this movement should be 0.2 seconds, then the mass is kept at position B for another ... 1answer 62 views ### Basic energy calculation for N identical spin system We have a system that has N identical spins $n_i$, and each spin can be in state 1 or 0. The overall energy for the system is $\epsilon\sum_{i=1}^{N}n_i$. My understanding: There is only one ... 1answer 52 views ### Fundamental properties of motion The first paragraph of the Wikipedia article on the angular momentum operator states that In both classical and quantum mechanical systems, angular momentum (together with linear momentum and ... 0answers 79 views ### General physics question involving Heisenberg Uncertainty Principle Question: An unstable particle produced in a high-energy collision is measured to have an energy of $483\ \mathrm{MeV}$ and an uncertainty in energy of $84\ \mathrm{keV}$. Use the Heisenberg ... 0answers 47 views ### How to convert Richter magnitude scale to approximate TNT? I know the Richter magnitude scale is often used for measuring the strength of earthquakes. At the same time, explosive/destructive releases of energy are often quoted in equivalent amounts of TNT. Is ... 1answer 46 views ### What energy changes take place when you operate a jet-ski? What energy changes take place when you operate a jet-ski? 9answers 3k views ### What makes running so much less energy-efficient than bicycling? Most people can ride 10 km on their bike. However, running 10 km is a lot harder to do. Why? According to the law of conservation of energy, bicycling should be more intensive because you have to ... 1answer 55 views ### Determine KE of electron given momentum & mass [closed] Some info: wavelength of electron: $2.78 \times 10^{-10}$ momentum of electron: $2.38 \times 10^{-24}$ Determine KE of electron. In a provided hint: $KE = \frac{p^2}{2m}$. So I have: KE = ... 1answer 74 views ### Simple harmonic oscillator system and changes in its total energy Suppose I have a body of mass $M$ connected to a spring (which is connected to a vertical wall) with a stiffness coefficient of $k$ on some frictionless surface. The body oscillates from point $C$ to ... 2answers 66 views ### Is it possible to “add cold” or to “add heat” to systems? Amanda just poured herself a cup of hot coffee to get her day started. She took her first sip and nearly burned her tongue. 
Since she didn't have much time to sit and wait for it to cool down, ... 1answer 47 views ### Liquid oxygen how do they use it as fuel? Rockets are said to be using liquid oxygen as fuel. How do they use liquid oxygen since it's just oxygen, it only helps in the combustion process. How can it be a fuel on its own? 2answers 132 views ### Energy of particle in electric field I'm taking a physics class and the professor teaches us really basic things in lecture and then gives homework way beyond what he taught in lecture. Obviously I need to find some resource other than ... 1answer 75 views ### Mass-energy equivalence and Newton's Second Law of motion According to Einstein's Mass-energy equivalence, $E = mc^2$ OR $m = \frac E{c^2}$..... (1) and According to Newton's Second Law of motion, $F = ma$ OR $m = \frac Fa$ ..... (2) If we compare eq. ... 0answers 36 views ### Time reversed laser Recently, I read an article on time reversed laser. I don't know why they call it a time reversed. I have a doubt that why they use two laser in the device. And what is an anti-laser? The device ... 3answers 124 views ### Universe to energy Would it violate any known laws of physics to construct a universe containing no mass, only energy? 1answer 76 views ### Calculate/estimate power of a fission bomb I have some questions about the released energy and power of a nuclear fission bomb. What are the key dependencies of the power of a fission bomb? Is it true that the power of a fission bomb depends ... 1answer 53 views ### How to find work done due to friction [closed] The force F=40N is applied on a 10kg block at an angle of 36 with the horizontal. The block moves a distance of 15m. If the surface is frictionless. If the coefficient of kinetic friction is 0.25, ... 2answers 106 views ### What particles carry various forms of energy? If I didn't get this wrong, light or heat energy consists of photons and they in turn effect electrons' behavior and thus responsible for chemical and electrical energy. What kind of similar particle ... 1answer 71 views ### calculating work done by friction I want to calculate the work done by friction if the length $L$ of uniform rope on the table slides off. There is friction between the cord and the table with coefficient of kinetic friction $\mu_k$. ... 1answer 27 views ### Is lattice enthalpy positive or negative? I've learnt that the lattice enthalpy (defined as the energy change from a solid ionic lattice to separate gaseous ions) is always positive, obviously. However, I've seen it explained as the opposite ... 5answers 172 views ### Why is momentum conserved (or rather what makes an object carry on moving infinitely)? I know this is an incredibly simple question, but I am trying to find a very simple explanation to this other than the simple logic that energy is conserved when two items impact and bounce off each ...
http://mathhelpforum.com/advanced-statistics/196164-conditional-probability-independence.html
# Thread:

1. ## Conditional probability and independence

I hope someone can help me with this little exercise (Exercise 4.1, Probability Theory, Jaynes):

Suppose that we have vectors of events $\{H_1,...,H_n\}$ and $\{D_1,...,D_m\}$ which satisfy:

(1) $P(H_i H_j)=0$ for any $i\neq j$ and $\sum_iP(H_i)=1$

(2) $P(D_rD_s|H_i)=P(D_r|H_i)P(D_s|H_i)$, for any $r\neq s$, $1\leq i\leq n$

(3) $P(D_rD_s|\overline{H_i})=P(D_r|\overline{H_i})P(D_s|\overline{H_i})$, for any $r\neq s$, $1\leq i\leq n$

where $\overline{H_i}$ means the negation of $H_i$.

Prove: If $n>2$, then at most one of the following fractions $\frac{P(D_1|H_i)}{P(D_1|\overline{H_i})},\frac{P(D_2|H_i)}{P(D_2|\overline{H_i})},...,\frac{P(D_m|H_i)}{P(D_m|\overline{H_i})}$ can differ from unity, $1\leq i\leq n$.
http://mathoverflow.net/questions/75831/a-classic-problem-on-the-limit-of-continuous-function-at-infinity
## A classic problem on the limit of a continuous function at infinity [closed]

I don't know whether it's suitable to post this problem here, but I really need help. $f$ is a continuous function on $\mathbb{R}^+$; if the limit $\lim_{n\to\infty}f(nx)$ exists for all points $x$ of a nonempty closed subset of $\mathbb{R}^+$ with no isolated points, prove that the limit $\lim_{x\to\infty}f(x)$ exists. Please see more details of my question here.

- This isn't the site for such questions. You'd be better off responding to the comments you have on Maths-SX and trying some of the suggestions. – Andrew Stacey Sep 19 2011 at 10:15
- Sorry, I have responded to all comments for days, but still no one gives an answer. – gylns Sep 19 2011 at 10:30
http://mathhelpforum.com/advanced-statistics/179147-odds-having-flop-poker-least-10-higher.html
# Thread:

1. ## Odds of having a flop in poker with at least a 10 or higher

I was wondering how often low flops (flops with all cards lower than a 10) happen. Or we could do the opposite: the odds of having a flop in poker with at least an A, K, Q, J, or 10. I was wondering how to do this properly. Let's say the hand I'm holding is random (negligible), so the number of cards in the deck is taken to be 52. So 3 flop cards out of 52. It would help me enormously because if I knew how to do this, I could expand further and figure out other problems.

2. Do you know binomial coefficients? The number C(n,k) gives the number of ways you can choose k elements among n elements, where order doesn't matter. It is usually written $\binom{n}{k}$. There are 8*4 = 32 cards less than (or equal to) 9. You can choose 3 such cards in C(32,3) ways. Among all 52 cards, you can choose 3 cards in C(52,3) ways, hence the probability of flopping cards all less than or equal to 9 is C(32,3)/C(52,3), which is approximately 22.44%.
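To make the counting argument in the previous post easy to replicate (and to extend to other flop questions), here is a short Python sketch; it is only an illustration of that argument, using the standard library's `math.comb`:

```python
from math import comb

# Cards ranked 9 or lower: ranks 2..9, four suits each = 32 cards.
low_cards = 8 * 4
deck = 52

# Probability that all 3 flop cards are 9 or lower.
p_low_flop = comb(low_cards, 3) / comb(deck, 3)

# Complement: at least one flop card is a 10, J, Q, K or A.
p_high_card_flop = 1 - p_low_flop

print(f"P(all flop cards 9 or lower)   = {p_low_flop:.4f}")       # ~0.2244
print(f"P(at least one 10-or-higher)   = {p_high_card_flop:.4f}")  # ~0.7756
```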
http://physics.stackexchange.com/questions/1858/where-is-the-atiyah-singer-index-theorem-used-in-physics/1923
# Where is the Atiyah-Singer index theorem used in physics? I'm trying to get motivated in learning the Atiyah-Singer index theorem. In most places I read about it, e.g. wikipedia, it is mentioned that the theorem is important in theoretical physics. So my question is, what are some examples of these applications? - ## 4 Answers Eric and others have given good answers as to why one expects the index theorem to arise in various physical systems. One of the earliest and most important applications is 't Hooft's resolution of the $U(1)$ problem. This refers to the lack of a ninth pseudo-Goldstone boson (like the pions and Kaons) in QCD that one would naively expect from chiral symmetry breaking. There are two parts to the resolution. The first is the fact that the chiral $U(1)$ is anomalous. The second is the realization that there are configurations of finite action (instantons) which contribute to correlation functions involving the divergence of the $U(1)$ axial current. The analysis relies heavily on the index theorem for the Dirac operator coupled to the $SU(3)$ gauge field of QCD. For a more complete explanation see S. Coleman's Erice lectures "The uses of instantons." There are also important applications to S-duality of $N=4$ SYM which involve the index theorem for the Dirac operator on monopole moduli spaces. - +1 you know your stuff ! – user346 Dec 14 '10 at 23:22 Jeff, stay on the line! I think Physics Stack Exchange could be helpful to the physics community if it is used as widely and as wisely as Math Overflow -- e.g., from people like you! – Eric Zaslow Dec 15 '10 at 1:27 Thanks Eric. I gather this just got restarted. I hope it works. It has some ways to go before it is MO quality. – pho Dec 15 '10 at 2:50 Indeed. I think there's now a site in development (Theoretical Physics Stack Exchange) which will aim to be more like Math Overflow, but this one has the advantage of being extant. – Eric Zaslow Dec 15 '10 at 16:23 The equations of motion, or the equations of instantons, or solitons, or Einstein's equations, or just about any equations in physics, are differential equations. In many cases, we are interested in the space of solutions of a differential equation. If we write the total (possibly nonlinear) differential equation of interest as $L(u) = 0,$ we can linearize near a solution $u_0,$ i.e. write $u = u_0 + v$ and expand $L(u_0 + v) = 0 + L'|_{u_0}(v) + ... =: D(v)$ to construct a linear equation $D(v)=0$ in the displacement $v.$ A linear differential equation is like a matrix equation. Recall that an $n\times m$ matrix $M$ is a map from $R^n$ to $R^m$, and $dim(ker(M)) - dim(ker(M^*)) = n-m,$ independent of the particular matrix (or linear transformation, more generally). This number is called the "index." In infinite dimensions, these numbers are not generally finite, but often (especially for elliptic differential equations) they are, and depend only on certain "global" information about the spaces on which they act. The index theorem tells you what the index of a linear differential operator ($D,$ above) is. You can use it to calculate the dimension of the space of solutions to the equation $L(u)=0.$ (When the solution space is a manifold [another story], the dimension is the dimension of the tangent space, which the equation $D(v)=0$ describes.) It does not tell you what the actual space of solutions is. That's a hard, nonlinear question. 
- +1 great answer – user346 Dec 13 '10 at 1:40

I guess it's a nice mathematical answer for physicists who don't already know the statement of the index theorem. But I fail to see any actual physical example. Which is a pity, I am certain Eric must know lots of them. I know people use it in string theory all the time. But I don't know enough to provide an answer of my own. – Marek Dec 14 '10 at 18:16

The index theorem is very general and applies to all of the examples I cited (instantons, solitons, Einstein's equations). For example, the dimension of the moduli space of $SU(2)$ instantons on the four-sphere $S^4$ ($R^4$ with constant behavior at infinity) with instanton number $k$ is equal to $8k - 3$ by the index theorem. – Eric Zaslow Dec 14 '10 at 21:21

Well, you said "just about any equations in physics" which is in direct contradiction with my everyday observation :-) What I was hoping for were some concrete examples like the ones Steve gave. Or something like your instanton example (I think you meant $S^3$ though?). I would love to see more of these, especially connected to some physical interpretation. Thanks in advance :-) – Marek Dec 14 '10 at 22:32

It is true that just about any equation in physics is a differential equation! Not all lead to index problems, though. (I did mean S^4. Instantons are time-dependent field configurations.) An example from string theory, whose Feynman diagrams are two-dimensional QFT amplitudes: that 2d field theory describes maps from a surface to a spacetime, and the instantons of that theory are holomorphic maps. The dimension of the space of such maps is found by an index formula. For a CY, this dimension is zero, which means you can count solutions (this is related to topological string theory). – Eric Zaslow Dec 15 '10 at 1:23

First let me explain what the index in question refers to. If the math gets too full of jargon let me know in the comments. In physics we are often interested in the spectrum of various operators on some manifolds we care about, e.g. the Dirac operator in 3+1 spacetime. In particular, the low-energy, long-distance physics is contained in the zero modes (ground states). Now what the "index" measures, for the Dirac operator $D$ and a given manifold $M$, is the difference between the number of left-handed zero modes and the number of right-handed zero modes. More technically: $$ind\,D = dim\,ker\,D - dim\,ker\,D^{+}$$ where $D$ is the operator in question; $ker\,D$ is the kernel of $D$ - the set of states which are annihilated by $D$; and $ker\,D^{+}$ is the kernel of its adjoint. Then, as you can see, $ind\,D$ counts the difference between the dimensionalities of these two spaces. This number depends only on the topology of $M$. In short, the ASI theorem relates the topology of a manifold $M$ to the zero modes or ground states of a differential operator $D$ acting on $M$. This is obviously information of relevance to physicists. Perhaps someone else can elaborate more on the physical aspects. The best reference for this and other mathematical physics topics, in my opinion, is Nakahara.

- In the case of a Dirac operator, the index is the (signed) excess dimension of the space of vacuum modes of one chirality w/r/t the other: i.e., the number of anomalous “ghost” states in a chiral field theory.
Anomalies arise when the classical/quantum symmetry correspondence breaks down under renormalization (a global anomaly could be responsible for quark mass in QCD; resolving the local chiral anomaly in the SM accounts for quarks and leptons; resolving it in superstring theory fixes the gauge group [to either SO(32) or E8 x E8], and the resolution of a conformal anomaly fixes the dimension of spacetime and the fermion content). When trying to turn string theory into actual physics, one asks

• Can it explain three generations of chiral fermions?
• Can it explain the experimental results on proton decay?
• Can it explain the smallness of the electron mass?
• Can it explain [things about the cosmological constant]?

and AST helps to answer these questions.
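To see the finite-dimensional version of the statement in one of the answers above in action, namely that $\dim\ker M - \dim\ker M^*$ depends only on the dimensions $n$ and $m$ and not on the particular linear map, here is a short numerical check. It assumes NumPy is available and uses random matrices purely as examples:

```python
import numpy as np

def index_of(A):
    """dim ker A - dim ker A^T for a real matrix A of shape (m, n), viewed as a map R^n -> R^m."""
    m, n = A.shape
    r = np.linalg.matrix_rank(A)
    dim_ker = n - r     # nullity of A
    dim_coker = m - r   # nullity of A^T (the adjoint in the real case)
    return dim_ker - dim_coker

rng = np.random.default_rng(0)
for m, n in [(3, 5), (5, 3), (4, 4)]:
    A = rng.standard_normal((m, n))
    # The individual kernel dimensions depend on A, but their difference is always n - m.
    print(m, n, index_of(A), n - m)
```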
http://crypto.stackexchange.com/questions/5376/word-based-stream-ciphers-vs-regular-stream-ciphers?answertab=votes
# Word-based stream ciphers vs “regular” stream ciphers? Could somebody explain what is the difference between "word-based" stream ciphers and the regular ones? Those last ones use pseudo-random sequences XOR'd bit by bit with the message, as far as I know. How does that change when it comes to "word-based" ciphers? Examples of word based keystream ciphers include SNOW, PANAMA, SOBER, ORYX. I also read that word based keystream ciphers "allow greater efficiency in software than most bit based stream-ciphers". - 1 Would you care to provide a citation for your quotes? I don't think it is accurate to say that "less is known about the security of word-based stream ciphers than bit-based stream ciphers". – D.W. Nov 15 '12 at 2:35 Yes, they are from Analysis and Design Issues for Synchronous Stream Ciphers by Dawson and Simpson in Lecture Notes Series, Insititute for Mathematical Sciences, National University of Singapore written in 2002. – geo909 Nov 15 '12 at 15:15 1 @D.W. You are right! I just realized I misread! There was a small part saying that "ORYX was found to be a seriously flawed design" but much less is known about the security of the rest of the above mentioned word based stream ciphers (i.e. SNOW,PANAMA,SOBER,ORYX), not all stream ciphers in general. Thanks for pointing that out! I removed that part of the question. – geo909 Nov 15 '12 at 15:18 ## 3 Answers A word-based stream-cipher produces the pseudo-random sequence one word (of several bits) at a time. For example RC4 is a word-based stream cipher, with the word size a byte. By contrast, a typical bit-based stream cipher such as one using the ASG produces the pseudo-random sequence one bit at a time. Word-based stream ciphers may be better suited to fast software implementations than are bit-based stream ciphers, since CPUs typically process one word no slower than one bit. - 1 I don't have enough reputation to vote this up, but it answers my question, many thanks.. I was also wondering how one could compare word-based stream ciphers and block ciphers. I do not know much about block ciphers so there may be a very basic and fundamental difference that I do not see, however I can tell that they process blocks of bits at a time, so it seems that they are similar to woed-based stream ciphers in that sense? What is the difference? – geo909 Nov 14 '12 at 20:11 @geo909: A stream cipher XOR the plaintext and keystream, and thus has the property $\mathtt{Enc}(P')=\mathtt{Enc}(P)\oplus P'\oplus P$. A block cipher does an arbitrary-looking permutation of plaintext block to ciphertext block, not bound by this property. That makes block ciphers versatile building blocks for all kinds of constructs: stream ciphers (using the CTR or OFB mode), MAC, hashes. Ah, also: even without reputation, you should still be able to accept an answer. – fgrieu Nov 15 '12 at 6:52 I'm not sure what $P'$ is but I do understand the difference now, thanks. I guess regular stream ciphers are more efficient in hardware implementation that block ciphers in CTR or OFB mode, right? – geo909 Nov 15 '12 at 15:41 @geo909: $P'$ would be an alternate plaintext. The stated property allows computing the ciphertext for $P'$ from $P$, $P'$, and the ciphertext for $P$. This is a case of malleability. Efficiency of implementation depends on many factors, e.g. resistance to side-channel attacks, thus comparisons are hard; but at least one argument of the proponents of bit-based stream ciphers is that they are more efficient in hardware than block ciphers in CTR or OFB mode. 
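To make the word-at-a-time picture in the first answer concrete, here is a deliberately insecure toy sketch in Python of a byte-oriented ("word-based", with an 8-bit word) keystream generator XORed into the message. The generator is an arbitrary linear congruential recurrence chosen only for brevity; it is not RC4 or any real cipher and must not be used for actual encryption:

```python
def toy_keystream(seed: int):
    """Toy byte-oriented keystream: one 8-bit word per step (NOT cryptographically secure)."""
    state = seed & 0xFFFFFFFF
    while True:
        # A simple 32-bit linear congruential step; only the top byte is emitted.
        state = (1103515245 * state + 12345) & 0xFFFFFFFF
        yield (state >> 24) & 0xFF

def xor_stream(data: bytes, seed: int) -> bytes:
    ks = toy_keystream(seed)
    return bytes(b ^ next(ks) for b in data)

msg = b"attack at dawn"
ct = xor_stream(msg, seed=0xDEADBEEF)
# XOR with the same keystream decrypts: the operation is its own inverse.
assert xor_stream(ct, seed=0xDEADBEEF) == msg
print(ct.hex())
```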
– fgrieu Nov 15 '12 at 16:23 As D.W. notes, the distinction made between "bit-based" and "word-based" stream ciphers in the source you cite is irrelevant to the end-user. For both kinds of stream ciphers (as well as for block ciphers in streaming modes like CTR or OFB), the manner in which the keystream is combined with the plaintext is always the same: bitwise XOR, which operates at the bit level but is easily parallelized both in hardware and in software. In any case, the actual distinction they seem to be making is between LFSR-based stream ciphers, which sequentially output one bit per iteration, and ciphers such as RC4 (and, presumably, block ciphers in streaming modes) which generate their output bitstream in larger chunks. LFSR-based stream ciphers have generally been designed for direct hardware implementation, where they have the advantage of simplicity, but they've traditionally suffered from poor performance in software; whereas in hardware it's easy to shuffle single bits around and combine them using simple logic gates, typical CPUs are designed for carrying out higher-level operations on chunks of 8, 16, 32, 64 or 128 bits in parallel. Running a cipher that operates on single bits in software thus wastes most of the CPU's power, unless the algorithm can be reformulated to operate on many bits in parallel, something that many older LFSR-based cipher designs haven't been very well suited for. On the other hand, ciphers like RC4 were designed for software implementation from the beginning, and do very well there. (Although RC4 itself operates on 8-bit bytes, which is arguably suboptimal for modern high-end CPUs, its simplicity still keeps is competitive there. Besides, there are still plenty of 8-bit processors around in embedded devices and such.) However, they often make use of features that are difficult and costly to implement in hardware, such as access to relatively large amounts of RAM (e.g. 258 bytes for RC4, accessed in an essentially random order). In recent years, though, the trend in stream cipher design has been towards ciphers that blur these lines, being efficient to implement in both hardware and software. Good examples can be found e.g. in the eSTREAM portfolio: the Trivium cipher, for example, is fundamentally a "bit-based" shift register design, and can indeed be implemented as such if desired; however, it is designed such that up to 64 bits of the output can be computed in parallel, which allows very efficient software implementations using any word length of up to 64 bits. (Conveniently, the same parallelizability also allows Trivium to be made very fast in hardware, by essentially duplicating the circuitry up to 64 times, as long as available space permits this.) - From your perspective as a user of stream ciphers, the distinction is irrelevant. As a driver of a car, you don't need to care the details of how the car's engine was built. What you care about is whether the car gets you there safely and rapidly. Similarly, from your perspective as a user of a stream cipher, what you care about is whether the stream cipher is secure and how fast it is. If you measure those two things, you don't need to care about which type the stream cipher is. Generally speaking, word-based stream ciphers are often built that way because operating on a word at a time can potentially be faster in software (on the other hand, bit-based stream ciphers can potentially be better in hardware, e.g., use less power and less silicon). 
However, rather than worry about this detail, you should just measure how fast the stream cipher actually is, and make your decision based upon that -- as well as whether the stream cipher offers sufficient security. - You are right about the car user analogy but my perspective is not exactly the driver's. I'm studying finite fields focusing on the non-applied side, however I wanted to have a small look on the applications and see how exactly finite fields are used and why we prefer this or that in cryptography. That is, I am mostly interested to have a sneak peak on the engine rather than going for a ride :) – geo909 Nov 15 '12 at 15:29
http://en.wikipedia.org/wiki/Support_Vector_Machine
# Support vector machine

Not to be confused with Secure Virtual Machine.

In machine learning, support vector machines (SVMs, also support vector networks[1]) are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. The basic SVM takes a set of input data and predicts, for each given input, which of two possible classes forms the output, making it a non-probabilistic binary linear classifier. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples into one category or the other. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on. In addition to performing linear classification, SVMs can efficiently perform non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.

## Formal definition

More formally, a support vector machine constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data point of any class (the so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier.

Whereas the original problem may be stated in a finite-dimensional space, it often happens that the sets to discriminate are not linearly separable in that space. For this reason, it was proposed that the original finite-dimensional space be mapped into a much higher-dimensional space, presumably making the separation easier in that space. To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure that dot products may be computed easily in terms of the variables in the original space, by defining them in terms of a kernel function $K(x,y)$ selected to suit the problem.[2] The hyperplanes in the higher-dimensional space are defined as the set of points whose dot product with a vector in that space is constant. The vectors defining the hyperplanes can be chosen to be linear combinations with parameters $\alpha_i$ of images of feature vectors that occur in the data base. With this choice of a hyperplane, the points $x$ in the feature space that are mapped into the hyperplane are defined by the relation: $\textstyle\sum_i \alpha_i K(x_i,x) = \mathrm{constant}.$ Note that if $K(x,y)$ becomes small as $y$ grows further away from $x$, each element in the sum measures the degree of closeness of the test point $x$ to the corresponding data base point $x_i$. In this way, the sum of kernels above can be used to measure the relative nearness of each test point to the data points originating in one or the other of the sets to be discriminated. Note that the set of points $x$ mapped into any hyperplane can be quite convoluted as a result, allowing much more complex discrimination between sets which are not convex at all in the original space.

## History

The original SVM algorithm was invented by Vladimir N.
Vapnik, and the current standard incarnation (soft margin) was proposed by Vapnik and Corinna Cortes in 1995.[1]

## Motivation

[Figure: H1 does not separate the classes. H2 does, but only with a small margin. H3 separates them with the maximum margin.]

Classifying data is a common task in machine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class a new data point will be in. In the case of support vector machines, a data point is viewed as a p-dimensional vector (a list of p numbers), and we want to know whether we can separate such points with a (p − 1)-dimensional hyperplane. This is called a linear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, or margin, between the two classes. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane and the linear classifier it defines is known as a maximum margin classifier; or equivalently, the perceptron of optimal stability.

## Linear SVM

Given some training data $\mathcal{D}$, a set of n points of the form

$\mathcal{D} = \left\{ (\mathbf{x}_i, y_i)\mid\mathbf{x}_i \in \mathbb{R}^p,\, y_i \in \{-1,1\}\right\}_{i=1}^n$

where the $y_i$ is either 1 or −1, indicating the class to which the point $\mathbf{x}_i$ belongs. Each $\mathbf{x}_i$ is a p-dimensional real vector. We want to find the maximum-margin hyperplane that divides the points having $y_i=1$ from those having $y_i=-1$.

[Figure: Maximum-margin hyperplane and margins for an SVM trained with samples from two classes. Samples on the margin are called the support vectors.]

Any hyperplane can be written as the set of points $\mathbf{x}$ satisfying

$\mathbf{w}\cdot\mathbf{x} - b=0,\,$

where $\cdot$ denotes the dot product and ${\mathbf{w}}$ the normal vector to the hyperplane. The parameter $\tfrac{b}{\|\mathbf{w}\|}$ determines the offset of the hyperplane from the origin along the normal vector ${\mathbf{w}}$.

If the training data are linearly separable, we can select two hyperplanes in such a way that they separate the data and there are no points between them, and then try to maximize their distance. The region bounded by them is called "the margin". These hyperplanes can be described by the equations

$\mathbf{w}\cdot\mathbf{x} - b=1\,$

and

$\mathbf{w}\cdot\mathbf{x} - b=-1.\,$

By using geometry, we find the distance between these two hyperplanes is $\tfrac{2}{\|\mathbf{w}\|}$, so we want to minimize $\|\mathbf{w}\|$. As we also have to prevent data points from falling into the margin, we add the following constraint: for each $i$ either $\mathbf{w}\cdot\mathbf{x}_i - b \ge 1\qquad\text{ for }\mathbf{x}_i$ of the first class or $\mathbf{w}\cdot\mathbf{x}_i - b \le -1\qquad\text{ for }\mathbf{x}_i$ of the second. This can be rewritten as:

$y_i(\mathbf{w}\cdot\mathbf{x}_i - b) \ge 1, \quad \text{ for all } 1 \le i \le n.\qquad\qquad(1)$

We can put this together to get the optimization problem:

Minimize (in ${\mathbf{w},b}$) $\|\mathbf{w}\|$ subject to (for any $i = 1, \dots, n$) $y_i(\mathbf{w}\cdot\mathbf{x_i} - b) \ge 1. \,$

### Primal form

The optimization problem presented in the preceding section is difficult to solve because it depends on $\|\mathbf{w}\|$, the norm of $\mathbf{w}$, which involves a square root.
Fortunately it is possible to alter the equation by substituting ||w|| with $\tfrac{1}{2}\|\mathbf{w}\|^2$ (the factor of 1/2 being used for mathematical convenience) without changing the solution (the minimum of the original and the modified equation have the same w and b). This is a quadratic programming optimization problem. More clearly: Minimize (in ${\mathbf{w},b}$) $\frac{1}{2}\|\mathbf{w}\|^2$ subject to (for any $i = 1, \dots, n$) $y_i(\mathbf{w}\cdot\mathbf{x_i} - b) \ge 1.$ By introducing Lagrange multipliers $\boldsymbol{\alpha}$, the previous constrained problem can be expressed as $\min_{\mathbf{w},b } \max_{\boldsymbol{\alpha}\geq 0 } \left\{ \frac{1}{2}\|\mathbf{w}\|^2 - \sum_{i=1}^{n}{\alpha_i[y_i(\mathbf{w}\cdot \mathbf{x_i} - b)-1]} \right\}$ that is we look for a saddle point. In doing so all the points which can be separated as $y_i(\mathbf{w}\cdot\mathbf{x_i} - b) - 1 > 0$ do not matter since we must set the corresponding $\alpha_i$ to zero. This problem can now be solved by standard quadratic programming techniques and programs. The "stationary" Karush–Kuhn–Tucker condition implies that the solution can be expressed as a linear combination of the training vectors $\mathbf{w} = \sum_{i=1}^n{\alpha_i y_i\mathbf{x_i}}.$ Only a few $\alpha_i$ will be greater than zero. The corresponding $\mathbf{x_i}$ are exactly the support vectors, which lie on the margin and satisfy $y_i(\mathbf{w}\cdot\mathbf{x_i} - b) = 1$. From this one can derive that the support vectors also satisfy $\mathbf{w}\cdot\mathbf{x_i} - b = 1 / y_i = y_i \iff b = \mathbf{w}\cdot\mathbf{x_i} - y_i$ which allows one to define the offset $b$. In practice, it is more robust to average over all $N_{SV}$ support vectors: $b = \frac{1}{N_{SV}} \sum_{i=1}^{N_{SV}}{(\mathbf{w}\cdot\mathbf{x_i} - y_i)}$ ### Dual form Writing the classification rule in its unconstrained dual form reveals that the maximum margin hyperplane and therefore the classification task is only a function of the support vectors, the training data that lie on the margin. Using the fact that $\|\mathbf{w}\|^2 = w\cdot w$ and substituting $\mathbf{w} = \sum_{i=1}^n{\alpha_i y_i\mathbf{x_i}}$, one can show that the dual of the SVM reduces to the following optimization problem: Maximize (in $\alpha_i$ ) $\tilde{L}(\mathbf{\alpha})=\sum_{i=1}^n \alpha_i - \frac{1}{2}\sum_{i, j} \alpha_i \alpha_j y_i y_j \mathbf{x}_i^T \mathbf{x}_j=\sum_{i=1}^n \alpha_i - \frac{1}{2}\sum_{i, j} \alpha_i \alpha_j y_i y_j k(\mathbf{x}_i, \mathbf{x}_j)$ subject to (for any $i = 1, \dots, n$) $\alpha_i \geq 0,\,$ and to the constraint from the minimization in $b$ $\sum_{i=1}^n \alpha_i y_i = 0.$ Here the kernel is defined by $k(\mathbf{x}_i,\mathbf{x}_j)=\mathbf{x}_i\cdot\mathbf{x}_j$. $W$ can be computed thanks to the $\alpha$ terms: $\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i.$ ### Biased and unbiased hyperplanes For simplicity reasons, sometimes it is required that the hyperplane pass through the origin of the coordinate system. Such hyperplanes are called unbiased, whereas general hyperplanes not necessarily passing through the origin are called biased. An unbiased hyperplane can be enforced by setting $b = 0$ in the primal optimization problem. The corresponding dual is identical to the dual given above without the equality constraint $\sum_{i=1}^n \alpha_i y_i = 0$ ## Soft margin In 1995, Corinna Cortes and Vladimir N. 
Vapnik suggested a modified maximum margin idea that allows for mislabeled examples.[1] If there exists no hyperplane that can split the "yes" and "no" examples, the Soft Margin method will choose a hyperplane that splits the examples as cleanly as possible, while still maximizing the distance to the nearest cleanly split examples. The method introduces slack variables, $\xi_i$, which measure the degree of misclassification of the data $x_i$ $y_i(\mathbf{w}\cdot\mathbf{x_i} - b) \ge 1 - \xi_i \quad 1 \le i \le n. \quad\quad(2)$ The objective function is then increased by a function which penalizes non-zero $\xi_i$, and the optimization becomes a trade off between a large margin and a small error penalty. If the penalty function is linear, the optimization problem becomes: $\min_{\mathbf{w},\mathbf{\xi}, b } \left\{\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{i=1}^n \xi_i \right\}$ subject to (for any $i=1,\dots n$) $y_i(\mathbf{w}\cdot\mathbf{x_i} - b) \ge 1 - \xi_i, ~~~~\xi_i \ge 0$ This constraint in (2) along with the objective of minimizing $\|\mathbf{w}\|$ can be solved using Lagrange multipliers as done above. One has then to solve the following problem: $\min_{\mathbf{w},\mathbf{\xi}, b } \max_{\boldsymbol{\alpha},\boldsymbol{\beta} } \left \{ \frac{1}{2}\|\mathbf{w}\|^2 +C \sum_{i=1}^n \xi_i - \sum_{i=1}^{n}{\alpha_i[y_i(\mathbf{w}\cdot \mathbf{x_i} - b) -1 + \xi_i]} - \sum_{i=1}^{n} \beta_i \xi_i \right \}$ with $\alpha_i, \beta_i \ge 0$. ### Dual form Maximize (in $\alpha_i$ ) $\tilde{L}(\mathbf{\alpha})=\sum_{i=1}^n \alpha_i - \frac{1}{2}\sum_{i, j} \alpha_i \alpha_j y_i y_j k(\mathbf{x}_i, \mathbf{x}_j)$ subject to (for any $i = 1, \dots, n$) $0 \leq \alpha_i \leq C,\,$ and $\sum_{i=1}^n \alpha_i y_i = 0.$ The key advantage of a linear penalty function is that the slack variables vanish from the dual problem, with the constant C appearing only as an additional constraint on the Lagrange multipliers. For the above formulation and its huge impact in practice, Cortes and Vapnik received the 2008 ACM Paris Kanellakis Award.[3] Nonlinear penalty functions have been used, particularly to reduce the effect of outliers on the classifier, but unless care is taken the problem becomes non-convex, and thus it is considerably more difficult to find a global solution. ## Nonlinear classification Kernel machine The original optimal hyperplane algorithm proposed by Vapnik in 1963 was a linear classifier. However, in 1992, Bernhard E. Boser, Isabelle M. Guyon and Vladimir N. Vapnik suggested a way to create nonlinear classifiers by applying the kernel trick (originally proposed by Aizerman et al.[4]) to maximum-margin hyperplanes.[5] The resulting algorithm is formally similar, except that every dot product is replaced by a nonlinear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be nonlinear and the transformed space high dimensional; thus though the classifier is a hyperplane in the high-dimensional feature space, it may be nonlinear in the original input space. If the kernel used is a Gaussian radial basis function, the corresponding feature space is a Hilbert space of infinite dimensions. Maximum margin classifiers are well regularized, so the infinite dimensions do not spoil the results. 
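As a concrete (toy) illustration of how the kernel trick is used at prediction time, the sketch below evaluates a nonlinear decision function of the form $f(\mathbf{x}) = \sum_i \alpha_i y_i k(\mathbf{x}_i, \mathbf{x}) - b$ with a Gaussian RBF kernel (one of the common kernels listed just below). The support vectors, coefficients and offset here are made-up numbers purely for illustration; in practice they come out of the dual optimization described above. NumPy is assumed to be available:

```python
import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    # Gaussian radial basis function kernel: exp(-gamma * ||x - z||^2)
    return np.exp(-gamma * np.sum((x - z) ** 2))

def decision_function(x, support_vectors, alphas, labels, b, gamma=0.5):
    # f(x) = sum_i alpha_i * y_i * k(x_i, x) - b ; its sign gives the predicted class
    s = sum(a * y * rbf_kernel(sv, x, gamma)
            for sv, a, y in zip(support_vectors, alphas, labels))
    return s - b

# Made-up "trained" quantities, for illustration only.
support_vectors = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]])
alphas = np.array([0.7, 0.4, 0.3])
labels = np.array([+1, +1, -1])
b = 0.1

x_new = np.array([0.5, 0.5])
print(np.sign(decision_function(x_new, support_vectors, alphas, labels, b)))
```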
Some common kernels include: • Polynomial (homogeneous): $k(\mathbf{x_i},\mathbf{x_j})=(\mathbf{x_i} \cdot \mathbf{x_j})^d$ • Polynomial (inhomogeneous): $k(\mathbf{x_i},\mathbf{x_j})=(\mathbf{x_i} \cdot \mathbf{x_j} + 1)^d$ • Gaussian radial basis function: $k(\mathbf{x_i},\mathbf{x_j})=\exp(-\gamma \|\mathbf{x_i} - \mathbf{x_j}\|^2)$, for $\gamma > 0.$ Sometimes parametrized using $\gamma=1/{2 \sigma^2}$ • Hyperbolic tangent: $k(\mathbf{x_i},\mathbf{x_j})=\tanh(\kappa \mathbf{x_i} \cdot \mathbf{x_j}+c)$, for some (not every) $\kappa > 0$ and $c < 0$ The kernel is related to the transform $\varphi(\mathbf{x_i})$ by the equation $k(\mathbf{x_i}, \mathbf{x_j}) = \varphi(\mathbf{x_i})\cdot \varphi(\mathbf{x_j})$. The value w is also in the transformed space, with $\textstyle\mathbf{w} = \sum_i \alpha_i y_i \varphi(\mathbf{x}_i).$ Dot products with w for classification can again be computed by the kernel trick, i.e. $\textstyle \mathbf{w}\cdot\varphi(\mathbf{x}) = \sum_i \alpha_i y_i k(\mathbf{x}_i, \mathbf{x})$. However, there does not in general exist a value w' such that $\mathbf{w}\cdot\varphi(\mathbf{x}) = k(\mathbf{w'}, \mathbf{x}).$ ## Properties SVMs belong to a family of generalized linear classifiers and can be interpreted as an extension of the perceptron. They can also be considered a special case of Tikhonov regularization. A special property is that they simultaneously minimize the empirical classification error and maximize the geometric margin; hence they are also known as maximum margin classifiers. A comparison of the SVM to other classifiers has been made by Meyer, Leisch and Hornik.[6] ### Parameter selection The effectiveness of SVM depends on the selection of kernel, the kernel's parameters, and soft margin parameter C. A common choice is a Gaussian kernel, which has a single parameter γ. The best combination of C and γ is often selected by a grid search with exponentially growing sequences of C and γ, for example, $C \in \{ 2^{-5}, 2^{-3}, \dots, 2^{13},2^{15} \}$; $\gamma \in \{ 2^{-15},2^{-13}, \dots, 2^{1},2^{3} \}$. Typically, each combination of parameter choices is checked using cross validation, and the parameters with best cross-validation accuracy are picked. The final model, which is used for testing and for classifying new data, is then trained on the whole training set using the selected parameters.[7] ### Issues Potential drawbacks of the SVM are the following three aspects: • Uncalibrated class membership probabilities • The SVM is only directly applicable for two-class tasks. Therefore, algorithms that reduce the multi-class task to several binary problems have to be applied; see the multi-class SVM section. • Parameters of a solved model are difficult to interpret. ## Extensions ### Multiclass SVM Multiclass SVM aims to assign labels to instances by using support vector machines, where the labels are drawn from a finite set of several elements. The dominant approach for doing so is to reduce the single multiclass problem into multiple binary classification problems.[8] Common methods for such reduction include:[8] [9] • Building binary classifiers which distinguish between (i) one of the labels and the rest (one-versus-all) or (ii) between every pair of classes (one-versus-one). Classification of new instances for the one-versus-all case is done by a winner-takes-all strategy, in which the classifier with the highest output function assigns the class (it is important that the output functions be calibrated to produce comparable scores). 
For the one-versus-one approach, classification is done by a max-wins voting strategy, in which every classifier assigns the instance to one of the two classes, then the vote for the assigned class is increased by one vote, and finally the class with the most votes determines the instance classification. • Directed Acyclic Graph SVM (DAGSVM)[10] • error-correcting output codes[11] Crammer and Singer proposed a multiclass SVM method which casts the multiclass classification problem into a single optimization problem, rather than decomposing it into multiple binary classification problems.[12] See also Lee, Lin and Wahba.[13][14] ### Transductive support vector machines Transductive support vector machines extend SVMs in that they could also treat partially labeled data in semi-supervised learning by following the principles of transduction. Here, in addition to the training set $\mathcal{D}$, the learner is also given a set $\mathcal{D}^\star = \{ \mathbf{x}^\star_i | \mathbf{x}^\star_i \in \mathbb{R}^p\}_{i=1}^k \,$ of test examples to be classified. Formally, a transductive support vector machine is defined by the following primal optimization problem:[15] Minimize (in ${\mathbf{w}, b, \mathbf{y^\star}}$) $\frac{1}{2}\|\mathbf{w}\|^2$ subject to (for any $i = 1, \dots, n$ and any $j = 1, \dots, k$) $y_i(\mathbf{w}\cdot\mathbf{x_i} - b) \ge 1,\,$ $y^\star_j(\mathbf{w}\cdot\mathbf{x^\star_j} - b) \ge 1,$ and $y^\star_j \in \{-1, 1\}.\,$ Transductive support vector machines were introduced by Vladimir N. Vapnik in 1998. ### Structured SVM SVMs have been generalized to structured SVMs, where the label space is structured and of possibly infinite size. ### Regression A version of SVM for regression was proposed in 1996 by Vladimir N. Vapnik, Harris Drucker, Christopher J. C. Burges, Linda Kaufman and Alexander J. Smola.[16] This method is called support vector regression (SVR). The model produced by support vector classification (as described above) depends only on a subset of the training data, because the cost function for building the model does not care about training points that lie beyond the margin. Analogously, the model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data close to the model prediction (within a threshold $\epsilon$). Another SVM version known as least squares support vector machine (LS-SVM) has been proposed by Suykens and Vandewalle.[17] ## Implementation The parameters of the maximum-margin hyperplane are derived by solving the optimization. There exist several specialized algorithms for quickly solving the QP problem that arises from SVMs, mostly relying on heuristics for breaking the problem down into smaller, more-manageable chunks. A common method is Platt's Sequential Minimal Optimization (SMO) algorithm, which breaks the problem down into 2-dimensional sub-problems that may be solved analytically, eliminating the need for a numerical optimization algorithm. Another approach is to use an interior point method that uses Newton-like iterations to find a solution of the Karush–Kuhn–Tucker conditions of the primal and dual problems.[18] Instead of solving a sequence of broken down problems, this approach directly solves the problem as a whole. To avoid solving a linear system involving the large kernel matrix, a low rank approximation to the matrix is often used in the kernel trick. 
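In practice these solvers are rarely implemented by hand. As an illustrative sketch (assuming the third-party scikit-learn package is available; it is built on the LIBSVM library, whose solver is of the SMO type mentioned above), the snippet below trains a small soft-margin RBF-kernel classifier on a made-up toy dataset. The values of C and gamma are arbitrary here and would normally be chosen by the cross-validated grid search described earlier:

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class data: one Gaussian cluster per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=0.0, scale=1.0, size=(20, 2)),
               rng.normal(loc=3.0, scale=1.0, size=(20, 2))])
y = np.array([-1] * 20 + [+1] * 20)

# Soft-margin SVM with a Gaussian RBF kernel.
clf = SVC(kernel="rbf", C=1.0, gamma=0.5)
clf.fit(X, y)

print("number of support vectors per class:", clf.n_support_)
print("prediction for (1.5, 1.5):", clf.predict([[1.5, 1.5]]))
```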
## See also • In situ adaptive tabulation • Kernel machines • Polynomial kernel • Fisher kernel • Predictive analytics • Relevance vector machine, a probabilistic sparse kernel model identical in functional form to SVM • Sequential minimal optimization • Winnow (algorithm) • Regularization perspectives on support vector machines ## References 1. ^ a b c 2. *Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, B. P. (2007). "Section 16.5. Support Vector Machines". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. 3. Aizerman, Mark A.; Braverman, Emmanuel M.; and Rozonoer, Lev I. (1964). "Theoretical foundations of the potential function method in pattern recognition learning". Automation and Remote Control 25: 821–837. 4. Boser, Bernhard E.; Guyon, Isabelle M.; and Vapnik, Vladimir N.; A training algorithm for optimal margin classifiers. In Haussler, David (editor); 5th Annual ACM Workshop on COLT, pages 144–152, Pittsburgh, PA, 1992. ACM Press 5. Meyer, David; Leisch, Friedrich; and Hornik, Kurt; The support vector machine under test, Neurocomputing 55(1–2): 169–186, 2003 http://dx.doi.org/10.1016/S0925-2312(03)00431-4 6. ^ a b Duan, Kai-Bo; and Keerthi, S. Sathiya (2005). "Which Is the Best Multiclass SVM Method? An Empirical Study". Proceedings of the Sixth International Workshop on Multiple Classifier Systems. Lecture Notes in Computer Science 3541: 278. doi:10.1007/11494683_28. ISBN 978-3-540-26306-7. 7. Hsu, Chih-Wei; and Lin, Chih-Jen (2002). "A Comparison of Methods for Multiclass Support Vector Machines". IEEE Transactions on Neural Networks. 8. Platt, John; Cristianini, N.; and Shawe-Taylor, J. (2000). "Large margin DAGs for multiclass classification". In Solla, Sara A.; Leen, Todd K.; and Müller, Klaus-Robert; eds. Advances in Neural Information Processing Systems. MIT Press. pp. 547–553. 9. Dietterich, Thomas G.; and Bakiri, Ghulum; Bakiri (1995). "Solving Multiclass Learning Problems via Error-Correcting Output Codes". Journal of Artificial Intelligence Research, Vol. 2 2: 263–286. arXiv:cs/9501101. Bibcode:1995cs........1101D.  Unknown parameter `|class=` ignored (help) 10. Crammer, Koby; and Singer, Yoram (2001). "On the Algorithmic Implementation of Multiclass Kernel-based Vector Machines". J. of Machine Learning Research 2: 265–292. 11. Lee, Y.; Lin, Y.; and Wahba, G. (2001). "Multicategory Support Vector Machines". Computing Science and Statistics 33. 12. Lee, Y.; Lin, Y.; and Wahba, G. (2004). "Multicategory Support Vector Machines, Theory, and Application to the Classification of Microarray Data and Satellite Radiance Data". Journal of the American Statistical Association 99 (465): 67–81. doi:10.1198/016214504000000098. 13. Joachims, Thorsten; "Transductive Inference for Text Classification using Support Vector Machines", Proceedings of the 1999 International Conference on Machine Learning (ICML 1999), pp. 200-209. 14. Drucker, Harris; Burges, Christopher J. C.; Kaufman, Linda; Smola, Alexander J.; and Vapnik, Vladimir N. (1997); "Support Vector Regression Machines", in Advances in Neural Information Processing Systems 9, NIPS 1996, 155–161, MIT Press. 15. Suykens, Johan A. K.; Vandewalle, Joos P. L.; Least squares support vector machine classifiers, Neural Processing Letters, vol. 9, no. 3, Jun. 1999, pp. 293–300. 16. Ferris, Michael C.; and Munson, Todd S. (2002). "Interior-point methods for massive support vector machines". SIAM Journal on Optimization 13 (3): 783–804. 
doi:10.1137/S1052623400374379.
http://unapologetic.wordpress.com/2012/02/24/the-meaning-of-the-speed-of-light/?like=1&_wpnonce=401286028f
# The Unapologetic Mathematician

## The Meaning of the Speed of Light

Let's pick up where we left off last time converting Maxwell's equations into differential forms:

$\displaystyle\begin{aligned}*d*\epsilon&=\mu_0c^2\rho\\d\beta&=0\\d\epsilon&=-\frac{\partial\beta}{\partial t}\\{}*d*\beta&=\mu_0\iota+\frac{1}{c^2}\frac{\partial\epsilon}{\partial t}\end{aligned}$

Now let's notice that while the electric field has units of force per unit charge, the magnetic field has units of force per unit charge per unit velocity. Further, from our polarized plane-wave solutions to Maxwell's equations, we see that for these waves the magnitude of the electric field is $c$ — a velocity — times the magnitude of the magnetic field. So let's try collecting together factors of $c\beta$:

$\displaystyle\begin{aligned}*d*\epsilon&=\mu_0c^2\rho\\d(c\beta)&=0\\d\epsilon&=-\frac{1}{c}\frac{\partial(c\beta)}{\partial t}\\{}*d*(c\beta)&=\mu_0c\iota+\frac{1}{c}\frac{\partial\epsilon}{\partial t}\end{aligned}$

Now each of the time derivatives comes along with a factor of $\frac{1}{c}$. We can absorb this by introducing a new variable $\tau=ct$, which is measured in units of distance rather than time; by the chain rule, $\frac{1}{c}\frac{\partial}{\partial t}=\frac{\partial}{\partial\tau}$. Then we can write:

$\displaystyle\begin{aligned}*d*\epsilon&=\mu_0c^2\rho\\d(c\beta)&=0\\d\epsilon&=-\frac{\partial(c\beta)}{\partial\tau}\\{}*d*(c\beta)&=\mu_0c\iota+\frac{\partial\epsilon}{\partial\tau}\end{aligned}$

The easy thing here is to just write $t$ instead of $\tau$, but this hides a deep insight: the speed of light $c$ is acting like a conversion factor from units of time to units of distance. That is, we don't just say that light moves at a speed of $c=299\,792\,458\frac{\mathrm{m}}{\mathrm{s}}$, we say that one second of time is 299,792,458 meters of distance. This is an incredible identity that allows us to treat time and space on an equal footing, and it is borne out in many more or less direct experiments. I don't want to get into all the consequences of this fact — the name for them as a collection is "special relativity" — but I do want to use it.

This lets us go back and write $\beta$ instead of $c\beta$, since the factor of $c$ here is just an artifact of using some coordinate system that treats time and distance separately; we see that the electric and magnetic fields in a propagating electromagnetic plane-wave are "really" the same size, and the factor of $c$ is just an artifact of our coordinate system. We can also just write $t$ instead of $ct$ for the same reason. Finally, we can collect $c\rho$ together to put it on the exact same footing as $\iota$.

$\displaystyle\begin{aligned}*d*\epsilon&=\mu_0c\rho\\d\beta&=0\\d\epsilon&=-\frac{\partial\beta}{\partial t}\\{}*d*\beta&=\mu_0c\iota+\frac{\partial\epsilon}{\partial t}\end{aligned}$

The meanings of these terms are getting further and further from familiarity. The $1$-form $\epsilon$ is still made of the same components as the electric field; the $2$-form $\beta$ is $c$ times the Hodge star of the $1$-form whose components are those of the magnetic field; the function $\rho$ is $c$ times the charge density; and the vector field $\iota$ is the current density.
AAA AAAAAAAA AAAA AAAAAA AA AAAAA A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A AA A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A AAAA AAAA AAAAA AAAAAAAAAAAA AAAAAAAAAAAAA AAAAAAAA. 
A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A. ## AA AA AA AA AA A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A! 
``` AAA AAA AAA AAAAA AAAAA AAAAA AAAAA AAAAA AAAAA AAAAA AAA AAA AAA AAA AAA AAA AAAAA AAAAA AAA AAA AAA AAA AAA AAA AAAAA AAAAA AAA AAA AAA AAA AAA AAA AAA AAA AAAAAAAAAAAAA AAAAAAAAAAAAA AAAAAAAAAAAAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAA AAAAA AAAAA AAA AAA AAA AAA AAA AAA AAA AAA ``` ## AAAAA Dosiero:Fonz.jpg AAAAAAAAAA! AAA AAAA AA AAAA! A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A A. ## AAAAAA AAAAAA (A.A.A AAAAAA AAAAAAAA) AAAAAAA AA AAA AAAAAAAAA AAAAA AA AAAAAA AAAAAA, AAAAAAA AA AAAAAA AAAAAA, AAAAA AAAAAAA AAA! AA AAA AAAA AAAA AAAAAAAA AAAA AA AAAAAA AAAAA • AAAA AAAA AAAAAA AA AAA! • AAA AAAA AAAA AAA! • AAAAAAA AAAAA AAA! • AAAAA AAAA AAA! • AAAA AAAAA AAA! • AAA AAAAAA "A" AAAA AAA! AAAAA AAA AAA, AAAAAAA AAA AAAA, AA AAAAAA AAAAAA, AAAA AAAAA AAA! AAAA, AA AAAAAA AAAAAA, AAAA AAAA(A) AAA! ### AA AAA AAAA AA AAAAA AAAAAAA AAAAAAA? AAAAAAA AA AAAAAA AAAAAA , AAAAAA AAAAA AA AAAAA AAAA AAA! AAAAAA, AA AAAAA AAAAAAAAA AAAAAAA, AAA AAAAAAA AAAAAAAA. (AA AAAAAA AAAAAA-AA, AAAAA AAAA.) AAAA, AA AAAA AAAAA'A AA AAA AAAAA, AAAA AAAA AA AAA AAA AAAAAAA, AAAAAAAAAAAA AAA AAAAA (AA AAAAAAA AA AA, AA, AA AAA), AA AAAAA AAAAAA AAAA AAA AA AAA AAA. AA AAAA AA AAA AAA AAAAA AAAAAA AAAAAAA. 
## AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA ## AAAAAAAA AAAAA AAAAAAAAAAAA:AAA.}} AAAA AAAAAAA AA AAAAAA AA AA AAAAA AA A AAAA. A AAAAAAA AAA AAA AAAA AAAA aa AAA AAAAAAA AAAAAAA AAAAAA AA AAAA AAA AAA. ## Read more Elŝutita el "http://neciklopedio.wikia.com/wiki/Aaaaa?oldid=70750"
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.970839262008667, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=4257462
Physics Forums

## Determining the Cut-off Frequency of a MEMS Accelerometer

Greetings Forum, I just bought an Invensense MPU-6050 and I need to design an LPF to filter the noise. How do I analyse the signal, and how do I determine the frequency beyond which the signal is altered by noise? Thanks in advance for your help. Cheers

Mentor
Quote by rahlk
Greetings Forum, I just bought an Invensense MPU-6050 and I need to design an LPF to filter the noise. How do I analyse the signal, and how do I determine the frequency beyond which the signal is altered by noise? Thanks in advance for your help. Cheers

Can you post a link to the datasheet? Are there any application notes at the manufacturer's website? What are you going to be connecting this accelerometer to?

I will be using it to estimate the tilt angle. I am going to interface it to a microcontroller. In fact, I need to submit a term paper on this, so I'll have to describe my filter quite clearly. The Datasheet - http://www.cdiweb.com/datasheets/inv...-MPU-6000A.pdf

Mentor
Quote by rahlk
I will be using it to estimate the tilt angle. I am going to interface it to a microcontroller. In fact, I need to submit a term paper on this, so I'll have to describe my filter quite clearly. The Datasheet - http://www.cdiweb.com/datasheets/inv...-MPU-6000A.pdf

Since this is for a school project, you will need to describe the filter quite clearly to *us*! We don't do your schoolwork projects for you here. We can offer some hints and tutorial help, but you must do the bulk of the work on your schoolwork projects. So tell us what you see in the datasheet. Are there any app notes? What kind of noise do you expect in your data acquisition setup? What order and polynomial do you think you will want to use in your project, and *why*?

Well, the accelerometer is subject to several sources of high-frequency noise, including thermal, electrical and mechanical vibrations. I thought of developing a simple first-order, single-pole infinite impulse response LPF, given by y(n) = α.y(n-1) + (1-α).x(n), where x(n) = current accelerometer reading, y(n) = current estimate, and y(n-1) = previous estimate. My issue is with the determination of alpha. If the sampling frequency is Fs, then α = $\frac{\tau F_s}{1+\tau F_s}$ and $\tau = \frac{1}{2\pi F_c}$. Once I determine Fc, I can justify my choice of α. Now, if the device were analog in nature, I could have designed an RC LPF, got the state equations, taken a Laplace transform and applied the bilinear Z-transform to get the digital equivalent. But this device gives acceleration in digital format, a 16-bit number. I need to be able to somehow use Fourier analysis or some such technique to work on this digital data directly. I would be delighted to get some help on clarifying this conundrum.

Quote by rahlk
Once I determine Fc, I can justify my choice of α. Now, if the device were analog in nature, I could have designed an RC LPF, got the state equations, taken a Laplace transform and applied the bilinear Z-transform to get the digital equivalent. But this device gives acceleration in digital format, a 16-bit number.

Why would that be a problem? You've gone through the process of designing a digital LPF, starting with an analogue prototype.
If you were applying your filter to an analogue signal, you would have to digitise the signal and then filter it. But here the signal has already been digitised for you, so you just need to read the signal out of the MEMS sensor (remembering to keep up with the Nyquist sampling rate requirements) and then run it through your digital LPF implementation.

I need to be able to somehow use Fourier analysis or some such technique to work on this digital data directly.

What makes you think you need the complexity of using Fourier analysis? It might be worth trying out a simpler digital LPF first and moving up to more complex designs later if you find the filtering isn't adequate.

I need to design an LPF to filter the noise. How do I analyse the signal and how do I determine the frequency beyond which the signal is altered by noise?

Try considering what maximum frequency you need for your accelerometer signal to make your application work. You can then choose an LPF to filter off noise above this frequency. If you find it hard to get good estimates for your filter requirements, it could be worthwhile building a rough prototype and experimenting with it to see how your estimates work in practice.
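For illustration, here is a minimal R sketch (not from the thread) of the single-pole IIR low-pass filter discussed above; the sampling rate, cut-off frequency and simulated signal are arbitrary values chosen only to show the recursion.

```
Fs <- 100                          # sampling frequency in Hz (assumed for illustration)
Fc <- 5                            # desired cut-off frequency in Hz (assumed)
tau <- 1 / (2 * pi * Fc)           # time constant of the analogue RC prototype
alpha <- tau * Fs / (1 + tau * Fs) # filter coefficient, as in the thread

t <- seq(0, 2, by = 1 / Fs)        # two seconds of samples
x <- sin(2 * pi * t) + 0.3 * rnorm(length(t))  # slow "tilt" signal plus noise

y <- numeric(length(x))            # y(n) = alpha*y(n-1) + (1-alpha)*x(n)
y[1] <- x[1]
for (n in 2:length(x)) y[n] <- alpha * y[n - 1] + (1 - alpha) * x[n]

plot(t, x, type = "l", col = "grey", ylab = "acceleration")
lines(t, y, col = "red")           # filtered estimate
```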
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9144262671470642, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/10581/list
## Return to Question

3 added 144 characters in body; edited title

# Kontsevich deformation, Geometric Quantization and the Podles sphere

There exist a large family of noncommutative spaces that arise from the quantum matrices. These algebraic objects $q$-deform the coordinate rings of certain varieties. For example, take quantum $SU(2)$: this is the algebra $\langle a,b,c,d \rangle$ quotiented by the ideal generated by
$$ab-qba, \quad ac-qca, \quad bc-cb, \quad bd-qdb, \quad cd-qdc, \quad ad-da-(q-q^{-1})bc,$$
and the "q-det" relation
$$ad-qbc-1,$$
where $q$ is some complex number. Clearly, when $q=1$ we get back the coordinate ring of $SU(2)$. In the classical case $S^2 = SU(2)/U(1)$ (the famous Hopf fibration). This generalises to the q-case: the $U(1)$-action generalises to a $U(1)$-coaction with an invariant subalgebra that q-deforms the coordinate algebra of $S^2$ - the famous Podles sphere. There exist such q-matrix deformations of all flag manifolds. Since all such manifolds are Kähler, we can also apply Kontsevich deformation to them to obtain a q-deformation. My question is: What is the relationship between these two approaches?

Alternatively, we can apply Kostant-Souriau geometric quantization to a flag manifold. How does this algebra relate to its q-matrix deformation?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8872026205062866, "perplexity_flag": "middle"}
http://xianblog.wordpress.com/2012/07/21/le-monde-puzzle-783/
# Xi'an's Og

an attempt at bloggin, from scratch…

## Le Monde puzzle [#783]

In a political party, there are as many cells as there are members and each member belongs to at least one cell. Each cell has five members and an arbitrary pair of cells only shares one member. How many members are there in this political party?

Back to the mathematical puzzles of Le Monde (science leaflet of the weekend edition)! In addition to a tribune by Cédric Villani celebrating the 100th anniversary of the death of Henri Poincaré, this issue [now of last week] offers this interesting challenge. So interesting that I could only solve it for three (instead of five) members and could not see the mathematical notion behind the puzzle…

Let us denote by n the number of both the cells and the number of members. Then, when picking an arbitrary order on the sets, if $i_j$ denotes the number of members in set j already seen in sets with lower indices, we have the following equality on the total number of members $n = 5n -i_2-\cdots-i_n$ and the constraints are that $i_2<2$, $i_3<3$, $i_4<4$, $i_5<5$, and $i_j<6$ for $j>5$. Hence, $i_2+i_3+i_4+i_5+\cdots+i_n\le 5n-15$, which implies $n\ge 15$. Now, in terms of analytics, I could not go much further and thus turned to an R code to see if I could find a solution by brute force. Here is my code (where the argument a is the number of elements in each set):

```
lemond=function(a){
  obj=1:a
  set=matrix(obj,1,a)
  newset=c(1,(a+1):(2*a-1))
  obj=sort(unique(c(obj,newset)))
  set=rbind(set,as.vector(newset))
  stob=FALSE
  while (!stob){
    newset=sample(set[1,],1)
    prohib=set[1,]
    # ensuring intersections of one and one only
    for (i in 2:nrow(set)){
      chk=length(intersect(newset,set[i,]))
      if (chk>1){ #impossible
        newset=1:(10*a)}else{
        if (chk==0){ #no common point yet but
          #can't increase the intersection size with previous sets
          locprob=setdiff(set[i,],prohib)
          if ((length(locprob)==0)){newset=1:(10*a)}else{newset=
            c(newset,sample(c(locprob,locprob),1))}
        }} #else do nothing, restriction already satisfied
      prohib=unique(as.vector(set[1:i,]))
    }
    if (length(newset)<a){
      #complete the new cell with brand-new members and keep it
      newset=c(newset,max(obj)+(1:(a-length(newset))))
      set=rbind(set,as.vector(newset))
      obj=sort(unique(c(obj,newset)))}
    stob=((length(newset)>a)||(max(obj)>9*a)||(max(obj)==nrow(set)))
  }
  list(set=set,sol=((length(newset)==a)&&(max(obj)==nrow(set))))
}
```

I build the sets and the collection of members by considering the constraints and stopping when (a) it is impossible to satisfy all constraints, (b) the current number of sets is the current number of members, or (c) there are too many members. (The R programming is very crude, witness the selection of possible values in the inside loop…) Running the code for a=2, a=3, and a=4 led to the results

```
$set
     [,1] [,2]
[1,]    1    2
[2,]    1    3
[3,]    2    3
```

and

```
$set
     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    1    4    5
[3,]    3    5    6
[4,]    2    4    6
[5,]    2    5    7
[6,]    3    4    7
[7,]    1    6    7
```

and

```
$set
      [,1] [,2] [,3] [,4]
[1,]     1    2    3    4
[2,]     1    5    6    7
[3,]     3    5    8    9
[4,]     2    6    8   10
[5,]     4    7    8   11
[6,]     1    9   10   11
[7,]     4    5   10   12
[8,]     1    8   12   13
[9,]     3    7   10   13
[10,]    3    6   11   12
[11,]    4    6    9   13
[12,]    2    5   11   13
[13,]    2    7    9   12
```

which are indeed correct (note that the solution for a=3 is n=7, which allows for a symmetric allocation, rather than n=6 which is the simple upper bound). However, the code does not return a solution for a=5, which makes me wonder if the solution can be reached by brute force….

This entry was posted on July 21, 2012 at 12:12 am and is filed under R, Travel, University life with tags combinatorics, integer set, Le Monde, mathematical puzzle.
### 4 Responses to "Le Monde puzzle [#783]"

1. Thanks for posting and translating this puzzle! N=3 SPOILER(?): For the n=3 case, the solution is beautifully represented by the Fano plane. Each sect is a line, each point a person (or vice versa). I think the correct place to look for the n=5 solution will be the study of error-correcting codes, especially Hamming codes. E.g. the Fano plane is Hamming(7,4). Try representing each sect as a binary number, with a 1 in position n if it contains person n, and a 0 otherwise (or the other way round). This will coincide with the Hamming code in the n=3 case.

   • Thanks, if you get a chance, I would be interested by the solution for n=5, obviously! As my computer code is still running…. Having used 40,000 CPU minutes so far!

   • You can stop!: The N=5 solution is (check it!) 21. The k-th sect is defined as follows. Label the people i, i=0 to 20: Sect(k)={0, 2, 7, 8, 11}+k (mod 21). The set (with or without 21) is a Golomb ruler: if s(i)-s(j)=n=s(l)-s(m), then i=l and j=m, that is, each difference can only be made one way. This means that if two sects have a person in common, then s(i)+n=s(l) for some n. Then there can be only one overlap: if s(j)+n=s(m), then rearranging, we get the Golomb condition applying. There will always be an overlap because the circular (or modular) Golomb ruler is perfect: each number can be made in some way, as twice 5C2 is 20. You can also express the solution as a finite projective plane, like the Fano plane. Here's a visual solution: http://www.maa.org/editorial/mathgames/21lines.gif The finite projective plane has the same definition as the problem.

   • Thanks, terrific!!! (As it happens, my mainframe administrator killed my running program a few hours ago…)
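As a footnote (not part of the original post or comments), the 21-member configuration described in the last substantive comment is easy to check in R; the cells below are just the translates {0, 2, 7, 8, 11}+k mod 21.

```
# Check the n=21 configuration: 21 cells of 5 members (labelled 0..20),
# with every pair of distinct cells sharing exactly one member.
base <- c(0, 2, 7, 8, 11)
cells <- lapply(0:20, function(k) (base + k) %% 21)

all(sapply(cells, function(s) length(unique(s))) == 5)      # each cell has 5 members
overlaps <- combn(21, 2, function(p)
  length(intersect(cells[[p[1]]], cells[[p[2]]])))
all(overlaps == 1)                                          # every pair shares one member
table(sapply(0:20, function(m)
  sum(sapply(cells, function(s) m %in% s))))                # each member sits in 5 cells
```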
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.931641161441803, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/104528/int-dx-x10000-1
# $\int dx/(x^{10000}-1)$

Is there any way to evaluate this indefinite integral using pencil and paper? A closed-form solution exists, because $1/(x^{10000}-1)$ can be expressed as a partial fraction decomposition of the form $\sum c_m/(x-a_m)$, where the $a_m$ are the 10,000-th roots of unity. But brute-force computation of the $c_m$ is the kind of fool's errand that a human would never embark on, and that software stupidly attempts and fails to accomplish. (Maxima, Yacas, and Wolfram Alpha all try and fail.) This is not homework.

- For |x|<1 you can use power series to get a very good approximation. – Potato Feb 1 '12 at 7:37
- You know of others that software would fail at, like something a first year Calc student could understand? – yiyi Nov 9 '12 at 4:57

## 2 Answers

You can use the fact that, in a partial fraction decomposition, for a simple root $\alpha$ of the denominator (say $F = {P \over Q}$ where $(P,Q) = 1$) the coefficient of ${1 \over X-\alpha}$ is ${P(\alpha) \over Q'(\alpha)}$. Since $X^{10000} - 1$ only has simple roots (the 10,000-th roots of unity), you can express them easily as $\omega^k$, $k \in \{0,\dots,9999\}$, where $\omega = e^{2i\pi \over 10000}$. Then it's just a matter of computing a sum, since the integral of ${1 \over x-\alpha}$ is easy enough to compute. Beware though, $\alpha$ is complex here, so the antiderivative is not just $\log(x-\alpha)$…. But another trick you can use is that you can naturally pair the roots of unity, as $\bar{\zeta} = \zeta^{-1}$ for $|\zeta| = 1$.

- What do you mean by $(P,Q)=1$? – Ben Crowell Feb 1 '12 at 16:26
- It means $P$ and $Q$ are relatively prime (as polynomials over the complex numbers). Equivalently (by the Fundamental Theorem of Algebra) $P$ and $Q$ have no roots in common. – Robert Israel Feb 1 '12 at 22:55
- To expand a bit on zulon's last paragraph: if $\alpha$ is a non-real root and the partial fraction decomposition includes $c/(z-\alpha)$ then it also includes $\overline{c}/(z - \overline{\alpha})$, and an antiderivative of $$\frac{c}{z-\alpha} + \frac{\overline{c}}{z - \overline{\alpha}} = \frac{2 \text{Re}(c)(z-\text{Re}(\alpha)) - 2 \text{Im}(c)\text{Im}(\alpha)}{(z-\alpha)(z-\overline{\alpha})}$$ is $\text{Re}(c) \ln((z - \text{Re}(\alpha))^2 + \text{Im}(\alpha)^2) - 2 \text{Im}(c) \arctan\left(\frac{z-\text{Re}(\alpha)}{\text{Im}(\alpha)}\right)$ – Robert Israel Feb 1 '12 at 23:14
- An advantage of this over the form $c \ln(z-\alpha) + \overline{c} \ln(z - \overline{\alpha})$ is that (once you have the real and imaginary parts of $c$ and $\alpha$) it doesn't explicitly involve complex quantities when $z$ is real. From the point of view of complex analysis, both forms are equally valid, but (if you use the principal branches) their branch cuts are in different places. – Robert Israel Feb 1 '12 at 23:24
- Two great answers! I'm marking this one as accepted because it taught me something new about partial fractions. – Ben Crowell Feb 2 '12 at 0:14

For $|x|<1$ we have $1/(x^n - 1) = - \sum_{k=0}^\infty x^{nk}$, so an antiderivative of this is $- \sum_{k=0}^\infty \frac{x^{nk+1}}{nk+1}$. This can be written as a hypergeometric function: $- x \ {}_2F_1\left(\frac{1}{n},1; 1+\frac{1}{n}; x^n\right)$.

- Cool! I'm curious about your thought process. Did you write out the series and then look it up in a table to identify it as a hypergeometric? – Ben Crowell Feb 2 '12 at 0:16
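To make the recipe concrete, here is a small R sketch (mine, not from the thread). For $Q(x)=x^n-1$ the coefficients are $c_m = 1/Q'(a_m) = a_m/n$ at the $n$th roots of unity $a_m$; a modest $n$ is used only to keep the numerical check quick.

```
n <- 12                                    # stand-in for 10000, just to keep the check fast
a <- exp(2 * pi * 1i * (0:(n - 1)) / n)    # the n-th roots of unity
cf <- a / n                                # c_m = 1/Q'(a_m) = a_m/n

antider <- function(x) Re(sum(cf * log(x - a)))  # sum of c_m*log(x - a_m), real part kept

lo <- -0.5; hi <- 0.5
antider(hi) - antider(lo)                             # partial-fraction answer
integrate(function(x) 1 / (x^n - 1), lo, hi)$value    # numerical check: should agree
```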
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9430845975875854, "perplexity_flag": "head"}
http://cms.math.ca/cmb/browse/vn/54/3
Canadian Mathematical Society, www.cms.math.ca

Volume 54 Number 3 (Sep 2011)

Page Contents

385 Blackadar, Bruce; Kirchberg, Eberhard
It is shown that a separable $C^*$-algebra is inner quasidiagonal if and only if it has a separating family of quasidiagonal irreducible representations. As a consequence, a separable $C^*$-algebra is a strong NF algebra if and only if it is nuclear and has a separating family of quasidiagonal irreducible representations. We also obtain some permanence properties of the class of inner quasidiagonal $C^*$-algebras.

396 Cho, Jong Taek; Inoguchi, Jun-ichi; Lee, Ji-Eun
We give explicit parametrizations for all parabolic geodesics in 3-dimensional Sasakian space forms.

411 Davidson, Kenneth R.; Wright, Alex
We show that every free semigroup algebra has a (strongly) unique Banach space predual. We also provide a new simpler proof that a weak-$*$ closed unital operator algebra containing a weak-$*$ dense subalgebra of compact operators has a unique Banach space predual.

422 Pérez, Juan de Dios; Suh, Young Jin
We classify real hypersurfaces in complex projective space whose structure Jacobi operator satisfies two conditions at the same time.

430 DeLand, Matthew
We prove that every complete family of linearly non-degenerate rational curves of degree $e > 2$ in $\mathbb{P}^{n}$ has at most $n-1$ moduli. For $e = 2$ we prove that such a family has at most $n$ moduli. The general method involves exhibiting a map from the base of a family $X$ to the Grassmannian of $e$-planes in $\mathbb{P}^{n}$ and analyzing the resulting map on cohomology.

442 García, Esther; Lozano, Miguel Gómez; Neher, Erhard
We study the transfer of nondegeneracy between Lie triple systems and their standard Lie algebra envelopes as well as between Kantor pairs, their associated Lie triple systems, and their Lie algebra envelopes. We also show that simple Kantor pairs and Lie triple systems in characteristic $0$ are nondegenerate.

456 Gustafson, Karl
We comment on domain conditions that regulate when the adjoint of the sum or product of two unbounded operators is the sum or product of their adjoints, and related closure issues. The quantum mechanical problem PHP essentially selfadjoint for unbounded Hamiltonians is addressed, with new results.

464 Hwang, Tea-Yuan; Hu, Chin-Yuan
In this paper, a fixed point equation of the compound-exponential type distributions is derived, and under some regular conditions, both the existence and uniqueness of this fixed point equation are investigated. A question posed by Pitman and Yor can be partially answered by using our approach.

472 Iacono, Donatella
We study infinitesimal deformations of holomorphic maps of compact, complex, Kähler manifolds. In particular, we describe a generalization of Bloch's semiregularity map that annihilates obstructions to deform holomorphic maps with fixed codomain.

487 Kong, Xiangjun
In this paper, another relationship between the quasi-ideal adequate transversals of an abundant semigroup is given. We introduce the concept of a weakly multiplicative adequate transversal and the classic result that an adequate transversal is multiplicative if and only if it is weakly multiplicative and a quasi-ideal is obtained. Also, we give two equivalent conditions for an adequate transversal to be weakly multiplicative.
We then consider the case when $I$ and $\Lambda$ (defined below) are bands. This is analogous to the inverse transversal if the regularity condition is adjoined.

498 Mortad, Mohammed Hichem
We prove, under some conditions on the domains, that the adjoint of the sum of two unbounded operators is the sum of their adjoints in both Hilbert and Banach space settings. A similar result about the closure of operators is also proved. Some interesting consequences and examples "spice up" the paper.

506 Neamaty, A.; Mosazadeh, S.
In this paper, we are going to investigate the canonical property of solutions of systems of differential equations having a singularity and turning point of even order. First, by a replacement, we transform the system to the Sturm-Liouville equation with turning point. Using the asymptotic estimates provided by Eberhard, Freiling, and Schneider for a special fundamental system of solutions of the Sturm-Liouville equation, we study the infinite product representation of solutions of the systems. Then we transform the Sturm-Liouville equation with turning point to the equation with singularity, then we study the asymptotic behavior of its solutions. Such representations are relevant to the inverse spectral problem.

519 Neeb, K. H.; Penkov, I.
We correct an oversight in the paper Cartan Subalgebras of $\mathrm{gl}_\infty$, Canad. Math. Bull. 46(2003), no. 4, 597-616. doi: 10.4153/CMB-2003-056-1

520 Polishchuk, A.
Building on the work of Nogin, we prove that the braid group $B_4$ acts transitively on full exceptional collections of vector bundles on Fano threefolds with $b_2=1$ and $b_3=0$. Equivalently, this group acts transitively on the set of simple helices (considered up to a shift in the derived category) on such a Fano threefold. We also prove that on threefolds with $b_2=1$ and very ample anticanonical class, every exceptional coherent sheaf is locally free.

527 Preda, Ciprian; Sipos, Ciprian
We establish a discrete-time criterion guaranteeing the existence of an exponential dichotomy in the continuous-time behavior of an abstract evolution family. We prove that an evolution family ${\cal U}=\{U(t,s)\}_{t \geq s\geq 0}$ acting on a Banach space $X$ is uniformly exponentially dichotomic (with respect to its continuous-time behavior) if and only if the corresponding difference equation with the inhomogeneous term from a vector-valued Orlicz sequence space $l^\Phi(\mathbb{N}, X)$ admits a solution in the same $l^\Phi(\mathbb{N},X)$. The technique of proof effectively eliminates the continuity hypothesis on the evolution family (i.e., we do not assume that $U(\,\cdot\,,s)x$ or $U(t,\,\cdot\,)x$ is continuous on $[s,\infty)$, and respectively $[0,t]$). Thus, some known results given by Coffman and Schaffer, Perron, and Ta Li are extended.

538 Srinivasan, Gopala Krishna; Zvengrowski, P.
Writing $s = \sigma + it$ for a complex variable, it is proved that the modulus of the gamma function, $|\Gamma(s)|$, is strictly monotone increasing with respect to $\sigma$ whenever $|t| > 5/4$. It is also shown that this result is false for $|t| \leq 1$.

544 Strungaru, Nicolae
In this paper we characterize the positive definite measures with discrete Fourier transform. As an application we provide a characterization of pure point diffraction in locally compact Abelian groups.

556 Teragaito, Masakazu
We show that there is an infinite family of hyperbolic knots such that each knot admits a cyclic surgery $m$ whose adjacent surgeries $m-1$ and $m+1$ are toroidal.
This gives an affirmative answer to a question asked by Boyer and Zhang.

561 Uren, James J.
In this note we give a brief review of the construction of a toric variety $\mathcal{V}$ coming from a genus $g \geq 2$ Riemann surface $\Sigma^g$ equipped with a trinion, or pair of pants, decomposition. This was outlined by J. Hurtubise and L. C. Jeffrey. A. Tyurin used this construction on a certain collection of trinion decomposed surfaces to produce a variety $DM_g$, the so-called Delzant model of moduli space, for each genus $g$. We conclude this note with some basic facts about the moment polytopes of the varieties $\mathcal{V}$. In particular, we show that the varieties $DM_g$ constructed by Tyurin, and claimed to be smooth, are in fact singular for $g \geq 3$.

566 Zhou, Xiang-Jun; Shi, Lei; Zhou, Ding-Xuan
We consider approximation of multivariate functions in Sobolev spaces by high order Parzen windows in a non-uniform sampling setting. Sampling points are neither i.i.d. nor regular, but are noised from regular grids by non-uniform shifts of a probability density function. Sample function values at sampling points are drawn according to probability measures with expected values being values of the approximated function. The approximation orders are estimated by means of regularity of the approximated function, the density function, and the order of the Parzen windows, under suitable choices of the scaling parameter.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8967297673225403, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/24034/how-to-know-when-to-stop-reducing-dimensions-with-pca/24038
# How to know when to stop reducing dimensions with PCA?

I'm using PCA to reduce dimensionality before I feed the data into a classifier. My bootstrap/cross-validation has shown a significant reduction in test error as a result of applying PCA and keeping the PCs whose standard deviation is a fraction (say, 0.05) of the standard deviation of the first PC. My features are actually histograms (i.e. vector-valued), so instead of applying PCA once globally to the whole dataset, I applied it locally to some features, which I preselected manually based on the number of features (picking the ones with the most columns). I've tried adjusting the aforementioned tolerance, and tried applying PCA to higher and lower numbers of these histogram features.

My question is, can someone please describe a more precise way of finding the optimal amount of dimensionality reduction via PCA as applied above which leads to the highest test accuracy of my classifier? Does it come down to running a loop with a sequence of tolerances and different PCA-treated features and computing the test error for each setting? This would be very computationally expensive.

## 4 Answers

Note that ridge penalisation/regularisation is basically doing model selection using PCA, although it does it smoothly, by shrinking along each principal component axis rather than discretely by dropping small-variance PCs. Note that because you are doing PCA on different subsets of variables, this would roughly correspond to having different regularising parameters for each group rather than one for all the betas. The Elements of Statistical Learning explains ridging quite well and provides some comparisons.

- Thanks for that. Would lasso relate to PCA even more strongly? – marcos Mar 3 '12 at 17:11
- Lasso is unrelated to PCA, as it is not a rotationally invariant algorithm. Ridge regression is rotationally invariant as the penalty depends on the length of the beta vector (which is unchanged under rotations). However, if you use the lasso with squared error loss, and the predictors are uncorrelated, then lasso will drop the variables with smallest correlation and shrink the coefficients of the non-zero variables (compared to least squares). – probabilityislogic Mar 4 '12 at 1:18

This is an open problem in psychology/psychometrics. There are a number of methods commonly used. Two that I have found useful are parallel analysis and the minimum average partial criterion. Both of these are implemented in the `psych` package for R. Typically, the MAP criterion suggests fewer factors (say $l$) than does parallel analysis ($m$). In that case, I typically use all the solutions between $l$ and $m$ factors. If you were willing to carry out a cross-validation process, you could also investigate structural equation modelling, which is implemented in the packages `lavaan`, `sem` and `OpenMx`.

I know of a few common ways to select PCs. With regard to feeding them into a downstream analysis, this is commonly done with a cut-off value $\epsilon$, as follows. You have $n$ eigenvalues from the decomposition, sorted decreasingly. One can construct a Pareto curve where each successive entry $i$ is given by the normalized cumulative sum, i.e. $P(i) = \sum_{j=1}^{i}\lambda_j \big/ \sum_{j=1}^{n}\lambda_j$. Each entry tells you the fraction of the variance explained by considering up to the $i$th eigenvector. You pick some value $\epsilon \in (0,1)$ to cut that curve and use all the eigenvectors that are required to explain, say, 99% of the variance.
Common values are 90%, 95%, 99% and 99.9%. Intuitively this is just an orthogonal component denoising step. This is a rather small loop to try out values on: it requires only one eigendecomposition and four runs of your downstream analysis. Furthermore, this method naturally extends to kernel PCA methods.

I know of another way to select PCs that, to my knowledge, isn't popular for feature extraction, but is popular for other analyses. Your Pareto curve is monotonically increasing and contains an elbow defined by $\max(P(i) - P(i-1))$, the idea being that everything before the elbow is the signal and everything after is the noise. You can try it, but it always gave me bad results for downstream analysis.

If you want to consider your labels you can get pretty fancy with factor analysis. There's also the option of testing the dependency of your PCs against your labels, where you keep the ones that are above a threshold for shuffled data. I'm not a big fan of feature selection in this context, though, because it ignores interaction between features, as in the case of, say, an XOR gate. Also, it sounds like the decomposition was a fruitful effort; you might pursue this further, actually using the (weighted?) labels to find the decomposition, i.e. studying cross-covariance matrix decompositions upstream of your classifier.

- One potential issue with applying normal PCA to variable selection is that you implicitly assume that the low variance PCs are poor predictors of the response. There is no intrinsic reason why this should be the case, apart from the fact that low variance PCs are more prone to extrapolation issues (predicting outside the space of the sample) compared to high variance PCs. – probabilityislogic Mar 4 '12 at 4:20
- Right, you're very correct. It is highly dependent on that assumption. Discarding any of the nonsingular eigenvectors requires making assumptions about the manifold that the data lies in. If there isn't an extreme class imbalance then it is reasonable to assume that a low-dimensional manifold will describe the relevant signal. – Jacob Mick Mar 4 '12 at 23:23

I don't think it is possible to find the "optimal" amount of dimensionality reduction without testing for accuracy at every level. The complexity will increase as more and more dimensions are included, which does not necessarily mean that the classifier will perform better. But without prior knowledge of your data, you could try to look at the eigenvalues of the covariance matrix and include, let's say, 95% of your data by taking the dimensions that describe your data with 95% accuracy.
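As a rough illustration of the cumulative-variance cut-off described above, here is a small R sketch (mine, not from the answers); the 95% threshold and the random example data are arbitrary choices.

```
set.seed(1)
X <- matrix(rnorm(200 * 10), 200, 10) %*% matrix(runif(100), 10, 10)  # toy data

pc <- prcomp(X, center = TRUE, scale. = TRUE)
lambda <- pc$sdev^2                  # eigenvalues of the correlation matrix
P <- cumsum(lambda) / sum(lambda)    # Pareto curve: cumulative variance explained

k <- which(P >= 0.95)[1]             # smallest number of PCs explaining at least 95%
k
scores <- pc$x[, 1:k]                # reduced features to feed the classifier
```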
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9361105561256409, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/94458-matrix-trace-relations.html
# Thread: 1. ## Matrix trace relations The trace of a matrix is the sum of the entries on the leading diagonal. So for a 2*2 matrix with entries a11, a12, a21, a22 the trace is a11+a22. Let a and b be matrices in SL(2,C), i.e. 2*2 matrices with complex entries and determinant 1, and let us call their inverses A and B. Preliminaries: show that trace is invariant under conjugation and inversion, i.e. that trace(B*a*b) = trace(a) = trace(A). Show that trace(a*b) + trace(a*B) = trace(a)*trace(b). Show that a*a = -I if and only if a has trace 0 (where I is the identity matrix). Show that a^3 = I if and only if a has trace -1, and a^6 = I if and only if a has trace 1. Let x = trace(a), y = trace(b), z = trace(a*b), w = trace(a*B). Show that trace(a*b*A*B) = x*x+y*y+z*z-x*y*z-2 = x*x+y*y+w*w-x*y*w-2 = x*x+y*y-z*w-2. (The above relations, especially the first, are very important for geometry and in the theory of Kleinian groups.) Now the hard part: suppose x^2 ≠ 4. Show that for n >= 1 the quantity (trace(a^n*b*A^n*B)-2)/(trace^2(a^n)-4) is constant. I got this result from the book "Outer Circles", which says that you can prove it for n=2 and then use induction. But I could not do it by induction. Instead I found a general expression for calculating trace(a^n*w) given knowledge of trace(a), trace(w) and trace(a*w) by using a difference equation, and then proved it by brute force. 2. The book suggests that for $\frac{\mathop{tr}(A^nBA^{-n}B^{-1})-2}{(\mathop{tr}A^n)^2-4}$ we should write $A$ in standard form. For a $2\times2$ matrix $A$ of determinant $1$ with $(\mathop{tr}A)^2\neq4$ this means a matrix of the form $\begin{pmatrix}t&0\\0&t^{-1}\end{pmatrix}$. Since $\mathop{tr} CD=\mathop{tr}(U^{-1}CUU^{-1}DU)$ for any invertible matrix $U$, we may assume that $A$ is standard and $B$ is any $\textrm{SL}(2,\mathbb C)$ matrix. Thus $A^nBA^{-n}B^{-1}=\begin{pmatrix}t^n&0\\0&t^{-n}\end{pmatrix}\begin{pmatrix}a&b\\c&d\end{pmatrix}\begin{pmatrix}t^{-n}&0\\0&t^n\end{pmatrix}\begin{pmatrix}d&-b\\-c&a\end{pmatrix}=\begin{pmatrix}ad-t^{2n}bc&\dots\\\dots&ad-t^{-2n}bc\end{pmatrix}$. Thus $\mathop{tr}(A^nBA^{-n}B^{-1})-2=2ad-bc(t^{2n}+t^{-2n})-2=-bc(t^{2n}+t^{-2n}-2)$ using $ad-bc=1$. Also $(\mathop{tr}A^n)^2-4=(t^n+t^{-n})^2-4=t^{2n}+t^{-2n}-2$. Therefore $\frac{\mathop{tr}(A^nBA^{-n}B^{-1})-2}{(\mathop{tr}A^n)^2-4}=-bc$, a constant. (Of course this assumes $t^{2n}\neq 1$.) 3. That's much shorter than my proof, and will teach me not to assume that avoiding multiplying matrices and using trace identities instead is always a good plan. I notice you haven't proved it by induction either. Your remark shows the result is not actually true if A has finite order.
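A quick numerical sanity check of the constancy claim (not a proof, and not part of the thread). It assumes NumPy; the random matrices are simply rescaled to have determinant 1, and, following the original poster's notation, lowercase a, b are the matrices while uppercase A, B are their inverses.

```python
# Numerical check: (tr(a^n b a^-n b^-1) - 2) / (tr(a^n)^2 - 4) is the same for every n.
import numpy as np

rng = np.random.default_rng(0)

def random_sl2c():
    """A random 2x2 complex matrix, rescaled so that its determinant is 1."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    return m / np.sqrt(np.linalg.det(m))

a, b = random_sl2c(), random_sl2c()
A, B = np.linalg.inv(a), np.linalg.inv(b)

for n in range(1, 5):
    an, An = np.linalg.matrix_power(a, n), np.linalg.matrix_power(A, n)
    ratio = (np.trace(an @ b @ An @ B) - 2) / (np.trace(an) ** 2 - 4)
    print(n, ratio)   # the same (complex) value for every n, up to rounding error
```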
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8877968788146973, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/44102?sort=newest
## Is the analysis as taught in universities in fact the analysis of definable numbers? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Ten years ago when I studied in the university I had no idea about definable numbers, but I came to this concept myself. My thoughts were as follows: • All numbers are divided into two classes: those which can be unambiguously defined by a limited set of their properties (definable) and such that for any limited set of their properties there is at least one other number which also satisfies all these properties (undefinable). • It is evident that since the number of properties is countable, the set of definable numbers is countable. So the set of undefinable numbers forms a continuum. • It is impossible to give an example of an undefinable number and one researcher cannot communicate an undefinable number to the other. Whatever number of properties he communicates there is always another number which satisfies all these properties so the researchers cannot be confident whether they are speaking about the same number. • However there are probability based algorithms which give an undefinable number in a limit, for example, by throwing dice and writing consecutive numbers after the decimal point. But the main question that bothered me was that the analysis course we received heavily relied on constructs such as 'let's $a$ to be a number that...", "for each $s$ in interval..." etc. These seemed to heavily exploit the properties of definable numbers and as such one can expect the theorems of analysis to be correct only on the set of definable numbers. Even the definitions of arithmetic operations over reals assumed the numbers are definable. Unfortunately one cannot take an undefinable number to bring a counter-example just because there is no example of undefinable number, but still how to know that all those theorems of analysis are true for the whole continuum and not just for a countable subset? - 10 I just wrote a long answer to this question, but it was closed just as I was about to click submit. Can we re-open please? I think that there are a number of very interesting issues here. – Joel David Hamkins Oct 29 2010 at 12:48 4 Meta thread: meta.mathoverflow.net/discussion/729/… . I have also voted to reopen. – Qiaochu Yuan Oct 29 2010 at 13:07 13 I disagree with the continuing votes to close. The topic of definability is mathematically rich and forms the basis of huge parts of model theory, particularly where it connects with algebra and algebraic geometry, such as in the deep work of o-minimality. In the set-theoretic context, various technical meta-mathematical issues become prominent. The question is well-motivated, sincere and has mathematically interesting answers. – Joel David Hamkins Oct 30 2010 at 22:43 2 In particular, I can imagine further technical answers arguing the line that in a model of $V=HOD$, the definable objects indeed form an elementary substructure of the universe, fulfilling the OPs observation that statements of analysis can be viewed as ultimately about definable objects. 
– Joel David Hamkins Oct 30 2010 at 22:43 2 Similar issues with definability lay at the heart of this MO question: mathoverflow.net/questions/34710/… – Joel David Hamkins Oct 30 2010 at 23:27 show 11 more comments ## 2 Answers The concept of definable real number, although seemingly easy to reason with at first, is actually laden with subtle metamathematical dangers to which both your question and the Wikipedia article to which you link fall prey. In particular, the Wikipedia article contains a number of fundamental errors and false claims about this concept. The naive treatment of definability goes something like this: In many cases we can uniquely specify a real number, such as $e$ or $\pi$, by providing an exact description of that number, by providing a property that is satisfied by that number and only that number. More generally, we can uniquely specify a real number $r$ or other set-theoretic object by providing a description $\varphi$, in the formal language of set theory, say, such that $r$ is the only object satisfying $\varphi(r)$. The naive account continues by saying that since there are only countably many such descriptions $\varphi$, but uncountably many reals, there must be reals that we cannot describe or define. But this line of reasoning is flawed in a number of ways and ultimately incorrect. The basic problem is that the naive definition of definable number does not actually succeed as a definition. One can see the kind of problem that arises by considering ordinals, instead of reals. That is, let us suppose we have defined the concept of definable ordinal; following the same line of argument, we would seem to be led to the conclusion that there are only countably many definable ordinals, and that therefore some ordinals are not definable and thus there should be a least ordinal $\alpha$ that is not definable. But if the concept of definable ordinal were a valid set-theoretic concept, then this would constitute a definition of $\alpha$, making a contradiction. In short, the collection of definable ordinals either must exhaust all the ordinals, or else not itself be definable. The point is that the concept of definability is a second-order concept, that only makes sense from an outside-the-universe perspective. Tarski's theorem on the non-definability of truth shows that there is no first-order definition that allows us a uniform treatment of saying that a particular particular formula $\varphi$ is true at a point $r$ and only at $r$. Thus, just knowing that there are only countably many formulas does not actually provide us with the function that maps a definition $\varphi$ to the object that it defines. Lacking such an enumeration of the definable objects, we cannot perform the diagonalization necessary to produce the non-definable object. This way of thinking can be made completely rigorous in the following observations: • If ZFC is consistent, then there is a model of ZFC in which every real number and indeed every set-theoretic object is definable. This is true in the minimal transitive model of set theory, by observing that the collection of definable objects in that model is closed under the definable Skolem functions of $L$, and hence by Condensation collapses back to the same model, showing that in fact every object there was definable. 
• More generally, if $M$ is any model of ZFC+V=HOD, then the set $N$ of parameter-free definable objects of $M$ is an elementary substructure of $M$, since it is closed under the definable Skolem functions provided by the axiom V=HOD, and thus every object in $N$ is definable. These models of set theory are pointwise definable, meaning that every object in them is definable in them by a formula. In particular, it is consistent with the axioms of set theory that EVERY real number is definable, and indeed, every set of reals, every topological space, every set-theoretic object at all is definable in these models. • The pointwise definable models of set theory are exactly the prime models of the models of ZFC+V=HOD, and they all arise exactly in the manner I described above, as the collection of definable elements in a model of V=HOD. In recent work (soon to be submitted for publication), Jonas Reitz, David Linetsky and I have proved the following theorem: Theorem. Every countable model of ZFC and indeed of GBC has a forcing extension in which every set and class is definable without parameters. In these pointwise definable models, every object is uniquely specified as the unique object satisfying a certain property. Although this is true, the models also believe that the reals are uncountable and so on, since they satisfy ZFC and this theory proves that. The models are simply not able to assemble the definability function that maps each definition to the object it defines. And therefore neither are you able to do this in general. The claims made in both in your question and the Wikipedia page on the existence of non-definable numbers and objects, are simply unwarranted. For all you know, our set-theoretic universe is pointwise definable, and every object is uniquely specified by a property. Update. Since this question was recently bumped to the main page by an edit to the main question, I am taking this opportunity to add a link to my very recent paper "Pointwise Definable Models of Set Theory", J. D. Hamkins, D. Linetsky, J. Reitz, which explains some of these definability issues more fully. The paper contains a generally accessible introduction, before the more technical material begins. - 14 @Anixx: No, this is not what Joel was saying. He did not say that it is consistent to "postulate in ZFC that undefinable numbers do not exist". What he was saying was that ZFC cannot even express the notion "is definable in ZFC". And no, this has absolutely nothing to do with constructivism (also please note that even in constructivism uncountable means "not countable", whereas you stated that it means "no practical enumeration" whatever that might mean). – Andrej Bauer Oct 29 2010 at 14:50 18 Joel made a very fine answer, please study it carefully. Joel states that there are models of ZFC such that every element of the model is definable. This does not mean that inside the model the statement "every element is definable" is valid. The statement is valid externally, as a meta-statement about the model. Internally, inside the model, we cannot even express the statement. – Andrej Bauer Oct 29 2010 at 15:13 3 This is off-topic, but: it makes no sense to claim that "constructivist continuum is countable in ZFC sense". What might be the case is that there is a model of constructive mathematics in ZFC such that the continuum is interpreted by a countable set. Indeed, we can find such a model, but we can also find a model in which this is not the case. 
Moreover, any model of ZFC is a model of constructive set theory. You see, constructive mathematics is more general than classical mathematics, and so in particular anything that is constructively valid is also classically valid. – Andrej Bauer Oct 29 2010 at 15:15 3 A minor technical comment on the first bullet point in Joel's answer: To use the minimal transitive model, one needs to assume that ZFC has well-founded models, not just that it's consistent. The main claim there, that there is a pointwise definable model of ZFC, is nevertheless correct on the basis of mere consistency, essentially by the second bullet point plus the consistency of V=HOD relative to ZFC. – Andreas Blass Oct 29 2010 at 16:44 4 Following the comment of Andreas got me thinking: as a topos theorist (which I am not) I would look at the syntactic model of ZFC (the "Lindenabaum algebra") in order to get to both the minimal model and the one in which every set is definable. But I suppose set theorists don't like that kind of model too much because they prefer transitive models that are "really made of sets". Is that so? Historically, where does this tendency come from? – Andrej Bauer Oct 29 2010 at 21:02 show 19 more comments ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. "Definable numbers" are numbers that are definable in terms of first-order logic over set theory. There are perfectly intelligible numbers that cannot be defined in your sense. For example, suppose you have a sequence of definable numbers, $a_n$ that is bounded by a constant $C$. Then $b = \sup a_n$ is a number that is unique and has an unambiguous meaning, but $b$ is not necessarily definable. Each $a_n$ is given by a formula $\phi_n(x)$, but if the formulas are sufficiently different then there is no way to write down a single formula $\phi$ for $b$. The Wikipedia page you link to talks about this issue in terms of Cantor diagonalization. - 4 I'm not clear what you're after. You said that a researcher cannot give an example of an undefinable number, and that one researcher cannot communicate an undefinable number to another. I pointed you towards a counterexample to both claims. You can give a completely explicit family of formulas, $\phi_n$, so explicit that they can be generated by a computer program, that gives you a number that's not definable. We can't say much about that number, but it still have a description that identifies it uniquely. – arsmath Oct 29 2010 at 13:22 4 Arsmath, you haven't actually described a non-definable number, and it is impossible to do so for the reasons expressed in my answer. The basic difficulty is that the conept of definable number is not itself expressible. – Joel David Hamkins Oct 29 2010 at 13:29 1 Either the family of formulas $\phi_n(x)$ finite and can be communicated to the other researcher, then the number b is definable. Or it is infinite, then it cannot be communicated to the other researcher. You actually did not give an example of b since you say nothing about the defining formulas. Thus b is not defined so far. – Anixx Oct 29 2010 at 13:35 17 @Anixx: Your last comment is based on the erroneous assumption that one can define how to pass from a definition to the thing it defines. Arsmath described a situation where a sequence of formulas might be definable (and even computable) but the sequence of real numbers they define is not definable. 
Joel explained why your assumption is wrong. I second Andrej's earlier suggestion that you study Joel's answer carefully, and I add my own suggestion that you assume that Joel meant exactly what he said, not what you think he should have meant or must have meant. – Andreas Blass Oct 29 2010 at 16:50 7 It feels like we are going in circles now. – Andres Caicedo Oct 29 2010 at 23:43 show 13 more comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9513221979141235, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/157621-solving-exponent-within-multiplication-radical-expressions.html
# Thread: 1. ## Solving for an exponent within a multiplication of radical expressions Thumbnail below, problem #99. Solve for k. I don't really know how to write as far as I got. :[ But basically I started by combining like terms, making it the 5th root of 32a^2k+8 = 2a^4 (after the ^ those are part of the exponent). Then from there I took the 5th root of 32, 2, and then divided both sides by two making it: 5th root of a^2k+8 = a^4 From there I don't know what to do, or if the previous steps of combining the exponents were correct. 2. Originally Posted by Kyrie Thumbnail below, problem #99. Solve for k. I don't really know how to write as far as I got. :[ But basically I started by combining like terms, making it the 5th root of 32a^2k+8 = 2a^4 (after the ^ those are part of the exponent). Then from there I took the 5th root of 32, 2, and then divided both sides by two making it: 5th root of a^2k+8 = a^4 From there I don't know what to do, or if the previous steps of combining the exponents were correct. Even after clicking on the thumbnail, it's still too small for me to read. Please type the equation (click on the relevant link in my signature to learn how to format equations). 3. This image should work. http://a.yfrog.com/img843/4405/scan0001c.jpg I couldn't figure out how to get the addition of exponents after looking at the tutorial images. :[ Sorry. But that image, if you click on it, should blow up quite big. 4. $\displaystyle \sqrt[5]{4a^{3k+2}} \sqrt[5]{8a^{6-k}}=2a^4$ Taking both sides to the power of 5 gives $\displaystyle 4a^{3k+2}\times 8a^{6-k}=(2a^4)^5$ $\displaystyle 32a^{(3k+2)+(6-k)}=32a^{20}$ $\displaystyle a^{(3k+2)+(6-k)}=a^{20}$ $\displaystyle (3k+2)+(6-k)=20$ Can you take it from here?
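The remaining step is a linear equation in k. As a small check (a sketch, not from the thread, assuming SymPy is available), one can solve that last equation and then spot-check the original radical equation numerically; the value a = 1.7 below is an arbitrary test point.

```python
from sympy import Eq, solve, symbols

k = symbols('k')
print(solve(Eq((3*k + 2) + (6 - k), 20), k))          # [6]

# Spot-check the original equation at k = 6 for an arbitrary positive a.
a = 1.7
lhs = (4 * a**(3*6 + 2))**(1/5) * (8 * a**(6 - 6))**(1/5)
print(lhs, 2 * a**4)                                   # both ~16.7042
```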
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9644035696983337, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2011/01/18/properties-of-garnir-elements-from-tableaux-1/?like=1&_wpnonce=67418a1937
# The Unapologetic Mathematician ## Properties of Garnir Elements from Tableaux 1 Pick a Young tableau $t$, and sets $A$ and $B$ as we did last time. If there are more entries in $A\uplus B$ than there are in the $j$th column of $t$ — the one containing $A$ — then $g_{A,B}e_t=0$. In particular, if we pick $A$ and $B$ by selecting a row descent, letting $A$ be the entries below the left entry, and letting $B$ be the entries above the right entry, then this situation will hold. As a first step, I say that ${S_{A\uplus B}}^-e_t=0$. That is, if we allow all the permutations of entries in these two sets (along with signs) then everything cancels out. Indeed, let $\sigma\in C_t$ be any column-stabilizing permutation. Our hypothesis on the number of entries in $A\uplus B$ tells us that we must have some pair of $a\in A$ and $b\in B$ in the same row of $\sigma t$. Thus the swap $(a\,b)\in S_{A\uplus B}$. The sign lemma then tells us that ${S_{A\uplus B}}^-\{\sigma t\}=0$. Since this is true for every summand $\{\sigma t\}$ of $e_t=C_t^-\{t\}$, it is true for $e_t$ itself. Now, our assertion is not that this is true for all of $S_{A\uplus B}$, but rather that it holds for our transversal $\Pi$. We use the decomposition $\displaystyle S_{A\uplus B}=\biguplus\limits_{\pi\in\Pi}\pi(S_A\times S_B)$ This gives us a factorization $\displaystyle{S_{A\uplus B}}^-=\Pi^-(S_A\times S_B)^-=g_{A,B}(S_A\times S_B)^-$ And so we conclude that $g_{A,B}(S_A\times S_B)^-e_t=0$. But now we note that $S_A\times S_B\subseteq C_t$. So if $\sigma\in S_A\times S_B$ we use the sign lemma to conclude $\displaystyle\mathrm{sgn}(\sigma)\sigma e_t=\mathrm{sgn}(\sigma)\sigma C_t^-\{t\}=C_t^-\{t\}=e_t$ Thus $(S_A\times S_B)^-e_t=\lvert S_A\times S_B\rvert e_t$, and so $\displaystyle 0=g_{A,B}(S_A\times S_B)^-e_t=g_{A,B}\lvert S_A\times S_B\rvert e_t=\lvert S_A\times S_B\rvert g_{A,B}e_t$ which can only happen if $g_{A,B}e_t=0$, as asserted. This result will allow us to pick out a row descent in $t$ and write down a linear combination of polytabloids that lets us rewrite $e_t$ in terms of other polytabloids. And it will turn out that all the other polytabloids will be “more standard” than $e_t$. ## 6 Comments » 1. [...] for the last couple posts I’ve talked about using Garnir elements to rewrite nonstandard polytabloids — those [...] Pingback by | January 20, 2011 | Reply 2. [...] must be a row descent — we’ve ruled out column descents already — and so we can pick our Garnir element to write as the sum of a bunch of other polytabloids , where in the column dominance order. But [...] Pingback by | January 21, 2011 | Reply 3. There seems to be a typo at the end of the 1st sentence—shouldn’t it be $g_{A,B}e_T=0$ there? Comment by | April 3, 2011 | Reply 4. I’m not sure I understand your question. Comment by | April 3, 2011 | Reply • oops, I meant “at the end of the 2nd sentence” Comment by | April 3, 2011 | Reply 5. oh, I dropped the “=0″, sorry. Comment by | April 3, 2011 | Reply « Previous | Next » ## About this weblog This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”). I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 38, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9050247669219971, "perplexity_flag": "head"}
http://mathhelpforum.com/statistics/150074-need-help-basic-statistics-question.html
# Thread: 1. ## Need help with a basic statistics question The mean price of new homes from a sample of houses is 155,000 with a standard deviation of 15,000. The data follows a bell-shaped distribution. Between what two prices do 95% of the houses fall? 2. Do you know the significance of 95%? If you have sample data that is normally distributed, then 95% of the data would be within two standard deviations of the mean. Can you figure out the prices now? 3. I am confused as to what steps I need to take to figure this out. 4. $\mu = 155000, \sigma = 15000$. 95% of values will fall between $\mu - 2\sigma < X < \mu + 2\sigma$. Go get em! 5. Thanks!!!
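For reference, the arithmetic the hint points to (the empirical rule: roughly 95% of a bell-shaped distribution lies within two standard deviations of the mean) works out as in this trivial sketch:

```python
mu, sigma = 155_000, 15_000
print(mu - 2 * sigma, mu + 2 * sigma)   # 125000 185000
```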
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9465948939323425, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/108698-factoring-trinomials-grouping.html
# Thread: 1. ## factoring trinomials by grouping im having some trouble with these, heres an example of a problem... 3q^2+4q-4 sorry i have to ask on here, my math teacher is the worse math teacher in the world and never explains to us how to do anything, she just expects everyone to already know it. when we ask for help she says if you have to ask then you dont understand and if you dont understand you shouldnt be in her class. thanks for the help 2. Originally Posted by formex im having some trouble with these, heres an example of a problem... 3q^2+4q-4 sorry i have to ask on here, my math teacher is the worse math teacher in the world and never explains to us how to do anything, she just expects everyone to already know it. when we ask for help she says if you have to ask then you dont understand and if you dont understand you shouldnt be in her class. thanks for the help $3q^2 + 6q - 2q - 4 = (3q^2 + 6q) - (2q + 4) = 3q{\color{red}(q + 2)} - 2{\color{red}(q + 2)} = {\color{red}(q + 2)} (3q - 2)$. 3. Originally Posted by mr fantastic $3q^2 + 6q - 2q - 4 = (3q^2 + 6q) - (2q + 4) = 3q{\color{red}(q + 2)} - 2{\color{red}(q + 2)} = {\color{red}(q + 2)} (3q - 2)$. thanks a lot man.. the thing is though im having troubles understanding it, so do u mind if i ask u a few questions on how you got your answer? ok, where did the 6q come from? also, how did 2 get a q beside it? 4. Originally Posted by formex thanks a lot man.. the thing is though im having troubles understanding it, so do u mind if i ask u a few questions on how you got your answer? ok, where did the 6q come from? also, how did 2 get a q beside it? Read this: Algebra Help - Factoring a Quadratic Trinomial by Grouping 5. ahh, excellent man, awesome link, thanks! 6. there wouldn't happen to be a formula to finding out the common monomial would there? for example, if the problem was like this 9v^2 + 5v - 4 (9)(-4)=-36 i'd then need a set of integers that's sum is 5 and product is -36, which is sometimes difficult to find. is there a formula you can use to get the common integers? thanks again 7. Originally Posted by formex there wouldn't happen to be a formula to finding out the common monomial would there? for example, if the problem was like this 9v^2 + 5v - 4 (9)(-4)=-36 i'd then need a set of integers that's sum is 5 and product is -36, which is sometimes difficult to find. is there a formula you can use to get the common integers? thanks again There isn't really a shortcut - it just comes with practice. 3v * 3v will not work with -2*2 because it will be the difference of two squares. As 5 > 0 it follows that the minus sign will go next to the 9v giving (9v-4)(v+1)
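On the follow-up question about finding the pair of integers: there is no closed-form shortcut, but the search is entirely mechanical. Here is a small sketch (not from the thread) of the usual "ac method": find integers p and q with p*q = a*c and p + q = b, then split the middle term and factor by grouping.

```python
# Find a pair (p, q) with p*q == a*c and p + q == b for ax^2 + bx + c.
def split_middle(a, b, c):
    ac = a * c
    for p in range(-abs(ac), abs(ac) + 1):
        if p != 0 and ac % p == 0 and p + ac // p == b:
            return p, ac // p
    return None   # no integer split exists; the trinomial doesn't factor over the integers

print(split_middle(3, 4, -4))   # (-2, 6): 3q^2 - 2q + 6q - 4 = q(3q - 2) + 2(3q - 2) = (3q - 2)(q + 2)
print(split_middle(9, 5, -4))   # (-4, 9): 9v^2 - 4v + 9v - 4 = v(9v - 4) + 1(9v - 4) = (9v - 4)(v + 1)
```

The second line reproduces the factorization (9v - 4)(v + 1) given in the last post above.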
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9610816240310669, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/282462/upper-bound-for-the-absolute-value-of-an-inner-product/282478
# Upper bound for the absolute value of an inner product I am trying to prove the inequality $$\left|\sum\limits_{i=1}^n a_{i}x_{i} \right| \leq \frac{1}{2}(x_{(n)} - x_{(1)}) \sum\limits_{i=1}^n \left| a_{i} \right| \>,$$ where $x_{(n)} = \max_i x_i$ and $x_{(1)} = \min_i x_i$, subject to the condition $\sum_i a_i = 0$. I've tried squaring and applying Samuelson's inequality to bound the distance between any particular observation and the sample mean, but am making very little headway. I also don't quite understand what's going on with the linear combination of observations out front. Can you guys point me in the right direction on how to get started with this thing? - – cardinal Jan 20 at 3:36 I guess this is tagged statistics, because this deals with order statistics. – ACARCHAU Jan 20 at 3:38 ## 2 Answers Hint: $$\left|\sum_i a_i x_i\right| = \frac{1}{2} \left|\sum_i a_i x_i\right| + \frac{1}{2} \left|\sum_i a_i \cdot (-x_i)\right| \>.$$ Now, 1. What do you know about $\sum_i a_i x_{(1)}$ and $\sum_i a_i x_{(n)}$? (Use your assumptions.) 2. Recall the old saw: "There are only three basic operations in mathematics: Addition by zero, multiplication by one, and integration by parts!" (Hint: You won't need the last one.) Use this and the most basic properties of absolute values and positivity to finish this off. - nice, i've added my own solution as well (pretty much the same as yours, minus 1 step). i'd appreciate it if you could check it. – pootieman Jan 20 at 7:05 We have: $$\left|\sum_i a_i x_i\right| = \left|\sum_i a_i(x_i - C)\right|$$ for any constant C. Set $$C:= \frac{1}{2}(x_{(1)} + x_{(n)})$$ Then $$\left| x_{i} - C \right| \leq \frac{1}{2}(x_{(n)} - x_{(1)})$$ and so $$\left|\sum_i a_i x_i\right| = \left | \sum_i (x_{i} - C) a_i \right| \leq \sum_i \left | (x_{i} - C) a_i \right| \leq \sum_i \left | \frac{1}{2}(x_{(n)} - x_{(1)}) a_i \right| = \frac{1}{2}(x_{(n)} - x_{(1)}) \sum_i \left | a_i \right|$$ - Unfortunately, this argument does not work. Note that since the $a_i$ sum to zero, you've argued that $|\sum_i a_i x_i| \leq 0$, which clearly cannot be the case in general. – cardinal Jan 20 at 14:28 @cardinal ok, how about now? – pootieman Jan 20 at 15:05 Unfortunately, the argument in your new edit is the same (in a slight disguise) as the old. You can't claim $|\sum_i a_i x_i| \leq |\sum_i \frac{1}{2}(x_{(n)}-x_{(1)}) a_i|$ based on what you wrote previously. Indeed, $|\sum_i \frac{1}{2}(x_{(n)}-x_{(1)}) a_i| = 0$. It may help to think through what the absolute value is doing and draw a few diagrams and/or look at some numerical examples. Cheers. :) – cardinal Jan 20 at 23:16 @cardinal ok let's try this again... – pootieman Jan 20 at 23:39 If you replace the third display with $|x_i - C| \leq \frac{1}{2}(x_{(n)} - x_{(1)})$ then the argument works. But, make sure you understand why (and, also, why the suggested change is necessary!), especially as concerns the very last inequality in your post. With that edit, I'll be happy to upvote. Cheers. – cardinal Jan 20 at 23:49 show 2 more comments
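A quick numerical sanity check of the inequality (a sketch assuming NumPy; it is of course not a substitute for the proof above). The coefficient vectors are recentred so that they sum to zero, as the hypothesis requires.

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(5):
    a = rng.normal(size=10)
    a -= a.mean()                                   # enforce sum(a_i) = 0
    x = rng.normal(size=10)
    lhs = abs(a @ x)
    rhs = 0.5 * (x.max() - x.min()) * np.abs(a).sum()
    print(lhs <= rhs + 1e-12)                       # True every time
```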
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.946002721786499, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/11/does-99999-1?answertab=active
# Does .99999… = 1? I'm told by smart people that `0.999... = 1` and I believe them but is there a proof that explains why? - 7 Seriously people what's with all the duplicates? Check and see if someone has already given your answer first! – Noah Snyder Jul 20 '10 at 20:28 17 Let me say a word about why 1/3 = .33333... is not obvious at all. What does it mean to say that some number is 1/3? Well division is the operation that undoes multiplication, so it means that it's a number that when you multiply by 3 you get 1. Well what happens when you multiply .3333... times 3? You get .9999... So unless you already know that .99999...=1 then you can't prove that 1/3 = .3333... The fact that .9999...=1 is more basic than the fact that 1/3 = .33333....! – Noah Snyder Jul 20 '10 at 21:32 2 Reopened -d this is a valid question – Casebash Jul 21 '10 at 7:43 9 @Harry tagging this question [cauchy-sequences] is about as useful as tagging the question "What's the best color for a Porsche?" as [combustion-engine]. – balpha♦ Jul 21 '10 at 19:44 3 I think it should be tagged "crank magnet". – Tom Stephens Jul 23 '10 at 2:26 show 6 more comments ## 19 Answers What does it mean when you refer to $.99999...$? Symbols don't mean anything in particular until you've defined what you mean by them. In this case the definition is that you're taking the limit of $.9$, $.99$, $.999$, $.9999$, etc. What does it mean to say that limit is $1$? Well, it means that no matter how small a number $x$ you pick, I can show you a point in that sequence such that all further numbers in the sequence are within distance $x$ of $1$. But certainly whatever number you chose your number is bigger than $10^{-k}$ for some $k$. So I can just pick my point to be the $k$th spot in the sequence. A more intuitive way of explaining the above argument is that the reason $.99999... = 1$ is that their difference is zero. So let's subtract $1.0000... -.99999... = .00000... = 0$. That is, $1.0 -.9 = .1$ $1.00-.99 = .01$ $1.000-.999=.001$, $...$ $1.000... -.99999... = .000... = 0$ - 24 Finite ones do, but infinite ones don't! The whole reason that this question confuses people is that defining what an infinite decimal means is difficult and confusing! – Noah Snyder Jul 20 '10 at 20:23 6 I think it's pretty standard to define infinite decimals as the infinite series where the terms are the individual digits divided by the appropriate power of the base. That is, 0.99999... = 9/10 + 9/10^2 + 9/10^3 + ... – Isaac Jul 20 '10 at 20:57 8 But that's exactly my point, taking the limit of series (and in fact, determining when such a limit exists) is a difficult and confusing concept! – Noah Snyder Jul 20 '10 at 21:30 13 @Doug: I don't understand what you're talking about. Could you try to clarify? What do you mean by an infinite sum if you don't mean the limit of the partial sums? – Noah Snyder Jul 23 '10 at 3:07 6 @Doug: Then your reading is incorrect. – user126 Aug 17 '10 at 22:14 show 9 more comments The argument against this is that 0.99999999... is "somewhat" less than 1. How much exactly? ```` 1 - 0.999999... = ε (0) ```` If the above is true, the following also must be true: ````9 × (1 - 0.999999...) = ε × 9 ```` Let's calculate: ````0.999... × 9 = ─────────── 8.1 81 81 . . . ─────────── 8.999... ```` Thus: ```` 9 - 8.999999... = 9ε (1) ```` But: ```` 8.999999... = 8 + 0.99999... (2) ```` Indeed: ````8.00000000... + 0.99999999... = ──────────────── 8.99999999... ```` Now let's see what we can deduce from `(0)`, `(1)` and `(2)`. ````9 - 8.999999... 
= 9ε because of (2) 9 - 8.999999... = 9 - (8 + 0.99999...) = because of (1) = 9 - 8 - (1 - ε) because of (0) = 1 - 1 + ε = ε. ```` Thus: ````9ε = ε 8ε = 0 ε = 0 1 - 0.999999... = ε = 0 ```` Quod erat demostrandum. Pardon my unicode. - I did my best to avoid `0.00000...`, but this made the calculations not as strikingly simple as I'd have liked to. – badp Jul 20 '10 at 21:10 Why was this voted down? It seems reasonable to this amateur math enjoyer. – ErikE Sep 25 '10 at 7:07 @Emtucifor I guess this sounds like "nonsense" to people that disagree on the basic premise of `0.999... = 1` :) – badp Sep 25 '10 at 7:21 2 This is going to sound bad, but honestly I think there's a basic intelligence threshold that prevents certain people from grasping that infinity is transcendent, and when you multiply an infinitely repeating sequence by 10 you still have an infinitely repeating sequence (without a zero entering somewhere to the right). Really. I think it's at some level an IQ thing. – ErikE Sep 27 '10 at 5:33 8ε = 0 instead of 10ε = 0 – CutieKrait May 9 at 9:37 show 4 more comments What I really don't like about all the above answers, is the underlying assumption that `1/3=0.3333...`, How do you know that?. It seems to me like assuming the something which is already known. A proof I really like is: ````0.9999... × 10 = 9.999999... 0.9999... × (9+1) = 9.999999... [by distribution rule:] 0.9999... × 9 + 0.99999.... × 1 = 9.99999.... 0.9999... × 9 = 9.9999....-0.9999.... = 9 0.9999... × 9 = 9 0.9999... = 1 ```` The only things I need to assume is, that `9.999.... - 0.9999... = 9` and that `0.999... x 10 = 9.999...`. These seems to me intuitive enough to take for granted. The proof is from an old highschool level math book of the Open University in Israel. - 7 You're also assuming that you can multiply 0.9999... by 10 and get 9.9999... (or, rather, that arithmetic with infinite decimals works normally), which is not at all unreasonable to assume. – Isaac Jul 20 '10 at 20:59 @Issac, correct, fixed now. – Elazar Leibovich Jul 21 '10 at 3:58 0.9999... x 9 = 8.999999... – Sklivvz♦ May 3 '11 at 20:51 1 @Sklivvz, true, since 8.999....=9. Did you have trouble understanding the algebra I did? – Elazar Leibovich May 4 '11 at 5:55 3 Sklivvz, but my answer does not rely on the fact that 8.999...=9, did it? Or am I missing something (tried to make it a bit more clear) – Elazar Leibovich May 4 '11 at 5:59 show 1 more comment Given (by long division): $\frac{1}{3} = 0.\bar{3}$ Multiply by 3: $3\times \left( \frac{1}{3} \right) = \left( 0.\bar{3} \right) \times 3$ Therefore: $\frac{3}{3} = 0.\bar{9}$ QED. - This should have been a comment; but I post as an answer anyway. I just want to remark that in spite of all the criticisms, all the statements in the most popular proof in the list above are true. That is, in the real number system, it is indeed true that: 1/3 = 0.333333... It is again true that: 0.33333.... * 3 = 0.99999.... Etc.. I would still stand by the superiority of Noah Snyder's answer, however. That's the one addressing the subtleties. - I do not think that 0.999... = 1 unless we're defining the equality operator in some strange way. Assuming you allow N decimals in the 0.999... term, you can take N as large as you want and it will still be less than 1. There is no N for which 0.999... is not less than 1. Sure, if you take the limit as N approaches infinity then you get 1, but "lim 0.999... as N approaches infinity" is not the same as "0.999..." by itself. 
By itself, there is always some infinitesimal difference. I think the whole point of writing 0.999... is to indicate that it is NOT equal to 1 and to indicate that there is some infinitesimal difference. Otherwise you would just write 1. My opinion is that most people say 0.999... = 1 because that's what they were taught in college and they accepted it without really thinking about it. All the arguments I've heard or read so far are just contrived ways of making the equality hold, but there is always some subtle flaw that takes a long time to point out, that the proponents will not accept anyway (for reasons I will not get into here). From a comment I made: 10x - x != 9. 10x - x would be 8.9999...1. However infinite the extent of 9s is in x, if we multiply it by 10, the nines are shifted left by one position and a zero inserted at the "last" place, and then when you subtract the other number there is a nine subtracted from a zero at the far right. Otherwise we'd have to give 0.999.. some unusual properties like automatically increasing the number of nines when it is multiplied. It would not be just an ordinary number. Maybe that's the problem. 0.999... might just not be an ordinary type number as some people are using it. The current winning response as of 7/22/10 treats 0.999... as being equivalent to lim (0.999...) as the number of decimals goes to infinity. But the sequence and the limit of the sequence are not the same thing. - 8 The meaning of the symbol "..." is that we can't choose such an N, as you propose. The idea is that 0.999... means that the nines go on forever. The arguments that you've heard or read so far constitute mathematical proof - regardless whether or not you believe in them, or find them to appear contrived. – Tom Stephens Jul 23 '10 at 2:17 I see no proof, mathematical, logical, or otherwise, in the above. – Doug Treadwell Jul 25 '10 at 2:56 13 What is "0.999…" according to you if not "lim 0.999...to N digits as N approaches infinity"? Is your "0.999…" a number? If you want to define it as being not a fixed real number but standing for something else (e.g. "0.999… stands for some unspecified number starting with a bunch of 9s, like 0.999999983" or "0.999… stands for 0 followed by some finite number of 9s") then you're right, it won't mean the same thing as 1, but this would be contrary to the usual mathematical convention, and not very useful. – ShreevatsaR Aug 16 '10 at 22:03 5 My first downvote. Not only is this response incorrect, it is incorrigible. The problem is well identified by Shreevatsa. If 0.999... does not represent a limit, what DOES it represent? Is there a single real number represented here? This response wants it to represent any rational number in some small neighbourhood of some unspecified rational number that is slightly smaller than one. – yasmar Nov 12 '10 at 21:04 1 I downvoted it. But I felt a bit guilty since the answer was sort of useful to me. It is very illuminating to learn how people confuse things. I really like the reasoning that $1 - 0.999\ldots = 0.000\ldots1$. – André Caldas Mar 25 '12 at 14:28 show 1 more comment `.999... = 1` because `.999...` is a concise symbolic representation of "the limit of some variable as it approaches one." Therefore, `.999... = 1` for the same reason the limit of x as x approaches 1 equals 1. 
- ok, so based on this method, you could say that any number is equal to another number, because I could argue, that 8.999999 = 9, and then argue that 9.0000= 9.1 and so on so if 8.9999 = 9 and 9 = 9.1 therefore 8.999 must be also equal to 9.1, you can also state that 3=6 as 3=7, so that thinking must be wrong in so many levels :) – Val Jan 24 at 21:47 – Squirtle Mar 5 at 5:37 Okay I burned a lot of reputation points (at least for me) on MathOverflow to gain clarity on how to give some intuition into this problem, so hopefully this answer will be at least be somewhat illuminating. To gain a deeper understanding of what is going on, first we need to answer the question, "What is a number?" There are a lot of ways to define numbers, but in general numbers are thought of as symbols that represent sets. This is easy for things like the natural numbers. So 10 would correspond to the set with ten things -- like a bag of ten stones. Pretty straight forward. The tricky part is that when we consider ten a subset of the real numbers, we actually redefine it. This is not emphasized even in higher mathematics classes, like real analysis; it just happens when we define the real numbers. So what is 10 when constructed in the real numbers? Well, at least with the Dedekind cut version of the real numbers, all real numbers correspond to a set with an infinite amount of elements. This makes 10 under the hood look drastically different, although in practice it operates exactly the same. So let's return to the question: Why is 10 the same as 9.99999? Because the real numbers have this completely surprising quality, where there is no next real number. So when you have two real numbers that are as close together as possible, they are the same. I can't think of any physical object that has this quality, but it's how the real numbers work (makes "real" seem ironic). With integers (bag of stones version) this is not the same. When you have two integers as close to each other as possible they are still different, and they are distance one apart. Put another way, 10 bag of stones are not the same as 9.9999999 but 10 the natural number, where natural numbers are a subset of the real numbers is. The bottom line is that the real numbers have these tricky edge cases that are hard to understand intuitively. Don't worry, your intuition is not really failing you. :) I didn't feel confident answering until I got this Terence Tao link: http://www.google.com/buzz/114134834346472219368/RarPutThCJv/In-the-foundations-of-mathematics-the-standard. - For any digit n = 0,1,2,3,4,5,6,7,8,9 the fraction ````n/9 = 0.nnnnnnnn... (periodically). ```` Therefore 9/9 is equal to 0.999999999.... but also since 9 divided by 9 equals one, ````9/9 = 0.9999999.... = 1 ```` - This is a valid proof to me, but pretty similar to the other division-based answers. There are serial down-voters aroudn here, beware! – Noldorin Jul 21 '10 at 8:36 @Noldorin: doesn't matter so much for CW. Although, in case of a down-vote I'd at least like to know what were wrong with this answer beside a "don't like". But thanks for the warning :7 – Tobias Kienzler Jul 29 '10 at 14:26 Indeed this is true. The underlying reason is that decimal numbers are not unique representations of the reals. (Technically, there does not exist a bijection between the set of all decimal numbers and the reals.) Here's a very simple proof: ````1 / 3 = 0.333... (by long division) => 0.333 * 3 = 0.999... (multiplying each digit by 3) But then we already know 0.333... 
* 3 = 1 Therefore 0.999... = 1 ```` - 13 -1. This is not a proof at all! Why is 1/3=0.333...? Seriously folks, for the private beta, let's try to maintain a little correctiness. – Scott Morrison Jul 20 '10 at 20:03 1 @Scott: Sure it is. You can prove it easily by long division. This is about algorithms for mathematical methods really. – Noldorin Jul 20 '10 at 20:12 2 @Scott: Might help to stop whining and post what you think is the 'correct' answer then. – Noldorin Jul 20 '10 at 20:17 7 Just to nitpick, there is a bijection between the set of decimal expansions and reals because they are sets with the same cardinality. It's just that the natural map taking expansions to real numbers isn't injective. – Simon Nickerson Jul 20 '10 at 21:15 1 @Scott, I see it would not be obvious that 1/3=0.333..., but as by Noldorins comment regarding long division, what would be wrong with this as a proof, if the first line is annotated with 'by long division' ? – Sami Jul 21 '10 at 4:58 show 2 more comments If you take two real numbers `x` and `y` then there per definition of the real number `z` for which `x < z < y` or `x > z > y` is true. For `x = 0.99999...` and `y = 1` you can't find a `z` and therefore `0.99999... = 1`. - Assuming: 1. infinite decimals are series where the terms are the digits divided by the proper power of the base 2. the infinite geometric series a + a * r + a * r^2 + a * r^3 + ... has sum a/(1 - r) as long as |r|<1 0.99999... = 9/10 + 9/10^2 + 9/10^3 + ... This is the infinite geometric series with first term a = 9/10 and common ratio r = 1/10, so it has sum (9/10) / (1 - 1/10) = (9/10) / (9/10) = 1. - 2 Your method is a simple way of converting the decimal representation of a rational number into a fraction, e.g. $0.150150150...=\sum_{n\geq 1}\frac{150}{10^{3n}}=\frac{0.150}{1-10^{-3}}=\frac{50}{333}$ – Américo Tavares Aug 16 '10 at 22:02 ````x = 0.999... 10x = 9.999... 10x - x = 9.999... - 0.999... = 9 -> 9x = 9 -> x = 1 thus, 0.999... = 1 ```` - This is the most intuitive argument, although some might say "But 10x-x isn't 9, because there's going to be a mismatch all the way to the right" - Noah's more complex deals with that. – Charles Stewart Jul 21 '10 at 10:55 10x - x != 9. 10x - x would be 8.9999...1. However infinite the extent of 9s is in x, if we multiply it by 10, the nines are shifted left by one position and a zero inserted at the "last" place, and then when you subtract the other number there is a nine subtracted from a zero at the far right. Otherwise we'd have to give 0.999.. some unusual properties like automatically increasing the number of nines when it is multiplied. It would not be just an ordinary number. Maybe that's the problem. 0.999... might just not be an ordinary type number as some people are using it. – Doug Treadwell Jul 23 '10 at 2:19 3 @Doug It's incorrect to talk about the "number of nines" because infinity minus a number = infinity. Infinity is transcendent. It means "uncountable". If you take infinity and slide it left a little bit, it's still infinity long. – ErikE Sep 25 '10 at 7:00 Suppose this was not the case, i.e. `0.9999... != 1`. Then `0.9999... < 1` (I hope we agree on that). But between two distinct real numbers, there's always another one (say `x`) in between, hence `0.9999... < x < 1`. The decimal representation of `x` must have a digit somewhere that is not `9` (otherwise `x == 0.9999...`). But that means it's actually smaller – `x < 0.9999...`, contradicting the definition of `x`. 
Thus, the assumption that there's a number between `0.9999...` and `1` is false, hence they're equal. - 10 This proof also relies on the assumption that every real number can be represented by a (potentially infinite) decimal, which might or might not be accepted by someone asking the original question. – bryn Jul 22 '10 at 2:21 If you want to go the calculus route, then basically you're gonna follow this pattern and use mathy magic and limits to follow it up to its logical conclusion: 0.9 0.99 0.999 0.9999 And so on. This pattern is a sum of 9*10^-i from i=1 to i=infinity. Every iteration, the difference between the pattern and 1 is a bit less. So at i=infinity, the difference is so small that it is literally zero. - 1 This is closest to the proof I had in mind. First, $$.999... = 9 \sum_{k=1}^{\infty} 10^{-k}$$ This seems rather noncontroversial. Let's call this series $S$. This is a geometric series with common ratio $1/10$. Therefore, $$S = 9 \cdot \frac{1/10}{1 - 1/10} = 9 \cdot \frac{1}{9} = 1$$ What's wrong with this proof? – user41583 Sep 18 '12 at 22:40 Here's a collection of proofs: - 1 This exhibits why link-only answers are not good. This link has died. – robjohn♦ Mar 20 at 10:52 You can visualise it by thinking about it in infinitesimals. The more 9's you have on the end of 0.999, the closer you get to 1. When you add an infinite number of 9's to the decimal expansion, you are infinitely close to 1 (or an infinitesimal distance away). And this isn't a rigorous proof, just an aid to visualisation of the result. - The proof I've always seen was that if 1/3 == 0.333... then 0.333... x 3 must be equal to 1, but at the same time, calculating it on a digit level gives 0.999... - 9 -1. Not a proof, see my comments on all the other duplicate answers. – Scott Morrison Jul 20 '10 at 20:03 2 @Scott Also, that proof happens to appear on the Wikipedia page for 0.999... too, so I'd suggest you "fix it" there, after you've provided your own answer. – Rowland Shaw Jul 20 '10 at 20:11 5 Whatever proof you could give that 1/3 = .3333... would also prove directly that 1 = .999999... – Noah Snyder Jul 20 '10 at 20:24 4 But how do you know that long division works? How do you know the pattern goes on forever? You can do long division for 1/1 and get .999999... Try it out! – Noah Snyder Jul 20 '10 at 20:33 8 This isn't just philosophical, why isn't 1/3 = ...33333367. (That's going infinitely to the left and with the decimal point at the right. Decimals going infinitely to the left work just as well formally as ones going to the right.) – Noah Snyder Jul 20 '10 at 22:18 show 3 more comments well the simplest is `3x(1/3) = 1` where `1/3 = 0.33·` - 12 -1. Uhh, everyone knows this answer is nonsense, right? And is just upvoting it for a lark? – Scott Morrison Jul 20 '10 at 19:55
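As a small illustration of the limit/geometric-series arguments used in several answers above (a sketch using Python's `fractions` module, so the arithmetic is exact rather than floating point): the gap between 1 and the N-digit truncation 0.99...9 is exactly 10^-N, which can be made smaller than any positive number by taking N large enough.

```python
from fractions import Fraction

for N in (1, 5, 20):
    s = sum(Fraction(9, 10**k) for k in range(1, N + 1))   # 0.99...9 with N nines
    print(N, 1 - s)                                        # exactly 1/10**N
```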
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.946603000164032, "perplexity_flag": "middle"}
http://www.citizendia.org/Linear_particle_accelerator
A linear particle accelerator (also called a linac) is an electrical device for the acceleration of subatomic particles. This sort of particle accelerator has many applications, from the generation of X-rays in a hospital environment, to an injector into a higher energy synchrotron at a dedicated experimental particle physics laboratory. X-radiation (composed of X-rays) is a form of Electromagnetic radiation. A synchrotron is a particular type of cyclic Particle accelerator in which the magnetic field (to turn the particles so they circulate and the electric field (to accelerate Particle physics is a branch of Physics that studies the elementary constituents of Matter and Radiation, and the interactions between them The design of a linac depends on the type of particle that is being accelerated: electron, proton or ion. The electron is a fundamental Subatomic particle that was identified and assigned the negative charge in 1897 by J The proton ( Greek πρῶτον / proton "first" is a Subatomic particle with an Electric charge of one positive An ion is an Atom or Molecule which has lost or gained one or more Valence electrons giving it a positive or negative electrical charge They range in size from a cathode ray tube to the 2-mile long Stanford Linear Accelerator Center in California. The cathode ray tube (CRT is a Vacuum tube containing an Electron gun (a source of electrons and a Fluorescent screen with internal or The Stanford Linear Accelerator Center ( SLAC) is a United States Department of Energy National Laboratory operated by Stanford University under California ( is a US state on the West Coast of the United States, along the Pacific Ocean. ## Construction and operation A linear particle accelerator consists of the following elements: • The particle source. The design of the source depends on the particle that is being accelerated. Electrons are generated by a cold cathode, a hot cathode, a photocathode, or RF ion sources. The electron is a fundamental Subatomic particle that was identified and assigned the negative charge in 1897 by J A cold cathode is an element used within some Nixie tubes Gas discharge lamps Gas filled tubes and Vacuum tubes Cold cathodes do not Hot cathode is also a name for a Hot filament ionization gauge, a vacuum measuring device In a Photomultiplier or Phototube, a photocathode is a negatively charged Electrode coated with a Photosensitive compound An RF antenna ion source (or Radio frequency antenna ion source is an internal multi- Cusp design that can produce a Particle beam of about ~30 to Protons are generated in an ion source, which can have many different designs. The proton ( Greek πρῶτον / proton "first" is a Subatomic particle with an Electric charge of one positive An ion source is an electro-magnetic device that is used to create charged particles If heavier particles are to be accelerated, (e. g. uranium ions), a specialized ion source is needed. Uranium (jʊˈreɪniəm is a silvery-gray Metallic Chemical element in the An ion is an Atom or Molecule which has lost or gained one or more Valence electrons giving it a positive or negative electrical charge An ion source is an electro-magnetic device that is used to create charged particles • A high voltage source for the initial injection of particles. • A hollow pipe vacuum chamber. The length will vary with the application. If the device is used for the production of X-rays for inspection or therapy the pipe may be only 0. 5 to 1. 5 meters long. 
If the device is to be an injector for a synchrotron it may be about ten meters long. If the device is used as the primary accelerator for nuclear particle investigations, it may be several thousand meters long.
• Within the chamber, electrically isolated cylindrical electrodes whose length varies with the distance along the pipe. The length of each electrode is determined by the frequency and power of the driving power source and by the nature of the particle to be accelerated, with shorter segments near the source and longer segments near the target. The mass of the particle has a large effect on the length of the cylindrical electrodes; for example, an electron is considerably lighter than a proton and so accelerates very quickly (think of a concrete ball and a tennis ball: it is easier to accelerate the tennis ball from rest), so it will generally require a much longer section of cylindrical electrodes. This comes about because the kinetic energy ($\frac{1}{2}mv^2$) equals the energy gained by the electron as it is accelerated through the potential difference, usually in the region of 5 kV. (A rough sketch of this electrode-length calculation is given just after this list.)
• One or more sources of radio frequency energy, used to energize the cylindrical electrodes. A very high power accelerator will use one source for each electrode. The sources must operate at precise power, frequency and phase, appropriate to the particle type to be accelerated, to obtain maximum device power.
• An appropriate target. If electrons are accelerated to produce X-rays then a water-cooled tungsten target is used. Various target materials are used when protons or other nuclei are accelerated, depending upon the specific investigation. For particle-to-particle collision investigations the beam may be directed to a pair of storage rings, with the particles kept within the ring by magnetic fields. The beams may then be extracted from the storage rings to create head-on particle collisions. As the particle bunch passes through the tube it is unaffected (the tube acts as a Faraday cage), while the frequency of the driving signal and the spacing of the gaps between electrodes are designed so that the maximum voltage differential appears as the particle crosses the gap. This accelerates the particle, imparting energy to it in the form of increased velocity. At speeds near the speed of light, the incremental velocity increase will be small, with the energy appearing as an increase in the mass of the particles. In portions of the accelerator where this occurs, the tubular electrode lengths will be almost constant.
• Additional magnetic or electrostatic lens elements may be included to ensure that the beam remains in the center of the pipe and its electrodes.
• Very long accelerators may maintain a precise alignment of their components through the use of servo systems guided by a laser beam.
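To make the electrode-length point above concrete, here is a minimal sketch (my own illustration, not part of the original article) of the classic drift-tube estimate: a particle must stay inside each tube for half an RF period while the field reverses, so a tube's length is roughly $L \approx v/(2f)$, with the speed $v$ obtained from the energy gained so far. The RF frequency, gap voltage and the choice of a non-relativistic proton below are assumed values for illustration only.

```python
# Illustrative sketch only (assumed parameters): drift-tube lengths for a
# non-relativistic proton, using L ~ v / (2 f) for each tube.
import math

q = 1.602e-19      # proton charge, C
m = 1.673e-27      # proton mass, kg
f = 25e6           # assumed RF frequency, Hz
V_gap = 50e3       # assumed accelerating voltage per gap, V

energy = 0.0
for gap in range(1, 6):
    energy += q * V_gap                # kinetic energy after this gap, J
    v = math.sqrt(2.0 * energy / m)    # non-relativistic speed
    L = v / (2.0 * f)                  # tube must hide the particle for half an RF period
    print(f"after gap {gap}: v = {v:.2e} m/s, tube length ~ {100 * L:.1f} cm")
```

The lengths grow roughly as the square root of the number of gaps crossed, which is why the segments are shorter near the source and longer near the target; for electrons, which approach the speed of light after only a few gaps, the lengths quickly become almost constant, as the article notes.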
## Types of Accelerator

[Figure: The Stanford superconducting linear accelerator, housed on campus below the Hansen Labs until 2007. This facility is separate from SLAC.]

The acceleration of the particles can be made with three general methods:

• Electrostatically: The particles are accelerated by the electric field between two different fixed potentials. Examples include the Van de Graaff, Pelletron and Tandem accelerators.
• Induction: A pulsed voltage is applied around magnetic cores. The electric field produced by this voltage is used to accelerate the particles.
• Radio Frequency (RF): The electric field component of radio waves accelerates particles inside a partially closed conducting cavity acting as an RF cavity resonator. Examples include the travelling wave, Alvarez, and Wideroe cavity type accelerators.

## Advantages

Linacs of appropriate design are capable of accelerating heavy ions to energies exceeding those available in ring-type accelerators, which are limited by the strength of the magnetic fields required to maintain the ions on a curved path. High power linacs are also being developed for production of electrons at relativistic speeds, required since fast electrons traveling in an arc will lose energy through synchrotron radiation; this limits the maximum power that can be imparted to electrons in a synchrotron of given size. Linacs are also capable of prodigious output, producing a nearly continuous stream of particles, whereas a synchrotron will only periodically raise the particles to sufficient energy to merit a "shot" at the target. (The burst can be held or stored in the ring at energy to give the experimental electronics time to work, but the average output current is still limited.) The high density of the output makes the linac particularly attractive for use in loading storage ring facilities with particles in preparation for particle-to-particle collisions. The high mass output also makes the device practical for the production of antimatter particles, which are generally difficult to obtain, being only a small fraction of a target's collision products. These may then be stored and further used to study matter-antimatter annihilation.
As there are no primary bending magnets, the cost of an accelerator is reduced. Medical grade linacs accelerate electrons using a tuned-cavity waveguide in which the RF power creates a standing wave. Some linacs have short, vertically mounted waveguides, while higher energy machines tend to have a horizontal, longer waveguide and a bending magnet to turn the beam vertically towards the patient. Medical linacs utilise monoenergetic electron beams between 4 and 25 MeV, giving an x-ray output with a spectrum of energies up to and including the electron energy when the electrons are directed at a high-density (such as tungsten) target. The electrons or x-rays can be used to treat both benign and malignant disease. The reliability, flexibility and accuracy of the radiation beam produced have largely supplanted cobalt therapy as a treatment tool. In addition, the device can simply be powered off when not in use; there is no source requiring heavy shielding.

## Disadvantages

• The device length limits the locations where one may be placed.
• A great number of driver devices and their associated power supplies are required, increasing the construction and maintenance expense of this portion.
• If the walls of the accelerating cavities are made of normally conducting material and the accelerating fields are large, the wall resistivity converts electric energy into heat quickly. On the other hand, superconductors have various limits and are too expensive for very large accelerators. Therefore, high energy accelerators such as SLAC, still the longest in the world (in its various generations), are run in short pulses, limiting the average current output and forcing the experimental detectors to handle data coming in short bursts.

## Wake fields

The electrons from the klystron build up the driving field. The driven particles also generate a field, called the wakefield. For strong wakefields high frequencies are used, which also allow higher field strengths. A small dielectrically loaded waveguide or coupled cavity waveguides are used instead of large waveguides with small drift tubes. At the end, all fields are absorbed by a dummy load or by cavity losses.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9184761047363281, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/18678/linear-independency-before-and-after-linear-transformation?answertab=votes
# Linear independency before and after Linear Transformation If we are given some linearly dependent vectors, would the T of those vectors necessarily be dependent (given a transformation from $R^n$ to $R^p$)? And if we are given some linearly independent vectors, would T of those vectors necessarily be independent (given a transformation from $R^n$ to $R^p$)? The answers are presumably no and no, but I am struggling to figure out why. Thanks! - The only set whose linear independence is preserved under all linear transformations is the empty set. – Jonas Meyer Jan 24 '11 at 2:18 ## 4 Answers If a set of vectors is dependent, there is a non-trivial combination of some of them that equals 0: $$a_1v_1+\dots+a_n v_n=0,$$ where not all the scalars $a_i$ are $0$. Linearity of $T$ should give you at once that the $Tv_i$ are also linearly dependent (as witnessed by the same $a_i$). Linear independence, on the other hand, does not need to be preserved. For example, consider the linear transformation that maps all the vectors to 0. Now, under some additional conditions, a linear transformation may preserve independence. For example, suppose that $T$ is injective (i.e., the only solution to $Tv=0$ is $v=0$). Then $T$ preserves linear independence: Suppose that $Tv_1,\dots,Tv_n$ are dependent, so there are scalars $b_1,\dots,b_n$, not all zero, such that $$b_1Tv_1+\dots+b_nTv_n=0.$$ By linearity, you get $T(b_1v_1+\dots+b_nv_n)=0$ and, if $T$ is injective, then in fact $b_1v_1+\dots+b_nv_n=0$, so the $v_i$ are dependent. (Or, if you prefer: If the $v_i$ are independent, and we have $b_1Tv_1+\dots+b_nTv_n=0$, then as above we get $b_1v_1+\dots+b_nv_n=0$, and independence gives us that $b_1=\dots=b_n=0$, so the $Tv_i$ are independent.) Injectivity of $T$ is the only way to ensure that any independent set is mapped to an independent set. However, even if $T$ is not injective, there may be some independent sets whose independence is preserved. For example, consider the linear transformation $T:{\mathbb R}^3\to{\mathbb R}^2$ given by $T(x,y,z)=(x,y)$. Then $T$ maps both $(1,0,0)$ and $(1,0,1)$ to $(1,0)$. The original vectors are independent, the resulting vectors obviously are not (they coincide!). But, on the other hand, $T$ preserves the linear independence of any two (independent) vectors whose last coordinate is 0. - Just to add to Arturo Magidin's answer, if you are considering sets of vectors rather than lists, then both statements are false: Consider $T:\mathbb{R}^2 \to \mathbb{R}, \: T(x,y) = x+y,$ and sets $A = \{(2,0),(0,2),(1,1)\}$ and $B = \{(1,-1)\}$. Then $A$ is linearly dependent, but $T(A) = \{2\}$ is linearly independent; and $B$ is linearly independent, but $T(B) = \{0\}$ is linearly dependent. - Suppose $v_1,\cdots v_n$ are linearly independent and T is a linear transformation. Suppose ker(T) only intersects linear span $W$ of $v_1, \cdots v_n$ only at $\{0\}$. Then T preserves the linear independence of $v_1, \cdots v_n$. This condition is necessary and sufficient. - You want to be a bit careful with the statements; the main difficulty lies in how you deal with collections of sets that include repetitions. Most of the time, when we think about vectors and vector spaces, a list of vectors that includes repetitions is considered to be linearly dependent, even though as a set it may technically not be. For example, in $\mathbb{R}^2$, the list $\mathbf{v}_1=(1,0)$, $\mathbf{v}_2=(1,0)$ is considered linearly dependent, because $\mathbf{v}_1 - \mathbf{v}_2 = \mathbf{0}$. 
But as sets, $\{\mathbf{v}_1,\mathbf{v}_2\} = \{(1,0)\} = S$, you have that $S$ is linearly independent (it contains a single element and the element is nonzero). If you are considering lists, and repetitions are allowed (and make a list linearly dependent), then Andres's answer is completely correct. If you are considering sets, and repetitions are an issue, then the problem is much more difficult. I suspect this is not the case, since it is almost never the case; but you might want to double check (and try to figure how you can have a linearly dependent set map to a set of vectors that is linearly independent when considered as a set, where repetitions are ignored). -
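As a concrete numerical companion to the answers above (my own addition, not part of the thread), the projection example $T(x,y,z)=(x,y)$ can be checked directly with NumPy: the rank computations show an independent pair collapsing to a dependent pair, while a dependent list stays dependent under any linear map because the same coefficients still give zero.

```python
# Numeric illustration of the answers above (my own sketch).
import numpy as np

T = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])          # the projection R^3 -> R^2, T(x, y, z) = (x, y)

v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 1.0])
V = np.column_stack([v1, v2])
print(np.linalg.matrix_rank(V))          # 2: v1, v2 are independent
print(np.linalg.matrix_rank(T @ V))      # 1: their images coincide, so they are dependent

# A dependent list stays dependent: the same coefficients still give zero.
w1, w2, w3 = np.array([1.0, 2.0, 0.0]), np.array([2.0, 4.0, 0.0]), np.array([0.0, 0.0, 1.0])
coeffs = np.array([2.0, -1.0, 0.0])      # 2*w1 - 1*w2 + 0*w3 = 0
W = np.column_stack([w1, w2, w3])
print(W @ coeffs, (T @ W) @ coeffs)      # both are zero vectors
```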
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 51, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9524877071380615, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/83144-path-connectedness.html
# Thread:

1. ## path connectedness

This is a pretty trivial question (I think?). I just want to see anyone do it with a bit of detail to determine if I am on the right track. thanks!

Let $X=\{a,b\}$ and assume that $X$ has the topology $T=\{\emptyset, \{a\}, X\}$. Show that $X$ is path connected in this topology.

2. Try the map $p:[0,1]\to X,$ $p(t)=a$ if $t\ne1$ and $p(1)=b.$ Then $p^{-1}(\emptyset)=\emptyset,$ $p^{-1}(\{a\})=[0,\,1),$ and $p^{-1}(X)=[0,\,1],$ which are all open in $[0,\,1].$
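To round the argument off, here is a short sketch (my own wording, not from the thread) of the remaining steps:

```latex
% Completion sketch (my own wording): continuity of p and the remaining paths.
\begin{itemize}
  \item Since $p^{-1}(\emptyset)$, $p^{-1}(\{a\})$ and $p^{-1}(X)$ are all open in
        $[0,1]$, the map $p$ is continuous, hence a path from $p(0)=a$ to $p(1)=b$.
  \item The reverse map $q(t)=p(1-t)$ is then a path from $b$ to $a$, and constant
        maps give paths from $a$ to $a$ and from $b$ to $b$.
  \item Every pair of points of $X$ is therefore joined by a path, so $X$ is
        path connected.
\end{itemize}
```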
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9363536238670349, "perplexity_flag": "head"}
http://mathhelpforum.com/statistics/130419-question-about-calculating-mean.html
# Thread:

1. ## Question about calculating a mean

I need to calculate the mean with the following data set. But for some reason I am completely blank. I know it's supposed to be easy, but when I tried solving it I got an answer that wasn't possible. Please help if you can, thank you.

Fifty college students selected at random from the student directory were asked to respond to the question, "What is the ideal number of children for a family to have?" The responses are shown in the following table.

| Ideal Number of Children | Frequency |
|--------------------------|-----------|
| 0 | 4 |
| 1 | 9 |
| 2 | 25 |
| 3 | 8 |
| 4 | 3 |
| 5 | 1 |

I need to calculate the sample mean Y, the standard deviation, the standard error, a 95% confidence interval for the mean of the population, and the probability for the mean to fall within an estimated range of a ±0.1 margin of error. You don't need to do all of that for me because I have the formulas, but I know I just need the basic mean to be able to do all that, and for some reason I can't solve for the mean. Can you help me out? And if possible, tell me how you did it? Thank you so much.

PS: I put this in basic probability and statistics because it seems that calculating a mean would be rather elementary, but if you think it would be better for me to put it in the advanced forum, please let me know.

2. The mean will be the sum of (number of children times frequency) divided by the number of samples:

$\frac{0\times 4+ 1\times 9+ \dots}{n}$

3. Thank you!! I see what I did wrong! I was dividing by 6 as the N instead of 50 (since there are six values). So the answer is 2, whereas before I was getting 16.6 which didn't make sense. My bad! Thanks so much! If I have any more questions I'll ask here.
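A small computational sketch (my own addition, not part of the thread) of the quantities the original poster lists, computed straight from the frequency table; the 1.96 multiplier assumes the usual normal-approximation 95% interval.

```python
# Sketch only (not from the thread): summary statistics from the frequency table.
import math

values = [0, 1, 2, 3, 4, 5]
freqs  = [4, 9, 25, 8, 3, 1]
n = sum(freqs)                                        # 50 students

mean = sum(v * f for v, f in zip(values, freqs)) / n  # 2.0
ss = sum(f * (v - mean) ** 2 for v, f in zip(values, freqs))
sd = math.sqrt(ss / (n - 1))                          # sample standard deviation, ~1.05
se = sd / math.sqrt(n)                                # standard error, ~0.15
ci = (mean - 1.96 * se, mean + 1.96 * se)             # approximate 95% CI for the mean

print(mean, sd, se, ci)
```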
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9562543630599976, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/33513/wavefunction-collapse-and-uncertainty-principle?answertab=oldest
# wavefunction collapse and uncertainty principle

1. We all know that the wavefunction collapses when it is observed. The uncertainty principle states that $\sigma_x \sigma_p \geq \frac {\hbar}{2}$. When the wavefunction collapses, doesn't $\sigma_x$ become $0$, since we will then know the location of the particle? Or does the standard deviation just become smaller?
2. After collapse occurs, what happens to the particle? Does the particle resurrect into a wavefunction form?
3. What can be an observer that triggers wavefunction collapse? (electron wavefunction does not collapse when meeting with electrons; but some macroscopic objects seem to become observers....)
4. What happens to the energy of a particle/wave packet after the collapse?

## 1 Answer

The terminology of collapse of the wavefunction is an unfortunate one. Take an oscillating AC line and use a scope to measure it and display it. Is the AC 50 Hz wavefunction collapsed because we observe it on the scope? The AC wave function is just a mathematical description of the voltage and current on the line and allows us to calculate the amplitude and time dependence of the energy it carries.

An equally unfortunate concept is the matter wave. The particle is not a continuous soup distributing its matter in space and time the way an AC voltage or other classical wave does. You will never find 1/28th of a particle; it is either there in your measuring instruments or it is not, and it is governed by a probability wave mathematical description, not a "matter wave". Even more so, the wavefunction manifestation of a particle does not collapse when we measure it the way a balloon collapses when pierced by a pin, because it is just a mathematical description of the probability to find a particle at a particular (x,y,z) with a particular (p_x,p_y,p_z) within the constraints of the Heisenberg Uncertainty Principle.

> When the wavefunction collapses, doesn't $\sigma_x$ become $0$, as we will know the location of the particle? Or does the standard deviation just become smaller?

We know the location of the particle at that specific coordinate where we had our measuring instrument, with the specific momentum that our instruments measured, within the instrument errors. The probability of finding it there after the fact is 1. It is the nature of all probability distributions that after the detection they become one. Example: the probability I will die in the next ten years is 50%. At the instant of my death the probability is one that I am dead.

$\sigma_x$ is not a standard deviation in the error sense. $\sigma_x \sigma_p \geq \frac{\hbar}{2}$ says that if I want to know the location of my particle within a region about the point $x$ with uncertainty/accuracy $\sigma_x$, the $\sigma_p$ I can measure simultaneously is constrained to be within an uncertainty that follows the constraint $\sigma_x \sigma_p \geq \frac{\hbar}{2}$.

> Does the particle resurrect into a wavefunction form?

The particle keeps its dual nature of particle or probability wave according to the momentum it still carries, and will be appropriately detected as a particle or a probability wave by the next experimenter. It is not a balloon that has been destroyed by the measurement.

> What can be an observer that triggers wavefunction collapse? (electron wavefunction does not collapse when meeting with electrons; but some macroscopic objects seem to become observers....)
In principle, any interaction of a particle that changes its momentum and position is an observer, except that some interactions are quantum mechanical because of the HUP and the nature of the interaction, while some are macroscopic manifestations in our instruments of the passage of a particle or of the probability wave of a particle. We usually call observers the classical macroscopic detectors, be they people or instruments. At the microcosm quantum level we have interactions governed by the probability wave functions.

> What happens to the energy of a particle/wave packet after the collapse?

Energy and momentum are conserved absolutely, so it will depend on what sort of detection of the particle took place. Some will be carried off by the particle if it has not been absorbed into the detector, as, for example, the particles in a bubble chamber photograph, which continually interact with the transparent liquid of the bubble chamber. In this case a tiny bit of the energy is taken by kicked-off electrons (the first detector being an atom of the liquid, the final detector the photographic plate), which show by the ionisation the passage of the particle, which is certainly not idiotically "collapsing".
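As a small numerical companion to the uncertainty discussion above (my own sketch, not part of the answer): for a Gaussian wave packet the product $\sigma_x \sigma_p$ sits right at the Heisenberg lower bound $\hbar/2$. The grid and packet width below are arbitrary choices, and units with $\hbar = 1$ are assumed.

```python
# Sketch only: sigma_x * sigma_p for a Gaussian wave packet, in units hbar = 1.
import numpy as np

hbar = 1.0
x = np.linspace(-20.0, 20.0, 4096)
dx = x[1] - x[0]
sigma = 1.7                                   # assumed packet width
psi = np.exp(-x**2 / (4.0 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalise

prob_x = np.abs(psi)**2 * dx
mean_x = np.sum(x * prob_x)
sig_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x))

p = hbar * 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)   # momentum grid
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= prob_p.sum()                        # discrete momentum distribution
mean_p = np.sum(p * prob_p)
sig_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p))

print(sig_x * sig_p, "vs hbar/2 =", hbar / 2.0)   # ~0.5
```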
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9254363179206848, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/52969/check-my-proof-that-ab-1-b-1-a-1
# Check my proof that $(ab)^{-1} = b^{-1} a^{-1}$ The following question is problem Pinter's Abstract Algebra. And to put things in context: $G$ is a group and $a, b$ are elements of $G$. I want to show $(ab)^{-1}$ = $b^{-1}a^{-1}$. I originally thought of proving the fact in the following manner: \begin{align*} (ab)^{-1}(ab) &= e \newline (ab)^{-1}(ab)(b^{-1}) &= (e)(b^{-1}) \newline (ab)^{-1}(a)(bb^{-1}) &= (b^{-1}) \newline (ab)^{-1}(a)(e) &= (b^{-1}) \newline (ab)^{-1}(a) &= (b^{-1}) \newline (ab)^{-1}(a)(a^{-1}) &= (b^{-1})(a^{-1}) \newline (ab)^{-1}(e) &= (b^{-1})(a^{-1}) \newline (ab)^{-1} &= (b^{-1})(a^{-1}) \newline \end{align*} I know this may seem extremely inefficient to most, and I know there is a shorter way. But would this be considered a legitimate proof? Thanks in advance! - Looks ok to me. Ideally I would like to see the implication arrows between the various equations as well as descriptions of exactly what you did to the previous equation to get the next, but I'm a bit old-fashioned in this respect. If this style of presentations is fine with your teacher, then you have nothing to worry. – Jyrki Lahtonen Jul 21 '11 at 21:21 It seems likely that this is a group theory question - it would be good to have this stated. – Mark Bennet Jul 21 '11 at 21:28 Ah - I implicitly assumed that it was a group as well. – mixedmath♦ Jul 21 '11 at 21:37 @Mark: Thanks for the advice. The post has been edited which makes it clear that this question is a group theory question. – Student Jul 21 '11 at 22:38 ## 3 Answers Your way is absolutely fine. As you note, there is in fact an easier way. It would be enough to show that the element $c$ such that $(ab)c = e$ is in fact $c = b^{-1} a^{-1}$: $$\begin{align} (ab)b^{-1} a^{-1} &= a (b b^{-1}) a^{-1} \\ &= a e a^{-1} \\ &= a a^{-1} \\ &= e. \end{align}$$ - 1 If there is a question about $ab$ being invertible e.g. are we talking about a group here (as seems likely), there is also one about the uniqueness of the inverse. If you have a multiplication table where the product of any pair of elements is the identity, for example, the inverse would not be unique. Also $ab$ may be invertible without either $a$ or $b$ being invertible. If it's a group we don't have these problems. – Mark Bennet Jul 21 '11 at 21:27 @Mark: Good Point. – JavaMan Jul 21 '11 at 21:34 @Mark,JavaMan When doing group theory for the first time-that's how most students begin a serious study of algebra,with groups-you don't have to assume the inverse and/or identity is unique. When I first learned it from Nick Metas, his definition of a group required only that it possess AT LEAST ONE left identity and each element had at least one left inverse.We then proceeded to tediously prove the uniqueness of the left and the right identity(inverse) and then ultimately uniqueness. The advantage of JavaMan's approach is it can work within the weaker group axioms.So can Jon's,in fact. – Mathemagician1234 Dec 27 '11 at 7:00 These questions are standardly done by going straightforward, definition-based. So for the element $ab$ we seek the element $x$ s.t. $abx = xab = e$. Sure. We know such an element is unique (if not - prove this too). So let's just do it. $ab b^{-1} a^{-1} = a (b b ^{-1}) a^{-1} = a e a^{-1} = a a^{-1} = e$. That's one direction. $b^{-1}a^{-1} * ab = b^{-1} (a^{-1}a) b = b^{-1}b = e$ As for your method above - it looks great. Well done. 
- Note in my response to JavaMan's proof above-your proof has the major flaw in that it assumes the inverse is 2 sided and unique and the definition of a group doesn't require that. But if the definition of a group uses the stronger axioms(2 sidedness and uniqueness are assumed), then it'll work fine.In my experiences,though,the proof method given by Jon above is a bit more straightforward and will probably be easier for an absolute beginner to do on their own. – Mathemagician1234 Dec 27 '11 at 6:59 @Mathemagician1234: What you say is a major flaw is not major and does not exist here. mixedmath's answer shows that it is a 2 sided inverse rather than assuming it. And the answer also points out that uniqueness is something that ultimately requires proof, although this proof is omitted here. – Jonas Meyer Dec 28 '11 at 3:55 I must say that was so impressive, it may look long but it looks excellent in the same time, it shows how Mathematicians can think out of the box, it was unique and clever I must say. I would suggest your method to someone who needs help... - 1 This looks like a comment rather than an answer. I can convert it, but I am not sure to which post it replies. – robjohn♦ Feb 13 at 19:55 @robjohn: It looks like (misplaced) sarcasm to me. – gnometorule Feb 13 at 20:30
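As a quick computational sanity check of the identity discussed in this thread (my own addition, assuming SymPy is available), one can test it in a concrete non-abelian group such as the symmetric group $S_4$:

```python
# Check (not from the thread): (ab)^{-1} = b^{-1} a^{-1} in S_4, and that the
# "wrong" order a^{-1} b^{-1} differs when a and b do not commute.
from sympy.combinatorics import Permutation

a = Permutation([1, 0, 2, 3])   # transposition swapping 0 and 1
b = Permutation([0, 2, 3, 1])   # 3-cycle on 1, 2, 3

lhs = (a * b) ** -1
print(lhs == b ** -1 * a ** -1)   # True, as it must be for any group elements
print(lhs == a ** -1 * b ** -1)   # False here, since a and b do not commute
```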
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9491041302680969, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/16181/partition-function-as-characteristic-function-of-energy?answertab=active
# Partition Function as characteristic function of energy?

I'm going through a book on statistical mechanics and there it says that the partition function $$Z = \sum_{\mu_S} e^{-\beta H(\mu_S)}$$ where $\mu_S$ denotes a microstate of the system and $H(\mu_S)$ the Hamiltonian, is proportional to the characteristic function $\hat p(\beta)$ of the energy probability distribution function. This then allows us to make the next step and conclude that $\ln Z$ is the cumulant generating function, with the nice results that $$\langle H \rangle = -\frac{\partial \ln Z}{\partial \beta}$$ and $$\langle (H - \langle H \rangle)^2 \rangle = \frac{\partial^2 \ln Z}{\partial \beta^2},$$ but I fail to see why $Z$ is proportional to the characteristic function. Also, if I imagine that $Z$ is the characteristic function of the energy, then wouldn't I have to evaluate the derivative at $\beta = 0$? I know that the two formulas above can also be obtained by explicitly doing the calculation using the definition of $Z$ in the first line, but I'd like to generalize this result to the moments and cumulants of all orders.

I see the confusion. You can just do a shift of variables, to separate the temperature 1/beta from the parameter in your characteristic function, which is taken to zero after the derivatives. – user1631 Oct 13 '12 at 1:13

## 3 Answers

I think the confusion is simple. Let $\gamma$ be the transform parameter of the generating function. The generating function is $G(\gamma) = \langle e^{-\gamma H} \rangle \propto \sum_{\mu_S} e^{-(\beta+\gamma) H(\mu_S)}$ for the Gibbs distribution. If we take $\gamma \to 0$ we get the partition function. Taking the derivative w.r.t. $\gamma$ is equivalent to taking the derivative w.r.t. $\beta$ for this particular distribution, since they appear only in their sum.

The statement is a purely mathematical one. Let $p(E)$ be the probability distribution function for energy. The characteristic function of this distribution will be $\hat{p}(\beta) = \mathbb{E}[e^{i \beta E}] = \sum_E e^{i \beta E} p(E)$. So if the distribution is the Gibbs one, $p(E) \propto e^{-\beta E}$, then we see that $Z$ is proportional to $\hat{p}$. The rest then follows from standard probability theory. See: http://en.wikipedia.org/wiki/Characteristic_function_(probability_theory)

If it is still relevant for you. The probability density $$e^{-\beta H\left( \mu\right) }$$ is the so-called canonical (Gibbs) distribution. There are plenty of ways to derive it; I can reproduce the simplest one. Let's imagine that your system has the Hamiltonian $H\left( \mu\right)$ and you would like to study it at a certain temperature. In order to set the temperature you put your system inside a thermostat, so that your system exchanges only energy with the thermostat while the volume and the number of particles are constant. Let's suppose that the thermostat is a big tank filled with an ideal gas, so that its energy is: $$h=\sum_{i=1}^{N}\frac{P_{i}^{2}}{2m}.$$ The total system (your system + thermostat) is isolated, thus the total energy is fixed.
Therefore, the distribution with respect to the total energy is a delta-function: $$\rho\left( E\right) =\Lambda\delta\left( h+H-E\right) ,$$ where $\Lambda$ is some normalization factor such that $$\int \rho\left( E\right)\,d\Gamma =1,\qquad\left( 1\right)$$ where $d\Gamma$ is an element of the full phase space: $$d\Gamma=d\mu\prod_{i=1}^{N}d^{3}P_{i}\,d^{3}Q_{i}.$$ Let's integrate out all degrees of freedom of the thermostat: $$\rho\left( H\right) =\int\prod_{i=1}^{N}d^{3}P_{i}\,d^{3}Q_{i} \,\Lambda\delta\left( H+\sum_{i=1}^{N}\frac{P_{i}^{2}}{2m}-E\right) =\Lambda V^{N}\int\prod_{i=1}^{N}d^{3}P_{i}\,\,\delta\left( H+\sum_{i=1}^{N} \frac{P_{i}^{2}}{2m}-E\right) ,$$ where $V$ is the volume of the thermostat. The integration measure can be simplified as follows: $$\int\left[\prod_{i=1}^{N}d^{3}P_{i}\right] f(\epsilon)=\frac{2\pi^{3N/2}}{\Gamma\left( 3N/2\right) }\int\left[\epsilon^{3N-1}d\epsilon\right] f(\epsilon),$$ where $$\epsilon^{2}=\sum_{i=1}^{N}P_{i}^{2}.$$ Hence the integration can be performed as follows: $$\rho\left( H\right) =\Lambda V^{N}\frac{2\pi^{3N/2}}{\Gamma\left( 3N/2\right) }\int\,d\epsilon\,\epsilon^{3N-1}\,\delta\left( H+\frac {\epsilon^{2}}{2m}-E\right) =\Lambda V^{N}\frac{2m\pi^{3N/2}}{\Gamma\left( 3N/2\right) }\,\left( E-H\right) ^{\frac{3N}{2}-1}.$$ Let's now consider the $N\rightarrow\infty$ limit so that $$\frac{E}{N}\approx\frac{h}{N}=\frac{3T}{2}.$$ The distribution takes the form: $$\rho\left( H\right) \sim\left( 1-\frac{H}{E}\right) ^{\frac{3N}{2}-1}\approx\left( 1-\frac{H}{\frac{3N}{2}T}\right) ^{\frac{3N}{2}-1}\approx\exp\left( -\frac{H}{T}\right) .$$ The normalization factor can be found from the normalization condition (1). Finally the probability density for the energy of your system takes the form: $$\rho\left( H\right) =\frac{e^{-\beta H\left( \mu\right) }}{Z},\quad Z=\int d\mu\,e^{-\beta H\left( \mu\right) }.$$ In fact, the result is independent of the nature of the thermostat; see e.g. L.D. Landau, E.M. Lifshitz, Volume 5 of Course of Theoretical Physics, Statistical Physics Part 1, Ch. III, The Gibbs distribution.
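To see the cumulant-generating-function property concretely, here is a small numerical sketch (my own, not from the answers): for a toy system with a handful of energy levels, $-\partial \ln Z/\partial\beta$ and $\partial^2 \ln Z/\partial\beta^2$, approximated by finite differences, reproduce the Boltzmann-weighted mean and variance of the energy. The energy levels and temperature below are arbitrary choices.

```python
# Toy check (not from the thread): ln Z as cumulant generating function.
import numpy as np

E = np.array([0.0, 0.3, 1.1, 2.5])        # assumed microstate energies
beta = 1.4                                # assumed inverse temperature
h = 1e-5                                  # finite-difference step

def lnZ(b):
    return np.log(np.sum(np.exp(-b * E)))

# derivatives of ln Z via central differences
mean_from_Z = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)
var_from_Z = (lnZ(beta + h) - 2 * lnZ(beta) + lnZ(beta - h)) / h**2

# direct Boltzmann averages
p = np.exp(-beta * E)
p /= p.sum()
mean_direct = np.sum(p * E)
var_direct = np.sum(p * (E - mean_direct) ** 2)

print(mean_from_Z, mean_direct)   # agree closely
print(var_from_Z, var_direct)     # agree to roughly 1e-5
```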
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 15, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9059749841690063, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/tagged/group-theory+post-quantum-cryptography
# Tagged Questions

### What exactly is the impact of the hidden subgroup problem on cryptography?

I understand my group theory (allegedly), so I can make partial sense of The Hidden Subgroup problem: Given a group $G$, a subgroup $H \leq G$, and a set $X$, we say a function $f : G \Rightarrow$ ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8200883269309998, "perplexity_flag": "head"}
http://medlibrary.org/medwiki/Symmetry_(physics)
# Symmetry (physics)

[Figure: First Brillouin zone of FCC lattice showing symmetry labels]

In physics, symmetry includes all features of a physical system that exhibit the property of symmetry—that is, under certain transformations, aspects of these systems are "unchanged", according to a particular observation. A symmetry of a physical system is a physical or mathematical feature of the system (observed or intrinsic) that is "preserved" under some change.

A family of particular transformations may be continuous (such as rotation of a circle) or discrete (e.g., reflection of a bilaterally symmetric figure, or rotation of a regular polygon). Continuous and discrete transformations give rise to corresponding types of symmetries. Continuous symmetries can be described by Lie groups while discrete symmetries are described by finite groups (see Symmetry group). Symmetries are frequently amenable to mathematical formulations such as group representations and can be exploited to simplify many problems. An important example of such symmetry is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations.

## Symmetry as invariance

Invariance is specified mathematically by transformations that leave some quantity unchanged. This idea can apply to basic real-world observations. For example, temperature may be constant throughout a room. Since the temperature is independent of position within the room, the temperature is invariant under a shift in the measurer's position. Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation. The sphere is said to exhibit spherical symmetry. A rotation about any axis of the sphere will preserve how the sphere "looks".

### Invariance in force

The above ideas lead to the useful idea of invariance when discussing observed physical symmetry; this can be applied to symmetries in forces as well. For example, an electric field due to a wire is said to exhibit cylindrical symmetry, because the electric field strength at a given distance r from the electrically charged wire of infinite length will have the same magnitude at each point on the surface of a cylinder (whose axis is the wire) with radius r. Rotating the wire about its own axis does not change its position or charge density, hence it will preserve the field. The field strength at a rotated position is the same. Suppose some configuration of charges (possibly non-stationary) produces an electric field in some direction; then rotating the configuration of the charges (without disturbing the internal dynamics that produces the particular field) will lead to a net rotation of the direction of the electric field. These two properties are interconnected through the more general property that rotating any system of charges causes a corresponding rotation of the electric field.
In Newton's theory of mechanics, given two bodies, each with mass $m$, starting from the origin and moving along the x-axis in opposite directions, one with speed $v_1$ and the other with speed $v_2$, the total kinetic energy of the system (as calculated from an observer at the origin) is $\tfrac{1}{2}m(v_1^2 + v_2^2)$, and it remains the same if the velocities are interchanged. The total kinetic energy is preserved under a reflection in the y-axis.

The last example above illustrates another way of expressing symmetries, namely through the equations that describe some aspect of the physical system. The above example shows that the total kinetic energy will be the same if $v_1$ and $v_2$ are interchanged.

## Local and global symmetries

Main articles: Global symmetry and Local symmetry

Symmetries may be broadly classified as global or local. A global symmetry is one that holds at all points of spacetime, whereas a local symmetry is one that has a different symmetry transformation at different points of spacetime; specifically a local symmetry transformation is parameterised by the spacetime co-ordinates. Local symmetries play an important role in physics as they form the basis for gauge theories.

## Continuous symmetries

The two examples of rotational symmetry described above - spherical and cylindrical - are each instances of continuous symmetry. These are characterised by invariance following a continuous change in the geometry of the system. For example, the wire may be rotated through any angle about its axis and the field strength will be the same on a given cylinder. Mathematically, continuous symmetries are described by continuous or smooth functions. An important subclass of continuous symmetries in physics are spacetime symmetries.

### Spacetime symmetries

Main article: Spacetime symmetries

Continuous spacetime symmetries are symmetries involving transformations of space and time. These may be further classified as spatial symmetries, involving only the spatial geometry associated with a physical system; temporal symmetries, involving only changes in time; or spatio-temporal symmetries, involving changes in both space and time.

• Time translation: A physical system may have the same features over a certain interval of time $\delta t$; this is expressed mathematically as invariance under the transformation $t \, \rightarrow t + a$ for any real numbers t and a in the interval. For example, in classical mechanics, a particle solely acted upon by gravity will have gravitational potential energy $\, mgh$ when suspended from a height $h$ above the Earth's surface. Assuming no change in the height of the particle, this will be the total gravitational potential energy of the particle at all times. In other words, by considering the state of the particle at some time (in seconds) $t_0$ and also at $t_0 + 3$, say, the particle's total gravitational potential energy will be preserved.
• Spatial translation: These spatial symmetries are represented by transformations of the form $\vec{r} \, \rightarrow \vec{r} + \vec{a}$ and describe those situations where a property of the system does not change with a continuous change in location. For example, the temperature in a room may be independent of where the thermometer is located in the room.
• Spatial rotation: These spatial symmetries are classified as proper rotations and improper rotations. The former are just the 'ordinary' rotations; mathematically, they are represented by square matrices with unit determinant.
The latter are represented by square matrices with determinant −1 and consist of a proper rotation combined with a spatial reflection (inversion). For example, a sphere has proper rotational symmetry. Other types of spatial rotations are described in the article Rotation symmetry.
• Poincaré transformations: These are spatio-temporal symmetries which preserve distances in Minkowski spacetime, i.e. they are isometries of Minkowski space. They are studied primarily in special relativity. Those isometries that leave the origin fixed are called Lorentz transformations and give rise to the symmetry known as Lorentz covariance.
• Projective symmetries: These are spatio-temporal symmetries which preserve the geodesic structure of spacetime. They may be defined on any smooth manifold, but find many applications in the study of exact solutions in general relativity.
• Inversion transformations: These are spatio-temporal symmetries which generalise Poincaré transformations to include other conformal one-to-one transformations on the space-time coordinates. Lengths are not invariant under inversion transformations but there is a cross-ratio on four points that is invariant.

Mathematically, spacetime symmetries are usually described by smooth vector fields on a smooth manifold. The underlying local diffeomorphisms associated with the vector fields correspond more directly to the physical symmetries, but the vector fields themselves are more often used when classifying the symmetries of the physical system. Some of the most important vector fields are Killing vector fields, which are those spacetime symmetries that preserve the underlying metric structure of a manifold. In rough terms, Killing vector fields preserve the distance between any two points of the manifold and often go by the name of isometries.

## Discrete symmetries

Main article: Discrete symmetry

A discrete symmetry is a symmetry that describes non-continuous changes in a system. For example, a square possesses discrete rotational symmetry, as only rotations by multiples of right angles will preserve the square's original appearance. Discrete symmetries sometimes involve some type of 'swapping', these swaps usually being called reflections or interchanges.

• Time reversal: Many laws of physics describe real phenomena when the direction of time is reversed. Mathematically, this is represented by the transformation $t \, \rightarrow - t$. For example, Newton's second law of motion still holds if, in the equation $F \, = m \ddot {r}$, $t$ is replaced by $-t$. This may be illustrated by recording the motion of an object thrown up vertically (neglecting air resistance) and then playing it back. The object will follow the same parabolic trajectory through the air, whether the recording is played normally or in reverse. Thus, position is symmetric with respect to the instant that the object is at its maximum height.
• Spatial inversion: These are represented by transformations of the form $\vec{r} \, \rightarrow - \vec{r}$ and indicate an invariance property of a system when the coordinates are 'inverted'. Said another way, these are symmetries between a certain object and its mirror image.
• Glide reflection: These are represented by a composition of a translation and a reflection. These symmetries occur in some crystals and in some planar symmetries, known as wallpaper symmetries.

### C, P, and T symmetries

The Standard model of particle physics has three related natural near-symmetries.
These state that the actual universe about us is indistinguishable from one where:

• Every particle is replaced with its antiparticle. This is C-symmetry (charge symmetry);
• Everything appears as if reflected in a mirror. This is P-symmetry (parity symmetry);
• The direction of time is reversed. This is T-symmetry (time symmetry).

T-symmetry is counterintuitive (surely the future and the past are not symmetrical) but explained by the fact that the Standard model describes local properties, not global ones like entropy. To properly reverse the direction of time, one would have to put the big bang and the resulting low-entropy state in the "future." Since we perceive the "past" ("future") as having lower (higher) entropy than the present (see perception of time), the inhabitants of this hypothetical time-reversed universe would perceive the future in the same way as we perceive the past.

These symmetries are near-symmetries because each is broken in the present-day universe. However, the Standard Model predicts that the combination of the three (that is, the simultaneous application of all three transformations) must be a symmetry, called CPT symmetry. CP violation, the violation of the combination of C- and P-symmetry, is necessary for the presence of significant amounts of baryonic matter in the universe. CP violation is a fruitful area of current research in particle physics.

### Supersymmetry

Main article: Supersymmetry

A type of symmetry known as supersymmetry has been used to try to make theoretical advances in the standard model. Supersymmetry is based on the idea that there is another physical symmetry beyond those already developed in the standard model, specifically a symmetry between bosons and fermions. Supersymmetry asserts that each type of boson has, as a supersymmetric partner, a fermion, called a superpartner, and vice versa. Supersymmetry has not yet been experimentally verified: no known particle has the correct properties to be a superpartner of any other known particle. If superpartners exist they must have masses greater than current particle accelerators can generate.

## Mathematics of physical symmetry

Main article: Symmetry group

The transformations describing physical symmetries typically form a mathematical group. Group theory is an important area of mathematics for physicists. Continuous symmetries are specified mathematically by continuous groups (called Lie groups). Many physical symmetries are isometries and are specified by symmetry groups. Sometimes this term is used for more general types of symmetries. The set of all proper rotations (about any angle) through any axis of a sphere forms a Lie group called the special orthogonal group $\, SO(3)$. (The 3 refers to the three-dimensional space of an ordinary sphere.) Thus, the symmetry group of the sphere with proper rotations is $\, SO(3)$. Any rotation preserves distances on the surface of the ball. The set of all Lorentz transformations forms a group called the Lorentz group (this may be generalised to the Poincaré group).

Discrete symmetries are described by discrete groups. For example, the symmetries of an equilateral triangle are described by the symmetric group $\, S_3$.

An important type of physical theory based on local symmetries is called a gauge theory, and the symmetries natural to such a theory are called gauge symmetries. Gauge symmetries in the Standard model, used to describe three of the fundamental interactions, are based on the SU(3) × SU(2) × U(1) group.
(Roughly speaking, the symmetries of the SU(3) group describe the strong force, the SU(2) group describes the weak interaction and the U(1) group describes the electromagnetic force.) Also, the reduction by symmetry of the energy functional under the action by a group and spontaneous symmetry breaking of transformations of symmetric groups appear to elucidate topics in particle physics (for example, the unification of electromagnetism and the weak force in physical cosmology).

### Conservation laws and symmetry

Main article: Noether's theorem

The symmetry properties of a physical system are intimately related to the conservation laws characterizing that system. Noether's theorem gives a precise description of this relation. The theorem states that each continuous symmetry of a physical system implies that some physical property of that system is conserved. Conversely, each conserved quantity has a corresponding symmetry. For example, the isometry of space gives rise to conservation of (linear) momentum, and isometry of time gives rise to conservation of energy. The following table summarizes some fundamental symmetries and the associated conserved quantity.

| Class | Invariance | Conserved quantity |
|-------|------------|--------------------|
| Proper orthochronous Lorentz symmetry | translation in time (homogeneity) | energy |
| | translation in space (homogeneity) | linear momentum |
| | rotation in space (isotropy) | angular momentum |
| Discrete symmetry | P, coordinate inversion | spatial parity |
| | C, charge conjugation | charge parity |
| | T, time reversal | time parity |
| | CPT | product of parities |
| Internal symmetry (independent of spacetime coordinates) | U(1) gauge transformation | electric charge |
| | U(1) gauge transformation | lepton generation number |
| | U(1) gauge transformation | hypercharge |
| | U(1)Y gauge transformation | weak hypercharge |
| | U(2) [U(1) × SU(2)] | electroweak force |
| | SU(2) gauge transformation | isospin |
| | SU(2)L gauge transformation | weak isospin |
| | P × SU(2) | G-parity |
| | SU(3) "winding number" | baryon number |
| | SU(3) gauge transformation | quark color |
| | SU(3) (approximate) | quark flavor |
| | S(U(2) × U(3)) [U(1) × SU(2) × SU(3)] | Standard Model |

## Mathematics

Continuous symmetries in physics preserve transformations. One can specify a symmetry by showing how a very small transformation affects various particle fields. The commutator of two of these infinitesimal transformations is equivalent to a third infinitesimal transformation of the same kind, hence they form a Lie algebra. A general coordinate transformation (also known as a diffeomorphism) has the infinitesimal effect on a scalar, spinor and vector field, for example:

$\delta\phi(x) = h^{\mu}(x)\partial_{\mu}\phi(x)$

$\delta\psi^\alpha(x) = h^{\mu}(x)\partial_{\mu}\psi^\alpha(x) + \partial_\mu h_\nu(x) \sigma_{\mu\nu}^{\alpha \beta} \psi^{\beta}(x)$

$\delta A_\mu(x) = h^{\nu}(x)\partial_{\nu}A_\mu(x) + A_\nu(x)\partial_\nu h_\mu(x)$

for a general field, $h(x)$. Without gravity only the Poincaré symmetries are preserved, which restricts $h(x)$ to be of the form:

$h^{\mu}(x) = M^{\mu \nu}x_\nu + P^\mu$

where M is an antisymmetric matrix (giving the Lorentz and rotational symmetries) and P is a general vector (giving the translational symmetries).
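As a small numerical aside (my own sketch, not part of the article, and assuming NumPy and SciPy are available): restricting to the spatial-rotation part of $M$, exponentiating a $3\times 3$ antisymmetric matrix produces an element of $SO(3)$, i.e. an orthogonal matrix with unit determinant that preserves lengths, which is the group of proper rotations mentioned earlier.

```python
# Sketch only: an antisymmetric generator exponentiates to a proper rotation.
import numpy as np
from scipy.linalg import expm

M = np.array([[ 0.0, -0.7,  0.2],
              [ 0.7,  0.0, -0.4],
              [-0.2,  0.4,  0.0]])          # antisymmetric: M.T == -M

R = expm(M)                                 # candidate element of SO(3)
print(np.allclose(R.T @ R, np.eye(3)))      # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))    # True: determinant +1, a proper rotation

v = np.array([1.0, -2.0, 0.5])
print(np.isclose(np.linalg.norm(R @ v), np.linalg.norm(v)))   # lengths preserved
```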
Other symmetries affect multiple fields simultaneously. For example, local gauge transformations apply to both a vector and spinor field:

$\delta\psi^\alpha(x) = \lambda(x).\tau^{\alpha\beta}\psi^\beta(x)$

$\delta A_\mu(x) = \partial_\mu \lambda(x)$

where $\tau$ are generators of a particular Lie group. So far the transformations on the right have only included fields of the same type. Supersymmetries are defined according to how they mix fields of different types.

Another symmetry which is part of some theories of physics and not in others is scale invariance, which involves Weyl transformations of the following kind:

$\delta \phi(x) = \Omega(x) \phi(x)$

If the fields have this symmetry then it can be shown that the field theory is almost certainly conformally invariant also. This means that in the absence of gravity h(x) would be restricted to the form:

$h^{\mu}(x) = M^{\mu \nu}x_\nu + P^\mu + D x_\mu + K^{\mu} |x|^2 - 2 K^\nu x_\nu x_\mu$

with D generating scale transformations and K generating special conformal transformations. For example, N=4 super-Yang-Mills theory has this symmetry while General Relativity doesn't, although other theories of gravity such as conformal gravity do. The 'action' of a field theory is an invariant under all the symmetries of the theory. Much of modern theoretical physics is to do with speculating on the various symmetries the Universe may have and finding the invariants to construct field theories as models.

In string theories, since a string can be decomposed into an infinite number of particle fields, the symmetries on the string world sheet are equivalent to special transformations which mix an infinite number of fields.
Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Symmetry (physics)", available in its original form here: http://en.wikipedia.org/w/index.php?title=Symmetry_(physics)
http://mathhelpforum.com/number-theory/144719-relatively-prime-set-integers.html
# Thread: 1. ## Relatively Prime Set of Integers Hello all, Does anyone know of a set of 4 integers S=[a,b,c,d] where a,b,c,d >0, and that 3 of the integers in S have a common divisor 'x' > 1, but also that the GCD(a,b,c,d)=1? This is essentially plugging and playing in my mind, but I'm sure there is a more sophisticated method to this madness. 2. Originally Posted by 1337h4x Hello all, Does anyone know of a set of 4 integers S=[a,b,c,d] where a,b,c,d >0, and that 3 of the integers in S have a common divisor 'x' > 1, but also that the GCD(a,b,c,d)=1? This is essentially plugging and playing in my mind, but I'm sure there is a more sophisticated method to this madness. $a = 2\times 3\times 5,$ $b = 2\times 3\times 7,$ $c = 2\times 5\times 7,$ $d = 3\times 5\times 7.$ 3. How did you reach this conclusion? Is there some sort of generic way to solve this? For example, if there was a set of 5 integers where 4 had a common divisor > 1 but the GCD was still equal to 1. 4. Originally Posted by 1337h4x How did you reach this conclusion? Is there some sort of generic way to solve this? For example, if there was a set of 5 integers where 4 had a common divisor > 1 but the GCD was still equal to 1. The pattern is this. Suppose you want to find a set of n positive integers $a_1,a_2,\ldots,a_n$ for which every subset containing n–1 of them has a common divisor > 1, but the GCD of the whole set of n integers is equal to 1. The method is to take n distinct prime numbers $p_1,p_2,\ldots,p_n$. For $1\leqslant k\leqslant n$, let $a_k$ be the product of all the $p$s except for $p_k$. Then $p_k$ will divide all the $a$s except for $a_k$. But there will be no common divisor (greater than 1) of all the $a$s.
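Opalg's construction is easy to test by machine. The following Python sketch is my own illustration, not part of the thread; the helper names `construct` and `check` are invented. It builds the set for n = 4 and n = 5 and verifies both conditions directly:

```python
from functools import reduce
from itertools import combinations
from math import gcd

PRIMES = [2, 3, 5, 7, 11, 13, 17]   # enough small primes for the examples below

def construct(n):
    """a_k = product of the first n primes with the k-th prime left out."""
    ps = PRIMES[:n]
    total = reduce(lambda x, y: x * y, ps)
    return [total // p for p in ps]

def check(nums):
    """Every (n-1)-element subset has gcd > 1, but the gcd of the whole set is 1."""
    n = len(nums)
    return (reduce(gcd, nums) == 1 and
            all(reduce(gcd, sub) > 1 for sub in combinations(nums, n - 1)))

for n in (4, 5):
    nums = construct(n)
    print(n, sorted(nums), check(nums))
# n = 4 reproduces Opalg's example {30, 42, 70, 105}; both checks print True.
```

For n = 4 the omitted prime p_k is exactly the common divisor shared by the other three numbers, which is why every 3-element subset has gcd greater than 1 while the full set does not.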
http://physics.stackexchange.com/questions/38170/what-is-the-longest-distance-that-can-be-jumped-after-swinging-from-a-rope
# What is the longest distance that can be jumped after swinging from a rope? In the movie Mission Impossible 3, the main character Ethan Hunt tries to enter a building in Shanghai by swing through the sky, as shown below: The jump consists of 2 sections, the red part, which is an arc, and the orange part, which is a parabola. A question naturally arises: For a target building with a given height, what is the furthest horizontal distance it can be placed such that we can jump to its roof using this method? Before answering this question, we must consider the following fact: Observing the path when we release at different heights, we see that if we release very late, we can reach great heights, but with reduced horizontal distance. If we release early, we increase horizontal distance, but with reduced height. Because of this trade off, the area that can be reached is restricted within the yellow line。 The question is then: (1) What is the equation x(z) that describes the orange line above? (2) For a given point (x,z) that is inside the region bounded by the orange line, what is the proper release height h(x,z) such that the parabola will go through (x,z). The answer should be a set of h. This is the same as asking what is the proper release height if we want to reach a building with height z and is located at x distance away from the center building. For the ease of discussion, I propose that we use the follow coordinate system and notation : Here are some calculations that are already done: At the release point, $$\text{Upward shooting angle}=\psi$$ $$v=\sqrt{2gh}$$ $$h(\psi)=r\cos\psi-H$$ $$\psi(h)=\arccos\frac{h+H}{r}$$ Upward and horizontal speeds are: $$v_{u}=v\sin\psi$$ $$v_{h}=v\cos\psi$$ Time for Ethan to fly through the orange part is: $$t=\frac{2v_{u}}{g}=\frac{2v\sin\psi}{g}$$ Thus $$d_{h}=v_{h}\cdot t$$ $$=v\cos\psi\frac{2v\sin\psi}{g}$$ $$=\frac{v^{2}}{g}\sin2\psi$$ If we were to use $h$ as the variable, then $$d(h)=\frac{v^{2}}{g}\sin\left(2\arccos\frac{h+H}{r}\right)$$ $$d(h)=\frac{\left(\sqrt{2gh}\right)^{2}}{2g}\left(\frac{h+H}{r}\right)\sqrt{1-\left(\frac{h+H}{r}\right)^{2}}$$ $$d(h)=h\left(\frac{h+H}{r}\right)\sqrt{1-\left(\frac{h+H}{r}\right)^{2}}$$ - 1 – David Zaslavsky♦ Sep 24 '12 at 23:38 Thank you very much David! – Xiaowen Li Sep 25 '12 at 1:48 ## 1 Answer Amazingly, not only has someone recently done an analysis of the Tarzan swing, but they've posted it on the Arxiv. See http://arxiv.org/abs/1208.4355 for the article or the Arxiv Blog for a summary. To quote the Arxiv Blog article: In fact there is no simple rule for maximising the horizontal flight distance. It turns out this depends on a number of factors, such as rope swing's distance off the ground, the length of the rope and the angle of the rope when Tarzan begins his swing as well as the angle of the rope at the point of release. However the optimal angle of release is always less than 45 degrees. - thank you very much for the answer. The paper provided some very useful analysis. Now question(2) is kind of trivial(purely mathematical problem). But I believe question 1 is unanswered, and remained interesting. – Xiaowen Li Sep 24 '12 at 10:48
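To get a feel for the trade-off described in the question, here is a rough numerical sketch of the $d(h)$ formula derived above. It is my own illustration: the values $r=30$ and $H=10$ are made-up parameters chosen only so the numbers are concrete, and the sketch simply scans release heights and reports the one maximising the horizontal distance. It uses nothing beyond the question's own formula, so it says nothing about the fuller analysis in the cited arXiv paper.

```python
import numpy as np

# Illustrative, made-up geometry: rope length r and offset H (following the
# figure notation in the question); only the derived formula d(h) is used.
r, H = 30.0, 10.0

def d(h):
    """Horizontal flight distance d(h) = h*((h+H)/r)*sqrt(1 - ((h+H)/r)^2)."""
    c = (h + H) / r
    return h * c * np.sqrt(1.0 - c**2)

# The release height must satisfy 0 <= h <= r - H so that cos(psi) = (h+H)/r <= 1.
hs = np.linspace(0.0, r - H, 2001)
ds = d(hs)
i = int(np.argmax(ds))
print("best release height h =", round(hs[i], 2), "giving d =", round(ds[i], 2))
```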
http://math.stackexchange.com/questions/36092/recursive-set-that-contains-in-a-way-all-the-other-recursive-ones
# Recursive set that contains in a way all the other recursive ones? I am wondering whether there exists a recursive set $S\subseteq \mathbb{N}^2$, such that for every recursive set $T \subseteq \mathbb{N} \ \exists c \in \mathbb{N}: \ T=\left\{n \in \mathbb{N} \mid (c,n) \in S\right\}$? (And by "recursively enumerable set" I mean a set that is a projection of a recursive set; and by "recursive set" one whose characteristic function is recursive.) I know a similar proposition holds if we replace "recursive" with "recursively enumerable", but somehow I can't figure this one out... - I assume you mean $(c,n)\in S$? Otherwise you don't use $S$ at all. – Apostolos Apr 30 '11 at 21:25 1 You may want to think about universal Turing machines. – André Nicolas Apr 30 '11 at 22:18 And how sure are you about recursive? – André Nicolas Apr 30 '11 at 22:25 I am so sorry - being tired I confused "recursive" with "recursively enumerable" and asked a wrong question: It should have been "recursive" everywhere I wrote "recursively enumerable" - I edited the question now, so that it is really what I was asking. – temo May 1 '11 at 6:03 ## 1 Answer Let $c_A$ be a natural number that corresponds to any effective coding (for example: a Turing machine) of a recursively enumerable set $A$ and consider $S = \bigcup_A \{c_A\} \times A$. The answer to your modified question is: no. Let us try to imitate the usual liar paradox. Suppose that there exists a total Turing machine $U$ that computes $S$. And consider another Turing machine: $$N(x) = \mathit{not}\; U(x, x)$$ By the closure property $N$ is also recursive. So, there is a natural number $c_N$ corresponding to $N$ under $U$. Let us see what we get by applying $N$ to itself: $$N(c_N) = \mathit{not}\; U(c_N, c_N) = \mathit{not}\; N(c_N)$$ where the first equality is the definition of $N$ and the second holds by the definition of $S$. -
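The diagonal step in the answer is perhaps easiest to see on a finite toy version. The sketch below is my own illustration (plain Python; the table `U` is a hand-picked stand-in for the hypothetical total machine) and only shows the combinatorial core of the argument: the flipped diagonal cannot equal any row of the table.

```python
# Toy illustration of the diagonal argument: U is a finite 0/1 table whose
# rows play the role of the characteristic functions the table can encode.
U = [
    [1, 0, 1, 0, 1],
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 1, 0],
    [1, 1, 0, 0, 1],
]

# N(x) = not U(x, x): flip the diagonal.
N = [1 - U[x][x] for x in range(len(U))]

# N disagrees with row c at position c, so N cannot itself be a row of U --
# the same contradiction the answer derives for a total machine computing S.
for c, row in enumerate(U):
    assert N[c] != row[c]
print("N =", N, "is a row of U:", N in U)
```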
http://mathoverflow.net/questions/6459?sort=oldest
## (Non-trivial) presentation of general linear and symplectic group over Z/mZ? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I know that there exists a nice presentation (generators and relations) of the general linear group over a finite field (by Steinberg, I think). Is there also a nice presentation of $GL(n,\mathbb{Z}/m\mathbb{Z})$ for an arbitrary integer $m$? And perhaps also for the symplectic group over $\mathbb{Z}/m\mathbb{Z}$? I want to do some calculations in these groups with a computer and my first problem was to determine these groups. One solution (my current) is of course to determine both by brute force. But already for small m,n this takes too much time. - Any finite group has a finite presentation - just take generators all elements of the group, and relations from the multiplication table. GAP probably computes directly with matrices. Maybe you should clarify your question - what sort of presentation are you looking for? – Agol Nov 22 2009 at 16:30 Well, good point. I'm looking of course for a presentation with less elements than the group order. Something that makes it possible (at least for small n and m) to determine/handle these groups with a computer without doing this by brute force. – S1 Nov 22 2009 at 17:17 This comment is woefully late, but let me make it anyway for the benefit of future readers: the integer symplectic groups Sp(2n,Z) are generated by 2 matrices for all n. References are Stanek (Math Review MR0153748) for n>3 and Ishibashi (MR1367845) for n=2,3. In my experience of computer experiments, it seems to speed things up to have as small a generating set as possible. – Artie Prendergast-Smith May 10 2011 at 14:11 ## 2 Answers Sure, you can certainly work something out if you like. It may well be that an off-the-rack answer to your question is in the literature somewhere, but let me say how I would go about getting there from here. Note that $\textrm{GL}(n,\mathbb{Z}/m\mathbb{Z})$ is the product over all prime powers $p^a$ exactly dividing $m$ of $\textrm{GL}(n,\mathbb{Z}/p^a\mathbb{Z})$, so you're reduced to the case where $m$ is a prime power. Second, there is an exact sequence $1 \to U_1 \to \textrm{GL}(n,\mathbb{Z}/p^a\mathbb{Z}) \to \textrm{GL}(n,\mathbb{Z}/p\mathbb{Z}) \to 1$ where I'll use $U_i$ to denote all mod $p^a$ matrices that are congruent to the identity modulo $p^i$. Given the Steinberg generators for $\textrm{GL}(n,\mathbb{Z}/p\mathbb{Z})$, you're reduced to giving generators/relations for $U_1$ (as well as working out how lifts of the Steinberg generators act by conjugation on the generators for $U_1$, as well as what elements of $U_1$ you get by applying the Steinberg relations to the lifts of your generators). Note that $U_1$ is a big $p$-group, and each $U_i/U_{i+1}$ up to $i=a-1$ is just isomorphic to $\text{M}_n(\mathbb{Z}/p\mathbb{Z})$, so again you can produce generators and relations by devissage. In practice it shouldn't require so many generators as you get out of this. For instance for $n=2$, the group $U_1$ is generated by four elements: the upper-triangular unipotent matrix with $p$ in the upper-right, its transpose, and diagonal matrices with diagonal entries $c,1$ and $1,c$, where $c$ is a generator for the multiplicative group $1 + p\mathbb{Z}/p^a\mathbb{Z}$. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Two things. 
I'll talk about SL_n rather than GL_n, but that's just a technicality. 1. There is a short exact sequence $1 \rightarrow SL_n(\mathbb{Z},m) \rightarrow SL_n(\mathbb{Z}) \rightarrow SL_n(\mathbb{Z}/m\mathbb{Z}) \rightarrow 1,$ where $SL_n(\mathbb{Z},m)$ is the level $m$ congruence subgroup of $SL_n(\mathbb{Z})$. There is a nice presentation $\langle S|R \rangle$ for $SL_n(\mathbb{Z})$ due essentially to Magnus; it can be found in Milnor's book on algebraic K-theory (look for the chapter that calculates $K_2(\mathbb{Z})$). The generating set $S$ is just the set of elementary matrices. To get from there to $SL_n(\mathbb{Z}/m\mathbb{Z})$, you just need to add a normal generator for $SL_n(\mathbb{Z},m)$ to $R$. By the solution to the congruence subgroup problem for $SL_n(\mathbb{Z})$ (due to Mennicke, but you are better off looking at Bass-Milnor-Serre's paper), this congruence subgroup is normally generated by the mth power of a single elementary matrix. A similar idea also works for the symplectic group. To find a presentation for $Sp_{2g}(\mathbb{Z})$, look at Theorem 9.2.13 of Hahn and O'Meara's book "The Classical Groups and K-Theory". 1. I also want to make a comment on DLS's answer. There is a related short exact sequence $1 \rightarrow V \rightarrow SL_n(\mathbb{Z}/p^{k+1}\mathbb{Z}) \rightarrow SL_n(\mathbb{Z}/p^k\mathbb{Z}) \rightarrow 1.$ The group $V$ has a beautiful description, due essentially to Lee and Szczarba. Namely, it is isomorphic to the additive group underlying the special lie algebra over $\mathbb{Z} / p\mathbb{Z}$ (in particular, it is abelian). Moreover, the action of $SL_n(\mathbb{Z}/p^k\mathbb{Z})$ on $V$ factors through the adjoint representation of $SL_n(\mathbb{Z}/p\mathbb{Z})$ on the special lie algebra. Lee and Szczarba did not write this down in quite this form, but I wrote out a similar result for the symplectic group in Lemma 3.1 of my paper "The Picard group of the moduli space of curves with level structures". -
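Since the original motivation was computer calculation, a brute-force sanity check of the product decomposition $GL(n,\mathbb{Z}/m\mathbb{Z}) \cong \prod GL(n,\mathbb{Z}/p^a\mathbb{Z})$ mentioned in the first answer may be useful. The Python sketch below is my own illustration, not from either answer; it counts invertible $2\times 2$ matrices directly for small moduli, using the fact that a matrix over $\mathbb{Z}/m\mathbb{Z}$ is invertible exactly when its determinant is a unit mod $m$.

```python
from itertools import product
from math import gcd

def prime_power_factors(m):
    """Return {p: a} with m = prod p**a (naive trial division)."""
    out, p = {}, 2
    while p * p <= m:
        while m % p == 0:
            out[p] = out.get(p, 0) + 1
            m //= p
        p += 1
    if m > 1:
        out[m] = out.get(m, 0) + 1
    return out

def order_gl2(m):
    """|GL(2, Z/mZ)| by brute force over all 2x2 matrices mod m."""
    return sum(1 for a, b, c, d in product(range(m), repeat=4)
               if gcd((a * d - b * c) % m, m) == 1)

for m in (6, 12):
    direct = order_gl2(m)
    via_crt = 1
    for p, a in prime_power_factors(m).items():
        via_crt *= order_gl2(p ** a)
    print(m, direct, via_crt)   # e.g. m = 6 gives 288 both ways
```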
http://mathoverflow.net/questions/73533/local-cohomology-and-maximal-cohen-macaulay-modules
## Local Cohomology and Maximal-Cohen-Macaulay modules ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Checking a recent article [this one, specifically section 3.1] I found the following claim (I'm paraphrasing, of course): Let $A$ be a graded connected noetherian algebra (not necessarily commutative), and suppose it is AS-Cohen-Macaulay of depth $d$. If $M$ is a finitely generated graded module over $A$, and it is Maximal Cohen Macaulay (MCM, ie, its only non-zero local cohomology module is precisely the $d$-th), then its first syzygy is also MCM. I have a proof for this in the commutative ungraded case, but it deppends on the fact that $\lbrace i|H^i_{\mathfrak m}(M) \neq 0 \rbrace$ is non-empty and contained in the interval $[0,d]$ (consider the short exact sequence involving $M$ and its first syzygy and look at the long exact sequence of local cohomology). I found results regarding the non-vanishing of this groups in the non-commutative case, but they demand much more strict conditions than in the paper (finite GK-dimension, enough normal elements, etc.). Any idea on how to prove this in this more general context? - ## 1 Answer Well, I don't know if I'm supposed to, but since I found a solution, I'll write the general idea here. [This is from an unpublished manuscript by P. Smith, the first author of the paper]: If $A$ is CM, let $\omega_A = H^d_\mathfrak m(A)^*$ be its dualizing module. Then there is a spectral sequence $$E^{pq}_2 = \underline{Ext}^p_A(\underline{Ext}^q(M, \omega_A),\omega_A) \Rightarrow \begin{cases}M&\mbox{ if p = q} \\ 0 &\mbox{ otherwise}\end{cases}$$ Since $H_\mathfrak{m}^i(M)^* \cong \underline{Ext}_A^i(M,\omega_A)$, the convergence of this SS to a non-zero result guarantees that there must be a non-zero local cohomology module (and in fact, that there is a non-zero one with $i \leq d$) -
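For comparison, the commutative/ungraded argument alluded to in the question can be spelled out in a few lines; this is only a sketch, with the notation $K$, $F$, $n$ introduced here, and it uses nothing beyond the vanishing facts stated above. Let $0 \to K \to F \to M \to 0$ be a presentation of $M$ with $F$ finitely generated free, so that $K$ is the first syzygy, and assume $K \neq 0$ (otherwise $M$ is free and there is nothing to prove). The long exact sequence of local cohomology reads $$\cdots \to H^{j-1}_{\mathfrak m}(M) \to H^{j}_{\mathfrak m}(K) \to H^{j}_{\mathfrak m}(F) \to \cdots$$ Since $F \cong A^n$ and $A$ is Cohen-Macaulay of depth $d$, we have $H^{j}_{\mathfrak m}(F) = 0$ for $j \neq d$; since $M$ is MCM, $H^{j-1}_{\mathfrak m}(M) = 0$ for $j-1 \neq d$. Hence $H^{j}_{\mathfrak m}(K) = 0$ for $1 \leq j \leq d-1$, while $H^{0}_{\mathfrak m}(K) \subseteq H^{0}_{\mathfrak m}(F) = 0$ because $K \subseteq F$, and everything above degree $d$ vanishes by the assumption that the non-vanishing indices lie in $[0,d]$. Finally, the set $\{\, j : H^{j}_{\mathfrak m}(K) \neq 0 \,\}$ is non-empty, so it must be exactly $\{d\}$, i.e. $K$ is MCM. The point of the answer above is that the Ext–Ext spectral sequence supplies that non-emptiness statement in the noncommutative graded setting.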
http://physics.stackexchange.com/questions/tagged/electron?page=2&sort=newest&pagesize=50
# Tagged Questions Negatively charged particle with spin 1/2. A component of mundane terrestrial matter, and part of all neutral atoms and molecules. It has a mass about 1/1800 that of a proton. Its antiparticle is the positron. 1answer 41 views ### Regarding “Holes” in bands, and Photons So from learning Band theory, and PN Junction and such, I've learned that photons are created when "holes" are filled in a band, and this is what can create light (Isn't this how LEDs work?) Anyways, ... 3answers 137 views ### Do electrons in multi-electron atoms really have definite angular momenta? Since the mutual repulsion term between electrons orbiting the same nucleus does not commute with either electron's angular momentum operator (but only with their sum), I'd assume that the electrons ... 2answers 144 views ### Do the energy levels of electron orbitals change relativistically? When an electron emits a photon from changing energy levels, the frequency of the photon depends on the difference between the energy levels. But if someone is moving with respect to the atom, the ... 1answer 132 views ### What happens to 5 electrons on a sphere? Let's suppose we put 5 electrons on a perfectly conducting (no resistance at all) sphere. There's no equilibrium configuration with 5 (though there is with 2, 3, 4 or 6). So will they keep moving on ... 0answers 35 views ### Special conditions at layer F2 ionosphere I saw this graph about the electrons density in different altitudes and difference between night and day, the difference between the 2 electron densities (day and night) decreases till 300 Km (F2 ... 0answers 38 views ### frequency of the photon [closed] If an electron is accelerated within a cathode ray tube using a voltage difference of 3000 V then what is the maximal frequency for the photon that can be radiated from the electron ? 1answer 171 views ### Elastic collisions in Franck-Hertz experiment Looking at a Franck-Hertz experimental setup, and given a potential difference such as $4.0\ V$ which is too small to excite out the first electron orbital, the electrons moving through the tube will ... 2answers 121 views ### Energy source of electrons? I am aware that electrons are moving in an empty space so basically there is no friction to slow it down and its velocity stays the same. However where did the electron get its energy from in the ... 3answers 188 views ### What does a subatomic charge actually mean? I was recently reading a popular science book called The Canon - The Beautiful Basics of Science by Natalie Angier, and it talks about subatomic particles like protons, neutrons and electrons in ... 2answers 132 views ### How does an electrical field really work? A little bit of background information: I'm planning to write a little booklet or web page about CPU/computer architecture, basically for my own education, because we didn't cover it in depth in ... 3answers 385 views ### Where do electrons in electricity come from? Where do the electrons come from when an electric generator is making electricity? Is from the air? Would a generator work in a vacuum? Electrons have mass so where would they be pulled from if ... 1answer 182 views ### How many states can a n qubit quantum computer store? A classical computer composed of '0' or '1' transistors stores $2^n$ states. Is it true that a quantum computer composed of '0' or '1' or '0 & 1' qubits stores $3^n$ states? 
0answers 34 views ### Working out the electron mobility from the transfer rate (1/s) I have an electric field value for a uniform structure through which an electron travels. Given that I've calculated a transfer rate (frequency) for the electron when it goes from one molecule to the ... 1answer 90 views ### Working out the electric field from applied energy I have created a simulation of one electron bouncing through a 3D mesh of molecules. The electron hopping is determined by a calculation of electron transfer rate using the Marcus equation (a result ... 4answers 472 views ### How do electrons jump orbitals? My question isn't how they receive the energy to jump, but why. When someone views an element's emission spectrum, we see a line spectrum which proves that they don't exist outside of their orbitals ... 0answers 119 views ### Electron hopping among molecules - Marcus equation I'm running out of professors to talk to, and I need to clarify a couple of things for the sake of making a realistic model of electron travel through a mesh. This is about calculations of electron ... 2answers 237 views ### How exactly does static discharge work? Assume I have built up a pretty high charge by rubbing the floor or something. I want to understand these situations: I almost always get shocked when I touch a metal doorknob with my bare hand. I ... 1answer 68 views ### Ejected Electrons with 0 KE? So I was taught that: Kinetic Energy (of electron) = Energy (of photon) - Ionization Energy If E(photon) = IE, then KE=0 of the electron. What does this physically/theoretically mean? My current ... 3answers 600 views ### is it possible to flow current in open circuit? First , i don't know much about chemistry and physics. I'm just a graphic designer but i have this question in my mind. I'm sorry if this question is too basic and use 'generic' language. As i know ... 3answers 89 views ### A problem concerning the force between currents or moving electrons Concerning two identical wires carrying the same current (same direction, speed and magnitude), they will be attracted because of the Ampere force. But when I was in the frame moving with the same ... 1answer 203 views ### Dipole moment of the electron I've read that there are some restrictions on the value of a possible intrinsic electric dipole possessed by the electron, but isn't the dipole value dependent on the electron's wavefunction? Assuming ... 0answers 73 views ### The Deflecting System in a Hot Cathode Ray Tube In an HCR-Tube, the deflecting system used to deviate the electron beam is made of positively charged plates. How is this justified? If, due to some malfunction, the electron beam deflects from its ... 1answer 64 views ### How does the specific frequency of EM Radiation relate to displacing electrons from their orbits? I've only a general grasp on how all this works, so it could be I'm asking this poorly or misunderstanding what happens. With that said: The energy of EM radiation is a function of its frequency. ... 0answers 111 views ### Research in Quantum Physics [closed] I am Suvankar, a student of engienering. My branch is Electrical & Electronics Engineering. Although this is a Physics oriented website, I want to know whether it is possible to do M.Sc. after ... 1answer 145 views ### Why photons are emitted because of changes to electron behavior Explanations I have read of why photons are emitted from atoms mention electrons being 'excited' to another energy level, and then returning to their base level, releasing a photon. 
I have also seen ... 4answers 175 views ### Find total energy and momentum of an moving electron in a rest frame I have an electron moving with speed $u'$ in a frame $S'$ moving with speed $v'$ relative to a rest frame $S$. How do I find the total energy and momentum of the electron in the rest frame $S$? I ... 2answers 598 views ### How many photons can an electron absorb and why? How many photons can an electron absorb and why? Can all fundamental particles that can absorb photons absorb the same amount of photons and why? If we increase the velocity of a fundamental ... 3answers 159 views ### What if $\gamma$-rays in Electron microscope? I was referring Electron microscopes and read that the electrons have wavelength way less than that of visible light. But, the question I can't find an answer was that, If gamma radiation has the ... 2answers 282 views ### Is there a published upper limit on the electron's electric quadrupole moment? I understand an electric quadrupole moment is forbidden in the standard electron theory. In this paper considering general relativistic corrections (Kerr-Newman metric around the electron), however, ... 3answers 162 views ### can I move the atom core only? I was wondering if it is possible to move the atom core and leave behind the electrons. I can imagine that the electrons will follow the core. But what if the speed of the core is almost the same as ... 1answer 150 views ### Stability of a rotating ring of multiple electrons at relativistic speeds There was a time when physicists where concerned about electron internal structure. The rotating ring model was one of the proposals to explain how a charge density could become stable against ... 1answer 67 views ### What does $\psi_j(r_i)$ mean? I have a mean-field Hamiltonian for N electrons. The mean-field potential felt by electron $i$ at position ${\bf r}_i$ is given by $V^{(i)}_{int}({\bf r}_i)=\sum_{j\ne i}|\psi_j({\bf r}_i)|^2$ I ... 1answer 213 views ### Excess charge on an insulator and conductor So I was recently wondering what happens to the excess charge when it is placed on an insulator or conductor e.g. rubbing two objects together. I know in the conductor the electrons are free to move ... 1answer 130 views ### Units in cgs system How do I find the dimensions of this quantity (in $cgs$)... $$\frac{4\pi me^2}{h^2n_o^{1/3}}$$ where $m$ is the mass of electron $e$ is the magnitude of electronic charge $h$ ... 1answer 176 views ### Is Fractional quantum Hall effect proof that leptons are composite particles? The fractional quantum Hall effect (FQHE) is a physical phenomenon in which the Hall conductance of 2D electrons shows precisely quantised plateaus at fractional values. Should this be considered ... 0answers 63 views ### Converting $q/m_e$ to $C/kg$ [closed] I was doing some chemistry problems when I came across a question asking to find the charge to mass ratio of an electron in $q/m_e$. Then, it told me to compare what I found to the accepted value, at ... 1answer 187 views ### drift velocity of electrons in a superconductor is there a formula for the effective speed of electron currents inside superconductors? The formula for normal conductors is: $$V = \frac{I}{nAq}$$ I wonder if there are any changes to this ... 1answer 83 views ### Why is the anode (+) in a device that consumes power & (-) in one that provides power? I was trying to figure out the flow of electrons in a battery connected to a circuit. Conventionally, current is from the (+) terminal to the (-) terminal of the battery. 
Realistically it flows the ... 1answer 99 views ### Making or Demonstrating Principle of Electron Microscope is it possible to either demonstrate the principle or make a SEM ( electron microscope ) at home or lab as an enthusiast?? and how can i start? 1answer 294 views ### Why is it that protons and electrons have exactly the same but opposite charge? [duplicate] Possible Duplicate: Why do electron and proton have the same but opposite electric charge? Doesn't it seem very curious that one is an elementary particle and the other a subatomic particle ... 1answer 148 views ### The electron jumps and lets loose photons Where is the source of the photon. If the photon propagates from within the electrons transit does this point to some sort of field? Does the energy come from a boundary being broken in laymens ... 4answers 958 views ### Bohr's model of an atom doesn't seem to have overcome the drawback of Rutherford's model We, as high school students have been taught that-because Bohr's model of an atom assigns specific orbits for electrons-that it is better than Rutherford's model. But what Rutherford failed to explain ... 1answer 169 views ### Why do electrons make a Fermi sphere? In Sommerfeld theory for metals, after determining all of the possible levels for a single electron, one says that we build up a state for a system with $N$ electrons by filling up those levels, ... 1answer 194 views ### why is total electron energy of an electron in metal negative? In my textbook, it says that any electron bound in metals, modelled as some potential well $U$, has negative total electron energy, as shown below in the figure. Why is the total electron energy ... 2answers 383 views ### How do electrons know that? The current is maximum through those segments of a circuit that offer the least resistance. But how do electrons know beforehand that which path will resist their drift the least? 2answers 236 views ### A basic confusion about what is an atom Wikipedia defines atom as The atom is a basic unit of matter that consists of a dense central nucleus surrounded by a cloud of negatively charged electrons. and defines electron as: The ... 3answers 61 views ### fit funtion to the Sun electron fluxes data I'd like to fit a function to the Sun's electrons flux data (blue dots), please note that x,y axis are in the log scale. The green dots are the "best" fit from the gnuplot program. I have taken the ... 2answers 389 views ### Is there a list of all atomic electron state transitions and the corresponding radiation emitted? Here's a quote from Wikipedia: As an example, the ground state configuration of the sodium atom is 1s22s22p63s, as deduced from the Aufbau principle (see below). The first excited state is ... 1answer 31 views ### Will an entangled idler electron induce a current in a conductor if the signal elctron's spin is measured? I'm assuming a hypothetical setup as follows: Two labs (Alice and Bob) exist. Each has one electron of an entangled pair. At Alice, the electron travels through free space towards a magnetic field of ... 2answers 298 views ### Why do electrons around nucleus radiate light according to classical physics As I navigate through physics stackexchange, I noticed Electron model under Maxwell's theory. Electrons radiate light when revolving around nucleus? Why is it so obvious? Note that I do not know ...
http://mathhelpforum.com/statistics/185293-can-someone-explain-markov-chain-me.html
Thread: 1. Can someone explain this Markov Chain to me It's an example from my textbook Suppose that whether or not it rains today depends on previous whether conditions through the last two days. Specifically, supposed that if it has rained for the past two days, then it will rain tomorrow with probability 0.7; if it rained today but not yesterday, then it will rain tomorrow with probability 0.5; if it rained yesterday but not today then it will rain tomorrow with probability 0.4; if it has not rained in the past two days, then it will rain tomorrow with probability 0.2. If we let the state at time n depend only on whether or not it is raining at time n, then the preceding model is not a Markov chain (explain why?). However, we can transform this model into a Markov chain by saying that the state at any time is determined by the weather conditions during both that day and the previous day. In other words, we can say that the process is in state 0: if it rained both today and yesterday state 1: if it rained today but not yesterday state 2: if it rained yesterday but not today state 3: if it did not rain either yesterday or today The preceding would then represent a four-state Markov Chain having a transition probability matrix I'm having a hard time understanding the different states and how the probabilities are calculated. Conversely, I had a very easy time understanding this one which is one of my homework problems. Three white and three black balls are distributed in two urns in such a way that each contains three balls. We say that the system is in state i,i=0,1,2,3, if the first urn contains i white balls. At each step, we draw one ball from each urn and place the ball drawn from the first urn into the second, and conversely with the ball from the second urn. Let X_n denote the state of the system after the nth step. Explain why {X_n,n=0,1,2,3,…} is a Markov chain and calculate its transition probability matrix. Any help is appreciated. 2. Re: Can someone explain this Markov Chain to me Have you considered multiplying hte matrix by itself to produce the results of two cycles? 3. Re: Can someone explain this Markov Chain to me Originally Posted by downthesun01 It's an example from my textbook [I]Suppose that whether or not it rains today depends on previous whether conditions through the last two days. Specifically, supposed that if it has rained for the past two days, then it will rain tomorrow with probability 0.7; if it rained today but not yesterday, then it will rain tomorrow with probability 0.5; if it rained yesterday but not today then it will rain tomorrow with probability 0.4; if it has not rained in the past two days, then it will rain tomorrow with probability 0.2. If we let the state at time n depend only on whether or not it is raining at time n, then the preceding model is not a Markov chain (explain why?). In a Markov chain, the next step depends only on the state at this step (or, equivalently, the state at this step depends only on the state at the previous step). This is not a Markov chain because the weather today (at this step) depends on the weather the previous two days. However, we can transform this model into a Markov chain by saying that the state at any time is determined by the weather conditions during both that day and the previous day. In other words, we can say that the process is in But we can make it a Markov chain by thinking of a "state" as being two days, today and yesterday. 
While the weather today has only two values, whether or not it rains today, the weather "yesterday and today" has $2^2= 4$ possible values, those given as state 0: if it rained both today and yesterday state 1: if it rained today but not yesterday state 2: if it rained yesterday but not today state 3: if it did not rain either yesterday or today The preceding would then represent a four-state Markov Chain having a transition probability matrix Suppose we are in "state 0"- it rained both yesterday and today. The probability it will rain tomorrow is, because "suppose that if it has rained for the past two days, then it will rain tomorrow with probability 0.7", 0.7. The probability that it rained today is, of course, 1- it did rain today. Therefore, the probability that it will rain today and tomorrow, which will be state 0 "it rained yesterday and today" in the next step, tomorrow, is (1)(0.7)= 0.7. Do you see now what those eight "0"s mean? Those are the four cases where "yesterday's weather", as seen from the next step (tomorrow), is NOT the same as "today's weather". Tomorrow's weather may be either rain or not rain, but today's weather is already determined- the probability that it did rain today is either 1 or 0, so we are always multiplying the probabilities for "tomorrow" by 1 or 0. I'm having a hard time understanding the different states and how the probabilities are calculated. Conversely, I had a very easy time understanding this one which is one of my homework problems. Three white and three black balls are distributed in two urns in such a way that each contains three balls. We say that the system is in state i,i=0,1,2,3, if the first urn contains i white balls. At each step, we draw one ball from each urn and place the ball drawn from the first urn into the second, and conversely with the ball from the second urn. Let X_n denote the state of the system after the nth step. Explain why {X_n,n=0,1,2,3,…} is a Markov chain and calculate its transition probability matrix. Any help is appreciated. 4. Re: Can someone explain this Markov Chain to me Thank you. I think I have a bit of a better understanding, but still do not fully comprehend. Maybe I missed it in your answer, but it seems like rows 1 through 4 represent states 0 through 3. If so, what do columns 1 through 4 represent? Why is row 1 {0.7,0,0.3,0} instead of {0,0.7,0.3,0} or some other combination? For example, on my homework problem, the transition probability matrix ended up looking like this. With this matrix, it's quite easy to see that, for example, p(0,0) is going from 0 white balls to 0 white balls which has a probability of 0, and p(2,0) is going from having two white balls in urn one to having 0 which is also 0% probability. Is there a way of viewing this weather example in a similar, easy to follow fashion? I apologize for inserting images like that, but I can't seem to get latex to create tables for a matrix anymore.
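Since the matrices in this thread were posted as images and did not survive, here is a short Python sketch of the weather chain. It is my own reconstruction from the probabilities quoted in post 1 (0.7, 0.5, 0.4, 0.2), and it matches the row {0.7, 0, 0.3, 0} cited in post 4. It also answers the question about columns: rows index today's state 0–3 and columns index tomorrow's state 0–3, just as in the urn example.

```python
import numpy as np

# States exactly as in the thread:
#   0: rained yesterday and today      1: rained today but not yesterday
#   2: rained yesterday but not today  3: rained on neither day
# Rows = today's state, columns = tomorrow's state.
P = np.array([
    [0.7, 0.0, 0.3, 0.0],   # from state 0: rain tomorrow -> 0, no rain -> 2
    [0.5, 0.0, 0.5, 0.0],   # from state 1: rain tomorrow -> 0, no rain -> 2
    [0.0, 0.4, 0.0, 0.6],   # from state 2: rain tomorrow -> 1, no rain -> 3
    [0.0, 0.2, 0.0, 0.8],   # from state 3: rain tomorrow -> 1, no rain -> 3
])

assert np.allclose(P.sum(axis=1), 1.0)   # every row is a probability distribution

# Eight entries are forced to be zero: tomorrow's "yesterday" is today's
# "today", which is already known, so each state can only move to two others.
print(P)

# Post 2's suggestion: squaring the matrix gives the two-day-ahead probabilities.
print(np.round(np.linalg.matrix_power(P, 2), 3))
```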
http://math.stackexchange.com/users/781/asmeurer?tab=activity
# asmeurer reputation 724 bio website [email protected] location New Mexico age member for 2 years, 9 months seen yesterday profile views 246 I am a graduate student in mathematics at NMSU. I am the lead developer for the SymPy project. I will gladly license my code snippets under something more permissive than StackExchange's CC-BY-SA if you want. Consider my work here to be public domain. | | | bio | visits | | | |------------|------------------|------------|-----------------------------|------------|-------------------| | | 1,737 reputation | website | [email protected] | member for | 2 years, 9 months | | 724 badges | location | New Mexico | seen | yesterday | | # 228 Actions | | | | |-------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 2d | awarded | Good Answer | | May12 | comment | How to represent the floor function using mathematical notation?With the note that the $\max$ is well-defined because the set is bounded from above (by $x$), and every set of integers bounded above has a maximum element by the well-ordering principle. | | May8 | awarded | Caucus | | Apr26 | asked | Understanding a proof of Diaconescu's theorem | | Apr26 | awarded | Nice Answer | | Apr6 | comment | Let $A^{27}=A^{64}=I$, show that $A=I$This is a great way to understand the Euclidean algorithm! | | Mar27 | answered | All natural numbers are equal. | | Mar26 | comment | How to show $(1/n!)^{1/n}$ goes to $0$ as $n$ goes to infinity?I guess it's basically the proof from Ofir's answer. The interesting thing about this one is that you can get it easily just from the definition of $\ln{x}$ as $\int_1^x{\frac{dt}{t}}$. My professor showed us it as part of deriving facts about exponentials and trig functions from the base definitions. The 4 comes from the fact that you can easily show that $\ln(4) > 1$ just from the integral definition of $\ln$ and a simple Riemann sum argument. | | Mar26 | comment | How to show $(1/n!)^{1/n}$ goes to $0$ as $n$ goes to infinity?Oh, sorry, I remembered wrong. The bound is $(n/4)^n\geq n!$. It comes from taking $\ln$ of both sides (I can put the proof in an answer if you are interested). | | Mar24 | comment | How to show $(1/n!)^{1/n}$ goes to $0$ as $n$ goes to infinity?I believe if you replace $e$ with 2 in the inequality, you can prove this without integration. | | Mar22 | answered | What was the first bit of mathematics that made you realize that math is beautiful? (For children's book) | | Mar22 | comment | What was the first bit of mathematics that made you realize that math is beautiful? (For children's book)Actually, the one that's interesting is the sum of the first $n$ odd integers. | | Feb15 | awarded | Necromancer | | Feb12 | comment | Unusual 5th grade problem, how to solve itAh, missed that. | | Feb12 | comment | The kernel of a continuous linear operator is a closed subspace?Wouldn't your argument about a convergent subsequence just imply that ker L is not compact, not open? 
| | Feb12 | comment | Unusual 5th grade problem, how to solve itI think it's less than $n!$ if you still consider $\frac{1}{2} + \frac{1}{2}$ and $\frac{1}{2} + \frac{1}{2}$ to be the same. | | Feb12 | comment | The kernel of a continuous linear operator is a closed subspace?You don't need to assume that $L \neq 0$. If $L = 0$, then it is automatically continuous, and the kernel, the whole space, is closed. | | Jan19 | awarded | Nice Question | | Jan1 | awarded | Nice Question | | Dec24 | comment | Purely “algebraic” proof of Young's Inequality@AndresCaicedo excellent blog post! |
http://physics.aps.org/articles/large_image/f1/10.1103/Physics.3.87
Figure 1: The chiral tetragonal phase of elemental tellurium as a result of multiple charge order. (a) The distortion pattern of a one-dimensional chain with a two-third filled band, described by the displacements $u=cos(qx)$, with $q=2π/3a$. (b) The same one-dimensional pattern repeated throughout a three-dimensional simple cubic lattice. The displacements are given by $u=cos(Q⋅r)x̂$, with $Q=(q,q,q)$ and $x̂=(1,0,0)$. (c) The chiral combination of three differently polarized one-dimensional patterns with a relative phase difference of $ϕ=2π/3$. The distortion pattern $u=(cos(Q⋅r),cos(Q⋅r+ϕ),cos(Q⋅r+2ϕ))$ coincides with the tetragonal lattice structure of $Te$. The chirality of the pattern is indicated by the corkscrewlike arrow following the double bonds in the final structure.
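A minimal numpy sketch of the displacement fields described in the caption may help make the formulas concrete. The lattice constant $a=1$ and the 3×3×3 block of sites are arbitrary illustration choices, not taken from the source; only the expressions for $q$, $Q$, $\phi$ and the panel (b)/(c) patterns come from the caption.

```python
import numpy as np

# Displacement patterns from the caption, evaluated on a small block of a
# simple cubic lattice (lattice constant a = 1 is an arbitrary choice).
a = 1.0
q = 2 * np.pi / (3 * a)
Q = np.array([q, q, q])
phi = 2 * np.pi / 3

sites = np.array([(i, j, k) for i in range(3) for j in range(3) for k in range(3)],
                 dtype=float) * a
phase = sites @ Q

# Panel (b): the one-dimensional pattern repeated through the cube (u along x only).
u_b = np.stack([np.cos(phase), np.zeros_like(phase), np.zeros_like(phase)], axis=1)

# Panel (c): the chiral combination of three patterns shifted by phi = 2*pi/3.
u_c = np.stack([np.cos(phase), np.cos(phase + phi), np.cos(phase + 2 * phi)], axis=1)

# Since q*a = 2*pi/3, the pattern repeats every 3 sites along each cubic axis.
print(np.round(u_c[:9], 3))
```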
http://mathoverflow.net/questions/111199/the-influence-of-chis-on-complex-zeros-of-frac-zeta-overlines-zeta
## The influence of $\chi(s)$ on complex zeros of $\frac{\zeta(\overline{s})}{\zeta(1-s)} \pm \frac{\zeta(s)}{\zeta(1-\overline{s})}$ ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I was exploring the formula: $$g(s)_{\pm} := \displaystyle \frac{\zeta(\overline{s})}{\zeta(1-s)} \pm \frac{\zeta(s)}{\zeta(1-\overline{s})}$$ and found that for all $\Re(s) \ne \frac12$: $|g(s)_{+}|$ has pairs of complex zeros that always encapsulate a $\rho$ between them. $|g(s)_{-}|$ has complex zeros near the known $\rho$'s (call them $\mu$'s). Similar to the $\rho$'s (if they would exist for $\Re(s) \ne \frac12)$, all $\mu$-roots are equal for ${\mu,1-\mu,\overline{\mu},1-\overline{\mu}}$. The $\mu$'s obviously vanish without a trace when $\Re(s)=\frac12$, however the $\mu$-zeros for $\displaystyle \lim_{\Re(s) \to \frac12}$ clearly and smoothly converge towards the known $\rho$'s. The $\mu$'s can also be computed by putting $|g(s)_{+}| - 2|\chi(s)|=0$ with $\chi(s)=2^s \pi^{s-1} \sin(\frac{\pi s}{2}) \phantom. \Gamma(1-s)$. It is conjectured (or maybe already proven?), that $\chi(s)$ does not contain any information about the $\rho$'s, however the formula above suggests that for all $\Re(s) \ne \frac12$, there actually is information in $\chi(s)$ about the $\mu$'s. $\chi(s)$ apparently plays a complementary role for $\Re(s)=\frac12$ and $\Re(s) \ne \frac12$. I realise this is a very broad question, but could this complement in any way help explain that the only way to make $\chi(s)$ fully independent from the $\rho$'s, is when $\Re(s)=\frac12$? -
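The quantities above are easy to explore numerically. The following mpmath sketch is my own and only evaluates the formulas as written; the choice of the off-critical line $\Re(s)=0.6$ and the scan window near the first zero $\rho \approx \tfrac12 + 14.1347i$ are arbitrary, and the output is meant as a way to probe the claimed $\mu$-zeros and the combination $|g(s)_{+}| - 2|\chi(s)|$, not as evidence either way.

```python
from mpmath import mp, mpc, conj, gamma, pi, sin, zeta

mp.dps = 20

def chi(s):
    """chi(s) = 2^s * pi^(s-1) * sin(pi*s/2) * Gamma(1-s), as in the question."""
    return mpc(2)**s * pi**(s - 1) * sin(pi * s / 2) * gamma(1 - s)

def g(s, sign):
    """g_+ (sign=+1) or g_- (sign=-1) from the question."""
    return zeta(conj(s)) / zeta(1 - s) + sign * zeta(s) / zeta(1 - conj(s))

# Scan |g_-| on the off-critical line Re(s) = 0.6 near the first zeta zero.
sigma = 0.6
ts = [mp.mpf(14) + mp.mpf(k) / 500 for k in range(150)]   # t from 14.000 to 14.298
t_min = min(ts, key=lambda t: abs(g(mpc(sigma, t), -1)))
s_min = mpc(sigma, t_min)
print("t minimising |g_-| on this line:", t_min)
print("|g_+| - 2|chi| there:", abs(g(s_min, +1)) - 2 * abs(chi(s_min)))
```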
http://mathoverflow.net/questions/11591/suggestions-for-a-good-measure-theory-book/71592
## Suggestions for a good Measure Theory book ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I have taken analysis and have looked at different measures, but I am currently looking at realizing a certain problem in a different light and feel that I need a better background in various measures that have been used / discovered / et cetera in order to really move my (very basic) research forward. So, I am curious if anyone can suggest a good book on Measure Theory that has theory and perhaps a NUMBER of examples and uses of various measures. Thanks for any help; contact privately if you feel that you need more info so as to recommend better; I will explain what I am thinking about for my research -- I'm a n00b to research in math so it's probably not that interesting ;-) but who knows. - 5 Community wiki, please!(edit and check the box). – Anweshi Jan 12 2010 at 21:59 3 You might try Folland's "Real Analysis: Modern Techniques and Their Applications". – Ben Linowitz Jan 13 2010 at 13:04 18 I am surprised that this got so many answers without anyone asking (publicly) the questioner what s/he was looking for in a measure theory text. Without that information, the question becomes "Please list some measure theory books that some people have liked", which is pretty close to just "Please list some measure theory books". Even a community wiki question should have more of a focus than this, IMO. – Pete L. Clark Feb 1 2010 at 8:59 ## 17 Answers Jürgen Elstrodt - Maß- und Integrationstheorie (only in German) Fremlin - Measure Theory (freely available in the web space, contains pretty much every significant aspect of measure theory in appropriate depth) - 1 The second one is great. Not able to look into the first one. – Anweshi Jan 14 2010 at 0:13 2 +1. Elstrodt is one of the best math books ever written. – Johannes Hahn Mar 5 2010 at 13:59 There is a typo: It should be Maß- und Integrationstheorie (without the extra "g") - the link to google books: books.google.de/… Unfortunately I don't have the rights to include this into the above article, perhaps somebody could do it... Thanks – vonjd Mar 5 2010 at 19:05 You are right, there was a typo, I corrected it. Thanks. – ex falso quodlibet Mar 6 2010 at 15:04 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Bartle, The Elements of Integration and Lebesgue Measure - Really nice book-my old analysis mentor Gerald Itzkowitz learned the subject from the first edition of that book and the lecture notes of Eberlien at the University of Rochester. – Andrew L Apr 7 2010 at 21:49 1. Rudin, Real and Complex Analysis. 2. Royden, Real Analysis. 3. Halmos, Measure Theory. - 3 I think one shouldn't post vague negative comments about books without any explanation whatsoever. Royden provides a fairly readable and nice overview over graduate real analysis. And using caps lock is a lot like shouting. – Michael Greinecker Jul 15 2010 at 12:35 (Michael Greinecker's comment refers to an off-topic comment that I have just deleted.) – S. Carnahan♦ Aug 11 2010 at 13:00 Folland's Real Analysis is nice and has some pretty good exercises which often elucidate important examples. Also, it contains some applications to other fields. - I "learned" measure theory and analysis from this book and I have to say it is horrible. The level of sophistication is way to high. 
It's nice if you've seen the material once before and as a second pass it probably wouldn't be so bad but for a beginner it's plain horrible. – david karapetyan Feb 1 2010 at 8:13 I agree with you in that this is not a begginer's book, but I don't think this justifies saying the book is horrible. I mentioned it because Andrew asked for a reference with examples, which can be found, if not in the text, in the exercises. This is probably not the best book to start learning measure theory (more basic references were already cited before) but it is certainly a valuable reference, IMHO. – Rodrigo Barbosa Feb 25 2010 at 13:44 It's actually a very good book,Folland-the second edition is really the best.The bibligraphical notes really help one assimilate the enormous amount of material by providing all important motivation and context. "adult Rudin" has SOME of this,but it's so concise and packed,the ideas have no room to breathe. It's almost like Rudin was delibrately trying to see how much he could pack into a one year course. – Andrew L Apr 8 2010 at 3:34 Real Analysis: Measure Theory, Integration, and Hilbert Spaces by Elias Stein. Lots of problems. - Another VERY good book.It's kind of expensive for what you're getting,though,sadly.See if you can borrow a copy. – Andrew L Apr 7 2010 at 21:37 I am a huge fan of Frank Jones's book "Lebesgue Integration on Euclidean Space". It's not as well known as most of the other books mentioned, but I like it for the following reasons. 1. It is extremely well-written. 2. On the one hand, it works in $\mathbb{R}^n$ from the offset rather than starting with $\mathbb{R}$ (though really this is no more difficult, and it is good to train students not to be scared of higher dimensions; also, it makes it a bit easier to draw pictures). However, it is still quite a bit more gentle than most other books, and its perspective is extremely concrete. 3. It includes a lot of classical material that many books ignore. 4. Its exercises are fantastic. I learned the subject from this book back when I was a 2nd year undergraduate (back in 1999!). However, though I now own many other books it is still the one I go back to when I want to remind myself about the basic facts of life about integration theory or measure theory or Fourier analysis. - 1 GREAT BOOK FOR UNDERGRADUATES,ANDY. If you can afford it and you're learning it on your own,no better choice. – Andrew L Apr 7 2010 at 21:37 Agreed. One of the things I really like about Jones' book is that many exercises are given right after the definitions and the theorems (rather than at the end of the chapter), which allows you to grasp the concepts taught just after they've been presented. I really don't understand why many books in mathematics prefer the "long list of exercises at the end of the chapter" approach. All in all, it's a great introduction to measure theory. For the more advanced stuff (generalities on Radon measures, $L^p$ spaces, etc.), I recommend Folland's book, as was mentioned here already. – Mark Schwarzmann Aug 11 2010 at 11:10 This is an outstanding book. I learned a tremendous amount from it as an undergrad. That said, it doesn't say much about measures other the one referenced in the title (presumably to keep the prerequisites minimal). – Frank Thorne Jul 29 2011 at 21:27 D. Cohn, Measure Theory, Birhkäuser - A very good book is "Measure and Integration Theory" from Heinz Bauer, especially if you are planning to study probability theory. 
One of its strengths is that the theory is first developed without using topology and then applied to topological spaces. In my opinion this leads to a better understanding of Radon measures for example. Its style is also very concise and precise. - "A Modern Theory of Integration" by Robert G. Bartle is an excellent introduction to the theory of gauge integrals which subsumes and generalizes the usual measure theory of Lebesgue. - 2 A GREAT book,but it's really kind of a seperate subject,David. A subject well worth learning and this is the book to do it with.But I don't think this is exactly what's being asked for. – Andrew L Apr 7 2010 at 21:36 Have a look at " An Introduction to Measure and Integration" by Inder K. Rana, Graduate Series in Mathematics 45, American Mathematical Society,2002 - measure theory should be learned first from Saks - If you want to go deeper into probability theory: I would also recommend Heinz Bauer's "Measure and integration theory". But I found it a bit dry (in German). Also Kai Lai Chung's "A Course in Probability Theory" is excellent. If you're more interested in (functional) analysis and want just a short intro to measue theory: As an alternative to Rudin's "Real and Complex Analysis" I warmly recommend the recent book by Jürgen Jost "Postmodern Analysis", which includes an intro to PDE. I wish I had that when I was young... Right next to these in my library is Segal & Kunze "Integrals and Operators" and Robert Geroch's "Mathematical Physics" (no physics inside). - • Principles of Real Analysis by Charlambos Aliprantis and Owen Burkinshaw • Introduction to Measure theory by the great Terence Tao :) which is available online here - 1 also introduction to radical theory of lebesgue integration. D.M.Bressoud published by the MAA. – Chandrasekhar Jul 16 2010 at 11:04 The online version of Tao's book is merely a first draft,Chandrasekhar. The AMS will publish a finished version early next year. From the draft,though,it looks like a terrific addition to the textbook literature-the influence of Elias Stein on Tao's book is very evident. – Andrew L Jul 29 2011 at 22:33 If the focus is on measures on $\mathbb{R}^n$, Measure theory and fine properties of functions by Lawrence C. Evans and Ronald F. Gariepy. Otherwise Linear Operators. Part I: General Theory by Nelson Dunford and Jacob T. Schwartz, chapter 'Integration and Set Functions'. - I am late to the party, but let me say that while Evans & Gariepy is a great book, it is much more about fine properties of functions than it is about measure theory, so it hardly fits the bill for this question. On the other hand, it does provide a concise introduction to Hausdorff measures, which is very useful if that is what you need. – Harald Hanche-Olsen Jun 13 2011 at 7:49 Well, I personally HATE Halmos' Measure Theory, even though an entire generation grew up on it. My favorite book on measure and integration is available in Dover paperback and is one of my all time favorite analysis texts: Angus Taylor's General Theory Of Functions And Integration. Lots of wonderful examples and GREAT exercises along with discussions of point set topology, measure theory both on $\mathbb{R}$ and in abstract spaces and the Daneill approach. And all written by a master analyst with lots of references for further reading. It's one of my all time favorites and I heartily recommend it. Folland's Real Analysis is a fine book, but it's much harder and it's really more of a general first year graduate analysis course. 
On the plus side, it does have many applications, including probability and harmonic analysis. It's definitely worth having, but it's going to take a lot more effort then Taylor. The IDEAL thing to do would be to work through both books simultaneously for a fantastic course in first year graduate analysis. And please don't torture yourself with Rudin's Real And Complex Analysis. It's sole purpose seems to be to see how much analysis can be crammed incomprehensibly into a single text. Folland is the same level and is much more accessible. That should get you started. Good luck! - 5 Rudin's book is a classic; deservedly so IMO. The proofs emphasize the important ideas, so it is particularly good for someone to go back to after becoming a professional. – Bill Johnson Apr 8 2010 at 1:40 Maybe so,Bill-but it's ridiculously concise and abstract and I just don't think it ends up teaching you a lot when you're a beginner unless you have a really good teacher who helps you with The Big Picture. – Andrew L Apr 8 2010 at 3:28 @Andrew L: I don't think Rudin (resp. Halmos) wrote Real and Complex (resp. Measure Theory) as bedtime reading for beginners, but during its four decades in print it has proven to be an important resource for mature students with a serious interest in analysis and (as Bill mentions) as a reference for analysts. I don't think it's productive to dismiss a standard (and valuable) text, even though it almost certainly isn't the ideal one for this stage of the OP's analysis education. – Z Norwood Jan 25 2011 at 4:12 I would surprisingly point to Shiryaev's "Probability", which doesn't go into details but definitely motivates the introduction of the notion of measure and explains of what use are the different properties ; the other suggestions given in the other answers will then plug the holes, if need be. - Personal favorites, in suggested reading order: • Bartle, The Elements of Integration. Exercises are all doable and at about the same level. The best first-time text. • Royden. I loved this book: the exercises vary from easy to quite tricky (when you need to sleep on it!or take a looong shower) and working as many exercises as possible, especially the hard ones, is a great way to really understand the real numbers. • Rudin. The proofs are at times way too slick, formulas popping out with no motivation, but that should be no real (or complex?) problem after Royden, and it gives a beautiful overview of the essentials; the topology part is great too. The problems are great and often quite challenging. • Oxtoby, Measure and Category. This is just a fantastic little book. After you have studied the others, you can read through this like a novel and everything will start to fit together much more. Pure inspiration. • Dunford and Schwarz. Some encounter with this is necessary, especially after you've also been through Rudin's Functional Analysis. • Lamperti's "Probability". This could be called "Probability for Analysts" and it is a beautiful little book. • Billingsley's Ergodic Theory and Information. Now you're ready to see what some of that abstract stuff is good for, and this beautiful text is an excellent choice. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9505605697631836, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/determinant+polynomials
# Tagged Questions 1answer 31 views ### If $f(X) = a_0 + a_1 X + a_2 X^2 \in \mathbb{F}[X]$ then show $f$ is uniquely determined by $f(x)$, $f(y)$, $f(z)$? This is the exact question: It's part(ii) that I don't understand - what does it mean and what is it asking me to do? How would I go about constructing a proof? Any help would be much appreciated. 2answers 146 views ### Is a linear combination of minors irreducible? Let $X=(X_{ij})_{1\le i,j\le n}$ be a matrix of indeterminates over $\mathbb C$. For choices $I,J\subseteq\{1,\ldots,n\}$ with $|I|=|J|=k$ denote by $X_{I\times J}$ the matrix \$(X_{ij})_{i\in I,j\in ... 1answer 49 views ### Why Vandermonde's determinant divides such determinant? Assume that W(x_1,...,x_n;k)=\left [ \begin{array}{rrrrrrrr} 1 & x_1 &... & x_1^{n-2} & x_1^k \\ 1 & x_2 &... & x_2^{n-2} & x_k \\ & & \ddots \\ 1 & ... 0answers 32 views ### Why the ith coefficient of $|\lambda I-A|$ is the sum of all $i$-th order principle minors of $A$? I come across a theorem that $f(\lambda )=|\lambda I-A|$, which equals to $\lambda ^{n}-a_{1}\lambda ^{n-1}+\alpha _{2}\lambda ^{n-2}-...(-1)^{n}a_{n}$ where $a_{i}$ is the sums of all ith order ... 2answers 37 views ### why this is correct: $\det(C+Di)$ is not zero, then there exists some real number $a$ such that $\det(C + a D)$ is not zero I wonder why the following statement is correct: supposing $C$ and $D$ are two real matrix, if the determinant of the complex matrix $C + D i$ is not zero, then there exists some real number $a$ ... 1answer 37 views ### divisibility of polynomials and determinant relations Let $A$ be an integral domain and $f(x), g(x) \in A[x_1,\cdots,x_n]$. Write $f(x)=\sum \alpha_{\omega} x^{\omega}, g(x) = \sum \beta_{\omega} x^{\omega}$ where $\omega = (\omega_1,\cdots,\omega_n)$ ... 3answers 104 views ### Correlation between polynomial equations and matrix determinants Expanding $p(x)=(ax-b)(cx+d)$ we get $acx^2+(ad-bc)x-bd$. Notice the determinant of the matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is $ad-bc$ exactly like the constant of $x$ ... 4answers 210 views ### Vector space of polynomials over $\mathbb{R}$ with degree $\leqslant n-1$ Let $P \in \mathbb{R}_{n-1}[X]$ be a polynomial of degree $n-1 \geqslant 0$. Let $\mathbb{R}_{n-1}[X]$ be the vector space of polynomials with degree $\leqslant n-1$ over $\mathbb{R}$. Show ... 1answer 92 views ### expressing product as Vandermonde determinants Is it possible to express the product: $$\frac{\prod_{i < j} (a_i - a_j)(b_i - b_j) }{\prod_{i,j} (a_i - b_j) }$$ as the determinant of a single matrix ? This comes from a physics paper. Should ... 1answer 70 views ### The derivative of characterestic polynomial? Let $A\in M_{n}(R)$ and $f(x)$ be the characterestic polynomial of $A$. Is it true that $f'(x)=\sum_{i=1}^{^{n}}\sum_{j=1}^{n}\det(xI-A(i\mid j))$ which $A(i\mid j)$ is a submatrix of $A$ obtained by ... 3answers 125 views ### Minimal polynomial, determinants and invertibility I need to prove: if a matrix $A$ is invertible, then the minimal polynomial $m_a(0) \neq 0$ There is one definition I am unsure of or need help making more clear. I will proceed with proof by ... 1answer 114 views ### Rank of a Vandermonde Matrix with additional weighted columns A Vandermonde matrix: \$\left(\begin{array}{ccc} 1 & \alpha_{0} & \dots & \alpha_{0}^{n} \\ 1 & \alpha_{1} & \dots & \alpha_{1}^{n} \\ \vdots & \vdots & \ddots & ... 
0answers 383 views ### quick ways to »verify« determinant, minimal polynomial, characteristic polynomial, eigenvalues, eigenvectors … What are easy and quick ways to »verify« determinant, minimal polynomial, characteristic polynomial, eigenvalues, eigenvectors after calculating them? So if I calculated determinant, minimal ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.861794650554657, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/15537-probability-generating-function-poisson-random-variable.html
# Thread: 1. ## probability generating function of a poisson random variable i should start by asking is anyone on the forum well versed in the coalescent theory? i have a problem in a derivation if the number of mutations is given by $P_n(z) = E exp((z-1)\frac{\theta}{2}(nT_n + ..... + 2T_2))$ as one can see this is the probability generating function of a poisson random variable with random mean. where $\lambda = \frac{\theta}{2}(nT_n + ..... + 2T_2)$ now from here i can write this as a product so $P_n(z) = \prod_{j=2}^n E\left(e^{(z-1)\frac{\theta}{2}_j T_j}\right)$ Now im confused what to do. If i have the probability generating function how do i find the density function (derivative?) the next step in the derivation should be $P_n(z) = \prod_{j=2}^n \left(1 - (z-1)\cdot \frac{j \theta /2}{j(j-1)/2}\right)^{-1}$ I cant for the life of me figure out how this was found from the pgf. PLEASE ANY IDEAS ARE WELCOME, SO IF NO ONE KNOWS THE ANSWER SUGGEST ANYTHING. many thanks chogo 2. perhaps i should rather ask if i had $P(z) = E\left(e^{(z-1)\lambda}\right)$ how would i remove the E 3. Originally Posted by chogo if i had $P(z) = E\left(e^{(z-1)\lambda}\right)$ how would i remove the E Are you sure that is the right formula? Because there is no random variable in $e^{(z-1)\lambda},$ it is a constant with respect to the expectation and $P(z) = E\left(e^{(z-1)\lambda}\right) = e^{(z-1)\lambda}.$ When specifying an expectation, the random variable involved should be made clear, for example, by writing $P(z) = E_X z^X$ when the random variable is $X.$ 4. thank you jakeD sorry yeah i should not have put the expectation there. My bad. Was just trying to write something simpler to see if anyone had any suggestions The expectation is valid in my original equation, where the random variable is $T_j$ the thing is how did the person who did that derivation remove the expectation? and arrive at the second formula. -> any suggestions? i thank you so much for your help 5. Originally Posted by chogo thank you jakeD sorry yeah i should not have put the expectation there. My bad. Was just trying to write something simpler to see if anyone had any suggestions The expectation is valid in my original equation, where the random variable is $T_j$ the thing is how did the person who did that derivation remove the expectation? and arrive at the second formula. -> any suggestions? i thank you so much for your help What is the distribution of the random variable $T_j ?$ You haven't said. The expectation uses that distribution. Let $M_{T_j}(t)$ be the moment generating function of $T_j .$ Then comparing the first and second equations $E_{T_j} (e^{(z-1)\theta T_j / 2j}) = M_{T_j}((z-1)\theta/2j ) = (1 - t 2j/(j-1))^{-1}$ where $t = (z-1)\theta / 2j.$ Then $M_{T_j}(t) = (1 - t 2j/(j-1))^{-1} = \frac{(j-1)/2j}{(j-1)/2j - t}$ which is the MGF of a Gamma(1,(j-1)/2j) distribution, that is, an exponential distribution with parameter $\lambda = (j-1)/2j.$ So it appears that is the distribution of $T_j.$ Is that correct? 6. My original comment won't work, but I have to wonder if "E" isn't supposed to be a " $\Sigma$?" -Dan 7. firstly JakeD and topsquark i cant thank you enough Top shark, yes its definitley an E yes $T_j$ is assumed to be exponentially distributed also yes this term does cancel out and becomes $P_n(z) = \prod^n_{j=2}\left(1-\frac{(z-1)\theta}{j-1}\right)^{-1}$ did he substitute a value for $T_j$? 
i dont know why the original equation which is of the form of a poisson generating function becomes what is is now, which you said is a gamma(1,1). im almost 100% certain this is not wrong, as its a very famous theory developed in mathematical biology and very well founded. thank you so much for you help again its really much appreciated. If you want i can write the entire derivation our for you guys, will this help? 8. what did you do here this seems to be exactly what i need 9. Originally Posted by chogo firstly JakeD and topsquark i cant thank you enough Top shark, yes its definitley an E yes $T_j$ is assumed to be exponentially distributed also yes this term does cancel out and becomes $P_n(z) = \prod^n_{j=2}\left(1-\frac{(z-1)\theta}{j-1}\right)^{-1}$ did he substitute a value for $T_j$? i dont know why the original equation which is of the form of a poisson generating function becomes what is is now, which you said is a gamma(1,1). im almost 100% certain this is not wrong, as its a very famous theory developed in mathematical biology and very well founded. thank you so much for you help again its really much appreciated. If you want i can write the entire derivation our for you guys, will this help? This post appears to be looking at my post before I finished editing it. So it responds to comments of mine I resolved and edited out. 10. Originally Posted by JakeD Let $M_{T_j}(t)$ be the moment generating function of $T_j .$ Then comparing the first and second equations $E_{T_j} (e^{(z-1)\theta T_j / 2j}) = M_{T_j}((z-1)\theta/2j ) = (1 - t 2j/(j-1))^{-1}$ where $t = (z-1)\theta / 2j.$ Originally Posted by chogo what did you do here this seems to be exactly what i need The explanations before and after this equation are needed. I used the definition of moment generating function and then equated corresponding parts of your first and second equations while substituting in the variable $t.$ 11. thank your help is invaluable. but one last problem. Like you before i used the definition $M(t) = E(e^{t,x})$ to go from $E_{t_j}(e^{(z-1)\theta T_j/2j}) = M_{t_j}((z-1)\theta T_j/2)$ but when you said you compared the forms of the first and second equations to arrive at the third equation im not fully satistifed. Is there a formal definition which says the moment generating function i have is equal to $(1-t2_j / (j-1))^{-1}$ where 12. Originally Posted by JakeD What is the distribution of the random variable $T_j ?$ You haven't said. The expectation uses that distribution. Let $M_{T_j}(t)$ be the moment generating function of $T_j .$ Then comparing the first and second equations $E_{T_j} (e^{(z-1)\theta T_j / 2j}) = M_{T_j}((z-1)\theta/2j ) = (1 - t 2j/(j-1))^{-1}$ where $t = (z-1)\theta / 2j.$ Then $M_{T_j}(t) = (1 - t 2j/(j-1))^{-1} = \frac{(j-1)/2j}{(j-1)/2j - t}$ which is the MGF of a Gamma(1,(j-1)/2j) distribution, that is, an exponential distribution with parameter $\lambda = (j-1)/2j.$ So it appears that is the distribution of $T_j.$ Is that correct? Originally Posted by chogo thank your help is invaluable. but one last problem. Like you before i used the definition $M(t) = E(e^{t,x})$ to go from $E_{t_j}(e^{(z-1)\theta T_j/2j}) = M_{t_j}((z-1)\theta T_j/2)$ but when you said you compared the forms of the first and second equations to arrive at the third equation im not fully satistifed. 
Is there a formal definition which says the moment generating function i have is equal to $(1-t2_j / (j-1))^{-1}$ where I was trying to deduce what the distribution of $T_j$ was because you didn't say what it was. So I deduced it was an exponential distribution with parameter $\lambda = (j-1)/2j.$ Is this correct? Now you know what the distribution of $T_j$ is. So you can say what the moment generating function for $T_j$ is. I was working backwards to deduce the distribution; you can work forwards knowing the distribution. I got that $M_T (t) = \frac{\lambda}{\lambda - t} = (1 - t/\lambda)^{-1}$ when $T \sim \text{Exp}(\lambda)$ from a probability text. See Exponential distribution - Wikipedia, the free encyclopedia . 13. sheer brilliance! thanks alot for the help
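For reference, the exponential MGF that post 12 quotes from "a probability text" can be verified directly. This short derivation is an added note rather than part of the original thread; it uses the same parametrization $T \sim \text{Exp}(\lambda)$ with density $\lambda e^{-\lambda x}$ on $x \geq 0$:

$M_T(t) = E\left(e^{tT}\right) = \int_0^{\infty} e^{tx}\,\lambda e^{-\lambda x}\,dx = \lambda \int_0^{\infty} e^{-(\lambda - t)x}\,dx = \frac{\lambda}{\lambda - t} = \left(1 - \frac{t}{\lambda}\right)^{-1}, \qquad t < \lambda,$

which is exactly the form matched against $(1 - t\,2j/(j-1))^{-1}$ in posts 5 and 12, giving the exponential parameter $\lambda = (j-1)/2j$ for $T_j$.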
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 52, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9428588151931763, "perplexity_flag": "head"}
http://mathhelpforum.com/number-theory/148143-continued-fraction-expansion-problems.html
# Thread: 1. ## Continued Fraction Expansion Problems Hello All, I have some questions regarding Continued Fraction Expansion. As an initial reference, these problems have come from my textbook in the "additional examples" area. My book doesn't really give the best explanation of those examples all the time, and as a result I have a few questions. Q1. The numbers a_k can be found for 113/50 by using a continued fraction algorithm. Note that 113/50 is rational, and as a result it will have to terminate. Can anyone help me find this a_k ? Q2. Given the following, one should be able to form a continued fraction expansion for $e$. We initially write (x,y,z,....) for the continued fraction of $x + (1/(y+(1/(z+(1/...)))))$. From here, the continued fraction expansion for $2.718281828459045$ is $(2,1,2,1,1,4,1,1,6,1,1,8,1,1)$. Knowing this, how many terms are necessary in order to accurately get 4 decimal places (2.718) ? Q3. Assume that x=Sqrt[3]-1. It can be proven that x = 1/(1+(1/(2+x))). Can somebody prove this? After it has been proven, the proof can be used to find the continued fraction expansion for x (Can somebody show this?). Next, find the continued franction expansion for Sqrt[3]. Once completed, a good accuracy check is that the first 6 or 7 terms of the expansion should give a reasonable approximation for Sqrt[3]. (The underlined portions are the questions I am asking). Q4. What is the continued fraction expansion of Sqrt[5]? My book notes that it might be helpful if "you find the continued fraction explansion of x where 0<x<1 and x is related to Sqrt[5]. Q5. What is the continued fraction expansion of Sqrt[7]? My book notes that part of the expansion is (2,1,1,1,4,1,1,1,...). Is the next entry in this expansion a 2 or a 4 ? Also, it mentions that you can find some number like what I had mentioned in Q4, and once we have our answer we should be able to show why it is correct. Any help is appreciated! 2. first question I will show you how u can find the continued fraction for 133/50 $\frac{113}{50} = 2.26$ i=2 $2.26 - 2 = .26$ $\frac{26}{100}$ $\frac{100}{26} = 3+ \frac{22}{26}$ 3 is $a_1$ $\frac{26}{22} = 1 + \frac{4}{22}$ 1 is $a_2$ $\frac{22}{4} = 5 + \frac{2}{4}$ $a_3 = 5$ $\frac{4}{2} = 2$ $a_4 = 2$ so the continued fraction is [i ; a_1 , a_2 , a_3 , a_4 ] [2; 3 , 1 , 5 , 2 ] Q2 I think the best way is to find the continued fraction for 2.718 = r i =2 r- 2 =718/1000 1000/718 = 1 + 282/718 718/282 = 2 + 154/282 282/154 = 1 + 128/154 154 /128 = 1 + 26/128 128/26 = 4 + 24/26 26/24 = 1 + 2/24 24/2 = 12 end [2 ; 1, 2, 1, 1, 4, 1, 12 ] http://en.wikipedia.org/wiki/Continued_fraction 3. Originally Posted by Amer first question I will show you how u can find the continued fraction for 133/50 $\frac{113}{50} = 2.26$ i=2 $2.26 - 2 = .26$ $\frac{26}{100}$ $\frac{100}{26} = 3+ \frac{22}{26}$ 3 is $a_1$ $\frac{26}{22} = 1 + \frac{4}{22}$ 1 is $a_2$ $\frac{22}{4} = 5 + \frac{2}{4}$ $a_3 = 5$ $\frac{4}{2} = 2$ $a_4 = 2$ so the continued fraction is [i ; a_1 , a_2 , a_3 , a_4 ] [2; 3 , 1 , 5 , 2 ] Q2 I think the best way is to find the continued fraction for 2.718 = r i =2 r- 2 =718/1000 1000/718 = 1 + 282/718 718/282 = 2 + 154/282 282/154 = 1 + 128/154 154 /128 = 1 + 26/128 128/26 = 4 + 24/26 26/24 = 1 + 2/24 24/2 = 12 end [2 ; 1, 2, 1, 1, 4, 1, 12 ] Continued fraction - Wikipedia, the free encyclopedia What is i ? Also for Q1 I take it a_k that I was looking for is a_4 = 2 (which terminates the algorithm) ? 
For Q2, I take it there are 7 terms necessary in order to get 4 digits of accuracy because there are 7 items in this array [2 ; 1, 2, 1, 1, 4, 1, 12 ] ? ------ Also, anyone have any ideas on the other questions or is able to confirm his Q1 and Q2 responses? 4. Originally Posted by Samson What is i ? Also for Q1 I take it a_k that I was looking for is a_4 = 2 (which terminates the algorithm) ? For Q2, I take it there are 7 terms necessary in order to get 4 digits of accuracy because there are 7 items in this array [2 ; 1, 2, 1, 1, 4, 1, 12 ] ? ------ Also, anyone have any ideas on the other questions or is able to confirm his Q1 and Q2 responses? i the integer part for the number that we want to find the continued fraction for it yeah a_4 terminated the algorithms Q2 is true 7 terms Q3 $x = \sqrt{3} -1$ $x = \frac{1}{1+\frac{1}{2+x}}$ take the second $x = \frac{1}{1+\frac{1}{2+x}}$ $x = \frac{1}{\frac{2+x+1}{2+x}}$ $x = \frac{2+x}{x+3}$ $x(x+3) = 2+x$ $x^2 +3x = 2+x \Rightarrow x^2+2x-2=0$ find x, then we are done how to write $x =\sqrt{3}-1$ in continued fraction using $x = \frac{1}{1+ \frac{1}{2+x}}$ $\sqrt{3}-1 = \dfrac{1}{1+ \dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+..}}}}}}}}}}$ I just sub x value each time it will never end $\sqrt{3}-1 = \dfrac{1}{1+ \dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+..}}}}}}}}}}$ so $\sqrt{3} = 1 +\dfrac{1}{1+ \dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+..}}}}}}}}}}$ the continued fraction of sqrt{3} [1,2,1,2,1,2,...] first 6 terms will be (1,2,1,2,1,2) 5. Also, note that, given some set of numbers $a_0, a_1, a_2, \dots a_n$, if the series $u_{n + 1} = u_n^{-1} + a_n$ converges towards your value, then the continued fraction expansion of this value is $[a_n, a_{n - 1}, a_{n - 2}, \dots, a_0]$. The reciprocal is true as well. That might give you a clue ... 6. Originally Posted by Amer i the integer part for the number that we want to find the continued fraction for it yeah a_4 terminated the algorithms Q2 is true 7 terms Q3 $x = \sqrt{3} -1$ $x = \frac{1}{1+\frac{1}{2+x}}$ take the second $x = \frac{1}{1+\frac{1}{2+x}}$ $x = \frac{1}{\frac{2+x+1}{2+x}}$ $x = \frac{2+x}{x+3}$ $x(x+3) = 2+x$ $x^2 +3x = 2+x \Rightarrow x^2+2x-2=0$ find x, then we are done how to write $x =\sqrt{3}-1$ in continued fraction using $x = \frac{1}{1+ \frac{1}{2+x}}$ $\sqrt{3}-1 = \dfrac{1}{1+ \dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+..}}}}}}}}}}$ I just sub x value each time it will never end $\sqrt{3}-1 = \dfrac{1}{1+ \dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+..}}}}}}}}}}$ so $\sqrt{3} = 1 +\dfrac{1}{1+ \dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+\dfrac{1}{1+\dfrac{1}{2+..}}}}}}}}}}$ the continued fraction of sqrt{3} [1,2,1,2,1,2,...] first 6 terms will be (1,2,1,2,1,2) Okay, I'm a little lost with what you did here: $x = \frac{1}{1+\frac{1}{2+x}}$ take the second $x = \frac{1}{1+\frac{1}{2+x}}$ It looks like you have the same thing before and afterwards (unless I'm totally blind). How did you convert x=Sqrt[3]-1 to x = 1/(1+(1/(2+x))) ? It looks like you just wrote the first one then the second one and only used the second one. How did you connect them?
Lastly, the first six terms are (1,2,1,2,1,2) but how do we ensure that that gives us 6 digits of accuracy (1.732050) ? Also, anyone have any input on Q4 and Q5? 7. a good accuracy check is that the first 6 or 7 terms of the expansion not 6 decimal places $x = \frac{1}{1+\frac{1}{2+x}}$ I called this second one if we solved it for x, we will get $x=\sqrt{3}-1$ and another one see $x = \frac{1}{1+ \frac{1}{2+x}}$ $x = \frac{1}{\frac{2+x+1}{2+x}}$ $x = \frac{2+x}{3+x}$ $x^2 +3x = 2+x$ $x^2 + 2x - 2 = 0$ $x = \frac{-2\mp \sqrt{2^2 -4(1)(-2)}}{2(1)}$ $x = \frac{-2 \mp \sqrt{12}}{2}$ $x = \frac{-2 \mp 2\sqrt{3}}{2} = -1 \mp \sqrt{3}$ so x has two values $\sqrt{3} -1$ and $-1-\sqrt{3}$ 8. Originally Posted by Amer not 6 decimal places $x = \frac{1}{1+\frac{1}{2+x}}$ I called this second one if we solved it for x, we will get $x=\sqrt{3}-1$ and another one see $x = \frac{1}{1+ \frac{1}{2+x}}$ What does this even mean? "Called this second one if we solved it for x" ? I don't see how you got from one line to the other... 9. omg Assume that x=Sqrt[3]-1. It can be proven that x = 1/(1+(1/(2+x))). Can somebody prove this? if I show that $x=\sqrt{3}-1$ is a solution for $x =\frac{1}{1+ \frac{1}{2+x}}$ I'm done, is it ok for now, if it is then I sub $\sqrt{3}-1$ instead of x and want to show that the left hand side equal to the right hand side want to show $\frac{1}{1+ \frac{1}{2+\sqrt{3}-1}}\;\;$ equal to $\;\; \sqrt{3}-1$ $\frac{1}{1+ \frac{1}{2+\sqrt{3}-1}}= \frac{1}{1+ \frac{1}{1+\sqrt{3}}}$ $\frac{1}{1+ \frac{1}{1+\sqrt{3}}} = \frac{1}{\frac{1+ \sqrt{3}}{1+\sqrt{3}} + \frac{1}{1+\sqrt{3}}}$ $\frac{1}{\frac{1+ \sqrt{3}}{1+\sqrt{3}} + \frac{1}{1+\sqrt{3}}} = \frac{1}{\frac{1+\sqrt{3}+1}{1+\sqrt{3}}}$ $\frac{1}{\frac{1+\sqrt{3}+1}{1+\sqrt{3}}} = \frac{1+\sqrt{3}}{2+\sqrt{3}}$ multiply the denominator and numerator by the conjugate of the denominator $\frac{1+\sqrt{3}}{2+\sqrt{3}} = \left(\frac{1+\sqrt{3}}{2+\sqrt{3}}\right)\left(\frac{2 - \sqrt{3}}{2 - \sqrt{3}}\right)$ $\left(\frac{1+\sqrt{3}}{2+\sqrt{3}}\right)\left(\frac{2 - \sqrt{3}}{2 - \sqrt{3}}\right) = \frac{2+2\sqrt{3}-\sqrt{3}-3}{2^2 - (\sqrt{3})^2}$ $\frac{2+2\sqrt{3}-\sqrt{3}-3}{2^2 - (\sqrt{3})^2} = \frac{\sqrt{3} -1}{4 - 3} = \sqrt{3}-1$ end if something is not clear point it 10. Here is an algorithm for Q4 and Q5. (Wikipedia - Methods of computing square roots - subheading Continued fraction expansion) 11. Thank you amer, I understand that was tedious of you to do but I'm very appreciative. I followed your steps this time but I was lost originally. undefined, as far as Q4 and Q5 goes, that wikipedia article had my head spinning for that example (114)... I figured 5 and 7 would be much easier. Somebody should be able to translate this garble for me (as I believe it has the answer right into it) if we can apply it to the questions I had asked: Square root of 5 - Wikipedia, the free encyclopedia As far as 7 goes, the book gave us part of the expansion (2,1,1,1,4,1,1,1,...). We just need to find out if the next entry in this expansion is a 2 or a 4 and show how we got there essentially. 12. Originally Posted by Samson Thank you amer, I understand that was tedious of you to do but I'm very appreciative. I followed your steps this time but I was lost originally. . it is not tedious but it is better to determine exactly what you do not understand, and do something not just asking try to understand, and show us some work 13.
Originally Posted by Amer it is not tedious but it is better to determine exactly what you do not understand, and do something not just asking try to understand.and show us some work Well for Sqrt[5], the Wiki article shows [2;4,4,4,4,4,...] and they have it shown visually at the top of the page. How did they reach this conclusion though? How did they set it up? Sqrt[7] i'm lost on as well. 14. Originally Posted by Samson This page does not describe how the continued fraction expansion of $\sqrt{5}$ was obtained. (Ah, I see your newest post mentions this.) Best rational approximation is a separate topic, interesting in its own right. The best rational approximation of an irrational number for some upper limit $d$ of denominator will always be either a convergent or a semiconvergent of the continued fraction expansion; the choice between convergent and semiconvergent can be determined using the so-called half rule. Computing square root continued fractions can be tricky to grasp, but if you review the sources and play around with all the numbers you will eventually see how it works. Post any specific questions. The description of algorithm in the link I gave you is ideal for computer programmers since it clearly specifies all the variables and how they change from one iteration to the next. On the other hand, it is not immediately clear where the algorithm came from. You can look at the example worked out in the problem statement of this Project Euler problem (#64) and see if it's any easier to follow. 15. Originally Posted by undefined This page does not describe how the continued fraction expansion of $\sqrt{5}$ was obtained. (Ah, I see your newest post mentions this.) Best rational approximation is a separate topic, interesting in its own right. The best rational approximation of an irrational number for some upper limit $d$ of denominator will always be either a convergent or a semiconvergent of the continued fraction expansion; the choice between convergent and semiconvergent can be determined using the so-called half rule. Computing square root continued fractions can be tricky to grasp, but if you review the sources and play around with all the numbers you will eventually see how it works. Post any specific questions. The description of algorithm in the link I gave you is ideal for computer programmers since it clearly specifies all the variables and how they change from one iteration to the next. On the other hand, it is not immediately clear where the algorithm came from. You can look at the example worked out in the problem statement of this Project Euler problem (#64) and see if it's any easier to follow. I was able to follow along but I don't see why they chose the number '4' to start the expansion off with. I don't know how it relates to Sqrt[23]. Can someone start me off with how they got Sqrt[5]'s expansion? They give the answer but I'd like to know how they got there. As far as Q5 goes, I need similar help because I don't know how to get to the point to where they left off in the book.
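Since the thread points at the Wikipedia and Project Euler pages without spelling the method out, here is a small self-contained sketch of the standard $m, d, a$ recurrence for the continued fraction of $\sqrt{N}$. It is an editorial addition, not taken from any post above, and the function and variable names are placeholders. Running it reproduces $[2; 4, 4, 4, \dots]$ for $\sqrt{5}$ (Q4), gives $[2; 1, 1, 1, 4, 1, 1, 1, 4, \dots]$ for $\sqrt{7}$, so the next entry after the book's $(2,1,1,1,4,1,1,1,\dots)$ works out to be a $4$ (Q5), and prints the $\sqrt{23}$ expansion used as the worked example in Project Euler problem 64.

```c
/* Sketch of the standard recurrence for the periodic continued fraction of
 * sqrt(N) (the algorithm behind the linked Wikipedia / Project Euler pages):
 *   m_{k+1} = d_k * a_k - m_k
 *   d_{k+1} = (N - m_{k+1}^2) / d_k      (the division is always exact)
 *   a_{k+1} = floor((a_0 + m_{k+1}) / d_{k+1})
 * starting from m_0 = 0, d_0 = 1, a_0 = floor(sqrt(N)); the periodic block
 * of the expansion ends each time a term equal to 2*a_0 appears.           */
#include <stdio.h>
#include <math.h>

static void cf_sqrt(long N, int nterms)
{
    long a0 = (long)floor(sqrt((double)N));
    if (a0 * a0 == N) {                      /* perfect square: just [a0]   */
        printf("sqrt(%ld) = [%ld]\n", N, a0);
        return;
    }
    long m = 0, d = 1, a = a0;
    printf("sqrt(%ld) = [%ld;", N, a);
    for (int k = 0; k < nterms; k++) {
        m = d * a - m;
        d = (N - m * m) / d;
        a = (a0 + m) / d;                    /* floor, since all values >= 0 */
        printf(" %ld", a);
    }
    printf(" ...]\n");
}

int main(void)
{
    cf_sqrt(5, 10);     /* Q4: expect [2; 4, 4, 4, ...]                      */
    cf_sqrt(7, 10);     /* Q5: expect [2; 1, 1, 1, 4, 1, 1, 1, 4, ...]       */
    cf_sqrt(23, 10);    /* the worked example in Project Euler problem 64    */
    return 0;
}
```

The same recurrence carried out by hand, starting from $a_0 = \lfloor\sqrt{N}\rfloor$, is how the $\sqrt{23}$ example in the Project Euler statement is laid out; only integer arithmetic is needed after the initial floor.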
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 86, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9354628920555115, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/17569/solving-an-indeterminate-triangle-truss-structure-statics
# Solving an indeterminate triangle truss structure (statics) From the setup of two trusses as shown in this illustration, how can I solve for the axial reaction forces in the trusses, and in the points A and B (Ax, Ay, Bx, By)? It is assumed that the joints exert no torque/bending moment, and the lengths don't really matter, only the angles alpha and beta. I've drawn two slightly different setups, while the top one reflects the true setup, and the lower one might be easier to solve: I'm primarily interested in solving this by hand and calculator, and I'm a little rusty on constraint equations and Lagrange multipliers (though that might be overkill). - All You need is parallelogramm of forces, analyze the angles and apply law of cosines. – Georg Nov 28 '11 at 13:14 2 The first setup drawn is not statically indeterminate. If you had a truss member vertically from A to B, then it would be. The second setup drawn is not actually constrained and will collapse at the slightest imbalance. Use the joint method and consider the point at which the force is applied. You know the forces in the two members are in the direction of the members. You've got two equations (x and y) and two unknowns - the magnitudes. – Doresoom Nov 28 '11 at 20:43 Is force $F$ known? Are $\alpha$ and $\beta$ known? – 71GA Feb 28 at 2:42 ## 1 Answer Like @Doresoom said, the truss member here are "Two Force Members" carrying only tangential forces. Thus the sum of the forces on the triangle tip (Point ?C?) should yield the answer. $$T_A \cos \alpha + T_B \cos\beta = 0 \\ T_A \sin \alpha - T_B \sin\beta - F = 0$$ where $T_A$ and $T_B$ are the tensions on the two members. -
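As a numerical check of the answer's two joint equations, here is a small sketch; it is an addition to the page, with placeholder values for $F$, $\alpha$ and $\beta$ since the question leaves them symbolic. It simply applies Cramer's rule to the $2\times 2$ system exactly as written, so a negative result for one member just means that member is in compression under this sign convention. The determinant is $-\sin(\alpha+\beta)$, so the system is solvable whenever the two members are not collinear.

```c
/* Solve the joint-equilibrium equations quoted in the answer,
 *     T_A cos(alpha) + T_B cos(beta) = 0
 *     T_A sin(alpha) - T_B sin(beta) = F
 * for the member forces T_A and T_B by Cramer's rule.
 * F, alpha and beta are placeholder values, not data from the question.    */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    double F     = 1000.0;              /* applied load (placeholder)       */
    double alpha = 30.0 * PI / 180.0;   /* angle of member A (placeholder)  */
    double beta  = 45.0 * PI / 180.0;   /* angle of member B (placeholder)  */

    /*  [ cos(alpha)   cos(beta) ] [T_A]   [ 0 ]
     *  [ sin(alpha)  -sin(beta) ] [T_B] = [ F ]                            */
    double det = cos(alpha) * (-sin(beta)) - cos(beta) * sin(alpha);
    double T_A = (0.0 * (-sin(beta)) - cos(beta) * F) / det;
    double T_B = (cos(alpha) * F - sin(alpha) * 0.0) / det;

    printf("T_A = %g, T_B = %g   (negative = compression)\n", T_A, T_B);
    return 0;
}
```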
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9385312795639038, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2011/06/06/lie-groups/?like=1&_wpnonce=122935e828
# The Unapologetic Mathematician ## Lie Groups

Now we come to one of the most broadly useful and fascinating structures on all of mathematics: Lie groups. These are objects which are both smooth manifolds and groups in a compatible way. The fancy way to say it is, of course, that a Lie group is a group object in the category of smooth manifolds. To be a little more explicit, a Lie group $G$ is a smooth $n$-dimensional manifold equipped with a multiplication $G\times G\to G$ and an inversion $G\to G$ which satisfy all the usual group axioms (wow, it’s been a while since I wrote that stuff down) and are also smooth maps between manifolds. Of course, when we write $G\times G$ we mean the product manifold. We can use these to construct some other useful maps. For instance, if $h\in G$ is any particular element we know that we have a smooth inclusion $G\to G\times G$ defined by $g\mapsto (h,g)$. Composing this with the multiplication map we get a smooth map $L_h:G\to G$ defined by $L_h(g)=hg$, which we call “left-translation by $h$”. Similarly we get a smooth right-translation $R_h(g)=gh$.

## 2 Comments » 1. [...] a Lie group is a smooth manifold we know that the collection of vector fields form a Lie algebra. But this is [...] Pingback by | June 8, 2011 | Reply 2. [...] Lie groups are groups, they have representations — homomorphisms to the general linear group of some [...] Pingback by | June 13, 2011 | Reply
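As a concrete illustration of the definition (an added aside, not part of the original post): the general linear group $GL_n(\mathbb{R})$ of invertible $n\times n$ real matrices is an open subset of $\mathbb{R}^{n^2}$, multiplication is given by polynomials in the matrix entries, and inversion is given by Cramer's rule, a rational map whose denominator $\det(g)$ never vanishes on the group, so both structure maps are smooth and $GL_n(\mathbb{R})$ is a Lie group. For a fixed $h$, left-translation $L_h(g)=hg$ is linear in the entries of $g$, which makes its smoothness immediate.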
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 12, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9210144877433777, "perplexity_flag": "head"}
http://crypto.stackexchange.com/users/1346/emilio-ferrucci?tab=activity
# Emilio Ferrucci reputation 3 bio website location Florence, Italy age 23 member for 1 year, 4 months seen Jan 29 '12 at 15:36 profile views 3 Master's student in Florence, Italy, currently on an ERASMUS program at FU Berlin. My main interest is Algebraic Topology. I wrote my bachelor degree project on Morse Theory. Here are some of my favourite quotes: • "To ask the right question is harder than to answer it." - Georg Cantor • "Reductio ad absurdum, which Euclid loved so much, is one of a mathematician's finest weapons. It is a far finer gambit than any chess gambit: a chess player may offer the sacrifice of a pawn or even a piece, but a mathematician offers the game." (G. H. Hardy) • "We say that a proof is beautiful when it gives away the secret of the theorem, when it leads us to perceive the inevitability of the statement being proved." (Giancarlo Rota) • "A mathematician is a device for turning coffee into theorems." (Alfréd Rényi) • "Mathematicians may turn coffee into theorems, but with an american coffee the most you'll get out of me is a lemma." (a Professor at my university) • A topologist is someone who can't tell his ass from a hole in the ground, but can tell his ass from two holes in the ground. | | | bio | visits | | | |----------|----------------|-----------------|----------|---------------------|------------------| | | 123 reputation | website | | member for | 1 year, 4 months | | 3 badges | location | Florence, Italy | seen | Jan 29 '12 at 15:36 | | # 10 Actions | | | | |------|----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Jan5 | comment | Cycle attack on RSA@PeterTaylor Yes, by " $k$ might have not much to do with the order of $\mathbb{Z}_{\phi(n)}^{\times}$" I meant in terms of magnitude - it could be a divisor and still be small. | | Jan5 | comment | Cycle attack on RSAThank you for your answer, this is what I was looking for. I still don't understand one thing though: why is the probability of $|e|$ not being a multiple of $r$ at most $1/r$? | | Jan5 | awarded | Scholar | | Jan5 | accepted | Cycle attack on RSA | | Jan5 | comment | Cycle attack on RSABy $<e>$ i mean the group generated by $e$, $<e>=\{1,e,e^2,...,e^{k-1} \}$ (yes, $|e|$=k). I don't see the connection with factoring or with evaluating $\phi(n)$ (which are computationally equivalent, up to polynomial transformations), since in theory $k$ could be much smaller than $|\mathbb{Z}_{\phi(n)}^{\times}|=\phi(\phi(n))$ (I'm guessing it probably isn't - this is the type of result I was looking for). | | Jan5 | awarded | Supporter | | Jan5 | comment | Cycle attack on RSAThen regarding the first highlighted part you quoted: I am not wondering about the order of $\space$ $\mathbb{Z}_{\phi(n)}^{\times}$ (which of course is fixed), but about the order of $e$ in $\mathbb{Z}_{\phi(n)}^{\times}$, that is the order of the subgroup $<e>$ of $\mathbb{Z}_{\phi(n)}^{\times}$. 
This might not have much to do with the order of $\mathbb{Z}_{\phi(n)}^{\times}$ : for example the group $\mathbb{Z}_{8}^{\times}$ has order 4, but the only possible orders of elements are $1$ or $2$, since it is the Klein group. | | Jan5 | comment | Cycle attack on RSAThank you for your answer. I was expecting the first objection you made: even if I came across $m$ I might not be able to tell it's the plaintext, and distinguish it from any other element in $\mathbb{Z_n}$. I don't know much about padding, and I can see how this is possible. | | Jan4 | awarded | Student | | Jan4 | asked | Cycle attack on RSA |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.960697591304779, "perplexity_flag": "middle"}
http://www.nag.com/numeric/cl/nagdoc_cl23/html/F11/f11bsc.html
# NAG Library Function Documentnag_sparse_nherm_basic_solver (f11bsc) ## 1  Purpose nag_sparse_nherm_basic_solver (f11bsc) is an iterative solver for a complex general (non-Hermitian) system of simultaneous linear equations; nag_sparse_nherm_basic_solver (f11bsc) is the second in a suite of three functions, where the first function, nag_sparse_nherm_basic_setup (f11brc), must be called prior to nag_sparse_nherm_basic_solver (f11bsc) to set up the suite, and the third function in the suite, nag_sparse_nherm_basic_diagnostic (f11btc), can be used to return additional information about the computation. These three functions are suitable for the solution of large sparse general (non-Hermitian) systems of equations. ## 2  Specification #include <nag.h> #include <nagf11.h> void nag_sparse_nherm_basic_solver (Integer *irevcm, Complex u[], Complex v[], const double wgt[], Complex work[], Integer lwork, NagError *fail) ## 3  Description nag_sparse_nherm_basic_solver (f11bsc) solves the general (non-Hermitian) system of linear simultaneous equations $Ax=b$ of order $\mathit{n}$, where $\mathit{n}$ is large and the coefficient matrix $A$ is sparse, using one of four available methods: RGMRES, the preconditioned restarted generalized minimum residual method (see Saad and Schultz (1986)); CGS, the preconditioned conjugate gradient squared method (see Sonneveld (1989)); Bi-CGSTAB($\ell $), the bi-conjugate gradient stabilized method of order $\ell $ (see Van der Vorst (1989) and Sleijpen and Fokkema (1993)); or TFQMR, the transpose-free quasi-minimal residual method (see Freund and Nachtigal (1991) and Freund (1993)). For a general description of the methods employed you are referred to Section 3 in nag_sparse_nherm_basic_setup (f11brc). nag_sparse_nherm_basic_solver (f11bsc) can solve the system after the first function in the suite, nag_sparse_nherm_basic_setup (f11brc), has been called to initialize the computation and specify the method of solution. The third function in the suite, nag_sparse_nherm_basic_diagnostic (f11btc), can be used to return additional information generated by the computation during monitoring steps and after nag_sparse_nherm_basic_solver (f11bsc) has completed its tasks. nag_sparse_nherm_basic_solver (f11bsc) uses reverse communication, i.e., it returns repeatedly to the calling program with the argument irevcm (see Section 5) set to specified values which require the calling program to carry out one of the following tasks: • compute the matrix-vector product $v=Au$ or $v={A}^{\mathrm{H}}u$ (the four methods require the matrix transpose-vector product only if ${‖A‖}_{1}$ or ${‖A‖}_{\infty }$ is estimated internally by Higham's method (see Higham (1988))); • solve the preconditioning equation $Mv=u$; • notify the completion of the computation; • allow the calling program to monitor the solution. Through the argument irevcm the calling program can cause immediate or tidy termination of the execution. On final exit, the last iterates of the solution and of the residual vectors of the original system of equations are returned. Reverse communication has the following advantages. 1. Maximum flexibility in the representation and storage of sparse matrices: all matrix operations are performed outside the solver function, thereby avoiding the need for a complicated interface with enough flexibility to cope with all types of storage schemes and sparsity patterns. This applies also to preconditioners. 2. 
Enhanced user interaction: you can closely monitor the progress of the solution and tidy or immediate termination can be requested. This is useful, for example, when alternative termination criteria are to be employed or in case of failure of the external functions used to perform matrix operations. ## 4  References Freund R W (1993) A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems SIAM J. Sci. Comput. 14 470–482 Freund R W and Nachtigal N (1991) QMR: a Quasi-Minimal Residual Method for Non-Hermitian Linear Systems Numer. Math. 60 315–339 Higham N J (1988) FORTRAN codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation ACM Trans. Math. Software 14 381–396 Saad Y and Schultz M (1986) GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 7 856–869 Sleijpen G L G and Fokkema D R (1993) BiCGSTAB$\left(\ell \right)$ for linear equations involving matrices with complex spectrum ETNA 1 11–32 Sonneveld P (1989) CGS, a fast Lanczos-type solver for nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 10 36–52 Van der Vorst H (1989) Bi-CGSTAB, a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 13 631–644 ## 5  Arguments Note: this function uses reverse communication. Its use involves an initial entry, intermediate exits and re-entries, and a final exit, as indicated by the argument irevcm. Between intermediate exits and re-entries, all arguments other than irevcm and v must remain unchanged. 1:     irevcm – Integer *Input/Output On initial entry: ${\mathbf{irevcm}}=0$, otherwise an error condition will be raised. On intermediate re-entry: must either be unchanged from its previous exit value, or can have one of the following values. ${\mathbf{irevcm}}=5$ Tidy termination: the computation will terminate at the end of the current iteration. Further reverse communication exits may occur depending on when the termination request is issued. nag_sparse_nherm_basic_solver (f11bsc) will then return with the termination code ${\mathbf{irevcm}}=4$. Note that before calling nag_sparse_nherm_basic_solver (f11bsc) with ${\mathbf{irevcm}}=5$ the calling program must have performed the tasks required by the value of irevcm returned by the previous call to nag_sparse_nherm_basic_solver (f11bsc), otherwise subsequently returned values may be invalid. ${\mathbf{irevcm}}=6$ Immediate termination: nag_sparse_nherm_basic_solver (f11bsc) will return immediately with termination code ${\mathbf{irevcm}}=4$ and with any useful information available. This includes the last iterate of the solution. Immediate termination may be useful, for example, when errors are detected during matrix-vector multiplication or during the solution of the preconditioning equation. Changing irevcm to any other value between calls will result in an error. On intermediate exit: has the following meanings. ${\mathbf{irevcm}}=-1$ The calling program must compute the matrix-vector product $v={A}^{\mathrm{H}}u$, where $u$ and $v$ are stored in u and v, respectively; RGMRES, CGS and Bi-CGSTAB($\ell $) methods return ${\mathbf{irevcm}}=-1$ only if the matrix norm ${‖A‖}_{1}$ or ${‖A‖}_{\infty }$ is estimated internally using Higham's method. This can only happen if ${\mathbf{iterm}}=1$ in nag_sparse_nherm_basic_setup (f11brc). 
${\mathbf{irevcm}}=1$ The calling program must compute the matrix-vector product $v=Au$, where $u$ and $v$ are stored in u and v, respectively. ${\mathbf{irevcm}}=2$ The calling program must solve the preconditioning equation $Mv=u$, where $u$ and $v$ are stored in u and v, respectively. ${\mathbf{irevcm}}=3$ Monitoring step: the solution and residual at the current iteration are returned in the arrays u and v, respectively. No action by the calling program is required. nag_sparse_nherm_basic_diagnostic (f11btc) can be called at this step to return additional information. On final exit: ${\mathbf{irevcm}}=4$: nag_sparse_nherm_basic_solver (f11bsc) has completed its tasks. The value of fail determines whether the iteration has been successfully completed, errors have been detected or the calling program has requested termination. Constraint: on initial entry, ${\mathbf{irevcm}}=0$; on re-entry, either irevcm must remain unchanged or be reset to ${\mathbf{irevcm}}=5$ or $6$. 2:     u[$\mathit{dim}$] – ComplexInput/Output Note: the dimension, dim, of the array u must be at least $\mathit{n}$. On initial entry: an initial estimate, ${x}_{0}$, of the solution of the system of equations $Ax=b$. On intermediate re-entry: must remain unchanged. On intermediate exit: the returned value of irevcm determines the contents of u as follows. If ${\mathbf{irevcm}}=-1$, $1$ or $2$, u holds the vector $u$ on which the operation specified by irevcm is to be carried out. If ${\mathbf{irevcm}}=3$, u holds the current iterate of the solution vector. On final exit: if NE_OUT_OF_SEQUENCE or NE_BAD_PARAM, the array u is unchanged from the last entry to nag_sparse_nherm_basic_solver (f11bsc). Otherwise, u holds the last available iterate of the solution of the system of equations, for all returned values of fail. 3:     v[$\mathit{dim}$] – ComplexInput/Output Note: the dimension, dim, of the array v must be at least $\mathit{n}$. On initial entry: the right-hand side $b$ of the system of equations $Ax=b$. On intermediate re-entry: the returned value of irevcm determines the contents of v as follows. If ${\mathbf{irevcm}}=-1$, $1$ or $2$, v must store the vector $v$, the result of the operation specified by the value of irevcm returned by the previous call to nag_sparse_nherm_basic_solver (f11bsc). If ${\mathbf{irevcm}}=3$, v must remain unchanged. On intermediate exit: if ${\mathbf{irevcm}}=3$, v holds the current iterate of the residual vector. Note that this is an approximation to the true residual vector. Otherwise, it does not contain any useful information. On final exit: if NE_OUT_OF_SEQUENCE or NE_BAD_PARAM, the array v is unchanged from the initial entry to nag_sparse_nherm_basic_solver (f11bsc). If NE_NOERROR or NE_ACCURACY, the array v contains the true residual vector of the system of equations (see also Section 6). Otherwise, v stores the last available iterate of the residual vector unless NE_USER_STOP is returned on last entry, in which case v is set to $0.0$. 4:     wgt[$\mathit{dim}$] – const doubleInput Note: the dimension, dim, of the array wgt must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,\mathit{n}\right)$. On entry: the user-supplied weights, if these are to be used in the computation of the vector norms in the termination criterion (see Sections 3 and 5 in nag_sparse_nherm_basic_setup (f11brc)). Constraint: if weights are to be used, at least one element of wgt must be nonzero. 
5:     work[lwork] – ComplexCommunication Array On initial entry: the array work as returned by nag_sparse_nherm_basic_setup (f11brc) (see also Section 5 in nag_sparse_nherm_basic_setup (f11brc)). On intermediate re-entry: must remain unchanged. 6:     lwork – IntegerInput On initial entry: the dimension of the array work (see also Sections 3 and 5 in nag_sparse_nherm_basic_setup (f11brc)). The required amount of workspace is as follows: Method Requirements RGMRES ${\mathbf{lwork}}=120+\mathit{n}\left(m+3\right)+m\left(m+5\right)+1$, where $m$ is the dimension of the basis CGS ${\mathbf{lwork}}=120+7\mathit{n}$ Bi-CGSTAB($\ell $) ${\mathbf{lwork}}=120+\left(2\mathit{n}+\ell \right)\left(\ell +2\right)+p$, where $\ell $ is the order of the method TFQMR ${\mathbf{lwork}}=120+10\mathit{n}$, where • $p=2\mathit{n}$, if $\ell >1$ and ${\mathbf{iterm}}=2$ was supplied; • $p=\mathit{n}$, if $\ell >1$ and a preconditioner is used or ${\mathbf{iterm}}=2$ was supplied; • $p=0$, otherwise. Constraint: ${\mathbf{lwork}}\ge {\mathbf{lwreq}}$, where lwreq is returned by nag_sparse_nherm_basic_setup (f11brc). 7:     fail – NagError *Input/Output The NAG error argument (see Section 3.6 in the Essential Introduction). ## 6  Error Indicators and Warnings NE_ACCURACY The required accuracy could not be obtained. However, a reasonable accuracy may have been achieved. User-requested termination: the required accuracy could not be obtained. However, a reasonable accuracy may have been achieved. Further information: nag_sparse_nherm_basic_solver (f11bsc) has terminated with reasonable accuracy: the last iterate of the residual satisfied the termination criterion but the exact residual $r=b-Ax$, did not. After the first occurrence of this situation, the iteration was restarted once, but nag_sparse_nherm_basic_solver (f11bsc) could not improve on the accuracy. This error code usually implies that your problem has been fully and satisfactorily solved to within or close to the accuracy available on your system. Further iterations are unlikely to improve on this situation. You should call nag_sparse_nsym_basic_diagnostic (f11bfc) to check the values of the left- and right-hand sides of the termination condition. NE_ALG_FAIL Algorithm breakdown at iteration no. $〈\mathit{\text{value}}〉$. NE_BAD_PARAM On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value. On initial entry, argument number $〈\mathit{\text{value}}〉$ had an illegal value. On intermediate re-entry, argument number $〈\mathit{\text{value}}〉$ had an illegal value. NE_CONVERGENCE The solution has not converged after $〈\mathit{\text{value}}〉$ iterations. User-requested tidy termination. The solution has not converged after $〈\mathit{\text{value}}〉$ iterations. NE_INT On entry, ${\mathbf{irevcm}}=〈\mathit{\text{value}}〉$. Constraint: on initial entry, ${\mathbf{irevcm}}=0$; on re-entry, either irevcm must remain unchanged or be reset to ${\mathbf{irevcm}}=5$ or $6$. On entry, ${\mathbf{lwork}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{lwork}}\ge {\mathbf{lwreq}}$, where lwreq is returned by nag_sparse_nherm_basic_setup (f11brc). NE_INTERNAL_ERROR An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance. NE_OUT_OF_SEQUENCE Either nag_sparse_nherm_basic_setup (f11brc) was not called before calling nag_sparse_nherm_basic_solver (f11bsc) or it has returned an error. nag_sparse_nherm_basic_solver (f11bsc) has already completed its tasks. 
You need to set a new problem. NE_USER_STOP User-requested immediate termination. NE_WEIGHT_ZERO The weights in array wgt are all zero. ## 7  Accuracy On completion, i.e., ${\mathbf{irevcm}}=4$ on exit, the arrays u and v will return the solution and residual vectors, ${x}_{k}$ and ${r}_{k}=b-A{x}_{k}$, respectively, at the $k$th iteration, the last iteration performed, unless an immediate termination was requested. On successful completion, the termination criterion is satisfied to within the user-specified tolerance, as described in nag_sparse_nherm_basic_setup (f11brc). The computed values of the left- and right-hand sides of the termination criterion selected can be obtained by a call to nag_sparse_nherm_basic_diagnostic (f11btc). ## 8  Further Comments The number of operations carried out by nag_sparse_nherm_basic_solver (f11bsc) for each iteration is likely to be principally determined by the computation of the matrix-vector products $v=Au$ and by the solution of the preconditioning equation $Mv=u$ in the calling program. Each of these operations is carried out once every iteration. The number of the remaining operations in nag_sparse_nherm_basic_solver (f11bsc) for each iteration is approximately proportional to $\mathit{n}$. The number of iterations required to achieve a prescribed accuracy cannot be easily determined at the onset, as it can depend dramatically on the conditioning and spectrum of the preconditioned matrix of the coefficients $\stackrel{-}{A}={M}^{-1}A$ (RGMRES, CGS and TFQMR methods) or $\stackrel{-}{A}=A{M}^{-1}$ (Bi-CGSTAB($\ell $) method). Additional matrix-vector products are required for the computation of ${‖A‖}_{1}$ or ${‖A‖}_{\infty }$, when this has not been supplied to nag_sparse_nherm_basic_setup (f11brc) and is required by the termination criterion employed. If the termination criterion ${‖{r}_{k}‖}_{p}\le \tau \left({‖b‖}_{p}+{‖A‖}_{p}×{‖{x}_{k}‖}_{p}\right)$ is used (see Section 3 in nag_sparse_nherm_basic_setup (f11brc)) and $‖{x}_{0}‖\gg ‖{x}_{k}‖$, then the required accuracy cannot be obtained due to loss of significant digits. The iteration is restarted automatically at some suitable point: nag_sparse_nherm_basic_solver (f11bsc) sets ${x}_{0}={x}_{k}$ and the computation begins again. For particularly badly scaled problems, more than one restart may be necessary. This does not apply to the RGMRES method which, by its own nature, self-restarts every super-iteration. Naturally, restarting adds to computational costs: it is recommended that the iteration should start from a value ${x}_{0}$ which is as close to the true solution $\stackrel{~}{x}$ as can be estimated. Otherwise, the iteration should start from ${x}_{0}=0$. ## 9  Example See Section 9 in nag_sparse_nherm_basic_setup (f11brc).
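In outline, a calling program drives nag_sparse_nherm_basic_solver (f11bsc) with a loop like the sketch below. This is only a schematic: `my_matvec` and `my_precond` stand for user-supplied routines (they are not NAG functions), the declarations of u, v, wgt, work and lwork and the preceding call to nag_sparse_nherm_basic_setup (f11brc) are omitted, and the argument order follows the list in Section 5.

```c
/* Hedged sketch of the reverse-communication loop; my_matvec, my_precond and
   my_other_op are hypothetical user routines, and setup/declarations are omitted. */
Integer irevcm = 0;                          /* 0 on initial entry                 */
for (;;) {
    nag_sparse_nherm_basic_solver(&irevcm, u, v, wgt, work, lwork, &fail);

    if (irevcm == 1) {
        my_matvec(n, u, v);                  /* compute v = A u                    */
    } else if (irevcm == 2) {
        my_precond(n, u, v);                 /* solve the preconditioning equation
                                                M v = u                            */
    } else if (irevcm == 3) {
        /* monitoring step: u and v hold the current iterate and residual;
           nag_sparse_nherm_basic_diagnostic (f11btc) may be called here           */
    } else if (irevcm < 0) {
        my_other_op(n, u, v);                /* the operation on u specified for
                                                irevcm = -1 (see Section 5 above)  */
    } else {
        break;                               /* irevcm == 4: finished; inspect fail */
    }
}
```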
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 108, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7990094423294067, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/11713/slice-ribbon-for-links-surely-its-wrong/12325
## slice-ribbon for links (surely it’s wrong)

The slice-ribbon conjecture asserts that all slice knots are ribbon. This assumes the context:

1) A `knot' is a smooth embedding $S^1 \to S^3$. We're thinking of the 3-sphere as the boundary of the 4-ball $S^3 = \partial D^4$.

2) A knot being slice means that it's the boundary of a 2-disc smoothly embedded in $D^4$.

3) A slice disc being ribbon is a more fussy definition -- a slice disc is in ribbon position if the distance function $d(p) = |p|^2$ is Morse on the slice disc and has no local maxima. A slice knot is a ribbon knot if one of its slice discs has a ribbon position.

My question is this. All the above definitions have natural generalizations to links in $S^3$. You can talk about a link being slice if it's the boundary of disjointly embedded discs in $D^4$. Similarly, the above ribbon definition makes sense for slice links. Are there simple examples of $n$-component links with $n \geq 2$ that are slice but not ribbon? Presumably this question has been investigated in the literature, but I haven't come across it. Standard references like Kawauchi don't mention this problem (as far as I can tell).

-

2 Can't d(saddle) > d(min) be attained trivially by extending fingers of the disk toward the center of the ball? If so, then the essence of ribbonness of a slice knot is just eliminating the local maxima on the disk. – Greg Kuperberg Jan 14 2010 at 1:31

Ah, right. I'll clean up that statement. – Ryan Budney Jan 14 2010 at 1:37

## 2 Answers

Ryan, I think this is an open problem. The best related result I know is a theorem of Casson and Gordon [A loop theorem for duality spaces and fibred ribbon knots. Invent. Math. 74 (1983)] saying that for a fibred knot that bounds a homotopically ribbon disk in the 4-ball, the slice complement is also fibred. More precisely, they are assuming that the knot K bounds a disk R in the 4-ball such that the inclusion $S^3 \smallsetminus K \hookrightarrow D^4 \smallsetminus R$ induces an epimorphism on fundamental groups. If one glues R to a fibre of the fibration $S^3 \smallsetminus K \to S^1$ to obtain a closed surface F, then the statement is that the monodromy extends from F to a solid handlebody which is a fibre of a fibration $D^4 \smallsetminus R \to S^1$ extending the given one on the boundary.

-

Thanks Peter. This is surprising to me. – Ryan Budney Mar 22 2010 at 9:16

As far as I know, the extension of slice and ribbon to links is not unique. There are "strong slice", "weak slice", "strong ribbon" and "weak ribbon" for links. "Characterization of slices and ribbons" (by R. H. Fox) mentioned these concepts.

-

I'm referring to the specific generalization above, but any results positive or negative on any generalization would be nice, I suppose. It doesn't appear that he has a result of this form though. – Ryan Budney Jan 19 2010 at 21:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9288676977157593, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/gauss-law?sort=active&pagesize=15
# Tagged Questions The gauss-law tag has no wiki summary. 3answers 948 views ### “Find the net force the southern hemisphere of a uniformly charged sphere exerts on the northern hemisphere” This is Griffiths, Introduction to Electrodynamics, 2.43, if you have the book. The problem states Find the net force that the southern hemisphere of a uniformly charged sphere exerts on the ... 3answers 121 views ### Gauss' law and an external charge Gauss' law states that the net outward normal electric flux through a closed surface is equal to $q_{total, inside}/\epsilon_0$. However, I'm a bit confused of why the presence of an external charge ... 1answer 63 views ### Gauss's Law with Moving Charges My text claims that Gauss's Law has been proven to work for moving charges experimentally, is there a non-experimental way to verify this? 3answers 802 views ### Charge Distribution on a Parallel Plate Capacitor If a parallel plate capacitor is formed by placing two infinite grounded conducting sheets, one at potential $V_1$ and another at $V_2$, a distance $d$ away from each other, then the charge on either ... 1answer 146 views ### Finding Electric Field outside a Charged Cylinder I'm trying to solve a problem that involves finding the electric field due to a uniformly cylinder of radius $r$, length $L$ and total charge $Q$. Well, my thought was: if I am to use Gauss' Law, I'll ... 1answer 100 views ### Applying Gauss' Law to find Electric Field I'm in doubt in the application of Gauss' Law to find electric fields when the charge distribution is symmetric. Well, first of all: I know how to find the magnitude of the field - we just enclose the ... 1answer 55 views ### Charges lying on a Gaussian Surface Let's say you have a spherical charge distribution of radius R. This distribution has some charge density as a function of radius. I know that I can determine the electric field outside of the charge ... 1answer 88 views ### Electric field around charged cylinder This is a homework question, so please don't give me the answer outright. I just need help conceptually. "A cylindrical shell of length 190 m and radius 4 cm carries a uniform surface charge density ... 5answers 441 views ### Paradox with Gauss' law when space is uniformly charged everywhere Consider that space is uniformly charged everywhere, i.e., filled with a uniform charge distribution, $\rho$, everywhere. By symmetry, the electric field is zero everywhere. (If I take any point in ... 3answers 135 views ### Is Newtonian gravity consistent with an infinite universe? [duplicate] Let us assume that we have have an infinite Newtonian space-time and the universe is uniformly filled with matter of constant density (no fluctuations whatsoever), all of it at rest. By symmetry, the ... 1answer 113 views ### Why doesn't a gaussian surface pass through discrete charges? I have read that Gaussian surface cannot pass through discrete charges. Why is it so? I have even seen in application of Gauss' Law when we imagine a Gaussian Surface passing through a charge ... 1answer 153 views ### Proof that flux through a surface is independent of the inner objects' arrangement $$\Phi=\iint_{\partial V}\mathbf{g} \cdot d \mathbf{A}=-4 \pi G M$$ Essentially, why is $\Phi$ independent of the distribution of mass inside the surface $\partial V$, and the shape of surface ... 
1answer 117 views ### Newtonian Gravity on a Riemannian $3$-Manifold To solve the Poisson equation for the Newton Potential, say $\phi$, one can use the divergence theorem, such that \int_U \nabla^2 \phi \sqrt{g}~ dV= \int_{\partial U} <\nabla \phi,n> ... 2answers 133 views ### Electric lines of force Why cant electric lines of force pass through the charged sphere? Well, basically that's how a Faraday cage works, but how can it be so? 1answer 205 views ### Gravity force strength in 1D, 2D, 3D and higher spatial dimensions Let's say that we want to measure the gravity force in 1D, 2D, 3D and higher spatial dimensions. Will we get the same force strength in the first 3 dimensions and then it will go up? How about if ... 3answers 184 views ### Why can we use Gauss' law to compute electric field? For simplicity I'm considering only the sphere case. In the Gauss' Law formulation we have some field E introduced by charges $Q$ inside some sphere, then we compute flux and integrate, and we get ... 1answer 121 views ### Finding the electric field on a point (x,y,z) using Coulomb's Law Using Gauss' Law, the answer is $$\frac{Q}{4 \pi \epsilon R^2}.$$ However if I were to do the integration using Coulomb's Law, I get \int_0^{2\pi} \int_{0}^{\pi}\int_r^a \frac{\rho \sin\theta dR ... 1answer 306 views ### Formula of Gauss' Law of Gravitation Gauss's law for Gravitation: $$\int g\cdot \mathrm{d}S=4\pi GM$$ where $g$ is the gravitational field and $S$ is the surface area. Am I correct? 4answers 501 views ### Why are so many forces explainable using inverse squares when space is three dimensional? It seems paradoxical that the strength of so many phenomena (Newtonian gravity, Coulomb force) are calculable by the inverse square of distance. However, since volume is determined by three ... 5answers 290 views ### Intuitive explanation of the inverse square power $\frac{1}{r^2}$ in Newton's law of gravity Is there an intuitive explanation why it is plausible that the gravitational force which acts between two point masses is proportional to the inverse square of the distance $r$ between the masses (and ... 1answer 410 views ### Electric field inside and outside a metallic hollow sphere 1) It is known that inside a metallic hollow sphere it will not experience outside electric field because of the charge separation of electrons and holes at the surface of sphere and creating an equal ... 1answer 320 views ### Divergence of non conservative electric field I'm looking for the proof that the 1st Maxwell equation is valid also on non conservative electric field. When we are talking about a electrostatic field, the equation is ok. We can apply the Gauss ... 5answers 2k views ### Does Coulomb's Law, with Gauss's Law, imply the existence of only three spatial dimensions? Coulomb's Law states that the fall-off of the strength of the electrostatic force is inversely proportional to the distance squared of the charges. Gauss's law implies that a the total flux through a ... 2answers 551 views ### Would a gauss rifle based on generated magnetic fields have any kickback? In the case of currently developing Gauss rifles, in which a slug is pulled down a line of electromagnets, facilitated by a micro-controller to achieve great speed in managing the switching of the ... 1answer 542 views ### How is Gauss' Law (integral form) arrived at from Coulomb's Law, and how is the differential form arrived at from that? 
On a similar note: when using Gauss' Law, do you even begin with Coulomb's law, or does one take it as given that flux is the surface integral of the Electric field in the direction of the normal to ... 1answer 99 views ### How does one come up with the Coulomb's law? My teacher mentioned that field line density = no. of lines / area and the total area of a sphere is $4\pi r^2$ and so an electric force is inversely proportional to $r^2$. Actually, why can the total ... 1answer 155 views ### Gauss Law for Electric Fields What is the integral form for the Gauss Law for Electric Fields? or ? 2answers 211 views ### Find the quantity of charge - given potential function A potential function is given by $V(r)=\frac{Ae^{-\lambda r}}{r}$ Find charge density and hence charge. I first took the gradient of potential to get \$\vec{E}(r)=\frac{Ae^{-\lambda ... 1answer 961 views ### Electric field due to a solid sphere of charge I have been trying to understand the last step of this derivation. Consider a sphere made up of charge $+q$. Let $R$ be the radius of the sphere and $O$, its center. A point $P$ lies inside the ... 3answers 863 views ### What are the applications of Gauss's law in technology? [closed] Freshmen physics textbooks use Gauss's law plus symmetry to calculate the electric field. I was wondering if this method of finding the electric field using a symmetry is used in real applications in ... 4answers 2k views ### Infinitely charged wire and Differential form of Gauss' Law I have tried calculating the potential of a charged wire the direct way. If lambda is the charge density of the wire, then I get \phi(r) = \frac{\lambda}{4 \pi \epsilon_0 r} \int_{-\infty}^\infty ... 1answer 120 views ### Gaussian surface question There is an infinite slab of charge, and a (Gaussian surface) cylinder whose ends are both outside of the slab. $\phi_A$ is the flux through this cylinder, by symmetry the component of the flux ... 1answer 908 views ### Shouldn't the electric field in a solid insulating sphere be linear with radius? I am a senior in High School who is taking the course AP Physics Electricity and Magnetism. I was studying Gauss's laws and I found this problem: A solid insulating sphere of radius R contains a ... 1answer 56 views ### Conducting surface inside conducting surface Let's say there's a closed conducting surface. Then by Gauss's Law the E field bound by the surface must equal the charge inside. There's no charge inside, so the E field cancels. This is a Faraday ... 1answer 247 views ### Gauss law in classical U(1) gauge theory I can see that $a_{0}$ is not an independent field and Gauss law is a constraint on the theory arising from field equations. But, I don't get the geometrical picture. Let $A$ be the space of all ... 2answers 144 views ### Is it really to solve problem below by using, in the main, Gauss law? There is an infinite cylinder surface which uniformly charged along and has a surface charge density, which can be represented as $$\sigma = \sigma_{0}cos(\varphi ),$$ where $\varphi$ - polar angle ... 0answers 286 views ### Newton's Law of Gravitation, Gauss Law and GR From One of My Unpublished Papers $$\frac{d^2 x^{\alpha}}{d\tau^2}=-\Gamma^{\alpha}_{\beta \gamma}\frac{dx^{\beta}}{d\tau}\frac{dx^{\gamma}}{d\tau} \tag{1}$$ For radial motion in Schwarzschild’s ... 
2answers 3k views ### Using Gauss's Law to calculate electric fields between plates I have two earthed metal plates, separated by a distance $d$ with a plane of charge density $\sigma$ placed a distance $a$ from the lower plate. I want to derive expressions for the strength of the ... 1answer 141 views ### Gauss's Law in action Need someone to tell me if I got this done correctly (a) Draw Gaussuian cylinder inside the black cylinder to find charge enclosed $Q_{en} = Q(\frac{r}{a})^2$ Apply Gauss's Law \$E2\pi r \ell = ... 1answer 161 views ### What is discontinuity in Vector Fields I am reading David J. Griffiths and have a problem understanding the concept of discontinuity for E-field. The E-field has apparently to components. (How does he decompose the vector field into the ... 2answers 156 views ### In which cases is it better to use Gauss' law? I could, for example calculate the electric field near a charged rod of infinite length using the classic definition of the electric field, and integrating the: \overrightarrow{dE} = \frac{dq}{4 ... 2answers 556 views ### Electric potential of sphere (a) I am a little confused about this part. The point at A to B isn't radial. The electric field is radially outward, but if I look at the integral \int_{a}^{b}\mathbf{E}\cdot d\mathbf{s} = ... 1answer 250 views ### Gauss Law for Magnetism,Non Instantaneous Field Propagation Is the magnetic force instantaneous? And, are all field lines established simultaneously? Otherwise, for example, the field line marked 'L' will take longer time to propagate than the ones above it, ... 1answer 176 views ### Gravimagnetic monopole and General relativity Review and hystorical background: Gravitomagnetism (GM), refers to a set of formal analogies between Maxwell's field equations and an approximation, valid under certain conditions, to the Einstein ... 1answer 223 views ### Could someone remind me of what we mean by zero electric field “inside” a conductor? If I have a spherical conductor (perhaps a shell) and "inside", as in the hollow area there is nothing. The electric field is 0. But what happens if there is a charge "inside" (not like inside the ... 1answer 86 views ### Gaussian Unit of Charge and Force I just read that in the Gaussian Units of charge The Final equation in Coulomb's law is as simple as $$\boldsymbol{F}=\frac{q_1q_2}{r^2}$$ No $\epsilon_0$ no $4\pi$ like you have in the $\mbox{SI}$ ... 1answer 76 views ### Gauss' Law… Neglect Edge effects [closed] Two Large Metal plates of area $1.0$ $m^2$ face each other. They are $5$ $cm$ apart and carry equal and opposite charge on their inner surfaces. If $E$ between the plates is $55$ $N/C$, what is ... 2answers 442 views ### A closed surface, no charge enclosed, yet flux not 0? ! The book says it is $E_0\pi r^2$ because the flux through the circle is equal to the curved part of the paraboloid. I don't understand this, shouldn't the total flux be 0 for the whole surface? ... 1answer 72 views ### Flux if there were only one type of charge in the universe There was this question that i saw in a book and it also had an answer given. The Question was: If there were only one type of charge in the universe, then: \$\phi = \oint ... 1answer 103 views ### Why is the flux 0? I don't understand this concept ! Why does it say that the flux due to q_2 and q_3 through S is 0? Doesn't it contain a nonzero charge q_1? Does anyone also know the difference between "no charge" vs "net charge is 0"? My book ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9301856160163879, "perplexity_flag": "middle"}
http://brightstartutors.com/blog/2012/great-formulas-v-squared-over-r/
## Great Formulas: V Squared Over r

All physics classes cover the formula for acceleration when the motion is in a circle: $$a=v^2 / r$$. $$a$$ is the acceleration of the mass, $$v$$ is its velocity as it moves around the circle, and $$r$$ is the radius of the circle. In an example that is always used, the formula lets us compute the tension in a string when a mass at the end of the string is being spun in a circle. For a given $$v$$ and $$r$$, we compute the acceleration $$a$$. Then, Newton’s Second Law, $$F=ma$$, gives the force; the tension in the string.

Although the textbooks do not make a fuss over the formula, its discovery by Isaac Newton in 1665 was a landmark in the history of science. Deriving the formula required a great deal of insight into the nature of motion, and it was the starting point for Newton’s work on gravity and the motion of planets. In an earlier post, I showed how Newton inferred the inverse square law of gravity by applying the formula to the Moon’s motion.

What Newton saw was that with circular motion, there are two simultaneous motions superimposed on one another. One of these is a straight-line motion, and the other is an accelerating motion toward the center.

Circular Motion Deconstructed

The mass in the drawing would move, in the absence of any external forces, from $$A$$ to $$B$$ in the time $$\Delta t$$. Instead, the mass is at point $$C$$ after time $$\Delta t$$. The difference is that it has also “fallen” through the distance $$d$$. The combination of straight line motion and inward motion is such that the mass follows the circular path. It is a bit tricky to depict or visualize because the two motions are going on continuously, and the $$\Delta t$$ intervals are infinitely small.

The inward motion is due to a center-directed, constant force, which causes an acceleration toward the center. In the example of a mass being whirled around at the end of a string, the tension in the string provides the force. If it is a planet revolving around the Sun, then the inward directed force comes from the Sun’s gravity. In any case, the acceleration value is exactly such that the “falling” object stays on the circle.

#### $$v^2 / r$$ Derived

College textbooks normally derive the formula using calculus (and high school books don’t include a derivation at all). Here, I will work out the formula using only elementary mathematics. As a preliminary step, we will need a little formula for the distance $$d$$ in the drawing below. The formula will actually apply only when the angle $$\theta$$ is very small.

In the drawing, we have a right triangle whose hypotenuse is longer than $$r$$ by the amount $$d$$, so that, by the Pythagorean theorem: $r^2+s^2=(r+d)^2=r^2+2rd+d^2$ We are seeking an expression for $$d$$; cancelling $$r^2$$ from both sides and dividing by $$2r$$, we write $d=\frac{s^2}{2r}-\frac{d^2}{2r}$ This seems wrong; we have not solved for $$d$$ because there is a $$d^2$$ term on the right. But here’s the rub: we only care about the case where $$\theta$$ is small. If $$\theta$$ is small, then $$d$$ is small. If $$d$$ is small, then $$d^2$$ is very small, and we can ignore the second term on the right. For example, if $$d=0.001$$, then $$d^2=0.000001$$. More formally, we have, $\lim_{\theta\rightarrow 0}d=\frac{s^2}{2r} \tag{1}$ This reasoning may seem a bit suspect, especially to those who have not been exposed to calculus. However, the procedure and formula are sound, and when $$\theta$$ is infinitely small, formula (1) is not an approximation; it is exact.
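A quick numerical illustration of formula (1): for a right triangle with legs $$r$$ and $$s = r\tan\theta$$, the exact drop $$d=\sqrt{r^2+s^2}-r$$ approaches $$s^2/(2r)$$ as $$\theta$$ shrinks. The little program below simply prints both values for ever-smaller angles.

```c
#include <math.h>
#include <stdio.h>

/* Compare the exact d with the small-angle formula (1), d = s^2/(2r),
   as the angle theta shrinks.  In the right triangle, s = r*tan(theta). */
int main(void)
{
    double r = 1.0;
    for (double theta = 0.5; theta > 1e-4; theta /= 10.0) {
        double s        = r * tan(theta);
        double d_exact  = sqrt(r*r + s*s) - r;   /* from the Pythagorean theorem */
        double d_approx = s*s / (2.0*r);         /* formula (1)                  */
        printf("theta = %8.5f   exact d = %.10f   s^2/(2r) = %.10f\n",
               theta, d_exact, d_approx);
    }
    return 0;
}
```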
Now we return to the circular motion situation. The diagram above shows what happens in a time period $$\Delta t$$. The straight-line distance is $$v \cdot\Delta t$$, and the mass has also fallen toward the center by the distance $$d$$. The inward force, and therefore the acceleration, is constant. One of Galileo’s formulas gives us the distance fallen as a function of time: $d=\frac{1}{2}a (\Delta t)^2\mbox{ (from kinematics)} \tag{2}$ where $$a$$ is the (as yet unknown) acceleration. But from formula (1), we also have $d=\frac{(v\:\Delta t)^2}{2r}=\frac{1}{2}\frac{v^2}{r}(\Delta t)^2\mbox{ (from geometry)} \tag{3}$ For these two expressions for $$d$$ to be equal, we can see what $$a$$ must be: $a=\frac{v^2}{r} \tag{4}$ For a given $$v$$ and $$r$$, this is the acceleration value that will keep the mass “on the circle”.

The acceleration $$a$$ in (4) is directed inward - toward the center of the circle. The associated force on the moving mass (from $$F=ma$$) is also inward, and Newton invented the term centripetal force to describe it. This conception alone was a significant insight. Before Newton, others had used the term centrifugal for the force, meaning an outward-directed force. Aristotle and Descartes believed that an object “wanted” to go in a straight line, and if it moved in a curve the object itself would exert a force pulling it toward the straight line.

#### Another Way to Say It

It is worth pointing out an equivalent way to write the great formula. Recall that when an angle $$\theta$$ is measured in radians, the length of the circle’s arc that it subtends is $$s=r \theta$$. For example, when $$\theta$$ is a full circle (that is, $$\theta = 2\pi$$), then $$s=r\cdot2\pi$$, which is the circle’s circumference. Now imagine a point moving along the circle (i.e., $$\theta$$ is changing). By convention, we denote the rate of change of $$\theta$$ with the symbol $$\omega$$ (omega), its units being radians per second. It’s not hard to see that the speed $$v$$ of the object is just $$v=r\omega$$. Using this expression in place of $$v$$ in (4), we get, $a=\frac{(r\omega)^2}{r}=r\omega^2 \tag{5}$ This gives us acceleration in terms of rotation rate, and that is often easier than using the speed $$v$$.

#### Some Implications of the Formula

This fabulous formula does not apply to all motion that is not in a straight line. For example, the planets move in ellipses, not circles, so that r is not constant. Also, a planet’s velocity is not constant as it moves around its orbit. For these more complex situations, a fancier formula is required. Even so, the insight behind the $$v^2 / r$$ formula was Newton’s starting point in working out the more general case. Even if it does not apply to all situations, the formula is central to many important physics topics. I plan to do future posts on at least two other uses of the formula:

• Kepler’s Third Law, which relates a planet’s orbital period with its distance from the Sun, is easy to derive using the formula. (Perhaps you would like to try working out the math: the law says that, for all planets in a solar system, $$T^2 / r^3$$ is a constant, where $$T$$ is the time for one orbit, and $$r$$ is the orbit radius.)

• The invention of the cyclotron, the original “atom smasher”, by Ernest Lawrence in 1929 centers around the $$v^2 / r$$ formula.

Tags: Math, Physics

This entry was posted on Thursday, June 14th, 2012 at 8:51 pm and is filed under Astronomy Mathematics, Great Formulas Derived, Mathematics History, Physics, Thrilling Math.
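As a numerical check of formulas (4) and (5), the short program below (the values of r and ω are arbitrary) takes a point moving uniformly on a circle, estimates its acceleration with a second central difference, and compares the result with $$v^2/r$$ and $$r\omega^2$$.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double r = 2.0, omega = 3.0;          /* arbitrary radius and rotation rate */
    double v = r * omega;                 /* speed along the circle             */
    double t = 0.4, dt = 1e-4;            /* arbitrary instant, small time step */

    /* positions on the circle at three nearby times */
    double x0 = r*cos(omega*(t-dt)), y0 = r*sin(omega*(t-dt));
    double x1 = r*cos(omega*t),      y1 = r*sin(omega*t);
    double x2 = r*cos(omega*(t+dt)), y2 = r*sin(omega*(t+dt));

    /* second central difference approximates the acceleration */
    double ax = (x0 - 2.0*x1 + x2)/(dt*dt);
    double ay = (y0 - 2.0*y1 + y2)/(dt*dt);

    printf("finite-difference |a| = %.6f\n", sqrt(ax*ax + ay*ay));
    printf("v^2 / r               = %.6f\n", v*v/r);          /* formula (4) */
    printf("r * omega^2           = %.6f\n", r*omega*omega);  /* formula (5) */
    return 0;
}
```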
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 58, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9359337687492371, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/15238-distance-traveled.html
# Thread:

1. ## Distance traveled

The velocity function (in meters per second) is given for a particle moving along a line. Find (a) the displacement and (b) distance traveled. v(t) = 3t - 5 for 0 less than or equal to t less than or equal to 3 I know how to do part a. The displacement is -3/2, but I don't know how to do part b. I know that to find the distance traveled you need to integrate the absolute value of v(t), and to do that you must split the integral into 2 parts, one where v(t) is less than or equal to 0 and one where v(t) is greater than or equal to zero. Can someone please explain how you split the integrals? I have no problem integrating 3t - 5, but I don't know what intervals to evaluate it at. Thanks

2. Originally Posted by zachb The velocity function (in meters per second) is given for a particle moving along a line. Find (a) the displacement and (b) distance traveled. v(t) = 3t - 5 for 0 less than or equal to t less than or equal to 3 I know how to do part a. The displacement is -3/2, but I don't know how to do part b. The displacement is, $\int_0^3 (3t - 5)\, dt =-1.5$ The distance is, $\int_0^3 |3t-5|\,dt = \frac{41}{6}$ We need to split the integrals for the absolute value. Now $3t-5 \geq 0$ for $t\geq 5/3$ This is where we make the separation. $\int_0^{5/3} |3t-5| dt + \int_{5/3}^3 |3t-5|dt$ $\int_0^{5/3} - (3t-5)dt + \int_{5/3}^3 (3t-5)dt$

3. Okay, I understand how you split the integral now, but I don't understand what you did in the last step. You started with the definite integral from 0 to 5/3 plus the definite integral from 5/3 to 3, but in the last step you changed it to the definite integral from 0 to 3 plus the definite integral from 5/3 to 3. Why? Is that a mistake, because it doesn't make sense to me.

4. Originally Posted by zachb Okay, I understand how you split the integral now, but I don't understand what you did in the last step. You started with the definite integral from 0 to 5/3 plus the definite integral from 5/3 to 3, but in the last step you changed it to the definite integral from 0 to 3 plus the definite integral from 5/3 to 3. Why? Is that a mistake, because it doesn't make sense to me. Usually when something does not make sense it is wrong. Judge Judy loves to say that. But to answer your question, yes, I made a mistake and fixed it now.
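A quick way to double-check these values is to integrate numerically; the following small program uses the midpoint rule and reproduces the displacement -3/2 and the distance 41/6.

```c
#include <math.h>
#include <stdio.h>

/* v(t) = 3t - 5 on [0,3]: integrate v for the displacement and |v| for the distance. */
int main(void)
{
    const int n = 1000000;
    double dt = 3.0 / n, displacement = 0.0, distance = 0.0;
    for (int i = 0; i < n; i++) {
        double t = (i + 0.5) * dt;        /* midpoint of the i-th subinterval */
        double v = 3.0*t - 5.0;
        displacement += v * dt;
        distance     += fabs(v) * dt;
    }
    printf("displacement = %.6f   (exact: -3/2 = %.6f)\n", displacement, -1.5);
    printf("distance     = %.6f   (exact: 41/6 = %.6f)\n", distance, 41.0/6.0);
    return 0;
}
```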
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9404751658439636, "perplexity_flag": "head"}
http://chempaths.chemeddl.org/services/chempaths/?q=book/General%20Chemistry%20Textbook/Entropy%20and%20Spontaneous%20Reactions/1498/thermodynamic-probability-w-a
# Thermodynamic Probability W and Entropy

Submitted by ChemPRIME Staff on Thu, 12/16/2010 - 15:17

The section on atoms, molecules and probability has shown that if we want to predict whether a chemical change is spontaneous or not, we must find some general way of determining whether the final state is more probable than the initial. This can be done using a number W, called the thermodynamic probability. W is defined as the number of alternative microscopic arrangements which correspond to the same macroscopic state. The significance of this definition becomes more apparent once we have considered a few examples.

Figure 1a illustrates a crystal consisting of only eight atoms at the absolute zero of temperature. Suppose that the temperature is raised slightly by supplying just enough energy to set one of the atoms in the crystal vibrating. There are eight possible ways of doing this, since we could supply the energy to any one of the eight atoms. All eight possibilities are shown in Fig. 1b.

Figure 1 The thermodynamic probability W of a crystal containing eight atoms at three different temperatures. (a) At 0 K there is only one way in which the crystal can be arranged, so that W = 1. (b) If enough energy is added to start just one of the atoms vibrating (color), there are eight different equally likely arrangements possible, and W = 8. (c) If the energy is doubled, two different atoms can vibrate simultaneously (light color) or a single atom can have all the energy (dark color). The number of equally likely arrangements is much larger than before; W = 36.

Since all eight possibilities correspond to the crystal having the same temperature, we say that W = 8 for the crystal at this temperature. Also, we must realize that the crystal will not stay perpetually in any of these eight arrangements. Energy will constantly be transferred from one atom to the other, so that all the eight arrangements are equally probable.

Let us now supply a second quantity of energy exactly equal to the first, so that there is just enough to start two molecules vibrating. There are 36 different ways in which this energy can be assigned to the eight atoms (Fig. 1c). We say that W = 36 for the crystal at this second temperature. Because energy continually exchanges from one atom to another, there is an equal probability of finding the crystal in any of the 36 possible arrangements.

A third example of W is our eight-atom crystal at the absolute zero of temperature. Since there is no energy to be exchanged from atom to atom, only one arrangement is possible, and W = 1. This is true not only for this hypothetical crystal, but also presumably for a real crystal containing a large number of atoms, perfectly arranged, at absolute zero.

Figure 2 Heat flow and thermodynamic probability.
When two crystals, one containing 64 units of vibrational energy and the other (at 0 K) containing none, are brought into contact, the 64 units of energy will distribute themselves over the two crystals since there are many more ways of distributing 64 units among 200 atoms than there are of distributing 64 units over only 100 atoms.

The thermodynamic probability W enables us to decide how much more probable certain situations are than others. Consider the flow of heat from crystal A to crystal B, as shown in Fig. 2. We shall assume that each crystal contains 100 atoms. Initially crystal B is at absolute zero. Crystal A is at a higher temperature and contains 64 units of energy, enough to set 64 of the atoms vibrating. If the two crystals are brought together, the molecules of A lose energy while those of B gain energy until the 64 units of energy are evenly distributed between both crystals.

In the initial state the 64 units of energy are distributed among 100 atoms. Calculations show that there are $1.0 \times 10^{44}$ alternative ways of making this distribution. Thus W1, the initial thermodynamic probability, is $1.0 \times 10^{44}$. The 100 atoms of crystal A continually exchange energy among themselves and transfer from one of these $1.0 \times 10^{44}$ arrangements to another in rapid succession. At any instant there is an equal probability of finding the crystal in any of the $1.0 \times 10^{44}$ arrangements.

When the two crystals are brought into contact, the energy can distribute itself over twice as many atoms. The number of possible arrangements rises enormously, and W2, the thermodynamic probability for this new situation, is $3.6 \times 10^{60}$. In the constant reshuffle of energy among the 200 atoms, each of these $3.6 \times 10^{60}$ arrangements will occur with equal probability. However, only $1.0 \times 10^{44}$ of them correspond to all the energy being in crystal A. Therefore the probability of the heat flow reversing itself and all the energy returning to crystal A is

$\frac{W_{1}}{W_{2}}=\frac{1.0 \times 10^{44}}{3.6 \times 10^{60}}=2.8 \times 10^{-17}$

In other words the ratio of W1 to W2 gives us the relative probability of finding the system in its initial rather than its final state.

This example shows how we can use W as a general criterion for deciding whether a reaction is spontaneous or not. Movement from a less probable to a more probable molecular situation corresponds to movement from a state in which W is smaller to a state where W is larger. In other words W increases for a spontaneous change. If we can find some way of calculating or measuring the initial and final values of W, the problem of deciding in advance whether a reaction will be spontaneous or not is solved. If W2 is greater than W1, then the reaction will occur of its own accord.

Although there is nothing wrong in principle with this approach to spontaneous processes, in practice it turns out to be very cumbersome. For real samples of matter (as opposed to 200 atoms in the example of Fig. 2) the values of W are on the order of $10^{10^{24}}$, so large that they are difficult to manipulate. The logarithm of W, however, is only on the order of $10^{24}$, since $\log 10^x = x$. This is more manageable, and chemists and physicists use a quantity called the entropy which is proportional to the logarithm of W.
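The counts in this section can be made concrete with a little enumeration. The first part of the sketch below reproduces W = 8 and W = 36 for the eight-atom crystal of Figure 1. The second part repeats the two-crystal comparison of Figure 2 with much smaller numbers of my own choosing (6 units of energy and 10-atom crystals, counted with the stars-and-bars formula C(q+n-1, q), one simple way of counting arrangements in which an atom may hold several units), since $10^{44}$ and $10^{60}$ arrangements cannot be enumerated directly. The qualitative conclusion is the same: once the energy can spread over both crystals, the arrangements with all the energy back in crystal A are a vanishing fraction.

```c
#include <stdio.h>

/* Number of ways to place q indistinguishable energy units on n atoms:
   C(q + n - 1, q) ("stars and bars").                                     */
static double arrangements(int q, int n)
{
    double b = 1.0;
    int top = q + n - 1;
    for (int i = 1; i <= q; i++)
        b *= (double)(top - q + i) / i;
    return b;
}

int main(void)
{
    /* Figure 1: eight atoms, one or two units of energy */
    int w_one = 0, w_two = 0;
    for (int i = 0; i < 8; i++) {
        w_one++;                          /* the single unit sits on atom i      */
        for (int j = i; j < 8; j++)
            w_two++;                      /* units on atoms i and j (j may = i)  */
    }
    printf("eight atoms, one unit : W = %d\n", w_one);   /* prints 8  */
    printf("eight atoms, two units: W = %d\n", w_two);   /* prints 36 */

    /* Figure 2, scaled down: 6 units over one 10-atom crystal vs. two crystals */
    double w1 = arrangements(6, 10);
    double w2 = arrangements(6, 20);
    printf("one crystal : W1 = %.0f\n", w1);
    printf("two crystals: W2 = %.0f\n", w2);
    printf("probability of all energy returning to A: W1/W2 = %.4g\n", w1 / w2);
    return 0;
}
```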
This way of handling the extremely large thermodynamic probabilities encountered in real systems was first suggested in 1877 by the Austrian physicist Ludwig Boltzmann (1844 to 1906). The equation

S = k ln W      (1)

is now engraved on Boltzmann’s tomb. The proportionality constant k is called, appropriately enough, the Boltzmann constant. It corresponds to the gas constant R divided by the Avogadro constant $N_A$:

$k=\frac{R}{N_{A}}$      (2)

and we can regard it as the gas constant per molecule rather than per mole. In SI units, the Boltzmann constant k has the value $1.3805 \times 10^{-23}$ J K⁻¹. The symbol ln in Eq. (1) indicates a natural logarithm, i.e., a logarithm taken to the base e. Since base 10 logarithms and base e logarithms are related by the formula

ln x = 2.303 log x

it is easy to convert from one to the other. Equation (1), expressed in base 10 logarithms, thus becomes

S = 2.303k log W      (1a)

EXAMPLE 1 The thermodynamic probability W for 1 mol propane gas at 500 K and 101.3 kPa has the value $10^{10^{25}}$. Calculate the entropy of the gas under these conditions.

Solution    Since W = $10^{10^{25}}$,

log W = $10^{25}$

Thus      S = 2.303k log W = $1.3805 \times 10^{-23}$ J K⁻¹ × 2.303 × $10^{25}$ = 318 J K⁻¹

Note: The quantity 318 J K⁻¹ is obviously much easier to handle than $10^{10^{25}}$. Note also that the dimensions of entropy are energy/temperature.

One of the properties of logarithms is that if we increase a number, we also increase the value of its logarithm. It follows therefore that if the thermodynamic probability W of a system increases, its entropy S must increase too. Further, since W always increases in a spontaneous change, it follows that S must also increase in such a change. The statement that the entropy increases when a spontaneous change occurs is called the second law of thermodynamics. (The first law is the law of conservation of energy.) The second law, as it is usually called, is one of the most fundamental and most widely used of scientific laws. In this book we shall only be able to explore some of its chemical implications, but it is of importance also in the fields of physics, engineering, astronomy, and biology. Almost all environmental problems involve the second law. Whenever pollution
increases, for instance, we can be sure that the entropy is increasing along with it.

The second law is often stated in terms of an entropy difference ΔS. If the entropy increases from an initial value of S1 to a final value of S2 as the result of a spontaneous change, then

ΔS = S2 – S1      (3)

Since S2 is larger than S1, we can write

ΔS > 0      (4)

Equation (4) tells us that for any spontaneous process, ΔS is greater than zero. As an example of this relationship and of the possibility of calculating an entropy change, let us find ΔS for the case of 1 mol of gas expanding into a vacuum. We have already argued for this process that the final state is $10^{1.813 \times 10^{23}}$ times more probable than the initial state. This can only be because there are $10^{1.813 \times 10^{23}}$ times more ways of achieving the final state than the initial state. In other words, taking logs, we have

log $\frac{W_{2}}{W_{1}}$ = $1.813 \times 10^{23}$

Thus

ΔS = S2 – S1 = 2.303 × k × log W2 – 2.303 × k × log W1 = 2.303 × k × log $\frac{W_{2}}{W_{1}}$ = 2.303 × $1.3805 \times 10^{-23}$ J K⁻¹ × $1.813 \times 10^{23}$ = 5.76 J K⁻¹

As entropy changes go, this increase in entropy is quite small. Nevertheless, it corresponds to a gargantuan change in probabilities.
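Both numerical results above (the 318 J K⁻¹ of Example 1 and the 5.76 J K⁻¹ for the expansion) are easy to check with a few lines of arithmetic. The last line also compares the expansion result with R ln 2, the value expected for one mole of gas doubling its volume; that comparison is an added cross-check, not taken from the text.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double R   = 8.314;          /* gas constant, J K^-1 mol^-1                   */
    double N_A = 6.022e23;       /* Avogadro constant, mol^-1                     */
    double k   = R / N_A;        /* Eq. (2): Boltzmann constant, ~1.3805e-23 J/K  */

    /* Example 1: log10 W = 10^25, so S = 2.303 k log W  (Eq. (1a))               */
    double S = 2.303 * k * 1.0e25;
    printf("Example 1: S = %.0f J/K\n", S);                    /* about 318       */

    /* Expansion of 1 mol of gas into a vacuum: log10(W2/W1) = 1.813e23           */
    double dS = 2.303 * k * 1.813e23;
    printf("Expansion: Delta S = %.2f J/K\n", dS);             /* about 5.76      */

    /* added cross-check: R ln 2 for a doubling of the volume of one mole of gas  */
    printf("R ln 2             = %.2f J/K\n", R * log(2.0));   /* also about 5.76 */
    return 0;
}
```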
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.913469135761261, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Cournot_competition
# Cournot competition Cournot competition is an economic model used to describe an industry structure in which companies compete on the amount of output they will produce, which they decide on independently of each other and at the same time. It is named after Antoine Augustin Cournot[1] (1801–1877) who was inspired by observing competition in a spring water duopoly. It has the following features: • There is more than one firm and all firms produce a homogeneous product, i.e. there is no product differentiation; • Firms do not cooperate, i.e. there is no collusion; • Firms have market power, i.e. each firm's output decision affects the good's price; • The number of firms is fixed; • Firms compete in quantities, and choose quantities simultaneously; • The firms are economically rational and act strategically, usually seeking to maximize profit given their competitors' decisions. An essential assumption of this model is the "not conjecture" that each firm aims to maximize profits, based on the expectation that its own output decision will not have an effect on the decisions of its rivals. Price is a commonly known decreasing function of total output. All firms know $N$, the total number of firms in the market, and take the output of the others as given. Each firm has a cost function $c_i(q_i)$. Normally the cost functions are treated as common knowledge. The cost functions may be the same or different among firms. The market price is set at a level such that demand equals the total quantity produced by all firms. Each firm takes the quantity set by its competitors as a given, evaluates its residual demand, and then behaves as a monopoly. ## Graphically finding the Cournot duopoly equilibrium This section presents an analysis of the model with 2 firms and constant marginal cost. $p_1$ = firm 1 price, $p_2$ = firm 2 price $q_1$ = firm 1 quantity, $q_2$ = firm 2 quantity $c$ = marginal cost, identical for both firms Equilibrium prices will be: $p_1 = p_2 = P(q_1+q_2)$ This implies that firm 1’s profit is given by $\Pi_1 = q_1(P(q_1+q_2)-c)$ • Calculate firm 1’s residual demand: Suppose firm 1 believes firm 2 is producing quantity $q_2$. What is firm 1's optimal quantity? Consider the diagram 1. If firm 1 decides not to produce anything, then price is given by $P(0+q_2)=P(q_2)$. If firm 1 produces $q_1'$ then price is given by $P(q_1'+q_2)$. More generally, for each quantity that firm 1 might decide to set, price is given by the curve $d_1(q_2)$. The curve $d_1(q_2)$ is called firm 1’s residual demand; it gives all possible combinations of firm 1’s quantity and price for a given value of $q_2$. • Determine firm 1’s optimum output: To do this we must find where marginal revenue equals marginal cost. Marginal cost (c) is assumed to be constant. Marginal revenue is a curve - $r_1(q_2)$ - with twice the slope of $d_1(q_2)$ and with the same vertical intercept. The point at which the two curves ($c$ and $r_1(q_2)$) intersect corresponds to quantity $q_1''(q_2)$. Firm 1’s optimum $q_1''(q_2)$, depends on what it believes firm 2 is doing. To find an equilibrium, we derive firm 1’s optimum for other possible values of $q_2$. Diagram 2 considers two possible values of $q_2$. If $q_2=0$, then the first firm's residual demand is effectively the market demand, $d_1(0)=D$. The optimal solution is for firm 1 to choose the monopoly quantity; $q_1''(0)=q^m$ ($q^m$ is monopoly quantity). 
If firm 2 were to choose the quantity corresponding to perfect competition, $q_2=q^c$ such that $P(q^c)=c$, then firm 1’s optimum would be to produce nil: $q_1''(q^c)=0$. This is the point at which marginal cost intercepts the marginal revenue corresponding to $d_1(q^c)$. • It can be shown that, given the linear demand and constant marginal cost, the function $q_1''(q_2)$ is also linear. Because we have two points, we can draw the entire function $q_1''(q_2)$, see diagram 3. Note the axis of the graphs has changed, The function $q_1''(q_2)$ is firm 1’s reaction function, it gives firm 1’s optimal choice for each possible choice by firm 2. In other words, it gives firm 1’s choice given what it believes firm 2 is doing. • The last stage in finding the Cournot equilibrium is to find firm 2’s reaction function. In this case it is symmetrical to firm 1’s as they have the same cost function. The equilibrium is the intersection point of the reaction curves. See diagram 4. • The prediction of the model is that the firms will choose Nash equilibrium output levels. ## Calculating the equilibrium In very general terms, let the price function for the (duopoly) industry be $P(q_1+q_2)$ and firm i have the cost structure $C_i(q_i)$. To calculate the Nash equilibrium, the best response functions of the firms must first be calculated. The profit of firm i is revenue minus cost. Revenue is the product of price and quantity and cost is given by the firm's cost function, so profit is (as described above): $\Pi_i = P(q_1+q_2) \cdot q_i - C_i(q_i)$. The best response is to find the value of $q_i$ that maximises $\Pi_i$ given $q_j$, with $i \ne j$, i.e. given some output of the opponent firm, the output that maximises profit is found. Hence, the maximum of $\Pi_i$ with respect to $q_i$ is to be found. First take the derivative of $\Pi_i$ with respect to $q_i$: $\frac{\partial \Pi_i }{\partial q_i} = \frac{\partial P(q_1+q_2) }{\partial q_i} \cdot q_i + P(q_1+q_2) - \frac{\partial C_i (q_i)}{\partial q_i}$ Setting this to zero for maximization: $\frac{\partial \Pi_i }{\partial q_i} = \frac{\partial P(q_1+q_2) }{\partial q_i} \cdot q_i + P(q_1+q_2) - \frac{\partial C_i (q_i)}{\partial q_i}=0$ The values of $q_i$ that satisfy this equation are the best responses. The Nash equilibria are where both $q_1$ and $q_2$ are best responses given those values of $q_1$ and $q_2$. ### An example Suppose the industry has the following price structure: $P(q_1+q_2)= a - (q_1+q_2)$ The profit of firm i (with cost structure $C_i(q_i)$ such that $\frac{\partial ^2C_i (q_i)}{\partial q_i^2}=0$ and $\frac{\partial C_i (q_i)}{\partial q_j}=0, j \ne i$ for ease of computation) is: $\Pi_i = \bigg(a - (q_1+q_2)\bigg) \cdot q_i - C_i(q_i)$ The maximization problem resolves to (from the general case): $\frac{\partial \bigg(a - (q_1+q_2)\bigg) }{\partial q_i} \cdot q_i + a - (q_1+q_2) - \frac{\partial C_i (q_i)}{\partial q_i}=0$ Without loss of generality, consider firm 1's problem: $\frac{\partial \bigg(a - (q_1+q_2)\bigg) }{\partial q_1} \cdot q_1 + a - (q_1+q_2) - \frac{\partial C_1 (q_1)}{\partial q_1}=0$ $\Rightarrow \ - q_1 + a - (q_1+q_2) - \frac{\partial C_1 (q_1)}{\partial q_1}=0$ $\Rightarrow \ q_1 = \frac{a - q_2 - \frac{\partial C_1 (q_1)}{\partial q_1}}{2}$ By symmetry: $\Rightarrow \ q_2 = \frac{a - q_1 - \frac{\partial C_2 (q_2)}{\partial q_2}}{2}$ These are the firms' best response functions. For any value of $q_2$, firm 1 responds best with any value of $q_1$ that satisfies the above. 
In Nash equilibria, both firms will be playing best responses so solving the above equations simultaneously. Substituting for $q_2$ in firm 1's best response: $\ q_1 = \frac{a - (\frac{a - q_1 - \frac{\partial C_2 (q_2)}{\partial q_2}}{2}) - \frac{\partial C_1 (q_1)}{\partial q_1}}{2}$ $\Rightarrow \ q_1* = \frac{a + \frac{\partial C_2 (q_2)}{\partial q_2} - 2*\frac{\partial C_1 (q_1)}{\partial q_1}}{3}$ $\Rightarrow \ q_2* = \frac{a + \frac{\partial C_1 (q_1)}{\partial q_1} - 2*\frac{\partial C_2 (q_2)}{\partial q_2}}{3}$ The symmetric Nash equilibrium is at $(q_1*,q_2*)$. (See Holt (2005, Chapter 13) for asymmetric examples.) Making suitable assumptions for the partial derivatives (for example, assuming each firm's cost is a linear function of quantity and thus using the slope of that function in the calculation), the equilibrium quantities can be substituted in the assumed industry price structure $P(q_1+q_2)= a - (q_1+q_2)$ to obtain the equilibrium market price. ## Cournot competition with many firms and the Cournot Theorem For an arbitrary number of firms, N>1, the quantities and price can be derived in a manner analogous to that given above. With linear demand and identical, constant marginal cost the equilibrium values are as follows: edit; we should specify the constants. Given the following results are these; Market Demand; $\ p(q)=a-bq=a-bQ=p(Q)$ Cost Function; $\ c_i(q_i)=cq_i$, for all i $\ q_i = Q/N = \frac{a-c} {b(N+1)}$ , which is each individual firm's output $\sum q_i = Nq = \frac{N(a-c)} {b(N+1)}$ , which is total industry output $\ p =a-b(Nq)= \frac{a + Nc} {N+1}$ , which is the market clearing price and $\Pi_i = \left(\frac{a - c} {N+1}\right)^2 \left(\frac{1}{b}\right)$ , which is each individual firm's profit. The Cournot Theorem then states that, in absence of fixed costs of production, as the number of firms in the market, N, goes to infinity, market output, Nq, goes to the competitive level and the price converges to marginal cost. $\lim_{N\rightarrow \infty} p = c$ Hence with many firms a Cournot market approximates a perfectly competitive market. This result can be generalized to the case of firms with different cost structures (under appropriate restrictions) and non-linear demand. When the market is characterized by fixed costs of production, however, we can endogenize the number of competitors imagining that firms enter in the market until their profits are zero. In our linear example with $N$ firms, when fixed costs for each firm are $F$, we have the endogenous number of firms: $N=\frac{(a-c)}{\sqrt{Fb}}-1$ and a production for each firm equal to: $q=\frac{\sqrt{Fb}}{b}$ This equilibrium is usually known as Cournot equilibrium with endogenous entry, or Marshall equilibrium.[2] ## Implications • Output is greater with Cournot duopoly than monopoly, but lower than perfect competition. • Price is lower with Cournot duopoly than monopoly, but not as low as with perfect competition. • According to this model the firms have an incentive to form a cartel, effectively turning the Cournot model into a Monopoly. Cartels are usually illegal, so firms might instead tacitly collude using self-imposing strategies to reduce output which, ceteris paribus will raise the price and thus increase profits for all firms involved. 
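To see these formulas in action, here is a small numerical sketch (the parameter values a = 100, c1 = 10, c2 = 16 and b = 1 are arbitrary). It iterates the two best-response functions until they settle at the Nash quantities, compares them with the closed-form q1* and q2*, and then evaluates the N-firm price (a + Nc)/(N + 1) to illustrate the Cournot Theorem: the price approaches marginal cost as N grows.

```c
#include <stdio.h>

int main(void)
{
    /* linear demand P = a - (q1 + q2), constant marginal costs c1, c2 (arbitrary) */
    double a = 100.0, c1 = 10.0, c2 = 16.0;

    /* iterate the best-response functions q_i = (a - q_j - c_i)/2 */
    double q1 = 0.0, q2 = 0.0;
    for (int it = 0; it < 100; it++) {
        double q1_new = (a - q2 - c1) / 2.0;
        double q2_new = (a - q1 - c2) / 2.0;
        q1 = q1_new;
        q2 = q2_new;
    }
    printf("iterated:    q1 = %.4f, q2 = %.4f\n", q1, q2);
    printf("closed form: q1 = %.4f, q2 = %.4f\n",
           (a + c2 - 2.0*c1) / 3.0, (a + c1 - 2.0*c2) / 3.0);

    /* Cournot Theorem: with N identical firms (b = 1, cost c), price -> c as N grows */
    double c = 10.0;
    for (int N = 1; N <= 1000; N *= 10)
        printf("N = %4d   price = %.4f\n", N, (a + N*c) / (N + 1.0));
    return 0;
}
```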
## Bertrand versus Cournot Although both models have similar assumptions, they have very different implications: • Since the Bertrand model assumes that firms compete on price and not output quantity, it predicts that a duopoly is enough to push prices down to marginal cost level, meaning that a duopoly will result in perfect competition. • Neither model is necessarily "better." The accuracy of the predictions of each model will vary from industry to industry, depending on the closeness of each model to the industry situation. • If capacity and output can be easily changed, Bertrand is a better model of duopoly competition. If output and capacity are difficult to adjust, then Cournot is generally a better model. • Under some conditions the Cournot model can be recast as a two stage model, where in the first stage firms choose capacities, and in the second they compete in Bertrand fashion. However, as the number of firms increases towards infinity, the Cournot model gives the same result as in Bertrand model: The market price is pushed to marginal cost level. ## References 1. Varian, Hal R. (2006), Intermediate microeconomics: a modern approach (7 ed.), W. W. Norton & Company, p. 490, ISBN 0-393-92702-4 2. Etro, Federico. , page 6, Dept. Political Economics -- Università di Milano-Bicocca, November 2006 • Tirole, Jean. The Theory of Industrial Organization, MIT Press, 1988. • Oligoply Theory made Simple, Chapter 6 of Surfing Economics by Huw Dixon.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 82, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9194146990776062, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/50736?sort=votes
## Polynomials from a Recurrence Relation Hi guys, I have recently started looking at polynomials $q_n$ generated by initial choices $q_0=1$, $q_1=x$ with, for $n\geq 0$, some recurrence formula $$q_{n+2}=xq_{n+1}+c_n q_n$$ where $c_n$ is some function in $n$. The first few of these are $$q_2=x^2+c_0$$ $$q_3=x^3+(c_0+c_1)x$$ $$q_4=x^4+(c_0+c_1+c_2)x^2+c_2c_0$$ $$q_5=x^5+(c_0+c_1+c_2+c_3)x^3+(c_0c_2+c_0c_3+c_1c_3)x$$ $$q_6=x^6+(c_0+c_1+c_2+c_3+c_4)x^4+(c_0c_2+c_0c_3+c_0c_4+c_1c_3+c_1c_4+c_2c_4)x^2+c_0c_2c_4$$ My question is whether there is a name for the coefficients of the powers of $x$. I realise that they can be written as certain formulations of elementary symmetric polynomials, but I am ideally looking for a reference where the specific expressions are studied. Any help would be great :) - I've added the tag orthogonal polynomials, which are related: orthogonal polynomials satisfy a three-term linear recurrence of a slightly more general form, and are well characterized. Are you interested in the results with $any$ sequence $(c_n)$ or is your $(c_n)$ a particular sequence? (in which case it should be healthy to check if it corresponds to an orthogonal sequence). – Pietro Majer Dec 30 2010 at 17:23 @ Pietro : Yes sorry, I should originally have tagged as relating to orthogonal polynomials. I am interested in general $c_n$, although I came to the recurrence relations from a number of specific examples which are well studied :) – backstoreality Dec 30 2010 at 17:25 ## 2 Answers These polynomials are closely related to continuants, which arise in studying continued fractions. The $n$th continuant of a sequence $a_0$, $a_1$, $\ldots$ is defined by $K(0)=1$, $K(1)=a_1$, $K(n)=a_n K(n-1) + K(n-2)$. They are sums of products of $a_1,\dots, a_n$ in which consecutive pairs are deleted. (See, for example, http://en.wikipedia.org/wiki/Continuant_(mathematics).) Neither continuants nor the coefficients of the $c_n$ are symmetric polynomials; they cannot be expressed in terms of (or as "combinations of") elementary symmetric functions. - +1. There is no known closed-form evaluation for the matrix product $M_n=\prod_{k=1}^n[x,c_k;1,0]$, nor even any real "understanding" of its structure. If it were for $x=1$, then several hard conjectures in number theory about the continuants would be solved... – Wadim Zudilin Dec 31 2010 at 4:49 Let's treat the $c_i$ as formal indeterminates. Let $S(n,m)$ be the set of increasing functions $i:\{1,\ldots, m\}\to \{0,\ldots, n-2\}$, written $j \mapsto i_j$, such that $i_{j+1} > i_j + 1$ for all $j=1,\ldots, m-1$. So $S(n,m)$ is in bijection with the set of subsets of $\{0, \ldots, n-2\}$ of size $m$ which contain no adjacent pair $(k, k+1)$. Then the coefficient of $x^{n - 2m}$ in $q_n$ is $\sum_{i \in S(n,m)} c_{i_1} c_{i_2} \cdots c_{i_m}$ and all other coefficients are zero: $q_n = \sum_{m = 0}^{[n/2]} \left( \sum_{i \in S(n,m)} c_{i_1} c_{i_2} \cdots c_{i_m} \right) x^{n - 2m}$. Proof: induction on $n$. Possibly considering the generating function $F = \sum_{n=0}^\infty q_nt^n$ may be helpful? Note that these coefficients are not elementary symmetric polynomials in the $c_i$, since for example already $c_2c_0$ isn't invariant under all permutations of $\{0,1,2\}$.
I just thought I'd spell out the symmetry that is involved here, and perhaps someone else knows a name for these coefficients. - @ Konstantin : Thanks for your reply and thanks for formally stating the weird symmetry that I hadn't yet put to paper :) Yes I should have been clearer in stating that I know that the coefficients are not elementary symmetric polynomials but they could probably written as combinations of them :) – backstoreality Dec 30 2010 at 18:57
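Not part of the thread, but the explicit description above is easy to check by computer algebra: the following sympy sketch builds the $q_n$ from the recurrence and compares every coefficient with the sum over subsets of $\{0,\ldots,n-2\}$ containing no two consecutive elements.

```python
# Added check (not from the thread): build q_n from the recurrence and compare
# with the sum over sparse subsets of {0, ..., n-2}.
from itertools import combinations
import sympy as sp

N = 8
x = sp.symbols('x')
c = sp.symbols('c0:%d' % N)              # the indeterminates c_0, ..., c_{N-1}

q = [sp.Integer(1), x]                   # q_0 = 1, q_1 = x
for n in range(N - 2):
    q.append(sp.expand(x * q[n + 1] + c[n] * q[n]))   # q_{n+2} = x q_{n+1} + c_n q_n

def formula(n):
    total = sp.Integer(0)
    for m in range(n // 2 + 1):
        for S in combinations(range(n - 1), m):       # subsets of {0, ..., n-2}
            if any(S[j + 1] == S[j] + 1 for j in range(len(S) - 1)):
                continue                              # discard sets with consecutive indices
            term = x ** (n - 2 * m)
            for i in S:
                term *= c[i]
            total += term
    return sp.expand(total)

for n in range(N):
    assert sp.expand(q[n] - formula(n)) == 0
print("explicit description verified for n <", N)
```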
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9505389928817749, "perplexity_flag": "head"}
http://mathforum.org/mathimages/index.php/Chryzodes
Chryzodes Chryzodes are visualizations of arithmetic using chords in a circle. Basic Description Chryzodes use a simple process to create beautiful images. Given a circle, a certain number of evenly spaced points are chosen on the perimeter of the circle. Each point represents a number. An operation, such as addition or multiplication, is then used to connect certain points. For example, if multiplication by 2 is chosen, then the point representing 1 will be connected to the point representing 2, 2 to 4, 3 to 6, and so on. Notice that modular arithmetic is used here, so that $6\times 2=12 \equiv 5 \pmod{7}$. Continuing this process for a large number of iterations, creating many lines, produces images such as this page's main image. Chryzode Field: Number Theory Created By: J-F. Colonna & J-P. Bourguignon
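As an added sketch (not from the original page), the construction just described can be reproduced in a few lines; the number of points and the multiplier are arbitrary assumed choices.

```python
# Sketch (not from the original page): draw the "multiplication by mult modulo p"
# chryzode described above; p and mult are arbitrary assumed choices.
import numpy as np
import matplotlib.pyplot as plt

p, mult = 257, 2
angles = 2 * np.pi * np.arange(p) / p
xs, ys = np.cos(angles), np.sin(angles)

fig, ax = plt.subplots(figsize=(6, 6))
for k in range(p):
    j = (mult * k) % p                     # join point k to point (mult * k) mod p
    ax.plot([xs[k], xs[j]], [ys[k], ys[j]], color="navy", linewidth=0.2)
ax.set_aspect("equal")
ax.axis("off")
plt.savefig("chryzode.png", dpi=150)
```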
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8357332944869995, "perplexity_flag": "middle"}
http://johncarlosbaez.wordpress.com/2011/04/30/crooks-fluctuation-theorem/
# Azimuth ## Crooks’ Fluctuation Theorem guest post by Eric Downes Christopher Jarzynski, Gavin Crooks, and some others have made a big splash by providing general equalities that relate free energy differences to non-equilibrium work values. The best place to start is the first two chapters of Gavin Crooks’ thesis: • Gavin Crooks, Excursions in Statistical Dynamics, Ph.D. Thesis, Department of Chemistry, U.C. Berkeley, 1999. Here is the ~1 kiloword summary: If we consider the work $W$ done on a system, Clausius’ Inequality states that $W \ge \Delta F$ where $\Delta F$ is the change in free energy. One must perform more work along a non-equilibrium path because of the second law of thermodynamics. The excess work $W- \Delta F$ is dissipated as heat, and is basically the entropy change in the universe, measured in different units. But who knows how large the excess work will be… One considers a small system for which we imagine there exists a distribution of thermodynamic work values $W$ (more on that below) in moving a system through phase space. We start at a macrostate with free energy $F_1$, and (staying in touch with a thermal reservoir at inverse temperature $\beta$) move in finite time to a new non-equilibrium state. When this new state is allowed to equilibriate it will have free energy $F_1 + \Delta F$ You can do this by changing the spin-spin coupling, compressing a gas, etc: you’re changing one of the parameters in the system’s Hamiltonian in a completely deterministic way, such that the structure of the Hamiltonian does not change, and the system still has well-defined microstates at all intervening times. Your total accumulated work values will follow $\displaystyle{ \langle \exp(-\beta W) \rangle = \exp(-\beta \Delta F)}$ where the expectation value is over a distribution of all possible paths through the classical phase space. This is the Jarzynski Equality. It has an analogue for quantum systems, which appears to be related to supersymmetry, somehow. But the proof for classical systems simply relies on a Markov chain that moves through state space and an appropriate definition for work (see below). I can dig up the reference if anyone wants. This is actually a specific case of a more fundamental theorem discovered about a decade ago by Gavin Crooks: the Crooks fluctuation theorem: $\displaystyle{ \exp(-\beta(W- \Delta F)) = \frac{P_{\mathrm{fwd}}}{P_{\mathrm{rev}}} }$ where $P_{\mathrm{fwd}}$ is the probability of a particular forward path which requires work $W$, and $P_{\mathrm{rev}}$ is the probability of its time-reversal dual (see Gavin Crooks’ thesis for more precise definitions). How do we assign a thermodynamic work value to a path of microstates? At the risk of ruining it for you: It turns out that one can write a first law analog for a subsystem Hamiltonian. We start with: $H_{\mathrm{tot}} = H_{\mathrm{subsys}} + H_{\mathrm{environ}} + H_{\mathrm{interact}}$ As with Gibbs’ derivation of the canonical ensemble, we never specify what $H_{\mathrm{environ}}$ and $H_{\mathrm{interact}}$ are, only that the number of degrees of freedom in $H_{\mathrm{environ}}$ is very large, and $H_{\mathrm{interact}}$ is a small coupling. You make the observation that work can be associated with changing the energy-levels of the microstates in $H_{\mathrm{subsys}}$, while heat is associated with the energy change when the (sub)system jumps from one microstate to another (due to $H_{\mathrm{interact}}$) with no change in the spectrum of available energies. 
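To see the equality in the simplest possible setting, here is a toy numerical sketch (my own addition; the two-level model and its parameters are assumptions, not from the post). For an instantaneous quench the whole path consists of a single spectrum change, so the work is just $W=H_{\mathrm{final}}(x)-H_{\mathrm{initial}}(x)$ with $x$ drawn from the initial equilibrium, and both sides of the Jarzynski equality can be computed exactly.

```python
# Toy check of the Jarzynski equality (my own sketch, not from the post).
# Two-level system with energies (0, e0); the parameter is switched instantaneously
# to (0, e1), so W = H_final(x) - H_initial(x) with x drawn from the initial
# equilibrium, and <exp(-beta W)> should equal exp(-beta dF) = Z1/Z0.
import numpy as np

beta, e0, e1 = 1.0, 1.0, 3.0
rng = np.random.default_rng(0)

p_exc = np.exp(-beta * e0) / (1.0 + np.exp(-beta * e0))   # initial excited-state probability

x_excited = rng.random(1_000_000) < p_exc                 # sample initial microstates
W = np.where(x_excited, e1 - e0, 0.0)                     # ground-state energy stays 0

lhs = np.mean(np.exp(-beta * W))
F0 = -np.log(1.0 + np.exp(-beta * e0)) / beta
F1 = -np.log(1.0 + np.exp(-beta * e1)) / beta
rhs = np.exp(-beta * (F1 - F0))
print(lhs, rhs)    # agree up to sampling error (about 0.7675 for these parameters)
```

The split used in the sketch, spectrum shifts counting as work and state jumps as heat, is exactly the decomposition described above.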
This implies a rather deep connection between the Hamiltonian and thermodynamic work. The second figure in Gavin's thesis explained everything for me; after that you can basically derive it yourself. The only physical applications I am aware of are to Monte Carlo simulations and mesoscopic systems in nano- or molecular biophysics. In that regard John Baez's recent relation between free energy and Rényi entropy is a nice potential competitor for the efficient calculation of free energy differences (which apparently normally requires multiple simulations at intervening temperatures, calculating the specific heat at each.) But the relation to Markov chains is much more interesting to me, because this is a very general mathematical object which can be related to a much broader class of problems. Heat ends up being associated with fluctuations in the system's state, and the (phenomenological) energy values are kind of the "relative unlikelihood" of each state. The excess work turns out to be related to the Kullback-Leibler divergence between the forward and reverse path-probabilities. For visual learners with a background in stat mech, this is all developed in a pedagogical talk I gave in Fall 2010 at U. Wisconsin-Madison's Condensed Matter Theory Seminar; talk available here. I'm licensing it cc-by-sa-nc through the Creative Commons License. I've been sloppy with references, but I emphasize that this is not original work; it is my presentation of Crooks' and Jarzynski's. Nonetheless, any errors you find are my own. Hokay, have a nice day! ### 3 Responses to Crooks' Fluctuation Theorem 1. Blake Stacey says: It has an analogue for quantum systems, which appears to be related to supersymmetry, somehow. Mallick et al.'s relation between Jarzynski's equality and supersymmetry is actually set up in classical terms, using the "response field" formalism for writing a field theory with stochastic dynamics and Grassmann number-valued fields to bring in the idea of supersymmetry. (I wish I knew more about what they actually did, but that's a bit of background on the tools they used.) • John Baez says: I'd like to understand this stuff!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 20, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.91597980260849, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/32094-implicit-differentiation-print.html
# Implicit Differentiation • March 25th 2008, 07:12 PM lyzzi3 Implicit Differentiation I think my teacher was talking about turning y into y(x) or something but I'm just completely lost on this one • March 25th 2008, 07:23 PM TheEmptySet $5x^2+4x+xy=2$ using implicit differentiation we get $10x+4+y+xy'=0$ $y(2)=-13$ Solving for y' we get $y'=- \left( \frac{10x+4+y}{x} \right)$ evaluate at x=2 $y'(2)=-\left( \frac{10(2)+4+(-13)}{2} \right)=-\frac{11}{2}=-5.5$ • March 25th 2008, 07:26 PM o_O Let's do an example to illustrate implicit differentiation: $x^{2} + y^{3} = 9$ Now, if you were asked to find the derivative of this, you could solve for y then differentiate the equation with respect to x. However, not all functions are easily represented only in terms of x (ex. you could not solve for y for this equation: $x^{2} + xy + y^{2} = 5$). Fortunately, we are still able to find the derivative of such functions. Note that y is a function of x so that we can differentiate it normally. However, we would have to use the chain rule. I think if you study the example, you'll get what I mean: $x^{2} + y^{3} = 9$ $2x + \underbrace{3y^{2}y'}_{} = 0 \quad \mbox{Differentiated with respect to x}$ Looking at the underbrace, notice how we used the power rule (3y^2) and in addition the chain rule (hence the y'). Continuing on to solve for y': $3y^{2}y' = -2x$ $y' = \frac{-2x}{3y^{2}}$ Note it's perfectly fine to have both x and y in our solution to y'. That is basically the trick when implicitly differentiating equations. See if you can extend this to your question. • March 25th 2008, 07:33 PM SengNee Quote: Originally Posted by lyzzi3 I think my teacher was talking about turning y into y(x) or something but I'm just completely lost on this one $5x^2+4x+xy(x)=2$ $10x+4+y(x)+x[y'(x)]=0$ If, $y(2)=-13$ $10(2)+4+(-13)+(2)[y'(2)]=0$ $11+2[y'(2)]=0$ $y'(2)=-\frac{11}{2}$
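An added verification, not part of the original thread: a computer algebra system reproduces the same derivative and the same value at $x=2$.

```python
# Verifying the hand computation above with sympy (an added sketch).
import sympy as sp

x, y = sp.symbols('x y')
dydx = sp.idiff(5*x**2 + 4*x + x*y - 2, y, x)   # dy/dx from 5x^2 + 4x + xy - 2 = 0
print(sp.simplify(dydx))                         # -(10*x + y + 4)/x
print(dydx.subs({x: 2, y: -13}))                 # -11/2
```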
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9548002481460571, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/310698/exactly-one-of-the-following-systems-is-solvable
# Exactly one of the following systems is solvable 1. $Ax\leq b$ and $x\geq 0$ 2. $b^{T}y<0$, $A^{T}y\geq 0$ and $y\geq 0$ I can easily show that if (1) is solvable then (2) is not, and conversely. That means both systems cannot be solvable at the same time. But I want to show that exactly one of them is always solvable. - I presume $A$ and $b$ are given and the question concerns the existence of $x$ and $y$? And when you have one vector less than another, does that mean each component of the first is less than the corresponding component of the other? – Gerry Myerson Feb 21 at 23:43 It remains to prove that "neither is solvable" is impossible. – gt6989b Feb 21 at 23:49 – gt6989b Feb 22 at 0:00
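To illustrate the "exactly one" claim with a small added example, take the $1\times 1$ case $A=(1)$. For $b=(-1)$, system (1) requires $x\le -1$ and $x\ge 0$, which is impossible, while $y=1$ satisfies (2) since $A^{T}y=1\ge 0$ and $b^{T}y=-1<0$. For $b=(1)$, $x=0$ satisfies (1), while (2) is impossible because $y\ge 0$ forces $b^{T}y=y\ge 0$. The general statement is a standard theorem of alternatives in the Farkas family and can be proved, for example, via linear programming duality.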
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9499067068099976, "perplexity_flag": "head"}
http://su.wikipedia.org/wiki/Integrasi_Lebesgue
# Lebesgue integration In mathematics, the integral is a general concept that extends the notion of area from regular figures to regions bounded by functions. Lebesgue integration is a framework for extending the integral to a very large class of functions. The Lebesgue integral is of great importance in the branch of mathematics called real analysis, as well as in several other fields. The Lebesgue integral is named after Henri Lebesgue (1875-1941). An easy way to pronounce the name is "luh-BEG". ## Introduction The integral of a function f can be defined as the area of the region S under the graph of f. This is easy to understand for familiar functions such as polynomials, but what does it mean for more exotic functions? In general, for which class of functions does "area under the curve" make sense? Answering this question requires both theory and practice. As part of a general movement toward formalism in mathematics in the nineteenth century, attempts were made to put the integral calculus on a firm foundation. The Riemann integral, proposed by Bernhard Riemann (1826-1866), is a broadly successful attempt to provide such a foundation for the integral. Riemann's definition starts with the construction of a sequence of easily-calculated integrals which converge to the integral of a given function. This definition is successful in the sense that it gives the expected answer for many already-solved problems, and gives useful results for many other problems. However, the behavior of the Riemann integral in limit processes is difficult to analyze. This is of prime importance, for instance, in the study of Fourier series, Fourier transforms and other topics. The Lebesgue integral is better able to describe how and when it is possible to take limits under the integral sign. The Lebesgue definition considers a different class of easily-calculated integrals than the Riemann definition, which is the main reason the Lebesgue integral is better behaved. The Lebesgue definition also makes it possible to calculate integrals for a broader class of functions. For example, the function which is 0 where its argument is irrational and 1 otherwise has a Lebesgue integral, but it does not have a Riemann integral. We now give a highly technical description. It is possible to skip directly to the discussion heading for further technical and historical justification of the Lebesgue integral if the reader is so inclined. ## Construction of the Lebesgue integral Let μ be a (non-negative) measure on a sigma-algebra X over a set E. (In real analysis, E will typically be Euclidean n-space Rn or some Lebesgue measurable subset of it, X will be the sigma-algebra of all Lebesgue measurable subsets of E, and μ will be the Lebesgue measure. In probability and statistics, μ will be a probability measure on a probability space E.) We build up an integral for real-valued functions defined on E as follows. Fix a set S in X and let f be the function on E whose value is 0 outside of S and 1 inside of S (i.e., f(x) = 1 if x is in S, otherwise f(x) = 0.) This is called the indicator function or characteristic function of S and is denoted 1S. To assign a value to ∫1S consistent with the given measure μ, the only reasonable choice is to set: $\int 1_S = \mu (S)$ We extend by linearity to the linear span of indicator functions: $\int \sum a_k 1_{S_k} = \sum a_k \mu( S_k)$ where the sum is finite and the coefficients ak are real numbers.
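For instance (a worked example added for concreteness, taking μ to be Lebesgue measure on the real line): if $s = 2\cdot 1_{[0,1/2]} + 5\cdot 1_{[3,4]}$, then $\int s\,d\mu = 2\,\mu([0,\tfrac12]) + 5\,\mu([3,4]) = 2\cdot\tfrac12 + 5\cdot 1 = 6$, and the value does not depend on how $s$ is written as a combination of indicator functions.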
Such a finite linear combination of indicating functions is called a simple function. Note that a simple function can be written in many ways as a linear combination of characteristic functions, but the integral will always be the same. Now the difficulties begin as we attempt to take limits so that we can integrate more general functions. It turns out that the following process works and is most fruitful. Let f be a non-negative function supported on the set E (we allow it to attain the value +∞, in other words, f takes values in the extended real number line.) We define ∫f to be the supremum of ∫s where s varies over all simple functions which are under f (that is, s(x) ≤ f(x) for all x.) This is analogous to the lower sums of Riemann. However, we will not build an upper sum, and this fact is important in getting a more general class of integrable functions. One can be more explicit and mention the measure and domain of integration: $\int_E f\,d\mu := \sup\left\{\,\int_E s\,d\mu : s\le f,\ s\ \mbox{simple}\,\right\}$ There is the question of whether this definition makes sense (do simple function or indicating function keep the same integral?) There is also the question of whether this corresponds in any way to a Riemann notion of integration. It is not so hard to prove that the answer to both questions is yes. We have defined ∫f for any non-negative function on E; however for some functions ∫f will be infinite. Furthermore, desirable additive and limit properties of the integral are not satisfied, unless we require that all our functions are measurable, meaning that the pre-image of any interval is in X. We will make this assumption from now on. To handle signed functions, we need a few more definitions. If f is a function of the measurable set E to the reals (including ± ∞), then we can write f = g - h where g(x) = (f(x) if f(x)>0, 0 otherwise) and h(x) = (-f(x) if f(x) < 0, 0 otherwise). Note that both g and h are non-negative functions. Also note that |f| = g + h. If ∫|f| is finite, then f is called Lebesgue integrable. In this case, both ∫g and ∫h are finite, and it makes sense to define ∫f by ∫g - ∫h. It turns out that this definition is the correct one. Complex valued functions can be similarly integrated, by considering the real part and the imaginary part separately. ## Properties of the Lebesgue integral Every reasonable notion of integral needs to be linear and monotone, and the Lebesgue integral is: if f and g are integrable functions and a and b are real numbers, then af + bg is integrable and ∫(af + bg) = a∫f + b∫g; if f ≤ g, then ∫f ≤ ∫g. Two functions which only differ on a set of μ-measure zero have the same integral, or more precisely: if μ({x : f(x) ≠ g(x)}) = 0, then f is integrable if and only if g is, and in this case ∫ f = ∫ g. One of the most important advantages that the Lebesgue integral carries over the Riemann integral is the ease with which we can perform limit processes. Three theorems are key here. The monotone convergence theorem states that if fk is a sequence of non-negative measurable functions such that fk(x) ≤ fk+1(x) for all k, and if f = lim fk, then ∫fk converges to ∫f as k goes to infinity. (Note: ∫f may be infinite here.) Fatou's lemma states that if fk is a sequence of non-negative measurable functions and if f = liminf fk, then ∫f ≤ liminf ∫fk. (Again, ∫f may be infinite.) 
The dominated convergence theorem states that if fk is a sequence of measurable functions with pointwise limit f, and if there is an integrable function g such that |fk| ≤ g for all k, then f is integrable and ∫fk converges to ∫f. ## Equivalent formulations If f is non-negative, then ∫f dμ is precisely the area under the curve as measured by the product measure μ × λ where λ is the Lebesgue measure for R. One can also circumvent measure theory entirely. The Riemann integral exists for any continuous function f of compact support. Then we use functional analysis to obtain the integral for more general functions. Let Cc be the space of all real-valued compactly supported continuous functions of R. Define a norm on Cc by ||f|| = ∫ |f(x)| Then Cc is a normed vector space (and in particular, it is a metric space.) All metric spaces have completions, so let L1 be its completion. This space is isomorphic to the space of Lebesgue integrable functions (modulo sets of measure zero). Furthermore, the Riemann integral ∫ defines a continuous functional on Cc which is dense in L1 hence ∫ has a unique extension to all of L1. This integral is precisely the Lebesgue integral. In this formulation, the limit taking theorems are hard to prove. However, in more general cases (such as when the functions, or perhaps the measures, take values in a large vector space instead of Rn) this approach is a fast way of obtaining an integral. ## Discussion Here we discuss the limitations of the Riemann integral and the greater scope offered by the Lebesgue integral. We presume a working understanding of the Riemann integral. With the advent of Fourier series, there arose the need to exchange summation and integral signs much more often. However, the conditions under which ∑k∫fk and ∫∑kfk are equal proved quite elusive in the Riemann framework. It may come as a surprise to the casual reader that these two quantities may not be equal, so an example helps: • Let fk(x) be 1 on (k, k+1] and -1 on (k+1, k+2] and 0 everywhere else. • Then, ∑k=1∞fk(x) = f(x) where f(x) is zero everywhere except on (1, 2] where it is 1. Hence, ∫∑fk = ∫f = 1. • However, ∫fk = 0 for every k, hence ∑∫fk = 0. However, it was clear from experience that in many very useful situations, the sum and the integral did commute. It was very important to be able to describe which conditions enabled the exchange of the sum and integral signs. Unfortunately, the Riemann integral is poorly equipped to deal with this question; its main useful convergence theorem being the uniform convergence theorem: if fk are Riemann-integrable functions of [a, b] converging uniformly to f, then ∫fk converges to ∫f. Since Fourier series rarely converge uniformly, this theorem is clearly insufficient. There are some other technical difficulties with the Riemann integral. These are linked with the limit taking difficulty discussed above. • If H(x) is a function of [0,1] which is 0 everywhere, except that it is 1 on the rational numbers (see nowhere continuous), then it is not Riemann integrable. This is because, in the calculation of its upper sum, any rectangle used will have height 1 (because all rectangles contain rational points) and in the lower sum, any rectangle used will have height 0 (because all rectangles contain irrational points.) Hence the lower sum is 0 and the upper sum is 1. • This means that the monotone convergence theorem does not hold. 
The monotone convergence theorem would say that if fk(x) is a sequence of non-negative functions increasing monotonically in k to f(x), then the integrals of ∫ fk(x) dx should converge to ∫ f(x) dx. To see why this is so, let {ak} be an enumeration of all the rational numbers in [0,1] (they are countable so this can be done.) Then let gk be the function which is 1 on ak and 0 everywhere else. Lastly let fk = g1 + g2 + ... + gk. Then fk is zero everywhere except on a finite set of points, hence its Riemann integral is zero. The sequence fk is also clearly non-negative and monotonously increasing to H(x), but H(x) isn't Riemann integrable. • The Riemann integral can only integrate functions on an interval. The simplest extension is to define ∫− ∞∞f(x) dx by the limit of ∫−aaf(x) dx as a goes to +∞. However, this breaks translation invariance: if f and g are zero outside some interval [a, b] and are Riemann integrable, and if f(x) = g(x + y) for some y, then ∫ f = ∫ g. However, with this definition of the improper integral (this definition is sometimes called the improper Cauchy principal value about zero), the functions f(x) = (1 if x > 0, −1 otherwise) and g(x) = (1 if x > 1, −1 otherwise) are translations of one another, but their improper integrals are different. (∫ f = 0 but ∫ g = − 2.) ### Towards a better integration theory The solution, as it turns out, is to study an even simpler problem first. The observation is that, if we have a notion of length, we can turn it into a notion of area. Instead of measuring the area of a surface in the plane, we turn our attention to measuring the length of subsets of the real line. One obvious requirement is that an interval [a, b] should have a length of b-a. What other demands we should put on the notion of length is less clear, and much effort was put into obtaining a useful definition. In fact, the term length was first used, but its construction was misguided, and a later, more useful construction is in use today; it is called the measure. Measure theory enables us to calculate the lengths of subsets of the real line. It also fully classifies which sets have a length, and which sets do not have a reasonable notion of length. By spending the extra effort into calculating lengths carefully, we now have a more solid foundation to work with. Of course, the Riemann integral uses the notion of length anonymously. Indeed, the element of calculation for the Riemann integral is the rectangle [a, b] × [c, d], whose area is calculated to be (b-a)(c-d). Obviously the numbers b-a and c-d are meant to be the lengths of [a, b] and [c, d]. However, we can now augment the Riemann integral. Indeed, Riemann could only use rectangles because he could only measure intervals. Equipped with a measure μ, we can calculate the length of sets much more interesting than intervals. So, if X and Y are μ-measurable, we can easily define the area of the cartesian product X × Y to be μ(X) μ(Y). This definition clearly generalizes Riemann's notion of area of a rectangle. In the context of Lebesgue integration, sets such as X × Y are sometimes called rectangles, even though they are far more complicated than the quadrilaterals of the same name. With the ability to measure the area of more complex rectangles, we can attempt to integrate more complex functions. One crucial, but nonobvious step, was to drop the notion of upper sum. 
While upper sums work just fine for bounded functions of bounded intervals, there is a clear problem for unbounded functions, or functions which are supported by all of the real line. For instance, the function f(x) = 1/x2 for x > 1 would necessarily have an infinite upper sum, however it can be shown that this function has a finite integral. Dropping the upper sum robs us of our main way of checking for integrability of functions. It isn't obvious how to decide on the integrability of functions while maintaining a consistent theory. It is very fortunate that a simple (if technical) definition is available. The resulting theory of integration is much more accurate in describing limit taking processes. Many of the original questions posed by Fourier series (about swapping the integral and summation signs) are answerable using one or another of the various Lebesgue integral limit theorems (the main ones are monotone convergence, dominated convergence and Fatou's lemma; see above.)
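To connect the discussion back to the construction section, here is a short numerical sketch (an added illustration, not part of the original article) of the supremum-over-simple-functions definition for a concrete function.

```python
# Added illustration (not from the article): approximate the Lebesgue integral of
# f(x) = x^2 on [0,1] from below by the simple functions phi_n = floor(2^n f)/2^n.
# Writing phi_n as a sum of indicator functions of the level sets {f >= k/2^n},
# whose Lebesgue measure is 1 - sqrt(k/2^n), gives
#     integral(phi_n) = sum_{k=1}^{2^n} (1/2^n) * (1 - sqrt(k/2^n)).
import math

for n in (2, 4, 8, 12):
    levels = 2 ** n
    approx = sum((1.0 - math.sqrt(k / levels)) / levels for k in range(1, levels + 1))
    print(n, approx)     # increases towards the true value 1/3 as n grows
```

The printed values increase with n, as the monotone convergence theorem predicts, and approach the expected value 1/3.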
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8978577256202698, "perplexity_flag": "middle"}
http://gowers.wordpress.com/2009/02/06/dhj-the-triangle-removal-approach/
# Gowers's Weblog Mathematics related discussions ## DHJ — the triangle-removal approach The post, A combinatorial approach to density Hales-Jewett, is about one specific idea for coming up with a new proof for the density Hales-Jewett theorem in the case of an alphabet of size 3. (That is, one is looking for a combinatorial line in a dense subset of $\null [3]^n$.) In brief, the idea is to imitate a well-known proof that uses the so-called triangle-removal lemma to prove that a dense subset of $\null [n]^2$ contains three points of the form $(x,y)$, $(x,y+d)$ and $(x+d,y)$. Such configurations are sometimes called corners, and we have been referring to the theorem that a dense subset of $\null [n]^2$ contains a corner as the corners theorem. The purpose of this post is to summarize those parts of the ensuing discussion that are most directly related to this initial proposal. Two other posts, one hosted by Terence Tao on his blog, and the other here, take up alternative approaches that emerged during the discussion. (Terry’s one is about more combinatorial approaches, and the one here will be about obstructions to uniformity and density-increment strategies.) I would be very happy to add to this summary if anyone thinks there are important omissions (which there could well be — I have written it fast and from my own perspective). If you think there are, then either comment about it or send me an email. Two suggestions, made by Terry in comment number 4 (the second originating with his student Le Thai Hoang), led to two of the main threads of the discussion relating to the triangle-removal approach. One was that since Sperner’s theorem is the density Hales-Jewett theorem for alphabets of size 2, and since there is no obvious way of generalizing the existing proofs of Sperner’s theorem to obtain the density Hales-Jewett theorem, then it would make sense to look for other proofs of Sperner’s theorem. The other was that the existing methods of tackling density theorems tend to prove not just that you get one configuration of a desired type but a positive proportion of the number of configurations in the whole set. One can usually obtain a seemingly stronger result of this type by an easy averaging argument (a prototype of which was proved by Varnavides in the 1950s), but in the case of the density Hales-Jewett theorem the stronger statement is false: one can easily come up with dense subsets of $\null [3]^n$ with far fewer than $4^n$ combinatorial lines (the number that you get in $\null [3]^n$ itself). The Varnavides thread. This led to a search for a suitable Varnavides-like statement, and that search seems to have been successful — though we have not checked this carefully. (See comments 61-64 and 66.)The basic idea is to restrict attention to the set $T_m$ of all sequences $x$ such that the numbers of 1s, 2s and 3s in $x$ are all within $m$ of $n/3$, where $m$ is a large integer that is much smaller than $n$. An easy averaging argument shows that if you can prove that a dense subset of $T_m$ contains a combinatorial line, then you have the density Hales-Jewett theorem. (It seems that one can take $m$ to be anything from a very large constant that depends on the density only, to an integer that is a small multiple of $\sqrt n$.) The reason this helps is that it avoids the problem one has in $\null [3]^n$ that the points in a random combinatorial line are very atypical: with high probability they contain about $n/2$ of one value and about $n/4$ of the other two values. 
This seems to have cleared up the Varnavides problem, so it is something to hold in reserve and use if we get to the stage where we have a definite approach to try out on the problem. For now, one can either just pretend that the Varnavides problem doesn’t exist and think about $\null [3]^n$, or one can simply bear in mind that one is thinking about sequences with roughly equal numbers of 1s, 2s and 3s and combinatorial lines with small variable sets, or sets of wildcards as some of us are calling them. The Sperner thread. Some definite progress has been made on trying to find alternative proofs of Sperner’s theorem. One approach was initiated in comments 21 and 22. The idea here was to try to find a statement that would have the same relationship to Sperner’s theorem as the hoped-for (but not yet even precisely formulated) triangle-removal lemma had to density Hales-Jewett. An argument was given in comments 21 and 22 that the statement, if it existed, should have the following form. Let $\mathcal{A}$ and $\mathcal{B}$ be collections of subsets of $\null [n]$ such that there are very few pairs $(U,V)$ with $U\in\mathcal{A}$, $V\in\mathcal{B}$ and $U\cap V=\emptyset$. Then it is possible to remove a small number of sets from $\mathcal{A}$ and $\mathcal{B}$ and end up with no such pairs. We have been calling statements like this pair-removal lemmas, even though we have not yet proved any useful ones. A statement like this implies a weak form of Sperner’s theorem as follows. Suppose you have a dense set system $\mathcal{A}$ that contains no pair of distinct sets $(U,W)$ with $U\subset W$. Then let \$\mathcal{B}\$ consist of all complements of sets in $A$. If we can find $U\in\mathcal{A}$ and $V\in\mathcal{B}$ with $U\cap V=\emptyset$, then $U\subset V^c$. Since $V^c\in\mathcal{A}$, this is a contradiction unless $V^c=U$. Therefore, the number of pairs $(U,V)$ with $U\in\mathcal{A}$, $V\in\mathcal{B}$ and $U\cap V=\emptyset$ is very small. The pair-removal statement then implies that one can remove small subsets from $\mathcal{A}$ and $\mathcal{B}$ in order to end up with no such pairs. But that contradicts the fact that $\mathcal{A}$ is dense and for every $U\in\mathcal{A}$ we have the pair $(U,U^c)$. After some difficulty in formulating a precise pair-removal lemma that wasn’t either trivial or false, we have arrived at the following test problem. (It’s not actually the pair-removal lemma we’re going to want, but it is cleaner, looks both true and non-trivial, and seems to involve all the right difficulties.) Tentative conjecture. Let $n=2m+k$, where $k$ tends slowly to infinity with $n$. Let $\mathcal{A}$ and $\mathcal{B}$ be subsets of $\binom{[n]}{m}$ such that the number of pairs $(U,V)$ with $U\in\mathcal{A}$, $V\in\mathcal{B}$ and $U\cap V=\emptyset$ is $o(1)\binom{n}{m,m,k}$. Then one can remove $o(1)\binom{n}{m}$ sets from each of $\mathcal{A}$ and $\mathcal{B}$ and end up with no such pairs. Some promising suggestions have been made for how to go about proving this conjecture. Jozsef (in comment 124) has drawn attention to a theorem of Bollobás that implies that if each $U\in\mathcal{A}$ is disjoint from precisely one $V\in\mathcal{B}$, then $\mathcal{A}$ and $\mathcal{B}$ must both be small. And Ryan (in comments 82, 145, 150 and 152) has drawn attention to existing papers on very closely related subjects. Very recently (at the time of writing it is in the last two or three hours) Jozsef has given us what appears to be a proof of a pair-removal lemma. 
Assuming there aren’t any irritating surprises, such as it turning out not to be quite the statement we really want, the obvious thing to do here seems to be to play around with Jozsef’s argument and see whether it can be generalized from 01-sequences to 012-sequences. A very quick word about what to expect of such a generalization. It would be “completing the square”, where vertex 00 is the trivial pair-removal lemma, vertex 01 is the pair-removal lemma in Kneser graphs, vertex 10 is the normal triangle-removal lemma (as described in the background-information post), and we are looking for vertex 11. Since getting from 00 to 10 is by no means a triviality, getting from 10 to 11 is unlikely to be a triviality either. But now we are in a stronger position than we were initially, because to get to 11 there are two potential routes. If we are really lucky, then we’ve got all the ingredients we need and just have to decide how to mix them together. Other approaches to Sperner. The above approach to re-proving Sperner has a certain naturalness to it, at least in the context of the main problem, but it is not the only one to have been discussed. If generalizing it runs into difficulties, then we have other ideas to explore that have definitely not yet been explored as far as they could have. They begin with Ryan’s comment 57 (in which he was responding to a related comment of Boris). This introduces the idea of what Boris calls “pushing shadows around”, which is apparently what Sperner himself did. Other ideas that are still to be explored. If we are going to follow the triangle-removal proof of the corners theorem quite closely, then at some point we will need to find an analogue of Szemerédi’s regularity lemma for subgraphs of the graph where you join two sets if they are disjoint. There are some obvious problems with doing this (discussed in the initial post in comment V, and comments EE to LL), and a very speculative suggestion for how a proof might look in comment 110. There would also be a need for a counting lemma: an offer to search for such a lemma in a systematic way has not yet been taken up. [Remark added 7/2/09. It's important to understand that this post is how the situation seemed to be when it was written yesterday. If you read the comments you will see that some of the assertions that looked plausible (such as the precise form of the tentative conjecture) have now been discovered to be wrong.] ### Like this: This entry was posted on February 6, 2009 at 10:03 am and is filed under polymath1. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site. ### 71 Responses to “DHJ — the triangle-removal approach” 1. gowers Says: February 6, 2009 at 11:16 am | Reply 300. One of the main reasons for this comment is to establish the numbering. Comments on Terry’s post will be numbered 200-299 (with a convention that from now on if a post receives 100 comments then it must be summarized and restarted), comments on this post will be 300-399, and comments on the obstructions-to-uniformity post, when it is written, will be 400-499. To get the ball rolling for this post, I’d like to write out Jozsef’s proof of pair removal in full detail, mainly for my own benefit so that I understand it properly, but also because if, as I hope, a major subproject for this thread will be to see if we can generalize Jozsef’s argument, it will be convenient to have that argument here for easy referral. 
The first thing to note is that the result Jozsef proves is not quite as strong as the pair-removal lemma we originally asked for, because what he shows is not that you can remove a small number of sets and end up with no disjoint pairs, but rather that you can remove a small number of sets and end up with removing almost all the disjoint pairs you started with. This kind of weakening of a removal lemma is good enough for many applications. For instance, if we applied it to the corners problem we would find that we could remove a small number of edges and get rid of almost all the degenerate triangles, which is just as much of a contradiction as the usual one. (In fact, even if we could show that we could get rid of one percent of the degenerate triangles we would have a contradiction.) I’ll get on to the actual proof in the next comment. 2. gowers Says: February 6, 2009 at 11:29 am | Reply 301. Pair removal in Kneser graphs. The Kneser graph $K_{n,m}$ is the graph whose vertices are all subsets of $\null [n]$ of size $m$, with two sets $U$ and $V$ joined if and only if they are disjoint. We should think of $n$ as being $2m+k$ for some fairly small but nonconstant $k$ but for now I will avoid trying to specify them. Let $\mathcal{A}$ and $\mathcal{B}$ be collections of sets of size $m$ (i.e., subsets of the vertex set of $K_{n,m}$. We can form a bipartite graph by joining $U\in\mathcal{A}$ to $V\in\mathcal{B}$ if and only if $U\cap V=\emptyset$: this is not quite a bipartite subgraph of $K_{n,m}$ (since $\mathcal{A}$ and $\mathcal{B}$ are not required to be disjoint), but it is an induced subgraph of what one might call the bipartite version of $K_{n,m}$ (where each vertex set contains all $m$-element sets and you join two such sets if they are disjoint). From now on I will blur the distinction between this bipartite graph and $K_{n,m}$ itself. The maximum number of edges that could possibly join $\mathcal{A}$ to $\mathcal{B}$ is the number of edges in $K_{n,m}$ (bipartite version), which is $\binom nm\binom{m+k}m=\binom n{m,m,k}$. So we shall assume that the total number of edges is at most $c\binom n{m,m,k}$ for some very small constant $c>0$. Our aim is to remove at most $a\binom nm$ vertices and get rid of at least half these edges (or any other fixed fraction you might care to go for), where $a$ tends to zero as $c$ tends to zero. I’ll discuss how Jozsef does this in the next comment. 3. gowers Says: February 6, 2009 at 11:45 am | Reply 302. Pair removal in Kneser graphs. What Jozsef ends up showing is that there is a very simple way of deciding which vertices to remove: you just get rid of all vertices of large degree. It turns out that if you do this then you either get rid of a significant fraction of all the edges, or you had almost no vertices to start with (in which case you can just remove all of them). Now all the time we are hoping to remove a number of edges that is large as a fraction of the total number of edges, so “large degree” means “significantly larger than the average degree”. Let us suppose, then, that the average degree in our graph is $d$. (Here I am averaging over all vertices in $K_{n,m}$, including those that are not in $\mathcal{A}$ or $\mathcal{B}$.) It follows from Markov’s inequality that for any $C$ the number of vertices of degree at least $Cd$ is at most $2C^{-1}\binom nm$. So we will need to choose $C$ so that $2C^{-1}$ tends to zero as $c$ tends to zero. Suppose that removing these vertices doesn’t get rid of half the edges. 
That means that after the removal we have a bipartite induced subgraph of $K_{n,m}$ such that the average degree is at least $d/2$ and the maximal degree is at most $Cd$. In the next comment I’ll explain what Jozsef does in this case. 4. gowers Says: February 6, 2009 at 12:26 pm | Reply 303. Pair removal in Kneser graphs. What he does is to use what he calls “the permutation trick of Lubell” to get an upper bound on the number of vertices that there can be in a graph with average degree at least $d/2$ and maximal degree at most $Cd$. Take a random permutation $\pi$ of $\null [n]$. Given two disjoint sets $U$ and $V$, we shall say that $\pi$ separates $U$ and $V$ if every element of $\pi(U)$ is less than every element of $\pi(V)$ or vice versa. What is the expected number of separated pairs with $U\in\mathcal{A}$ or $V\in\mathcal{B}$? Let’s suppose that the total number of vertices in the bipartite graph is $N$. Then the total number of disjoint pairs is at least $(d/2)N$. Each disjoint pair $(U,V)$ has a probability of $2\binom{2m}m^{-1}$ of being separated (since there are $\binom{2m}m$ ways of ordering $U\cup V$ of which precisely two separate them), so the expected number of separated pairs is at least $dN\binom{2m}m^{-1}$. In particular, there exists a permutation $\pi$ that gives rise to at least $dN\binom{2m}m^{-1}$ separated pairs. But how many separated pairs can there be for a single permutation? Well, let’s suppose first that that permutation is the identity (without loss of generality) and that we have separated pairs $(U_1,V_1),\dots,(U_r,V_r)$, ordered in such a way that every element of $U_i$ is less than every element of $V_i$. The crucial observation here is that if $U_1$ is a set with the smallest maximal element out of any of the $U_i$ (again without loss of generality), then $(U_1,V_i)$ is a separated pair for all $V_i$. Therefore, there can be at most $Cd$ distinct sets $V_i$ (since we know that $U_1$ has degree at most $Cd$). Similarly, there can be at most $Cd$ distinct $U_i$, so the total number of separated pairs is at most $C^2d^2$. It follows that $dN\binom{2m}m^{-1}\leq C^2d^2$, and hence that $N\leq C^2d\binom{2m}m=C^2d\binom{n-k}m$. And having got to this point, I realize that I don’t understand why the right-hand side is small. Does anyone see it, or do we wait till Jozsef wakes up? (I also have trouble understanding the significance of the final inequality in his comment 155). 5. gowers Says: February 6, 2009 at 12:36 pm | Reply 304. Pair removal in Kneser graphs. Maybe the point is that the right-hand side doesn’t have to be small if all we know about $d$ is that it’s a small multiple of $\binom{n+m}m$, but if we assume that $d$ is much smaller (it’s possible that for applications to Sperner we may even be able to take $d=1$) and $k$ is reasonably large, then the right-hand side is much smaller than $\binom nm$. So this would fit in with my comment 153 , where I suggested that using a stronger hypothesis might be helpful to prove a pair-removal lemma. 6. gowers Says: February 6, 2009 at 12:56 pm | Reply 305. Triangle removal. Here is a possible toy problem to think about, as an initial contribution to the project of generalizing Jozsef’s proof. This time we let $n=3m+k$. Our basic objects will be pairs $(U,V)$ of disjoint sets. We are given three collections $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ of such pairs. A triangle is a triple of the form $(U,V),(U,W),(V,W)$ with $(U,V)\in\mathcal{A}$, $(U,W)\in\mathcal{B}$ and $(V,W)\in\mathcal{C}$. 
We would like to prove that if for each pair $(U,V)\in\mathcal{A}$ there is at most one $W$ such that $(U,W)\in\mathcal{B}$ and $(V,W)\in\mathcal{C},$ and similarly for the other two ways round, then it is possible to remove a small number of elements from $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ and end up removing a significant fraction of all the triangles. There are no degenerate triangles here, so this wouldn’t do DHJ straight off, but I think it would be a good way of trying to understand whether Jozsef’s methods generalize. An alternative line of investigation would be to generalize Jozsef’s methods to a statement about several central layers of the cube rather than just one. 7. gowers Says: February 6, 2009 at 1:16 pm | Reply 306. Triangle removal. An optimistic guess about how it might go: we just routinely try to generalize Jozsef’s argument, and at a certain point a difficulty arises, which we turn out to be able to solve using the usual triangle-removal lemma. 8. gowers Says: February 6, 2009 at 1:38 pm | Reply 307. Triangle removal. At the very least, let’s try to do the routine generalization and see what happens. So without any serious thought about what is sensible and what is not, let us say that a permutation $\pi$ separates the three sets $U,$ $V$ and $W$ if every element of $U$ comes before every element of $V$, which in turn comes before every element of $W$, or any one of the other five such statements you get by permuting the three sets. We’ll also say that $\pi$ separates a triangle if it separates the three sets used to build it. Given any two disjoint sets $U$ and $V$, let us define the $\mathcal{A}$-degree of $(U,V)$ to be the number of $W$ such that $(U,V)\in\mathcal{A}$, $(U,W)\in\mathcal{B}$ and $(V,W)\in\mathcal{C}$. (In this case, we say that $U,V,W$ form a triangle.) We have similar definitions for the other two possible degrees. Now let’s suppose that the average degree (over all three bipartite graphs) is $d$ and the maximal degree is $Cd$. What can we say? Let’s think about the expected number of separated triangles. The total number of triangles is $d\binom n{m,m,n-2m}=d\binom{3m+k}{m,m,m+k}$ (up to some absolute constant depending on whether one orders the vertices etc.), and the probability that $\pi$ separates any given triangle is $6\binom{3m}{m,m,m}^{-1}$. So the expected number is $6d\binom{3m}{m,m,m}^{-1}\binom {3m+k}{m,m,m+k}.$ Therefore, there must be some permuation $\pi$ that separates at least this number of triangles. Now let’s think about how many triangles a permutation can separate. Our assumption is that no pair of sets forms a triangle with more than $Cd$ other sets. It’s at this point that, if our wildest dreams come true, we find that normal dense triangle removal is exactly what we need. But no reason to suppose that just yet. I’m going to continue in a new comment. 9. gowers Says: February 6, 2009 at 1:59 pm | Reply 308. Triangle removal. Let’s suppose that $(U_1,V_1,W_1),\dots,(U_m,V_m,W_m)$ are separated triangles. We would like an upper bound on $m$. If we choose $i$ such that the maximal element of $V_i$ is minimized, we see that the number of distinct sets $W_j$ can be at most $D=Cd$. Similarly, the number of distinct $U_i$ can be at most $D$. But each pair $(U_i,W_j)$ is allowed at most $D$ sets $V_h$ in the middle, so we seem to have at most $D^3$ triangles separated by $\pi$. This is slightly worrying as it came out a bit too simply for my liking. 
If it’s correct, it shows that $6d\binom{3m}{m,m,m}^{-1}\binom{3m+k}{m,m,m+k}\leq Cd^3$, and therefore that $d^2\geq 6C^{-1}\binom{3m}{m,m,m}^{-1}\binom{3m+k}{m,m,m+k}$, which is exponential in $k$. So for very small $d$, such as we might expect to have in DHJ, we get a contradiction. I’ve got to go now, so all I’ll say at this point is that this argument feels too simple to be correct, but I can’t see what’s wrong with it. (If it’s right, then the explanation must simply be that it is not what we need.) 10. Ryan O'Donnell Says: February 6, 2009 at 3:14 pm | Reply 309. Pair removal in Kneser graphs. Can someone help me with this dumb question? Suppose $A = B$ are the family of sets not including the last element $n$. Then $A$ and $B$ have density about $1/2$ within $KN_{n, n/2-k/2}$. (We’re thinking $k(n) \to \infty$, $k(n)/n \to 0$ here, right?) It seems that the fraction of Kneser graph edges which are in $A \times B$ is about $k/n$, which is “negligible”. So we should be able to delete any small constant fraction of vertices from $A = B$ and make it an intersecting family. But $A$ is $100\%$ of the Kneser graph $KN_{n-1, n/2-k/2}$, and doesn’t the largest intersecting family inside such a Kneser graph have density only around $1/2$? 11. gowers Says: February 6, 2009 at 4:49 pm | Reply 310. Pair removal in Kneser graphs. Ryan, all I can say is that I can’t see a flaw in your reasoning. If it’s correct, then it shows something rather interesting. I couldn’t get Jozsef’s argument to work unless I assumed that degrees were very small rather than just small, and your example suggests that it’s actually false when you merely assume that the degrees are a negligible fraction of the maximum possible. In your example, the degree of a vertex in $A$ is still large, even if it’s proportionately small, so there’s still some hope that Jozsef’s argument is correct and adaptable to DHJ. Now I must stare at the argument that I produced in 308 and try to work out what’s going on. 12. Ryan O'Donnell Says: February 6, 2009 at 5:10 pm | Reply 311. I guess you can make the degree go down fairly rapidly compared to the density by requiring the last $t$ coordinates to be ${}0$. (Density goes down like $2^{-t},$ fraction of edges goes down like $(k/n)^t$.) 13. Jason Dyer Says: February 6, 2009 at 5:11 pm | Reply 312. Counting lemma This is idle wondering, but given a random (not quasi-random) tripartite graph, do we know the probability P of n triangles appearing given a density D? This seems like this sort of thing that ought to be just a textbook example. 14. gowers Says: February 6, 2009 at 5:13 pm | Reply 313. Triangle removal. I’m going to try to do two things at once. The first is express myself more clearly when it comes to arguments such as the one in 308, and the second is to try to develop an argument that involves more than one slice, in the hope that degenerate triangles will come into play somehow. Let me begin by trying to define the nicest possible triangular grid of slices . I’ll define $T_m$ to be the union of all $\Gamma_{a,b,c}$ such that $(a,b,c)$ is a triple of integers that belongs to the convex hull of $(n/3+m,n/3-m,n/3)$, … actually, cancel this. It seems to be more natural, if one is going for the nicest possible and most symmetrical set of slices, to go for a hexagon. So let’s take all $(a,b,c)$ that sum to $n$ and that satisfy $\max\{|a-n/3|,|b-n/3|,|c-n/3|\}\leq m$. Here I’m imagining that $n$ is a multiple of 3, not that it’s all that important. 
(This is a hexagon because there are six linear inequalities, and the symmetry of the situation makes it a regular hexagon.) Let’s write $\Gamma$ for the union of all these $\Gamma_{a,b,c}$. We would like to prove that any dense subset $A$ of $\Gamma$ contains a combinatorial line. Now I’m going to do roughly the same thing as before. I’ll assume that there are no combinatorial lines in $A$. For each sequence $x$ in $A$ and each $j\in\{1,2,3\}$, let’s define the $j$-set of $x$ to be $\{i:x_i=j\}$, and we’ll denote this by $U_j(x)$. As ever, we define $\mathcal{A}$ to be the set of all pairs $(U_1(x),U_2(x))$ with $x\in A$, $\mathcal{B}$ to be the set of all $(U_1(x),U_3(x))$ and $\mathcal{C}$ to be the set of all $(U_2(x),U_3(x))$ (again, always with $x\in A$). The assumption that there are no combinatorial lines tells us that there are no triples $(U,V,W)$ with $(U,V)\in\mathcal{A}$, $(U,W)\in\mathcal{B}$ and $(V,W)\in\mathcal{C}$, except in the degenerate case where $U\cup V\cup W=[n]$. In the next comment I’ll try to hit this with Jozsef’s permutation argument. 15. Jason Dyer Says: February 6, 2009 at 5:15 pm | Reply (Meta-comment) Ryan, like the brackets problem you should toss a {} before the 0 there if you want to render it in LaTeX in WordPress. 16. gowers Says: February 6, 2009 at 5:31 pm | Reply 314. Triangle removal. So now let’s choose a random permutation and see what we can say about the number of separated triangles. To do this, my first task is to say how many actual triangles there are. And the answer is $|A|$, one degenerate triangle for each element of $A$ and no others. Next, I’d like some idea of the probability that a random permutation $\pi$ separates a given triangle. Let’s suppose that the point $x$ of $A$ that gives rise to the triangle belongs to $\Gamma_{a,b,c}$. Then the probability that $\pi$ separates $(U_1(x),U_2(x),U_3(x))$ is $6\binom n{a,b,c}^{-1}$. So the expected number of separated triangles is $6|A|\binom n{a,b,c}^{-1}$. Now suppose I just choose an arbitrary permutation — without loss of generality the identity. How many separated triangles can there be? I think this may be where the problems arise, but let’s see. Suppose that $(U_1,V_1,W_1),\dots,(U_s,V_s,W_s)$ are separated triangles. We know that for each $(U_i,V_i)$ … I may come back to this, but I’ve now seen where I went wrong in 308, so I should think about that first, as it’s a simpler problem. 17. gowers Says: February 6, 2009 at 5:49 pm | Reply 315. Triangle removal: my mistake in 308. I started 308 with the claim that if you choose $(U_i,V_i)$ so as to minimize the maximum element of $V_i$, then you could see immediately that there were at most $D$ distinct $W_j$. The thinking behind this was that each $W_j$ would form a triangle. But that was wrong: there is no reason for $(U_i,W_j)$ to belong to $\mathcal{B}$ or for $(V_i,W_j)$ to belong to $\mathcal{C}$. So it’s back to the drawing board here. (But this, I should stress, is good news as I wanted there to be a complication to give triangle-removal a chance to creep in.) Here is a new proof strategy. First I’ll copy the first part of Jozsef’s argument to get to the stage where each vertex is contained in roughly the same number of triangles. (If I can’t, I’ll remove a few vertices and get rid of a significant fraction of the triangles.) Next, I’ll argue as above that the expected number of separated triangles is reasonably large. 
Then I’ll try to bound the number of separated triangles if no vertex is in too many triangles and no edge is in more than one triangle. The first step of that will be to pick … actually, I still don’t see it. I was going to say that one should pick the $U_i$ with smallest maximum, but I don’t see what that gives us. What we can say is this. For each $U$ there will be some system of sets $(V_i,W_i)$ such that $(U,V_i,W_i)$ forms a triangle. If in this set system we choose $i$ such that the maximum of $V_i$ is minimized, then we can deduce that there are not too many distinct $W_j$. Except that I don’t know why I even say this so confidently: what makes me think that $(V_i,W_j)$ should lie in $\mathcal{C}$? OK I’m floundering around a bit here so I’d better stop. But I’ll leave these messy thoughts here in case anyone can get a clearer idea of what ought to be done. It feels as though it’s in promising territory somehow. 18. gowers Says: February 6, 2009 at 6:14 pm | Reply 316. Triangle removal. This is the kind of reason I find it promising. The graph where you join two sets if and only if one is completely to the left of the other is, under normal circumstances, much denser than the graph where you just join two sets if they are disjoint. So what Jozsef’s permutation argument is cleverly achieving seems to be to get us from a sparse context to a dense context. (I don’t yet have a fully precise formulation of this remark.) Now let’s suppose we have our separated triangles $(U_i,V_i,W_i)$ above. Suppose also that we have managed to show somehow that many of the pairs $(U_i,V_j)$ belong to $\mathcal{A}$, and so on. We know that there won’t be many triangles, so this will imply, by the usual triangle-removal lemma, that we can remove a small number of edges and end up with no triangles. Could there be an argument along these lines that allows us to prove density Hales-Jewett by hitting it permutation by permutation? We’re in that familiar state where there’s a small probability $p$ of being close to a solution and a large probability $1-p$ of having written a load of nonsense. 19. jozsef Says: February 6, 2009 at 6:33 pm | Reply 317. Pair removal in Kneser graphs. Bad news – good news. Good morning! I hope that I didn’t derail the project with my last calculation yesterday – that the “removal lemma” works when $m\sim n - \sqrt{n}$. The main inequality is correct, and we see that it tells us something non-trivial if m is small compare to n. As I see now, the average degree, d, grows as $m=o(n - \sqrt{n})$. I think that this is the range where we have a removal lemma so far. This is a removal lemma we get by simple double-counting, so I’m afraid that we can’t expect too much out of it, but it supports Tim’s conjecture. 20. gowers Says: February 6, 2009 at 7:15 pm | Reply 318. Pair/triangle removal. Hi Jozsef. Two remarks. It seems to me (but I’d be reassured if I knew you agreed about this) that we have a removal lemma if the average degree is very small. And in interesting contexts it is. For example, if $\mathcal{A}$ is a counterexample to Sperner’s theorem and we let $\mathcal{B}$ consist of al complements of sets in $\mathcal{A}$, then each set in $\mathcal{A}$ is joined just to its complement, so the average degree is 1. The second remark is that it’s not impossible that a simple double counting gives us something good. 
The reason is that when going from 2 to 3 in the dense case you go from trivial to interesting, so when going from 2 to 3 here, perhaps you go from simple to very interesting. 21. Ryan O'Donnell Says: February 6, 2009 at 7:25 pm | Reply 319. Sperner. Hi Jozsef, is that a typo? $m$ needs to be less than $n/2$; otherwise the $KN_{n,m}$ Kneser graph is empty. I’m not sure the scenario we should consider now… we have counterexamples to the Tentative Conjecture for $k$ both constant and superconstant. 22. jozsef Says: February 6, 2009 at 7:28 pm | Reply 320 Pair removal lemma Now I feel a bit dummy; I wanted to check where did the calculation go wrong yesterday in my last post, but I can’t see. Tim, in 304 you asked why is the right hand side small. By checking $\binom{n}{m}/\binom{2m}{m}$ what I see – again – is that it should be around $2^{2\sqrt{n}}$. I probably badly overlook the same thing again … I’ll go to have a morning coffee. 23. gowers Says: February 6, 2009 at 7:34 pm | Reply 321. Sperner Ryan, you may be ahead of me here and already see why the suggestion I’ve been making is a bad one, but what about strengthening the hypothesis of the Tentative Conjecture from the number of disjoint pairs being $o(1)\binom{n}{m,m,k}$ to each set belonging to at most one disjoint pair? (With this second hypothesis one tries to prove that the sets must be small.) Actually, isn’t that just Bollobás’s theorem? I think it is because if I take $d=C=1$ in 303 then I get the correct bound of $\binom{2m}m$. Is that the standard proof of Bollobás’s theorem? 24. Jason Dyer Says: February 6, 2009 at 7:35 pm | Reply 322. Pair removal in Kneser graphs. Ok, I am trying to understand Jozsef’s proof (I notice it uses the greedy algorithm, interesting) and I must be missing something elementary at 303: The crucial observation here is that if $U_1$ is a set with the smallest maximal element out of any of the $U_i$ (again without loss of generality), then $(U_1,V_i)$ is a separated pair for all $V_i$. Could someone expand on this a little more? I’m not seeing why. 25. gowers Says: February 6, 2009 at 7:37 pm | Reply 323. Pair removal in Kneser graphs. Jason, if $U_1$ has the smallest maximal element and $(U_i,V_i)$ is separated, then the largest element of $U_1$ is at most the largest element of $U_i$ is less than the smallest element of $V_i$. If this doesn’t explain it then I don’t understand what you’re not understanding. 26. gowers Says: February 6, 2009 at 8:08 pm | Reply 324. Triangle removal A quick technical question: I think it should be reasonably straightforward to answer. Suppose we have a dense set $A$ in $\null[3]^n$. Suppose now that we take a random permutation $\pi$ and just take from $A$ the sequences that start with 1s, continue with 2s and finish with 3s (after you’ve used $\pi$ to scramble the coordinates). And now suppose you form the tripartite graph from just those sequences. That is, you let $\mathcal{A}$ consist of all pairs $(U,V)$ that are equal to $(U_1(x),U_2(x))$ for some $x\in A$, and so on. We can think of $\mathcal{A}$ as a bipartite graph, whose vertices are subsets of $\null [n]$, where we join $U$ to $V$ if $U$ finishes before $V$ starts. What I’d like to know is whether this graph tends to be dense. I think the straightforward answer is no, so what I really mean is whether it can be made dense if you remove a smallish number of vertices and edges, or something like that. 
The motivation for this question is that I’d very much like to be able to define a dense tripartite graph with few triangles and see what happens when we apply the usual triangle-removal lemma. 27. jozsef Says: February 6, 2009 at 8:30 pm | Reply 325. Pair removals Re: 319, yes there was a typo, I meant n/2 there, thanks. But I still don’t see if anything is bad with my calculation; Please help me, is it correct that if $m=n/2-\sqrt{n}$ then d should be at least $c2^{2\sqrt{n}}$ or we have a removal lemma? After this I will try to digest Tim’s triangle removal technique. 28. Ryan O'Donnell Says: February 6, 2009 at 9:29 pm | Reply 326. Kneser. Hi Tim, in #321, if each set is involved in at most one Kneser edge, then I think the least number of sets you need to delete in order to eliminate all Kneser edges is with a factor of two of the number of sets (vertices), no? 29. gowers Says: February 6, 2009 at 9:37 pm | Reply 327. Kneser Yes, so if you can prove that you can remove a small number of sets then you’ve proved that there weren’t very many sets in the first place. 30. gowers Says: February 6, 2009 at 9:43 pm | Reply 328. Pair removal Jozsef in 325, my calculation (which I checked and I’m pretty sure it works out to be the same as yours) suggests that you get something interesting as long as $C$ is large and $C^2d\binom{2m}m$ is a lot smaller than $\binom{2m+k}m$. Since $\binom{2m+k}m$ is about $2^k\binom{2m}m$, it follows that we need $d$ to be small compared with $2^k/C^2$ in order to get a removal lemma. I think this agrees with you, since you are taking $m=n/2-\sqrt n$ and $k=2\sqrt n$. So it seems correct to you and it seems correct to me, which means that there’s an $\epsilon^{3/2}$ chance that it’s incorrect. 31. jozsef Says: February 6, 2009 at 9:53 pm | Reply 329. Thank you Tim. I think I should apologize that, after reading 303, I didn’t pay much attention to later posts from where I could have learned that things were OK with the calculation. In the next post I will try to understand and extend your toy version of a triangle removal lemma. Unfortunately now I have to go to the university and probably I won’t read the posts in the next 5-6 hours. 32. gowers Says: February 6, 2009 at 11:11 pm | Reply 330. Triangle removal Jozsef, I should make it clear that I don’t yet have an interesting new triangle removal lemma proved, so if you prove anything at all it counts as an extension of what I’ve done. But I might think just a little bit further about the vague idea of 324. Suppose then that we have a dense set of sequences in $\null[3]^n$, and for simplicity let’s assume that they’re all reasonably well balanced (meaning that they all have roughly equal numbers of 1s, 2s and 3s). How many times do we expect a given set $U$ to occur as $U_1(x)$ for some $x\in A$? The very rough answer is $\binom{2n/3}{n/3}$. Let’s be even more rough and call that $2^{2n/3}$. Now $A$ has size around $\binom{n}{n/3,n/3,n/3}=\binom{n}{n/3}\binom{2n/3}{n/3}$. I’m going to interrupt this comment because I’ve had a different idea that I want to pursue urgently. 33. gowers Says: February 6, 2009 at 11:52 pm | Reply 331. Sperner and DHJ The proof I know of Sperner goes like this. You take a random sequence of sets of the form $\emptyset=A_0\subset A_1\subset A_2\subset\dots\subset A_n=[n]$, where $A_i$ has cardinality $i$. If $\mathcal{A}$ is an antichain, then it intersects each such chain in at most one set. Now let’s imagine we choose a set at random. 
What is the probability that it lies in $\mathcal{A}$? Let’s write $\delta_i$ for the density of $\mathcal{A}$ inside layer $i$ (that is, the number of sets in $\mathcal{A}$ of size $i$ divided by $\binom ni$). Then the expected size of the intersection of $\mathcal{A}$ with our random chain is $\delta_0+\delta_1+\dots+\delta_n$. It follows that the $\delta_i$ add up to at most 1 (since at least one random chain will have at least the expected number of elements of $\mathcal{A}$ in it). Since the middle layer(s) is (are) biggest, it’s clear that we maximize the size of $\mathcal{A}$ by choosing a middle layer. The idea that’s just occurred to me is that we’ve just deduced Sperner from the one-dimensional corners result (which states that if you have a dense subset of $\null [n]$ then you must be able to find $x$ and $x+d$ in it with $d\ne 0$). What happens if we try to deduce density Hales-Jewett from 2-dimensional corners? Well, first we need to decide what replaces a random chain. For reasons of symmetry I’d like to think of a random chain as follows: you pick a random permutation of $\null[n]$ and then once you’ve scrambled the set you take all partitions $(A,B)$ of $\null [n]$ into two sets such that every element of $A$ is less than every element of $B$. At this point I’m just going to guess that it might be reasonable to choose for DHJ purposes a random permutation of $\null[n]$ followed by all partitions $(A,B,C)$ such that every element of $A$ comes before every element of $B$, which comes before every element of $C$. This is at least a two-dimensional structure, since it’s determined by the end points of $A$ and $B$. I will also think of this as the set $S$ of all sequences in $\null[3]^n$ that start with some 1s, continue with some 2s, and end with some 3s. Next question: if $\mathcal{A}$ contains no combinatorial line, can we deduce that the intersection of $\mathcal{A}$ with $S$ is small? What I’d like to do is prove that it has no corners, in some useful sense of the word “corners”. How would we get a combinatorial line in this set? We can of course encode each element of $S$ as a triple $(a,b,c)$ where $a+b+c=n$. And at this point we have a problem: $S$ contains no combinatorial lines. Well, I suppose if that had worked it would have been discovered by now. But it suggests an amusing problem. Let $T_m$ be the hexagonal portion of the triangular grid $x+y+z=0$ (a 2-dimensional subset of $\mathbb{Z}^2$) defined by $\max\{|x|,|y|,|z|\}\leq m$. Let us call a function $f:T_m\rightarrow [3]^n$ a corner isomorphism if it is an injection such that every aligned equilateral triangle in $T_m$ maps to a combinatorial line. (An aligned equilateral triangle is a set of the form $(a+r,b,c),(a,b+r,c),(a,b,c+r)$.) If we could find a corner isomorphism for some fairly large $m$, then we could probably move the image about and prove density Hales-Jewett using an averaging argument. So can anyone either find a big corner isomorphism or (more likely) prove that no such function exists? I don’t expect this problem to be very hard. 34. gowers Says: February 7, 2009 at 1:48 am | Reply 332. Sperner and DHJ My last contribution for the day: I’ve just checked and there isn’t even a corner isomorphism from $T_2$ into $\null [3]^n$. The proof goes like this: you take an arithmetic progression of three points in a line and map them to $12333$, $12233$ and $12223$. (I think I may also have wanted to insist that lines go to sets where the appropriate $j$-set is fixed.) 
Those three points determine a $T_2$ and their images determine what the images of the other three points under a corner isomorphism would have to be, but those three images don’t lie in a combinatorial line. (For what it’s worth, they are $12133$, $12113$ and $12213$.) 35. jozsef Says: February 7, 2009 at 6:49 am | Reply 333. Triangle removal To understand the notations and arguments below, one should read 305-307-308 first. Tim’s proof in 307-308 works nicely, if we change the initial conditions. Our base set $\mathcal{S}$ is now just a collection of m-element subsets (where n=3m+k) instead of $\mathcal{A},\mathcal{B},\mathcal{C}$ and we are considering now every triangle formed by any three pairwise disjoint elements of $\mathcal{S}$. For this set of triangles, we do have the removal lemma as Tim has calculated it. Let me add two technical remarks which might be needed for the complete proof. A triangle is called “normal” if none of its edges is edge of more than Cd other triangles. Then, the argument gives bound on the number of normal triangles. If most of the triangles aren’t normal, then we have a removal lemma. The conclusion is that if we have a small number of triangles in $\mathcal{S}$ compare to the size of $\mathcal{S}$, then a small fraction of the edges covers most of the triangles. My second small remark is that when we bound the normal triangles separated by a permutation, then we consider the sets right from the leftmost (having the smallest last point) middle element and and the sets left from the rightmost (having the largest first point) element. These are both bounded by Cd and they span at most $(Cd)^3$ separated triples, as Tim said. I’m just emphasizing the selection of the left size here, only a technical detail. This theorem is more restricted than Tim’s original statement, but I think it might be useful. In DHJ the dense subset is given and we will work with the induced substructure. 36. jozsef Says: February 7, 2009 at 7:01 am | Reply 334. Sperner I was wondering what would be the proper Sperner-type theorem in ${}[3]^n$ similarly useful as Sperner in ${}[2]^n$. Is there something before DHJ? 37. gowers Says: February 7, 2009 at 10:22 am | Reply 335. Sperner Jozsef, I think I have the answer to your last question. Let’s define the Kneser product of two set systems $\mathcal{A}$ and $\mathcal{B}$ to be the set of all pairs $(U,V)$ such that $U\in\mathcal{A}$, $V\in\mathcal{B}$ and $U\cap V=\emptyset$. Then the theorem would be that every Kneser product that is dense in the set of all disjoint pairs (of which there are $3^n$) contains a corner: that is, a configuration of the form $(U,V)$, $(U,V\cup D)$, $(U\cup D,V)$ with all three of $U$, $V$ and $D$ disjoint. I’ve stated it like that to make it as similar as I can to Sperner. But a more symmetrical version is this. Suppose that $\mathcal{S}$, $\mathcal{T}$ and $\mathcal{U}$ are set systems such that there are at least $c3^n$ ways of partitioning $\null [n]$ as $S\cup T\cup U$ with $S\in\mathcal{S}$ etc. Then the corresponding collection of sequences contains a combinatorial line. To see that this is a natural extension of Sperner, consider the following silly way of stating (weak) Sperner. If $\mathcal{A}$ and $\mathcal{B}$ are two set systems such that there are at least $c2^n$ ways of partitioning $\null [n]$ as $A\cup B$ with $A\in\mathcal{A}$ and $B\in\mathcal{B},$ then the corresponding collection of 01-sequences contains a combinatorial line. 
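A quick editorial check, not from the thread itself: the specific claim at the end of comment 332, that the strings $12133$, $12113$ and $12213$ do not lie in a combinatorial line, can be verified mechanically. The sketch below is my own illustration; the helper name and the precise convention for a combinatorial line (a nonempty wildcard set outside of which the three strings agree, on which each string is constant, with the three constants being 1, 2 and 3 in some order) are assumptions stated in the code.

```python
def is_combinatorial_line(x, y, z):
    """Three strings over '123' of equal length form a combinatorial line if
    they agree outside a nonempty wildcard set of coordinates and are
    constantly 1, 2 and 3 (in some order) on that wildcard set."""
    wildcard = [i for i in range(len(x)) if not (x[i] == y[i] == z[i])]
    if not wildcard:
        return False  # the strings coincide
    constants = []
    for s in (x, y, z):
        values = {s[i] for i in wildcard}
        if len(values) != 1:
            return False  # not constant on the wildcard set
        constants.append(values.pop())
    return sorted(constants) == ['1', '2', '3']

print(is_combinatorial_line('12133', '12113', '12213'))  # False, as claimed in 332
print(is_combinatorial_line('11233', '12233', '13233'))  # True: a genuine line, as a sanity check
```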
It looks to me as though there should be a two-dimensional family of density Hales-Jewett statements. That is, for each $1\leq j<k$ there should be a DHJ(j,k), where Sperner is DHJ(1,2), density Hales-Jewett in $\null [k]^n$ is DHJ(k-1,k), and the theorem just discussed (for which it looks as though we may have a proof) is DHJ(1,3). The idea would be that for DHJ(j,k), whether or not a sequence belongs to the set depends on a j-uniform hypergraph with $2^{[n]}$ as its vertex set. We should see if there is any relation between this and Terry’s DHJ(2.5) problem . I suspect there isn’t, in which case it’s amusing to have come up with a different way of “interpolating” between Sperner and DHJ. 38. gowers Says: February 7, 2009 at 2:32 pm | Reply 336. Sperner and DHJ. A small observation. One could regard the previous comment as showing that at least one natural generalization of Sperner to $\null [3]^n$ turns out not to be the statement we want. That is unfortunate, though by no means catastrophic. In this comment I want to point out that there is a different way of generalizing Sperner that leads (I think) to a deeper theorem, though it still isn’t DHJ. Let’s go back to comment 331 and try to generalize the proof of Sperner and see what comes out, not worrying if it isn’t DHJ. To do this, we associate with any triple $a+b+c=n$ the three sets $\{1,2,\dots,a\}$, $\{a+1,\dots,a+b\}$ and $\{a+b+1,\dots,n\}$. Let’s call this the triple $U_{a,b,c}$. Now let’s suppose we have a corner, by which in this context I mean a triple of triples of the form $(a+r,b,c)$, $(a,b+r,c)$ and $(a,b,c+r)$ (with $a+b+c+r=n$). What combinatorial configuration do we get from the corresponding triples $U_{a+r,b,c}$, $U_{a,b+r,c}$ and $U_{a,b,c+r}$? The answer is that we partition $\null [n]$ into sets $A_1=\{1,2,\dots,a\}$, $A_2=\{a+1,\dots,a+r\}$, $A_3=\{a+r+1,\dots,a+b\}$, $A_4=\{a+b+1,\dots,a+b+r\}$ and $A_5=\{a+b+r+1,\dots,n\}$. Then the sequence corresponding to $U_{a+r,b,c}$ is 1 on $A_1$ and $A_2$, 2 on $A_3$ and $A_4$, and 3 on $A_5$. The sequence corresponding to $U_{a,b+r,c}$ is 1 on $A_1$, 2 on $A_2$, $A_3$ and $A_4$, and 3 on $A_5$. Finally, the sequence corresponding to $U_{a,b,c+r}$ is 1 on $A_1$, 2 on $A_2$ and $A_3$, and 3 on $A_4$ and $A_5$. Thus, we have three fixed sets, namely $A_1$, $A_3$ and $A_5$, and two variable sets, namely $A_2$ and $A_4$. If we identify each set to a point, we can think of the three sequences as $11223,$ $12223$ and $12233$. If we forget about the fixed sets, the restrictions to the variable sets are going $12,22,23$. Also, and this is important, the two variable sets have the same size. (If they didn’t, then the resulting theorem would be much easier.) Given a dense set of triples that start with 1s, continue with 2s and end with 3s, then the corners theorem will tell you that you must have a configuration of the above kind. I haven’t checked this, but I’m about 99% certain that if you imitate the proof of Sperner by averaging over a random permutation, then you will be able to deduce that a dense subset of $\null [3]^n$ will contain a “peculiar combinatorial line” of the above form. (Just to be clear, a peculiar combinatorial line is a triple of sequences that are fixed outside two sets $V$ and $W$ that have the same size, and inside $V$ and $W$ they go $12,22,23$.) The reason I call this a deeper theorem is that it relies on the corners theorem. Indeed, it implies the corners theorem if you restrict to sets that are unions of slices. 
(This is because we insist that $V$ and $W$ have the same size.) Again, this is a slightly depressing observation because it shows that a reasonably natural generalization of the Sperner proof gives us the wrong theorem. However, again it doesn’t actually tell us that we can’t find a different way of generalizing Sperner that gives the right theorem. 39. gowers Says: February 7, 2009 at 4:05 pm | Reply 337. Sperner and DHJ A tiny observation, following on from 332. The main difficulty seems to be as follows. Let’s define the order profile of a sequence to be what you get when you collapse each block of equal coordinates to just one coordinate. For example, the order profile of $113323311113$ is $132313$. In $\null [2]^n$ you can have a combinatorial line where every element has the same order profile: e.g. $11122$ and $12222$. But in $\null [3]^n$ that is impossible. (Proof: if the last fixed coordinate before a variable string starts is $i$ and the first fixed coordinate after it finishes is $j$, then the variable string can only take the values $i$ and $j$.) I’ve vaguely wondered about having a two-dimensional ground set, so that it’s possible for three sets to meet at a point, but I haven’t managed to define a nice two-parameter family of sequences. 40. gowers Says: February 7, 2009 at 4:13 pm | Reply 338. Sperner and DHJ Actually, I can’t resist just slightly expanding on the last paragraph of 337. Let’s take not $\null[3]^n$ but ${}[3]^{n^2}$, which we think of as the space of all functions from $\null [n]^2$ to $\{1,2,3\}$. I’d like to regard such a function as “simple” if you’ve basically got three big simply connected patches, of 1s, 2s and 3s, and if the boundaries between these patches are not too wiggly. A combinatorial line of functions like this would have the desirable property that it wouldn’t be obvious from just one of the functions what the variable set was: in some sense the “two-dimensional order profile” would be the same for all three points. One could now ask whether the simpler structure here would make DHJ easier to prove. If so, then one could average up in the Sperner fashion. But it doesn’t look easy to say exactly what this simpler structure is. 41. gowers Says: February 7, 2009 at 4:58 pm | Reply 339. Sperner and DHJ I take that back. Here’s an actual conjecture. Let’s take as our basic object the kind of thing one talks about in Sperner’s proof of the Brouwer fixed point theorem (surely a coincidence though). Chop out a large hexagonal piece from a tiling of the plane by hexagons and colour two of its edges (next to each other) red, two green and two blue, and extend these colourings from the boundary into the inside of the hexagon until you’ve coloured it all, and do so in such a way that you have three simply connected regions that meet at some point. Colourings of this kind are the basic objects that we’ll consider. Of course, “red”, “green” and “blue” is another way of saying 1, 2 and 3. The question is, if you have a dense set of such colourings, must you have a combinatorial line? A combinatorial line in this context will be like what I described except that the three regions meet not at a point but round the boundary of a fourth region, which is the set of variable coordinates. At the moment I know pretty well nothing about this question. 
I don’t know whether there’s an easy counterexample, I don’t know how to think about the measure on all those colourings (which feels like a fairly nasty object of a kind that the late and much lamented Oded Schramm would have been able to tell us about), and I don’t know whether a positive answer follows from density Hales-Jewett. The one thing I feel fairly confident of is that this, if true would imply DHJ. Oh, and one other thing is that I don’t have any feel for whether this is more or less equal to DHJ in difficulty, or whether there is a chance that it is easier. Actually, there’s another thing that I’m confident of — in fact, certain of — which is that it is a natural generalization of the trivial fact that underlies Sperner. (That is the fact that if you have a dense set of 2-colourings of a line of points, then two of those 2-colourings will form a combinatorial line.) This conjecture is probably not to be taken too seriously, but I quite like the fact that it doesn’t follow easily from DHJ. 42. Sune Kristian Jakobsen Says: February 7, 2009 at 7:09 pm | Reply 340. Variant of Sperner First I must admit, that I haven’t read much in this thread, so I don’t know if this has been said before. I got this idea while I tried to find some low values of $c_n$. In 331 Gowers gives a proof that if $\mathcal{A}$ is a antichain, \$\sum_{A\in\mathcal{A}}\frac{1}{\binom n{|A|}}\leq 1\$. Using the same idea it is possible to show that if $\mathcal{A}$ is a family, that intersects every chain $\emptyset=A_0\subset A_1\subset A_2\subset\dots\subset A_n=[n]$, then \$\sum_{A\in\mathcal{A}}\frac{1}{\binom n{|A|}}\geq 1\$. Notice that when the elements of $\mathcal{A}$ is only allowed to have one of two given sizes, these two theorems are equivalent. The later theorem seems to be easier to generalize to the k=3 case, and it might be useful, at least in the 200-thread. 43. jozsef Says: February 7, 2009 at 9:56 pm | Reply 341. Sperner I’m reading 335, and I’m trying to understand the conjecture. First, there is a coloring theorem for subsets that says that one can find three disjoint sets in the same colour class that their unions are also in that colour class. It is a special case of the Finite Union Theorem (Folkman’s Theorem) which is equivalent to the finite sums theorem. It states that any finite colouring of the natural numbers contains arbitrary large monochromatic IP sets. The density statement is clearly false without extra conditions on the dense subset. It is obvious that the finite union theorem implies the finite sum theorem, however the other direction is difficult; one can use Van der Waerden for the reduction. Maybe we can also prove some density theorem for disjoint set unions using Szemeredi’s thm. It is still not Tim’s cross-product conjecture, but it seems to be quite promising. Let me also mention that the other proof of finite union theorem uses HJ! About the details: a possible reference is Graham, Rothschild, Spencer, “Ramsey Theory”. Now it’s family lunch-time, so I will be away for a while, but I’m quite excited about this possible “density finite union theorem” so I will come back a.s.a.p. 44. gowers Says: February 7, 2009 at 10:42 pm | Reply 342. Sperner Jozsef, What I wrote in 335 was meant to be more or less the same as what you suggested in 333 — a very slight generalization perhaps, but you can make it the same if you look at the special case $\mathcal{S}=\mathcal{T}=\mathcal{U}$. And I think that, as you say, it can be proved by the argument of 308. 
Or are we talking about different things? 45. Boris Says: February 7, 2009 at 11:37 pm | Reply 343. In comment 335 what is meant in general by DHJ(j,k)? 46. gowers Says: February 8, 2009 at 12:02 am | Reply 344. Sperner/DHJ Boris, I hadn’t actually worked it out, but I think I can. Basically, DHJ(j,k) is density Hales-Jewett in $\null [k]^n$ but with $j$ placing a restriction on the complexity of the dense set $A.$ Let me give the case $j=k-1$ first. This is equivalent to the normal density Hales-Jewett theorem, but I’ll state it slightly differently. Actually, I think I may as well go for the whole thing right from the start. Suppose that for each subset $E$ of $\null [k]$ of size $j$ you have a set $\mathcal{A}_E$ of functions from $E$ to the power set of ${}[n]$ such that the images of all the points in $E$ are disjoint sets. Let’s write $(U_j:j\in E)$ for a typical element of $\mathcal{A}_E$. I now define a set $\mathcal{A}$ of (ordered) partitions of $\null [n]$ into $k$ sets $(U_1,\dots,U_k)$ by taking all partitions $(U_1,\dots,U_k)$ such that for every $E$ of size $j$ we have that $(U_j:j\in E)$ belongs to $\mathcal{A}_E$. Thus, in a certain sense the set of partitions (which are of course in one-to-one correspondence with sequences in $\null [k]^n$) depends only on what collections of $j$ sets there are in the partition. Then DHJ(j,k) is the claim that if your dense set $\mathcal{A}$ has this special form, then it must contain a combinatorial line. My suggestion, which I think I could justify completely if pushed to do so, is that if $j=1$ then one can prove DHJ(j,k) by a fairly simple adaptation of the proof of Sperner as given by Jozsef. Sperner, by the way, is DHJ(1,2), and the problem we have been considering is DHJ(2,3). In general, I suspect that DHJ(j,k) is of roughly equal difficulty to DHJ(j,j+1). It might be interesting to see if we can deduce DHJ(j,k) from DHJ(j,j+1), though at the moment my feeling is that we can prove DHJ(1,k) not by deducing it from DHJ(1,2) but by imitating the proof of DHJ(1,2), which is not quite the same thing. 47. jozsef Says: February 8, 2009 at 12:03 am | Reply 345. Triangle Removal in Kneser Graphs I will go back to the “density finite union theorem” soon as I think it is relevant to our project, but first here is a general remark about the removal lemma. I just want to make it clear that in a certain range we have a real removal lemma. Given three families of sets A,B, and C. If the number of disjoint triples (one element from each) is smaller than X (something depending on the sizes |A|,|B|, and |C|) then a small fraction of the disjoint pairs covers EVERY triangle. To see that, we just have to iterate the counting of “normal” triangles. (post 333.) 48. Boris Says: February 8, 2009 at 12:17 am | Reply 346. I think I misunderstand comment 344. Consider DHJ(k-1,k). We have $k$ sets $\mathcal{A}_E$ each of which gives a rise to a subset of ${}[k]^n$ which I call $B_E$. Then the subset $B$ of ${}[k]^n$ that corresponds to $\mathcal{A}$ that is the intersection $\bigcap_{E} B_E$. However, nothing stops us from make $B_E$ large, and their intersection empty. 49. gowers Says: February 8, 2009 at 12:33 am | Reply 347. Boris, I wasn’t clear enough in what I meant. Of course, you are right in what you say, but the statement is that if the intersection is dense then you get a combinatorial line, and not the weaker assumption that each individual $B_E$ is dense. 
Of course, in the case $j=k-1$ you can consider just a set system $B_{[k-1]}$, just as in Sperner you can consider just the set of points where your sequence is 0. 50. jozsef Says: February 8, 2009 at 1:22 am | Reply 348. Density Finite Union (DFU) Theorem My previous remark was about a possible DFU thm. Let us see the simplest version. What would be a necessary density condition to guarantee a pair of disjoint sets A and B that A,B, and $A\cup B$ are in this “dense” set? Considering the arithmetic analogue, if we have a set $S\subset [n]$ with cn elements a that 2a is also in S, then there is a nontrivial solution for x+y=z. 51. jozsef Says: February 8, 2009 at 1:37 am | Reply 349. DFU contd. The first density condition would be that the number of disjoint pairs is at least ${}c3^n$.The second condition should be something related to sets having 2/3n+-k elements. I’m still not sure what could be a meaningful notation for density here. 52. jozsef Says: February 8, 2009 at 2:03 am | Reply 350. DFU A possible statement could be the following; Given a family F of subsets of [n]. Let us suppose that the number of disjoint pairs in F is at least ${}c3^n.$ We are going to define a graph on F. The verices are the sets in F and two are connected iff they are disjoint and their union is also in F. Then, either one can remove ${}o(|F|)$ vertices to destroy all edges, or there are ${}c'3^n.$ edges where c’ depends on c only. I’m not sure that the statement is true, but something like this should be true. 53. jozsef Says: February 8, 2009 at 2:14 am | Reply 351. DFU (Density Finite Union) theorem. Well, now I see that ${}c'3^n$ is way too ambitious; let me change it to f(n)|F| where f(n) goes to infinity. 54. gowers Says: February 8, 2009 at 2:30 am | Reply 352. DFU I’ve got to go to bed, but as a last comment, let me ask whether 350 is intended as a set-theoretic analogue of Ben Green’s theorem that if you have a set with few solutions of $x+y=z$ then you can remove a small number of elements and end up with no solutions of $x+y=z$. It would be very nice to be able to prove something like that (and a generalization to larger finite unions). If one were going to try to find a counterexample, one would have a large collection of sets of size around $n/3$, and one would partition them into disjoint pairs, and one would add to the collection the set of unions of these pairs, which would turn them into a matching in your graph. The question would then be whether one could do this without creating $c'3^n$ further edges. Here’s an attempt to do so. First we choose a random collection $\mathcal{F}$ of sets of size $2n/3$, big enough that its $n/3$ lower shadow is all of $\binom{[n]}{n/3}$. Next, we choose all sets of size at most $n/3$. Then the number of disjoint pairs is certainly at least $c3^n$. Also, the number of edges is at most $2^{n/3}|\mathcal{F}|$. Next, it seems to me that it will be very hard to remove all edges: the fact that $\mathcal{F}$ is random and that every vertex is contained in an edge (and we could make $\mathcal{F}$ slightly bigger so that each vertex is in several edges) ought to mean that we can choose a large matching. How big is $\mathcal{F}$? Well, each set in $\mathcal{F}$ covers $2^{n/3}$ small sets, so by my calculations the number of edges is $2^{2n/3}$, which is a lot smaller than $3^n$. Sorry if some of this turns out to be nonsense — I must must sleep now. 55. gowers Says: February 8, 2009 at 2:32 am | Reply 353. DFU Just to say that I wrote 352 before seeing your 351. 
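A quick editorial aside, not from the thread itself: for very small ground sets one can experiment directly with the graph defined in comment 350 (vertices are the members of a family $F$, and two members are joined when they are disjoint and their union is again in $F$), comparing the number of edges with the number of disjoint pairs. The sketch below only illustrates the definition; the toy family and the function name are mine and are not meant as a serious test of the conjecture.

```python
from itertools import combinations

def dfu_counts(family):
    """Count disjoint pairs in `family`, and how many of them are edges of the
    graph from comment 350, i.e. disjoint pairs whose union is also in `family`."""
    members = {frozenset(s) for s in family}
    sets = [frozenset(s) for s in family]
    disjoint_pairs = edges = 0
    for A, B in combinations(sets, 2):
        if A & B:
            continue
        disjoint_pairs += 1
        if (A | B) in members:
            edges += 1
    return disjoint_pairs, edges

n = 6
# Toy family: all nonempty subsets of {0,...,5} of size at most 2.
family = [c for k in (1, 2) for c in combinations(range(n), k)]
print(dfu_counts(family))
```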
56. jozsef Says: February 8, 2009 at 8:36 am | Reply 354. DFU (Density Finite Union conjecture) Let me state here a conjecture. It might not be directly connected to our project, however I’ve been looking for the right formulation for a while and I think that this is the right one; First we have to find the proper notation of density since simple density wouldn’t guarantee the result we are looking for. The “weighted density” of a set $S\subset [n]$ is given by the sum of the (normalized) number of pairwise disjoint k-tuples in S for all 1<kc>0 there are m pairwise disjoint sets, $B_1,...,B_m$ that $\cup_{i\in I} B_i \in S$ for any nonempty $I\subset [m]$. This form is clearly a density version of Folkman’s theorem; If we colour the subsets of [n] by K colours, then at least one class has density > 1/K. 57. jozsef Says: February 8, 2009 at 8:45 am | Reply Somehow I lost the middle of the text between 10 355. DFU (Density Finite Union conjecture) Let me state here a conjecture. It might not be directly connected to our project, however I’ve been looking for the right formulation for a while and I think that this is the right one; First we have to find the proper notation of density since simple density wouldn’t guarantee the result we are looking for. The “weighted density” of a set $S\subset 2^{[n]}$ is given by the sum of the (normalized) number of pairwise disjoint k-tuples in S. $D(S)={\frac{1}{n-1}}\sum_{k=2}^n |\{(A_1,...,A_k):A_i\in S, |A_i\cap A_j|=0\}|/(k+1)^n$. The maximum number of pairwise disjoint k-tuples is $(k+1)^n$ so D(S) is a number between 0 and 1. The conjecture is that for any number m, if S contains no m pairwise disjoint elements with all $2^m-1$ possible nonempty unions, then it is sparse. D(S) goes to 0 as n goes to infinity. With different words, if S is dense (and large enough) then there are m pairwise disjoint sets, $B_1,...,B_m$ that $\cup_{i\in I}B_i\in S$ for any nonempty index set $I\subset [m]$ 58. gowers Says: February 8, 2009 at 11:51 am | Reply 356. DFU Dear Jozsef, I’ve edited your last comment so that the formulae show up, and made a couple of other tiny changes (such as changing $S\subset [n]$ to $S\subset 2^{[n]}$). Please check that I haven’t introduced any errors. As it stands, I don’t understand your conjecture. A typical disjoint $k$-tuple has all its sets of size around $n/(k+1)$, so it seems to me that the set of all sets of size around $n/(k+1)$ is dense in your sense. And there is of course no hope of getting finite unions in this set. Am I misunderstanding something? 59. DHJ — quasirandomness and obstructions to uniformity « Gowers’s Weblog Says: February 8, 2009 at 5:43 pm | Reply [...] and lower bounds for the density Hales-Jewett problem , hosted by Terence Tao on his blog, and DHJ — the triangle-removal approach, which is on this blog. These three threads are exploring different facets of the problem, though [...] 60. jozsef Says: February 8, 2009 at 6:22 pm | Reply 357. DFU Tim, there is a 1/(n-1) multiplier, so I thought that one layer (k) will contribute a small fraction only. In particular, I think that rich layers form arithmetic progressions and it might be useful in a possible proof. 61. gowers Says: February 8, 2009 at 6:37 pm | Reply 358. DFU Ah — I hadn’t spotted that you were averaging over lots of layers. Now it starts to make more sense to me. 62. Boris Says: February 8, 2009 at 7:25 pm | Reply 359. Averaging over multiple layers does not seem to help in comment 355. Consider the family $S$ of all sets of odd size. 
It contains no disjoint $B_1,B_2\in S$ such that $B_1\cup B_2\in S$. 63. Boris Says: February 8, 2009 at 7:30 pm | Reply Oops… $d(S)=1/n$ for the example I gave. 64. Boris Says: February 8, 2009 at 8:05 pm | Reply 361. I think Jozsef’s conjecture is true, but not for an interesting reasons. I claim that for every fixed positive integer $t$ the condition $d(S)>0$ implies that $|S\cap \binom{[n]}{t}|\geq (1-o(1))\binom{n}{t}$. Suppose not, let $T=\binom{[n]}{t}\setminus S$. Now let us generate the partition of ${}[n]$ into $k$ parts as follows. Let $P_1,\dotsc,P_k$ be the partition of ${}[n]$ into $k$ random parts. If $k=cn$ with $c>0$, then $Pr[P_i\in T]=p$ is positive, and is independent of $i$. Had $P_1,\dotsc,P_k$ been independent, it would have implied that $Pr[\exists i\ P_i\in T]=p^k$ is exponentially small. Clearly, $P$‘s are not independent, but they are nearly independent. More precisely, let $R_i$ be the event that $|P_i|\leq n^{1/4}$. Then $Pr[ P_i\cap P_j\neq \emptyset |R_i\wedge R_j]=1-O(n^{-1/2})$. Hence, sampling $n^{1/6}$ random sets by choosing each element with probability $1/(k+1)$ is virtually indistinguishable from choosing a random $k$-tuple of disjoint sets, and then looking at first $n^{1/6}$ of them. I believe I can write the appropriate string of inequalities to show that if asked. Since density of $S$ is nearly $1$ on each $\binom{[n]}{t}$ the existence of any given configuration follows by simple averaging. 65. jozsef Says: February 8, 2009 at 9:18 pm | Reply 362. Thanks Boris! It seems that this is still not the right formulation for a DFU conjecture. Actually, I should have noticed that a random selection of sets with probability 1/2 already gives density zero. So, I’m still looking for the right density definition. Boris, do you have any suggestion? 66. Terence Tao Says: February 9, 2009 at 12:03 am | Reply 363. DHJ(j,k) I had to struggle a bit to understand what DHJ(j,k) meant – I had not followed this thread for a while – but when I finally got it, I can see how it fits nicely with the whole hypergraph regularity/hypergraph removal story, and so does look quite promising. Anyway, I thought I’d try to restate it here in case anyone else also wants to know about it… In ${}[3]^n$, we can define the notion of a “(basic) complexity 1 set” to be a set $A \subset [3]^n$ of strings whose 1-set lies in some fixed class ${\mathcal A}_1 \subset 2^{[n]}$, whose 2-set lies in some fixed class ${\mathcal A}_2 \subset 2^{[n]}$, and whose 3-set lies in some fixed class ${\mathcal A}_3 \subset 2^{[n]}$. A simple example here is $\Gamma_{a,b,c}$; in this case ${\mathcal A}_1$ is the set of subsets of ${}[n]$ of size a, and so forth. As I just recently understood over at the obstructions to uniformity thread, basic complexity 1-sets are obstructions to uniformity, and so we definitely need to figure out how to find lines in these sets. In analogy with hypergraph regularity (and also with ergodic theory), the next more complicated type of set might be “non-basic complexity 1 sets” – sets which are unions of a bounded number of basic complexity 1 sets. (In graph theory, basic complexity 1 corresponds to a complete bipartite weighted graph between two cells with some constant weight, and non-basic complexity 1 corresponds to a Szemeredi-type weighted graph between a bounded number of cells, with any pair of cells $V_i, V_j$ having a constant weight $d_{ij}$ on the edges connecting them.) 
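A quick editorial aside, not from the thread itself: the definitions just given are easy to make executable for small $n$, which may help keep the terminology straight. A string in ${}[3]^n$ determines its 1-set, 2-set and 3-set, and a basic complexity 1 set is cut out by one predicate per letter; the slice $\Gamma_{a,b,c}$ is the special case where each predicate fixes the size of the corresponding set. The helper names below are mine.

```python
def j_set(x, j):
    """Coordinates of the string x (over the alphabet '123') that equal j."""
    return frozenset(i for i, c in enumerate(x) if c == str(j))

def in_basic_complexity_one(x, pred1, pred2, pred3):
    """Membership in the basic complexity 1 set cut out by three predicates,
    one applied to the 1-set, one to the 2-set and one to the 3-set."""
    return pred1(j_set(x, 1)) and pred2(j_set(x, 2)) and pred3(j_set(x, 3))

def size_is(k):
    return lambda S: len(S) == k

# Gamma_{2,1,1} inside [3]^4, realised as a basic complexity 1 set:
print(in_basic_complexity_one('1123', size_is(2), size_is(1), size_is(1)))  # True
print(in_basic_complexity_one('1223', size_is(2), size_is(1), size_is(1)))  # False
```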
But it seems fairly obvious that if dense basic complexity 1 sets have lines, then so do dense non-basic complexity 1 sets (though one may have to be careful with quantifiers, and use the usual Szemeredi double induction to organise the epsilons properly). Then we have the basic complexity 2 sets in ${}[3]^n$, which are those sets A in which the 12-profile of A (i.e. the pair consisting of the 1-set and 2-set of A; one can also identify this with an element of $3^{[n]}$) lies in some fixed class ${\mathcal A}_{12} \subset 3^{[n]}$, the 23-profile of A lies in some fixed class ${\mathcal A}_{23} \subset 3^{[n]}$, and the 31-profile of \$A\$ lies in some fixed class ${\mathcal A}_{31} \subset 3^{[n]}$. But of course when k=3, any one of these complexity 2 profiles determine the entire set A, so every set A is complexity 2, which is why DHJ(2,3) is the same as DHJ(3). [Incidentally, to answer a previous question, this hierarchy seems to be unrelated to my DHJ(2.5) - I was simplifying the lines rather than the sets.] If we knew that sets A which were uniformly distributed wrt complexity 1 sets were quasirandom (a “counting lemma”), then we would be well on our way to reducing DHJ(2,3) to DHJ(1,3) by the usual regularity type arguments. (I’m sure Tim and the others here already know this, this is just me trying to get up to speed.) 67. gowers Says: February 9, 2009 at 12:24 am | Reply 364. Big picture There is now a definite convergence of this thread and the obstructions-to-uniformity thread, which perhaps illustrates the advantages of not splitting the discussion up. At any rate, this last comment of Terry’s is closely related to comment 411, and also to earlier comments on the 400-499 thread that attempt to establish some sort of counting lemma. I now get the feeling that the problem has been softened up, and that it is not ridiculous to hope that we might actually solve it. But it could be that certain difficulties that feel technical will turn out to be fundamental. Unfortunately, I’ve got two big teaching days coming up, so my input is going to go right down for a while. But perhaps that’s in the spirit of polymath — we get involved, then we go off and do other things, then we come back and catch up with what’s going on, but all the while polymath continues churning away at the problem. 68. Terence Tao Says: February 9, 2009 at 2:36 am | Reply Metacomment: Ideally, I suppose we should be working in an environment in which comments from different threads can somehow overlap, coalesce, or otherwise interlink to each other. But, failing that, I think this division into subthreads has been the least worst solution (e.g. the $c_n$ for small n thread seems to have taken off once it was separated from the “mainstream” threads of the project). And revisiting the threads every 100 comments or so seems to be just about the right tempo, I think. 69. gowers Says: February 9, 2009 at 10:49 am | Reply 365. Technical clean-up idea This is another comment that could go either here or on the obstructions-to-uniformity thread, but I think it is slightly more appropriate here. Terry, that’s a good point about the 100-comments-per-post rule, and if the 300s and 400s threads really do seem to be converging, then they can merge into the 500s thread. My comment is that I have a variant of DHJ that ought to be equivalent to DHJ itself but has the potential to tidy up a lot of the annoying technicalities to do with slices being of different sizes. (I don’t know for certain that it will work, however.) 
It’s motivated by the proof of Sperner that I gave in 331. There one shows the stronger result that if the slice densities add up to more than 1 then there is a “combinatorial line”. What should the analogue be for DHJ? My contention is the following: define the slice density $\delta_{a,b,c}$ to be $|A\cap\Gamma_{a,b,c}|/|\Gamma_{a,b,c}|$. Then I claim that if $\sum_{a+b+c=n}\delta_{a,b,c}\geq cn^2$ (for fixed positive $c$ and sufficiently large $n$), then $A$ contains a combinatorial line. This conjecture bears the same relation to corners that the strong form of Sperner bears to the trivial fact that if you have at least two points in $\null[n]$ then you get a configuration of the form $x,x+d$ with $d\ne 0$. Given that the averaging argument for Sperner naturally proves this stronger result, I think there is reason to hope that the numbers will all work out miraculously better if we go for this revised version of DHJ. I hope that way we can avoid boring details about being near the middle. I haven’t begun to check whether this miracle really does happen, but I think it’s definitely worth investigating. 70. gowers Says: February 9, 2009 at 11:07 am | Reply 366. Technical clean-up idea. Another way of looking at 365. All we’re doing is choosing a random point of ${}[3]^n$ not with the uniform measure but by first choosing a slice and then choosing a random point in that slice. It seems to me that this should have huge advantages. For instance, and most notably, the distribution of a random point used to be radically different if you conditioned on it belonging to a typical combinatorial line. Now it’s different but not that different. (With the new measure the situation is on a par with the situation for corners, where the difference between the distributions of random point and random vertex of a random corner is not a serious problem.) But I think there may be all sorts of arguments that will now work out exactly. Perhaps even the quasirandomness calculations I was trying to do will become clean and nice. No time to check though! 71. Nicholas Locascio Says: November 4, 2011 at 12:35 am | Reply Hi everyone. While the Hales-Jewett theorem may be a little beyond my current level of mathematics, I did put something together that I believe the mathematicians that solved this problem would have interest in. I created a fully playable 12 dimensional tic-tac-toe iPhone game. http://itunes.apple.com/us/app/twelvetacto/id457438285?mt=8 My calculation gives 121,804,592 possible solutions for an unaltered playing space via the self-discovered formula: s(d) = (1/2) * ( (k+2)^d - k^d ) where d = # of dimensions and k = length of side. I’ve also defined it recursively as: s(d) = (k+2)*s(d-1) + k^(d-1). In my game, d = 12 and k = 3. Using the basic framework of the game, I do believe a computer-assisted proof would be possible. I do know that computer proofs are generally considered less elegant in the world of higher mathematics, but if anyone is interested, I would be willing to share my source code and collaborate on such a proof.
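An editorial note on the last comment: the closed formula quoted there, the stated recursion, and the number 121,804,592 are at least mutually consistent, which is easy to confirm. The sketch below checks only that internal consistency (assuming the formula exactly as stated); it does not verify that the formula counts the winning lines of the game itself.

```python
def s_closed(d, k):
    return ((k + 2) ** d - k ** d) // 2

def s_recursive(d, k):
    # Base case: a 1-dimensional board of side k has a single winning line,
    # and indeed s_closed(1, k) = 1 for every k.
    if d == 1:
        return 1
    return (k + 2) * s_recursive(d - 1, k) + k ** (d - 1)

k = 3
assert all(s_closed(d, k) == s_recursive(d, k) for d in range(1, 13))
print(s_closed(12, 3))  # 121804592, the value quoted in the comment
```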
http://physics.stackexchange.com/questions/21477/could-we-prove-that-neutrinos-have-mass-by-measuring-their-gravitational-signatu?answertab=oldest
# Could we prove that neutrinos have mass by measuring their gravitational signature? It is now said that neutrinos have mass. If an object has mass then it also emits a gravitational field. I appreciate the neutrino's mass is predicted to be small, but as there are so many produced by our sun and gravity works collectively, should we not be able to detect neutrinos through gravitational differences? An extract from Wikipedia states: "Most neutrinos passing through the Earth emanate from the Sun. About 65 billion ($6.5 \times 10^{10}$) solar neutrinos per second pass through every square centimeter perpendicular to the direction of the Sun in the region of the Earth." That is a great deal of neutrinos. Would that many not produce a noticeable gravitational signature that we could detect? If we can't detect it, does that mean that neutrinos are massless? - 2 Actually a pretty good question, just somewhat naive. – dmckee♦ Feb 26 '12 at 1:12 ## 3 Answers The neutrino masses are probably on order of $1\text{ eV}$, but we need to consider their total energy (I feel like a dunce having to wait for Ron to point that out in the comments). The solar neutrinos have energies on order of $1\text{ MeV}$. So 65 billion neutrinos accordingly have a mass-energy of $6.5 \times 10^{16}\text{ eV} = 6.5 \times 10^{7}\text{ GeV}$, which is about the mass of a million atoms of zinc, or about $65 \text{ g/mole} \times (10^{6}\text{ atoms})/(6 \times 10^{23}\text{ atoms/mole}) \approx 1 \times 10^{-16}\text{ g}$. The volume occupied is 1 square centimeter times $300,000,000$ m (lightspeed times 1 second), or about $30,000$ cubic meters. This gives us a total density of about $4 \times 10^{-21} \text{ g}/\text{m}^3$. So, short answer: In principle yes, in practice no. - This calculation is faulty, because the energy is the source of the gravitational field, not the mass. If you like, it's only the relativistic mass that is important. You need to consider the total energy of the neutrinos, and for ordinary energies the mass only affects the velocity in a slight way. – Ron Maimon Feb 26 '12 at 1:20 @RonMaimon: Ah....yes. Moment. – dmckee♦ Feb 26 '12 at 1:21 Or is that Avogadro's number? Now I'm confused. Did you convert eV to zinc to grams? – Manishearth♦ Feb 26 '12 at 1:38 The gravitational field of a fast moving particle is from its energy, not its rest-mass. The source of the gravitational field is the energy divided by $c^2$ if you are using unnatural units, or what used to be called "relativistic mass" before that term fell out of favor. The neutrinos we observe are moving at essentially the speed of light, so we cannot distinguish their gravitational signature from that of a massless particle. Both a neutrino at 1 keV and an exactly massless fermion at 1 keV have pretty much the same gravitational field. The difference is suppressed by the ratio of the mass to the energy, and it is as hard to detect as the difference in the neutrino speed from the speed of light (which is a more direct way to measure the mass, but also impractical). There are not enough neutrinos which are moving slowly compared to the order .01 eV masses, so that the number of nonrelativistic neutrinos is too small to allow their mass to be measured gravitationally. In short, the answer is just no. - Neither do we know the speed of the neutrinos. Since mass = energy, anything with energy gravitates as well. Light gravitates in that manner.
Neutrinos could have zero mass, but they'd still gravitate if they went at the speed of light (energy $= \frac{m_0c^2}{\sqrt{1-v^2/c^2}}$; if $m_0=0$ and $v=c$, the energy is not necessarily zero). Since their speed is disputed as well, knowing their gravitational effects doesn't help. If the "faster than light" thing pans out, they may even have complex mass (which you can see directly from the above equation). So we can't really measure whether they have mass if they go faster than light by looking for a signature.

OK, if you're talking about measuring the exact mass-energy of the neutrino (which is unknown at the moment), that would be theoretically possible, practically impossible. Mass limits: $$\nu_e<2.2\text{ eV},\qquad \nu_\mu<0.17\text{ MeV},\qquad \nu_\tau<15.5\text{ MeV}.$$ Pretty tiny. Multiply by 65 billion and it's still tiny. Taking only the tau neutrinos (IIRC 1/3rd of them) into account, a safe assumption as the others have negligible mass, we get $65\times10^{15}\text{ eV}/c^2 \approx 1.1\times10^{-19}\text{ kg}$. That may be measurable in a laboratory, but it is impossible to measure around the Earth. Earth's mountains, irregularities in the crust/mantle, and other cosmic rays would create a larger effect. The main issue is that we can't "turn off" the neutrinos to get a control case so that we can eliminate the other effects. If we do this in an accelerator, a few neutrinos have even less mass, and it's even more uncertain. Add quantum mechanics to it and I think you could say that there are no gravitational effects at all (the effect becomes less than the Planck length, etc.; though mashing GR and QM like this isn't exactly accepted).

Edit: As @dmckee pointed out in the comments, I've used obsolete neutrino masses. The actual masses are much smaller, though that doesn't change the final conclusion.

- – dmckee♦ Feb 26 '12 at 1:46 @dmckee Hmm, you're right. But it doesn't change the final conclusion. – Manishearth♦ Feb 26 '12 at 12:16
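The estimates above all rest on the same back-of-envelope conversion: take the solar neutrino flux, multiply by a typical energy per neutrino, convert the total energy to an equivalent mass via $m = E/c^2$, and spread it over the column the neutrinos occupy. As a rough cross-check (not part of the original posts), here is a minimal Python sketch of that arithmetic; the flux and the ~1 MeV-per-neutrino figure are taken from the question and the first answer, while the eV-to-kg conversion factor and the speed of light are standard constants supplied here.

```python
# Rough cross-check of the flux -> mass-equivalent -> density estimate above.
# Assumed constants (standard values, not taken verbatim from the posts):
#   - 1 eV/c^2 ~= 1.783e-36 kg
#   - c        ~= 3.0e8 m/s
# Taken from the posts: 6.5e10 neutrinos per cm^2 per second, ~1 MeV each.

EV_PER_C2_IN_KG = 1.783e-36   # mass equivalent of 1 eV, in kilograms
C = 3.0e8                     # speed of light, m/s

flux_per_cm2_per_s = 6.5e10   # solar neutrino flux quoted in the question
energy_per_nu_eV = 1.0e6      # ~1 MeV per solar neutrino

# Total energy crossing 1 cm^2 in 1 second, and its mass equivalent (m = E/c^2).
total_energy_eV = flux_per_cm2_per_s * energy_per_nu_eV     # ~6.5e16 eV
mass_equiv_kg = total_energy_eV * EV_PER_C2_IN_KG           # ~1.2e-19 kg

# Those neutrinos occupy a column 1 cm^2 in cross-section and 1 light-second long.
column_volume_m3 = 1.0e-4 * C                               # ~3e4 m^3

density_g_per_m3 = mass_equiv_kg * 1e3 / column_volume_m3   # ~4e-21 g/m^3

print(f"total energy per cm^2 per s : {total_energy_eV:.2e} eV")
print(f"mass equivalent             : {mass_equiv_kg:.2e} kg")
print(f"mass density in the column  : {density_g_per_m3:.2e} g/m^3")
```

Running it gives an equivalent mass of about $10^{-19}$ kg and a density of a few times $10^{-21}\text{ g/m}^3$, many orders of magnitude below anything distinguishable from local geological variations, which is the point both answers make.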
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9451857805252075, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/chaos-theory
# Tagged Questions

The chaos-theory tag has no wiki summary.

### Are there real life applications for Hausdorff dimensions, specifically crack formations?
(1 answer, 53 views) I was curious about Hausdorff dimensions. They seem to neatly describe rough surfaces. So I was wondering if there are common applications of Hausdorff dimensions in things like complicated friction ...

### Ljapunov exponent of driven damped pendulum
(1 answer, 181 views) I have written a computer simulation of the driven damped pendulum, pretty much as the one shown here, only that I did it in Python. Next, I have found some parameters for which the pendulum behaves ...

### What can be the smallest chaotic system?
(2 answers, 48 views) As I am talking about 'smallest', can I expect that it should be a quantum system? I understand that we use quantum chaos theory instead of perturbation theory when the perturbation is not small. For ...

### Is it true that real quantum chaos doesn't exist?
(0 answers, 69 views) I read several books and papers on quantum chaos; to my understanding they all emphasize that quantum chaos does not really exist because of the linearity of the Schrodinger equation. Some works were ...

### Staying in orbit - but doesn't any perturbation start a positive feedback?
(4 answers, 587 views) I am not a physicist; I am a software engineer. While trying to fall asleep recently, I started thinking about the following. There are many explanations online of how any object stays in orbit. The ...

### Does chaos theory occur in quantum mechanics? Or in any non-Newtonian physics?
(2 answers, 305 views) Does chaos theory occur in quantum mechanics? Or in any non-Newtonian physics? Apart from perhaps thermodynamics?

### Fractal Cosmology and Misner's Chaotic Cosmology
(1 answer, 92 views) I have a question pertaining to the ideas behind the considered homogeneous and isotropic nature of the universe (at a grand scale) versus the theory of a chaotic and anisotropic structure of the ...

### Is a quantum mechanical system a chaotic (yet deterministic) system?
(4 answers, 186 views) The title is slightly misleading. I really want to know if the randomness and probabilities observed in quantum mechanics are really just the result of a chaotic (yet deterministic) system. If it is ...

### Reference for the predictability of rigid body dynamics
(3 answers, 89 views) I'm looking for a reference, journal article, paper, etc. that supports the idea that classical mechanics, in particular rigid body dynamics, is largely predictable. A view coming from the background ...

### Chaos is predictable?
(1 answer, 139 views) I'm reading a book on computational physics [1] where the driven nonlinear pendulum is studied in depth. This is the equation used in the book: $\frac{d^2\theta}{dt^2} = -\frac{g}{l}\sin\theta - \ldots$

### A good, concrete example of using "chaos theory" to solve an easily understood engineering problem?
(1 answer, 490 views) Can anyone suggest a good, concrete example of using "chaos theory" to solve an easily understood engineering problem? I'm wondering if there is an answer of the following sort: "We have a high ...

### The ideal trampoline
(0 answers, 67 views) Suppose we have a mass attached to the top of an ideal (linear and massless) spring oriented vertically in a uniform gravitational field, and on top of that mass there is another mass resting on it. ...

### How and why can random matrices answer physical problems?
(2 answers, 237 views) Random matrix theory pops up regularly in the context of dynamical systems. I was, however, so far not able to grasp the basic idea of this formalism. Could someone please provide an instructive ...

### Current scope of Chaos theory and non-linear dynamics?
(2 answers, 252 views) I am a physics undergrad interested in stuff like dynamical systems, chaos theory etc. Is there ongoing research in these fields? I am talking about pure research and not applications to things like ...

### Spectral eigenvalue staircase and quantum system
(0 answers, 52 views) In a d-dimensional quantum system, does the eigenvalue staircase $N(E)= \sum_{E_{n}\le E} 1$ determine ALL the properties of the quantum system? For example, let us assume that the ...

### SOC and the butterfly effect
(1 answer, 72 views) We know that in a critical system and in self-organized criticality we have long-range interaction due to power-law decay in correlation. Is this fact equivalent to the butterfly effect?

### Bifurcation of convection of fluid in container, when adding temperature
(1 answer, 70 views) I once read a paper in which a fluid in a container was heated from below; after reaching temperature $T_1$, a circular motion (convection) was clearly distinguishable, in the form of a cylinder; after ...

### Is chaos theory essential in practical applications yet?
(1 answer, 429 views) Do you know cases where chaos theory is actually applied to successfully predict essential results? Maybe some live identification of chaotic regimes, which causes new treatment of situations. I'd ...

### What is the Quantum equivalent of chaos on a classical system? (if there's any)
(3 answers, 214 views) This is a question that has been bugging me for some time now. It is not clear to me what the meaning of chaos is if we consider a quantum system. What is the mathematical formalism (or the quantum ...

### Does the "Andromeda Paradox" (Rietdijk–Putnam-Penrose) imply a completely deterministic universe?
(4 answers, 500 views) Wikipedia article: http://en.wikipedia.org/wiki/Rietdijk–Putnam_argument Abstract of 1966 Rietdijk paper: A proof is given that there does not exist an event that is not already in the past for ...

### Renormalizing Chaos: Transition in a Logistic Map
(1 answer, 208 views) I am currently trying to understand the analysis of a logistic-like map $$f_\mu (x) = 1-\mu x^2$$ after section 2.2 in "Renormalization Methods" by A. Lesne. As I understand it, the physical ...

### What is the highest energy position for a double pendulum? And for which energy positions is it chaotic?
(2 answers, 378 views) Math/physics teachers love to break out the double pendulum as an example of chaotic motion that is very sensitive to initial conditions. I have some questions about specific properties: For a ...

### Fractals in physics. Suggest book titles
(2 answers, 274 views) I'm looking for some good books on fractals, with a spin to applications in physics. Specifically, applications of fractal geometry to differential equations and dynamical systems, but with emphasis ...

### Is the orbit of earth around the sun chaotic?
(1 answer, 121 views) The orbit of the earth seems to be very predictable. But as it is a many-body problem involving the sun, earth, moon, jupiter and so on, is it really that stable or will it start making strange movements ...

### Randomness, Chaos, Quantum mechanical probability functions
(3 answers, 335 views) Can someone explain these 3 concepts in a unified framework? Randomness: randomness as seen in a coin toss, where the system follows known and deterministic (at the length and scale and precision ...

### Question on the stability of the solar system
(3 answers, 354 views) One of the pertinent questions about many-body systems that causes me much wonder is why the solar system is so stable for billions of years. I came across the idea of "resonance" and albeit a useful ...

### 'A' butterfly effect
(4 answers, 310 views) If a butterfly did not flap its wings some time ago, but instead decided to slide for that millisecond, can this cause a tornado on the other side of the earth if we just wait long enough? Does this ...

### Chaos and quantum physics: How many ways can a bonfire burn?
(2 answers, 321 views) I'm interested in the extent to which quantum physical effects are seen at a macroscopic level. I might get some of the physics wrong, but I think I'll get it close enough that I can ask the ...

### Chaos and continuous flow
(1 answer, 126 views) What needs to be the case for a dynamical system with a continuous flow to exhibit chaos? It looks like 1D systems with a continuous flow can't exhibit chaos. Are two dimensions enough or do you need ...

### Chaos theory and determinism
(4 answers, 1k views) My professor in class went a little over chaos theory, and basically said that Newtonian determinism no longer applies, since as time goes to infinity, no matter how close together two initial points ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9196742177009583, "perplexity_flag": "middle"}