text
stringlengths
104
605k
× Back to all chapters # Polynomial Arithmetic From profit/cost models to the flight of a baseball to approximating the motion of a wave, polynomials are a mathematical way to represent values which are sums of powers of variables. # Applying the Perfect Square Identity If $x^2+20xy+100y^2=(x+ay)^2,$ what is the value of $$a$$? If $x^2+121-(x-a)^2=22x,$ what is the value of $$a$$? If $16x^2+56xy+49y^2=(ax+by)^2,$ what is the value of $$ab$$? Given $a-b=\sqrt{42+12\sqrt{6}}, b-c=\sqrt{42-12\sqrt{6}},$ what is the value of $a^2+b^2+c^2-ab-bc-ca?$ What is the coefficient of $$x$$ in the expansion of $$(x+3 ) ^ 2$$? ×
Question # An electron in the hydrogen atom jumps from excited state n to the ground state. The wavelength so emitted illuminates a photosensitive material having work function . If the stopping potential of the photoelectron is , then the value of  is :- A B C D Medium ## Solution Verified by Toppr Correct option is B) ## Differenced of 4 and 1 energy level is 12.75 eV  So higher energy level is 4 to ground and Excited state is Solve any question of Dual Nature of Radiation And Matter with:-
These are my live-TeXed notes for the course Math 268x: Pure Motives and Rigid Local Systems taught by Stefan Patrikis at Harvard, Spring 2014. Any mistakes are the fault of the notetaker. Let me know if you notice any mistakes or have any comments! 01/28/2014 ## Motivation Let be a field with algebraic closure (so ). Consider smooth projective varieties over (either dropping the word smooth or projective will force us to enter the world of mixed, rather than pure, motives). There are several nice cohomology theory. • the Betti realization (a -vector space), the singular cohomology of the topological space . • the de Rham realization (a -vector space with a Hodge filtration), the algebraic de Rham cohomology = the hypercohomology of the sheaf of algebraic differential forms on . • the -adic realization (a -vector space with -action), the -adic etale cohomology. It is not even clear a priori that these -vector space, -vector space and -vector have the same dimension. But miraculously there are comparison isomorphisms between them. For example, Theorem 1 (Comparison for B-dR) There are isomorphisms These isomorphisms are functorial and satisfy other nice properties (indeed an isomorphism of Weil cohomology, more on this later). This suggests that there is an underlying abelian category (of pure motives) that provides the comparison between different cohomology theory. Slogan "sufficient geometric" pieces of cohomology have comparable meaning in all cohomology theory We will spend a great amount of time on the foundation of all these different cohomology theory. But notice the comparison isomorphisms already suggest the various standard conjectures, for examples, • Standard conjecture D: numerical equivalence = cohomological equivalence • Standard conjecture C: Kunneth (the category of pure motives is graded and has a theory of weights). • Standard conjecture B: Lefschetz (the primitive cohomology should be "sufficiently geometric") Question Why should one care about the existence of such a category? One motivation is that one gets powerful heuristic for transferring the intuition between different cohomology theory. Example 1 By the early 60's, one knew that if is a smooth projective variety, then it follows from Hodge theory that naturally carries a pure Hodge structure of weight , i.e. a -vector space with a bi-grading such that , where is the complex conjugation with respect to . On the other hand, Weil had conjectured that for a smooth projective variety . The -representation is pure of weight , in the sense that the eigenvalues of the geometric Frobenius are algebraic numbers and for each embedding of into the complex numbers, . When is smooth but not projective, people played with examples and found that can be filtered (the weight filtration) such that the Frobenius eigenvalue is pure on each graded piece. Example 2 Let be a smooth projective curve, be a finite set of points and . Then we have an exact sequence Here is pure of weight 1 and (, the -adic cyclotomic character) is pure of weight 2 and is also pure of weight 2. Therefore one obtains an increasing weight filtration on : The above mentioned -adic intuition (generalized to higher dimension) lead Deligne to mixed Hodge theory. To give a mixed Hodge structure for not smooth projective, the key point is to find a spectral sequence such that its term is (conjecturally) pure of weight . In Hodge II, Deligne treated the case of smooth but no longer projective varieties . The (-adic analogue of the) spectral sequence is the Leary spectral sequence for , where a smooth compactification of with is a union of smooth divisors with normal crossings, One can explicitly compute the sheaf where is smooth. Therefore is pure of weight . Let us look at the differential : notice both the target and the source are pure of weight (all are pure of weight ), nothing is weired. But on the -page, , where the source has weight and the target has weight respectively. The mismatching of the weight of the Frobenius eigenvalues implies that for . Therefore the Leray spectral sequence degenerates at -page. One can compute that The Betti analogue (of maps of pure Hodge structure) is provided by the reinterpretation that and the differentials 's are simply Gysin maps ( = Poincare dual to pullbacks), which are also maps of pure Hodge structures. The upshot is that the -adic Leray spectral sequence gives the weight filtration (= the Leary filtration up to shift), and the graded piece is pure of weight . The Betti Leray sequence also gives a weight (defined to be) filtration on such that we already know that are naturally pure Hodge structures. Another motivation for considering the category of pure motives is toward a motivic Galois formalism. Example 3 Let be a field. The classical Galois theory establishes an equivalence between finite etale -schemes with finite sets with -actions. Linearizing a finite set with -action gives finite dimensional -vector spaces with the continuous -action . The linearization of finite etale -schemes are the Artin motives (motives built out of zero dimension motives). The equivalence between the two linearized categories is then given by . Generalizing to higher dimension: the category of finite etale -schemes is extended to the category of pure homological motives. The Standard conjectures then predict that it is equivalent to the category of representations of a certain group , which is a extension of classical Galois theory These are still conjectural. But one can replace the category of pure homological motives by something closely related and obtain unconditional results. In this course we will talk about one application of Katz's theory of rigid local systems (these are topological gadgets but surprisingly produce motivic examples): to construct the exceptional as a quotient of (the recent work of Dettweiler-Reiter and Yun). 01/30/2014 ## Weil cohomology We now formulate the notion of Weil cohomology, in the frame work of motives. Definition 1 Let be a field. Let be smooth projective (not assumed to be connected) variety over . Let be the category of such varieties. Then is a symmetric monoidal via the fiber product with the obvious associative and commutative constraints and the unit . Definition 2 Let be a field. Let be the category of finite dimensional graded -vector spaces in degrees with the usual tensor operation It is endowed with a graded commutative constraint via Definition 3 A Weil cohomology over (a field of characteristic 0) on is a tensor functor , namely, comes with a functorial (Kunneth) isomorphisms respecting the symmetric monoidal structure. Notice the monoidal structure induces a cup product making a graded commutative -algebra. We require it to satisfy the following axioms. 1. (normalization) . In particular, is invertible in . We define the Tate twists (this is well motivated by -adic cohomology). 2. (trace axiom) For any of (equi-)dimension , there is a trace map satisfying 1. Under , one has . 2. and the cup product induces a perfect duality (Poincare duality) 3. (cycle class maps) Let be the -vector spaces with a basis consisting of integral closed schemes of codimension . Then there are cycle class maps satisfying 1. factors through the Chow group (modulo the rational equivalence). 2. is contravariant in , i.e., for a morphism and a cycle of codimension , we have whenever this makes sense. This will always make sense after passing to the Chow group. In general, one cannot always define on . But if is flat , then one can: in fact, by flatness has all its components of codimension in (but is not necessarily integral). Let be the (reduced structure) of the irreducible components. One then associates a cycle where , the length of the local ring at . We also require it to be compatible with pushforward (defined in Definition 10) that 3. For , . Notice is not necessarily a combination of integral closed subscheme (e.g., is nonreduced), the cycle should be understood as the reduced structure with multiplicity. 4. (pinning down the trace) the composite sends to , where are closed points. Remark 1 Sometimes the Lefschetz axiom is also thrown in the definition of Weil cohomology. We will talk about this later. Remark 2 We set . This means via the comparison isomorphism, the image of is . If we take granted that the comparison isomorphisms are compatible with Mayer-Vietoris. Applying Mayer-Vietoris to , then we are reduced to the calculation on (). The isomorphism is given by here is a smooth 1-chain. A good -basis of is given by the differential . Choosing a simple loop around the origin, then one obtains . Example 4 (Trace in Betti cohomology) Let be smooth projective of dimension . Define to be the composite where is the smooth de Rham complex with -coefficients. Notice is an isomorphism because the sheafy singular cochain complex is a flasque resolution of and is an isomorphism because is a fine resolution of . The choice will chancel out the choice of the orientation we made on complex manifold when we do integration and one can check that lands in . ## Algebraic de Rham cohomology Suppose is a field of characteristic 0 and smooth (not necessarily projective). Definition 4 We define the algebraic de Rham cohomology to be the hypercohomology of the sheaf of algebraic differential forms on . Theorem 2 is a Weil cohomology. We now selectively check a few of the axioms. Lemma 1 is a tensor functor. Proof Given a morphism . Let be an injective resolution of and be an injective resolution of . Then is quasi-isomorphic to . The map induces the pull-back on The Kunneth isomorphism is explicitly given by , where and are the natural projections. ¡õ Remark 3 To check that is finite dimensional for smooth projective, one can use the Hodge to de Rham spectral sequence and the fact that each is finite dimensional (this may fail when is not projective) and lives in a bounded region. For not necessarily projective or smooth varieties, is still finite dimensional (by comparison, and by the resolution of singularities in characteristic 0). may fail to be finite dimensional in characteristic for nonprojective varieties (see the next example). Example 5 When is affine, we have (this follows from the vanishing of for affine and quasi-coherent; in particular, ), which makes the computation feasible. For example, the de Rham complex for is simply . So taking cohomology gives and . This also gives an example in characteristic that is infinite dimensional because , so one don't really want to work with the algebraic de Rham cohomology in characteristic ! 02/04/2014 In general, one covers by open affines . For any quasi-coherent sheaf on , one then obtains the Cech complex , a resolution of by acyclic sheaves, defined by Now we have a double complex whose columns are acyclic resolutions of . The general formalism implies that the cohomology of the global sections of the total complex. Recall the total complex is defined by where . Example 6 (normalization) Let and be a covering of . The Cech double complex looks like The total complex is thus where the two differentials are given by and One can easily compute , and is 1-dimensional generated by . Indeed one sees the computation really shows This is an instance of the Hodge to de Rham spectral sequence. Remark 4 The Hodge de Rham spectral sequence comes from filtering the total complex of the double complex . Take the filtration (cut out by right half planes) So itself forms a complex. The general machinery implies that the spectral sequence associated to this filtered complex is Here defines the filtration on (so the grading on the right hand side makes sense). Notice in our case, is simply and is nothing but . Remark 5 What happens when one filter the total complex in the other way (by upper half planes)? Namely Then we see that . The corresponding sequence is the Mayer-Vietoris spectral sequence. When the covering consists of two affines, it recovers the usual Mayer-Vietoris long exact sequence. Theorem 3 If and is projective, then the Hodge to de Rham spectral sequence degenerates at the -page. Remark 6 It is not clear how to obtain the splitting without Hodge theory. To define the trace for the algebraic de Rham cohomology, we proceed in two steps. We first show that is abstractly the right thing, i.e., and is if is geometrically connected. Then we pin down that actual map after defining the de Rham cycle class map. The first step uses the Serre duality. By the Hodge to de Rham spectral sequence, we have a map By Serre duality ( is the dualizing sheaf), one has the trace map So we want to say that the map is an isomorphism. By the Hodge de Rham spectral sequence, it is enough to show that (or because is a free -module of rank 1). This can be checked bare-handed by reduction to : choose a finite flat map to get the trace map (so ). It follows that is injective. It suffices to prove that which boils down to the direct computation that using the Hodge to de Rham spectral sequence. To pin down , we need to choose carefully a generator of and set i.e., . ## De Rham cycle class maps (via Chern characters) We seek a cycle class map such that as follows. 1. Define the Chern class of line bundles. 2. Define the Chern class of vector bundles. 3. Define the Chern character of vector bundles on . 4. One knows that factors through , Grothendieck group of vector bundles on . Using the fact that is smooth, the latter can be identified with , the Grothendieck group of coherent sheaves on . 5. For a codimension cycle, makes sense and we define the cycle class map Now we describe each step in details. Step a We want a group homomorphism . Identify . The map induces a map and hence induces the desired map . Step b Let be a vector bundle of rank on . Denote the projective bundle . Notice on has a tautological line subbundle . Let . The fact (the Leray-Hirsch theorem, a special case of the Leray spectral sequence) is that is a free module over with basis . We then define by Notice this agrees with the previous definition of of line bundles and is functorial in . 02/06/2014 Define the total Chern class The key is the following multiplicative property. Proposition 1 For any sort exact sequence of vector bundles we have . Proof To show this, one first show that if is a direct sum of line bundles, then Then reduce the general to the first case by showing that there exists a map such that splits as direct sum of line bundles and is injective on (the splitting principle). For the first case, since the statement is invariant under twist, one can assume each is very ample of the form and reduce to the case to the case of being a product of projective spaces. Notice that each gives a section and by definition . Write . Pullback the defining relation for along each , we obtain the relation So the polynomial in has roots . But has the advantage of being like a polynomial ring, Assume , then the defining relation must be , which shows that is the -th symmetric polynomial of as desired. To reduce the general case to the first case, arrange so that has a full flag of subbundles by iterating the projective bundle construction, then split the extension by further pullback: if one has a surjection of vector bundles, then the sections form an affine bundle over ; pulling back along this affine bundle splits and induces an isomorphism on cohomology. ¡õ Step c Using the multiplicative property, we can define formally the Chern roots of so that . Here the Chern roots don't not make sense but their the symmetric polynomials do make sense in cohomology. Define the Chern character This makes sense in cohomology. Now we have the additivity in exact sequences. Moreover . Therefore we obtain a ring homomorphism When is smooth, one can form finite locally free resolutions of any coherent sheaves on , and taking the alternating sum of the terms in the resolutions induces the inverse of natural map . Thus (see Hartshorne, Ex III.6.8). Step d For a codimension cycle, makes sense and we define the cycle class map In particular, our choice of the basis for is given by for any closed point of , This is the choice we made to normalize the trace map. We need to check that is independent on the choice of (this follows from connecting two points by a curve in and the invariance of in a flat family). We also need to check that . This reduce to the case of projective spaces. Let be a closed point. One can put in a chain Using the short exact sequences of the form (given a choice of a section of ), for each , it follows that in , we have Applying the Chern character we obtain that for , which is nonzero. ## Formalism of cohomological correspondences Let be a Weil cohomology. Definition 5 Given a morphism , we define the Gysin map to be the transpose of under the Poincare duality. At the level of cycles, is basically when and zero otherwise (this matches the degree shift in ). Proposition 2 (Projection formula) Let , , then . Proof The property characterizes . ¡õ Remark 7 Using Gysin maps, one has an alternative construction of cycle class maps. For a smooth cycle of codimension , we define where is "1" in . This can be extended to non-smooth cycles by a resolution and defining . Definition 6 A cohomological correspondence from to is an element interpreted (using the Poincare duality and the Kunneth formula) as a linear map . Explicitly, if , then (extended to be zero away from top degree). Let , be the natural projections. Then another way of writing is Namely, pullback , intersect with , then pushforward to . One can check that by the projection formula. Definition 7 Define , . Definition 8 The transpose of is defined to be the image of in under . One can check that . Definition 9 (Composition of correspondences) For , , we define . Lemma 2 . Proof Notice that . The claim then follows from the associativity of composition of correspondences. For details, see Fulton, Intersection theory, Chapter 16. ¡õ Lemma 3 Let be the graph of the morphism . Then , , . Remark 8 Everything makes sense in Chow groups too. We will not repeat it later. Our next goal is to deduce the Weil conjecture (except the Riemann hypothesis) from a Weil cohomology (hence the name). We will later see that the Riemann hypothesis follows from the standard conjectures. 02/11/2014 ## Formal consequences of a Weil cohomology Let be a Weil cohomology. Proposition 3 (Lefschetz fixed point) Suppose is algebraically closed. Let be connected. If , are of degree and respectively (namely, and induces ; similar for ). Then Proof We compute by each Kunneth component so let , . Let be a basis of and let be a dual basis of such that . So we can write Here So the left hand side is equal to Switching and introduces another sign which cancels out the sign since . So the left hand side is equal to To compute the trace on the right hand side, we notice that Since we care only about the -term when taking the trace, this matches the left hand side. ¡õ Let be a cohomological correspondence so that on . Write where is the cohomological correspondence . So . Corollary 1 Let (so is of degree zero). Then Remark 9 When is the graph of then is the fixed points of counted with suitable weights. Taking and using , we obtain the following refinement. Corollary 2 Remark 10 The existence of a cycle giving rise to is still conjectural. Now let and be the (absolute) Frobenius morphism. Then is the fixed point of for any . Theorem 4 (Grothendieck and others) There exists a Weil cohomology on , . Corollary 3 To interpret the left hand side as the fixed points of , we need the following lemma. Lemma 4 and intersect properly: every irreducible component of is of codimension (i.e., the codimensions add). So can be computed as a sum of local terms, one for each point in . Moreover, the local terms are multiplicity-free (by computing the tangent space intersection at an intersection point ). Therefore we conclude that The Weil conjecture (expect the Riemann hypothesis) the follows. Corollary 4 The zeta function can be computed as Proof The previous corollary of the Lefschetz fixed point theorem and the easy linear algebra identity proves the claim. ¡õ Combining this cohomological expression of with the Poincare duality, we also obtain the functional equation of (part of the Weil conjecture). Corollary 5 Here is the Euler characteristic of . ## Intersecting cycles Definition 10 Let be a smooth quasi-projective variety. For any proper (this is not serious since we will be working in ), we define pushforward cycles by when and 0 otherwise. On the other hand, we defined pullback of cycles along a flat morphism (Definition 3 c)). We would like to make sense of pullback for more general classes of morphisms. Moreover, such pullback should be compatible with the pullback on cohomology under the cycle class maps. This can be done if there is a cup product (intersection pairing) on the group of cycles, by intersecting with the graph of . This is not naively true since the two cycles may not intersect properly (the codimension is wrong). So first we restrict to properly intersecting cycles whose intersection has all components of the right codimension. Then should be a sum of irreducible components of with multiplicities here is the local ring of at the an irreducible component of the intersection . This formula of intersection multiplicities (due to Serre) defines an intersection product for properly intersecting cycles. To deal the general case, the classical approach is to jiggle to make the intersection properly meanwhile staying in the same rational equivalence class (moving lemma). Definition 11 We say two cycles of dimension are rationally equivalence if is generated by terms of the following form. Let be a -dimensional closed subvariety and take its normalization ; these generators are the proper pushfowards for . An alternative approach is to consider a dimension closed subvariety. Then the rationally equivalent to zero cycles are generated by , here is the fiber of in . These two definitions are equivalent. One can check that being rational equivalent is a equivalence relation. We denote it by . Definition 12 The Chow group (with -coefficient) . Chow's Moving Lemma then gives a well defined intersection pairing on the Chow groups This makes a graded and commutative unital ring. The proper pushforward descends to the level of Chow groups. Definition 13 We define the pullback on Chow groups for proper by . 02/13/2014 ## Adequate equivalences on algebraic cycles Definition 14 An adequate equivalence is an equivalence relation on on for any such that 1. it respects the linear structure; 2. becomes a ring under intersection product (the intersection product is defined by demanding the analogue of Chow's moving lemma for ). 3. For any (since is proper, makes sense at the level of cycles), if , then . So descends to . 4. Similarly, the pullback descends to . 5. and are related by the projection formula . Example 7 We showed last time that on is an adequate relation. Example 8 For any Weil cohomology , the cohomological equivalence is an adequate relation. Here if in . Notice that a priori these cohomological equivalences may not be independent of the choice of . If two such Weil cohomology theories are related by comparison, e.g., and , then the corresponding cohomological equivalences are the same. Example 9 We say is numerically equivalent 0 if for all , , here the degree map , (one can think of it as , for the structure map ). Then is an adequate relation. Lemma 5 1. is the finest adequate equivalence relation. 2. is the coarsest adequate equivalence relation. Proof 1. Let be an adequate relation. We want to show that if , then . By definition, is linear combination . Let and be the projections. Then Suppose we knew that . Then by the definition of adequate relation , we know . So we reduced to show that on . Let (assume for simplicity). Since is adequate, we can find intersecting properly with , i.e., with . We can certainly write down a map such that and . Explicitly, Therefore we have a chain of equivalences as desired. 2. The second part is basically a tautology. ¡õ Definition 15 Let be an adequate equivalence relation on . Let be field of characteristic 0 (e.g., ). We define , the ring of cycles on modulo . Remark 11 The composition law we gave for cohomological correspondences works as well for correspondences (Definition 9). Namely, the composition is given by In particular, becomes a ring, which will end up being endomorphisms of as a motive modulo . Definition 16 Let be the category with objects (usually write it as thought as a cohomological object), and (Think: graphs of homomorphisms .) This is an -linear category, with There is a functor We want to enlarge to include images of projectors. There is a universal way of doing this by taking the pseudo abelian envelope. We also want duals to exist in our theory (this amounts adding Tate twists). Combining these two steps into one, Definition 17 We define the category (the coefficient field is implicit) of pure motives over modulo . Its object is of the form , here is an idempotent in and is an integer (Think: ). The morphisms are given by Here the existence of Tate twists allows one to shift dimensions (e.g, a map . Proposition 4 • is pseudo abelian ( = preadditive and every idempotent has a kernel). • is -linear. The addition is given by (if ) Here we think of as the summand of and identify • (next time) There is a -structure Grothendieck conjectured (Standard Conjecture D) that for , for any Weil cohomology . He also conjectured that is abelian. Hence under Conjecture D, is abelian. Conjecture D is still widely open, but in the early 90s, Jannsen proved the following startling theorem. Theorem 5 (Jannsen) The followings are equivalent: 1. is semisimple abelian. 2. . 3. For any , is a finite dimensional semisimple -algebra. That means that the numerical equivalence is arguably the "unique" right choice for the theory of motives. 02/18/2014 ## Tannakian theory Let be an additive tensor ( = symmetric monoidal) category. One can check for the unit object , then endomorphisms is a commutative ring and becomes an -linear category. Definition 18 We say a category is rigid if for any there exists ("dual") and morphisms such that the composite map is and the composite map is . Remark 12 The rigidity condition gives internal homs: the functor is represented by an object . Indeed, given a morphism , one obtains a morphism (and vice versa). Definition 19 Let be a field. A neutral Tannakian category is a rigid abelian tensor category with and for which there exists a fiber functor . By a fiber functor, we mean a faithful, exact, -linear tensor functor. It is neutralized by a choice of such a fiber functor. (Think: the category of locally constant sheaves of finite dimensional -vector spaces on a topological space ; a fiber functor is given by taking the fiber over ). Definition 20 Define a functor sending to the collection of such that for any , 1. the diagram commutes. 2. the diagram commutes. 3. is the identity on . We have a natural functor sending to the representation which on -points is given by for . The main theorem of Tannakian theory is the following. Theorem 6 Let be a neutral Tannakian category over and let be a fiber functor. Then the functor on -algebras is represented by an affine group scheme over and is an equivalence of categories. Remark 13 Even if we assume the Standard Conjecture D that , is not Tannakian. This is because any rigid tensor category has an intrinsic notion of rank: for , the composite in is called the rank of . For example, in , the rank of is simply the dimension of ; in , because introduces a sign , the rank of is the alternating sum , where is the -th graded piece of . But any tensor functor preserves the rank, so the tensor functor tells us that has objects of negative rank, hence (using the usual commutativity constraint) does not admit any fiber functors. 02/25/2014 ## The Kunneth Standard Conjecture (Conjecture C) Lemma 6 Suppose . Assume (Conjecture D) and that all Kunneth projectors are all algebraic cycles (Kunneth). Then (with -coefficients) is an a neutral Tannakian category over . Proof By Jannsen's theorem, is abelian. We saw last time that with its given naive commutative constraint could not be Tannakian. So we will keep the same tensor structure but modify the commutativity constraint using Kunneth. Kunneth tells us that is -graded via the projectors , i.e., for any , we get a weight decomposition Now for any , we define the modified commutativity constraint given by Now is a fiber functor. ¡õ Example 10 1. For any , and an abelian variety, is true. 2. For a finite field, then is true for any (with respect to any Weil cohomology satisfying weak Lefschetz). This is a theorem of Katz-Messing. Deligne's purity theorem on allows one to distinguish different degrees. Katz-Messing shows that for any Weil cohomology with weak Lefschetz, the characteristic polynomial of the Frobenius on agrees with that on the -adic cohomology. Choose a polynomial such that (for ) and , then is algebraic (as the combinations of the graphs of ) and is the projection onto . Remark 14 Here is a consequence of Kunneth: For any , lies in by Corollary 3. In particular, the minimal polynomial of on has -coefficients. If is further is an isomorphism on , then is also algebraic as . ## The Lefschetz Standard Conjecture (Conjecture B) Definition 21 Let be a Weil cohomology. We say satisfies the hard Lefschetz theorem if for any , any ample line bundle and any , is an isomorphism. Here . Example 11 When and , this is part of Hodge theory. For any and , this is proved by Deligne in Weil II. The hard Lefschetz gives the primitive decomposition of . Definition 22 For all , define (this depends on the choice of ). Then Remark 15 and are always primitive. One should think of as a nilpotent operator on , then the Jacobson-Morozov theorem implies that this action can be extended to a representation of . The primitive parts are exactly the lowest weight spaces for this -action. Theorem 7 (Jacobson-Morozov) Let be a semisimple Lie algebra over a field of characteristic 0. Let be nonzero nilpotent element. Then 1. There exists a in extending . 2. Given , for any semisimple such that , there exists a unique -triple . Let . Then is semisimple and sends to . So applying Jacobson-Morozov gives a unique -triple (the name comes from Hodge theory). Moreover, it follows that . Remark 16 Explicitly, we can write as Here . Then 02/27/2014 Definition 23 A more convenient operator, the Hodge star , can be extracted as follows. The -action on gives rise to a representation of on . Suppose is the weight eigenspace for . Then is in the -eigenspace. But is not quite an involution. So we renormalize and define on and then . Definition 24 Another variant is the Lefschetz involution for . Then as well. It differs from from certain rational coefficients on each primitive component. Now we have the following cohomological correspondences: 1. , , , , ( is the inverse to on the image of ), 2. Kunneth projectors , 3. Primitive projectors : 1. For , for and 0 on ; 2. For , for (so it satisfies ). The following lemma is immediate. Lemma 7 , , , , , are all given by universal (noncommutative) polynomials in and . Corollary 6 The following -subalgebra of are equal: 1. , 2. , 3. , 4. , 5. . All of them contain and . Proof One can show that . ¡õ Now we can state various versions of the Lefschetz Standard conjecture. Conjecture 1 (Weak form ) For , is an isomorphism (i.e., it is surjective). Conjecture 2 (Strong form ) The operator is algebraic. Namely, it equals to the cohomology class a cycle in . Proposition 5 The followings are equivalent: 1. , 2. is stable under , 3. is stable under (or ), 4. is stable under (or ). In particular, . Proposition 6 The followings are equivalent: 1. , 2. are algebraic, 3. (or ) is algebraic, 4. is algebraic, 5. For all , the inverse of is algebraic. Remark 17 It follows from previous discussion that a)-d) are equivalent. e) a) uses something we haven't written down (but not harder). Because , we know that Corollary 7 . Corollary 8 Under and , if is algebraic and induces an isomorphism . Then is also algebraic (see Remark 14). Proof Notice gives a map . Under and , this map is algebraic and an isomorphism. Hence is an algebraic and an isomorphism. Therefore is algebraic by Remark 14, so is also algebraic. ¡õ Corollary 9 is independent of the choice of the ample line bundle giving rise to . Proof Suppose is given by another ample line bundle . Then the hard Lefschetz tells us that is an algebraic isomorphism (notice the correspondence is equal to , hence is algebraic when is algebraic). Hence its inverse is also algebraic by the previous corollary. Now use e) of the previous proposition. ¡õ ## The Hodge Standard Conjecture (Conjecture I) The standard conjectures B and C both follow from the Hodge conjecture. The only standard conjecture does not follow from Hodge conjecture is the Hodge Standard conjecture. It concerns a basic positivity property of motives. Take . For any , carries a pure -Hodge structure of weight . More fundamental in algebraic geometry is the polarizable -Hodge structure. Definition 25 For and an ample line bundle, we have The class can be thought of as the Kahler form in (valid for general Kahler manifolds). Define Extending -linearly we define the sesquilinear pairing Remark 18 Notice that unless (i.e., ), hence unless . Remark 19 Notice also that different pieces of the primitive decomposition are orthogonal. Namely, if , , where are primitive, then unless . In fact, we may assume that , then The claim follows because and is primitive. We would like to study the positivity properties of by reducing to particular pieces of the bigrading and the primitive decomposition. Theorem 8 (Hodge index theorem) On , is definite of sign . Example 12 On a curve , has sign on and on . Suppose . Then Example 13 On a surface , on has sign , and sign on . So is negative definite on , positive definite on and positive definite on . For example, if is a K3 surface, then has signature on and has signature on . This theorem is the source of polarization in Hodge theory. Definition 26 A weight -Hodge structure ( is a -vector space, ) is polarizable if there exists a morphism of Hodge structures such that is positive definite. So the Hodge index theorem has the following corollary. Corollary 10 For any , is a polarizable -Hodge structure. A polarization is given by , where 03/04/2014 Proof Let . We need to show that satisfies is positive on . Let Then Now using the Hodge index theorem we see the sign cancels out and takes value in . ¡õ Remark 20 For general Kahler manifolds, the Kahler form only gives rise to polarization of the real Hodge structure. Remark 21 One can further polarize by variants of with sign changes for non-primitive pieces. Now we would like a (weak) version of this that makes sense for any field and any Weil cohomology satisfying hard Lefschetz (so the primitive cohomology still makes sense). Inside there is -vector subspace . Conjecture 3 (Hodge Standard Conjecture ) For any , the pairing on given by is positive definite. By Corollary 10(take ), Corollary 11 For and , holds unconditionally. We now explain that for , the Hodge Standard conjecture implies the Riemann hypothesis. A more convenient reformulation of is that the pairing is positive definite. It follows that there is a positive involution on (acting on ) given by Explicitly, (which is algebraic under Lefschetz). So we want that the eigenvalues of the Frobenius on are pure of weight . We renormalize the Frobenius (acting on ) as Under Lefschetz, . We want all eigenvalues of has absolute value 1 for all complex embeddings. This can be obtained by realizing as a unitary operator on the inner product space (). We notice that commutes with and , so We claim that , so that is -invariant. This follows from the following more general lemma. One can check that ( is the chosen ample line bundle), so the following lemma applies to . Lemma 8 If such that 1. , 2. , 3. . Then is invertible and . Proof b), c) implies that . Therefore is invertible. In fact, for nonzero, find such that , then has nonzero trace, so . Now So . ¡õ It follows that is unitary with respect to the inner product (the positivity follows from and the fact that ). In particular, the eigenvalues acting on have all absolute values 1. Hence by Cayley-Hamilton, the roots of characteristic polynomials of on have all absolute values 1, as desired. Remark 22 1. (so in characteristic 0, the hard Lefschetz implies everything). In fact, if the intersection pairing is non-degenerate, then follows. But we know from that is non-degenerate, so implies that the first intersection pairing is also non-degenerate. 2. : Jannsen's theorem implies the algebra is semisimple, then Smirnov's algebraic result on semisimple algebra's with raising operators implies . ## Absolute Hodge cycles Our next goal is to construct a modified category of pure motives such that 1. Under the standard conjectures, . 2. has all categorical properties we want: (say ) it is -linear, semisimple, neutral Tannakian (this gives unconditional motivic Galois formalism). 3. lets you prove some unconditional results and formulate interesting but hopefully more tractable than the standard conjecture problems. The basic strategy is to redefine correspondences using one of these larger classes of cycles: algebraic cycles motivated cycles (Andre) absolute Hodge cycles (Deligne) Hodge cycles Remark 23 These inclusions are unconditionally true. The Hodge conjecture says that the Hodge cycles are algebraic cycles, so all inclusions are conjectured to be equal. One can try to prove the last two inclusions are equal, which would already be a big step further towards the Hodge conjecture. Definition 27 An absolute Hodge cycle on (suppose has characteristic 0 and finite transcendence degree) is a class in where , such that for all , the pullback class comes from a Hodge cycle in (a -vector space) via the comparison isomorphisms. Remark 24 Due to the transcendental nature of , the last imposed condition is rather deep. Theorem 9 (Deligne) Any Hodge cycle on an abelian variety () is absolutely Hodge. One should think of this as a weakening of the Hodge conjecture for abelian varieties. We will define Andre's notion of motivated cycles next time. Along this line, Theorem 10 (Andre) Any Hodge cycle on an abelian variety () is motivated. Example 14 One classical application of absolute Hodge cycles is the algebraicity of (products of) special values of the -function like (with refinements giving the Galois action). The origin of this comes the periods (i.e. coefficients of the matrices in the B-dR comparison theorem) of the Fermat hypersurface For an algebraic cycle (defined over ) and a differential form such that , then one obtains a period The same principle applies for an absolute Hodge cycle. A good supply of absolute Hodge cycles for Fermat hypersurfaces are the Hodge cycles by Deligne's theorem for abelian varieties (the motive of Fermat hypersurfaces lie in the Tannakian subcategory generated by Artin motives and CM abelian varieties). 03/06/2014 More generally, let be a number field and a smooth projective variety. Let be the field generated by the coefficients of the period matrix. The relations between periods are predicted by the existence of algebraic cycles. The transcendence degree of is equal to the dimension of the motivic Galois group (when one makes sense of it). For the motive (defined by absolute Hodge cycles), we have . Deligne's theorem implies that the later is equal , the dimension of the Mumford-Tate group (the Hodge theoretic analogy of the motivic Galois group). Definition 28 Let be a -Hodge structure. The Mumford-Tate group is the -Zariski closure of the image of (i.e., the smallest -subgroup of whose -points containing . Example 15 ; . Example 16 ; . Notice a priori, one only knows the inequality (since absolute Hodge cycles Hodge cycles). Example 17 Here is another application due to Andre. Suppose is finitely generated. Let , are K3 surfaces over with polarizations (the important fact is that for K3 surfaces). Then any isomorphism of -modules arises from a -linear combination of motivated cycles. Also the Mumford-Tate conjecture is true for : namely, is equal to the connected component of the Zariski closure of the image of on . This is not known even for abelian varieties: there are a lot of possibilities of Mumford-Tate groups for abelian varieties, but for K3 surfaces they are quite restricted. Let be the orthogonal complement of Hodge cycles (the transcendence lattice which is 21 dimensional generically). Then is a field because , and is either totally real or CM due to the polarization. A theorem of Zarhin shows that in the totally real case the Mumford-Tate is a special orthogonal group over and in the CM case a unitary group over , with the pairing coming from the polarization. ## Motivated cycles Let be a Weil cohomology with hard Lefschetz. Fix a subfield (e.g. ). Definition 29 (Motivated cycles) is defined to be the subset of elements of of the form for any and any algebraic cycles. Here is the Lefschetz involution associated to a product polarization on . The idea is that we don't know Lefschetz and so we manually to add all classes produced by the Lefschetz operators to algebraic cycles to get motivated cycles. Remark 25 One can relate to (this is is cleaner in terms of the Hodge involution ). Under Kunneth, maps to the raising operator and the semisimple element maps to . Since Kunneth is an isomorphism of -representations, we know that is equal to . The basic calculation (with the above remark) shows the following. Lemma 9 1. is an -subalgebra of (with respect to the cup product). 2. . 3. . As for algebraic cycles, we define the motivated correspondences similarly. Definition 30 Define with the similar composition law (the target is correct by the previous lemma). Then is a graded -algebra. We also have a formalism of and projection formulas for . Lemma 10 Remark 26 For comparable Weil cohomology theories, one obtains a canonical identifications of corresponding spaces of motivated cycles. Remark 27 One can restrict the auxiliary varieties to some full subcategory of stable under product, disjoint union, passing to connected components and containing . The following definition works with replaced by these 's. Definition 31 The category of motivated motives is defined as 1. An object is a triple , , an idempotent in , . 2. Morphism: . We will write for short. Remark 28 1. If is true for all , then . 2. As before, is -linear and pseudo-abelian. Theorem 11 (analogue of Jannsen's theorem) For any , is a finite dimensional semisimple -algebra, hence is semisimple abelian. Proof We define an analogue of numerical equivalence: is called to be motivated numerically equivalent to 0 if for any , . Then Jannsen's argument shows that is semisimple. But since and holds for motivated cycles by construction, the motivated equivalence is the same as the motivated numerical equivalence (Remark 22). Therefore is semisimple. ¡õ Remark 29 is also a rigid tensor category. Because we always have Kunneth projectors, we can modify the commutativity constraint to obtain a neutral Tannakian category over with fiber functor given by . This gives an unconditional Tannakian formalism. Remark 30 Restricting to some family of varieties , we also obtain a neutral Tannakian category , the smallest full Tannakian subcategory containing (the objects are subquotients of direct sums of ). For example, one can take to be a singleton. Then one can define the motivic Galois group , or . This allows us to talk about the motivic Galois group of a particular object . This is the motivic analogue of . 03/11/2014 Remark 31 For any field , we define to be the category of motives with -coefficients (the objects are -modules in , Tannakian over ). Define to be the motivic Galois group. Proposition 7 (Properties of ) 1. is pro-algebraic, even pro-reductive over . 2. splits over the maximal CM extension of (i.e., for any and , is isomorphic to for some , where is the maximal CM subfield). Remark 32 The second part follows from the existence of polarization (the Hodge index theorem). Other manifestations of the principle "arithmetic objects have CM coefficients": Frobenius eigenvalues of pure -adic Galois representations are Weil numbers, hence lies in CM fields; the finite component of algebraic automorphic representations should be defined over CM fields (automorphic representations are unitary). ## The Motivated variational Hodge conjecture Source of motivated cycles: the motivated analogue of variational Hodge conjecture. Conjecture 4 (Variational Hodge conjecture) Over , let be a smooth projective morphism, let . If is algebraic for some , then for any , is also algebraic. Theorem 12 (Andre) The variational Hodge conjecture holds with "motivated" in place of "algebraic". Remark 33 The Key arguments: 1. is abelian; 2. The theorem of the fixed part (from Hodge II). Let us review the theorem of the fixed part and the necessary background in mixed Hodge theory. Theorem 13 Suppose is smooth projective and is smooth. Let be a smooth compactification. Then is surjective. In other words, the image is the fixed part under the monodromy, i.e., . Proof The above maps are given by Here 1. is the edge map in the Leary spectral sequence associated to . By a theorem of Deligne, when is smooth projective, the Leary spectral sequence degenerates at . So is surjective. 2. and have the same image. Since is injective, it follows that and have the same image. Hence is surjective. More generally, if is smooth projective (applied to ), is smooth, Then the image of the composite map is the same as the image of the latter map. The reason is that each of these cohomology groups has a weight filtration such that is a pure Hodge structure of weight . Since , are smooth, their weight filtration looks like Since is smooth but not projective, its weight filtration looks like The general important fact is that the morphisms of mixed Hodge structure are strict for the weight filtration (one consequence: mixed Hodge structures form an abelian category), i.e., for of mixed Hodge structure, then for any , Now the strictness implies that it suffices to check the images on each are the same. The results then follows from . To see this, it essentially follows from the definition of the weight filtration as the shift of the Leray filtration associated to : and by definition is the whole thing. ¡õ Proof (Theorem 12) To prove the motivated variational Hodge conjecture. We may assume that is connected, smooth and affine. Given motivated, we want to show that all are motivated. By the theorem of the fixed part, we have Notice that has kernel independent of the choice of . In the abelian category , then also has kernel independent of (since the fiber functor is exact and faithful). So in . Applying , we obtain that carries motivated cycles to motivated cycles. ¡õ Remark 34 The argument shows that the standard conjecture for implies the variational Hodge conjecture. Corollary 12 Let be a motivated cycle such that a finite index subgroup of acts trivially on , then all parallel transport of are still motivated. Proof Apply the previous theorem after a finite base change. ¡õ Example 18 Let be an abelian variety. Then the Hodge cycles on are known to be motivated, due to Deligne-Andre. The idea of the proof is to put in a family with the same generic Mumford-Tate group, prove for Hodge cycles special abelian variety in the family and then use the variational Hodge conjecture. More precisely, any Hodge cycle on has the form , where we can take to be the product of an abelian variety and abelian schemes over smooth projective curves. So the Hodge conjecture for abelian varieties (not known) reduces to the Lefschetz standard conjectures for abelian schemes over smooth projective curves. Corollary 13 For any abelian variety , . Proof This follows from Hodge cycles on abelian varieties are motivated and that the product of abelian varieties are still abelian varieties: . ¡õ ## Mumford-Tate groups Lemma 11 For , define . 1. A -subspace is a sub Hodge structure if and only if is stabilized by . 2. is a Hodge class if and only if is fixed by . Proof 1. Let be the stabilizer of . Then is a sub Hodge structure if and only if stabilizes if and only if factors through if and only if . 2. Apply the first part to the subspace . ¡õ Corollary 14 The natural functor from to the category of -Hodge structures is fully faithful and realize as the Tannakian group of (as a subcategory -Hodge structure). Lemma 12 The full subcategory of polarized -Hodge structures is semisimple. Corollary 15 When is polarizable, is a (connected) reductive group. Proof The connectedness follows from the definition. To show that is reductive, we only need to exhibit a faithful and completely reducible representation of . The standard representation works: the subrepresentations exactly corresponds to the sub Hodge structures of , whose complete reducibility is ensured by the previous lemma. ¡õ Corollary 16 When is polarizable, is exactly the subgroup that fixes all Hodge tensors. 03/13/2014 Proof This follows from the following general results. Let be a reductive group and be a subgroup of . Define If is reductive, then ( a priori). The claim follows from taking and . For any (reductive or not), by the theorem of Chevalley, there exists a representation of and a line such that is the stabilizer of . If is further reductive, there exists a -complement . Then and consists of the elements fixing any generator of this line. So . ¡õ Corollary 17 Let , then giving rises to is exactly the subgroup of fixing all motivated cycles in all tensor constructions. Because motivated cycles Hodge cycles, Corollary 18 Let , then . Remark 35 The Hodge conjecture implies that this is indeed an equality. Remark 36 The following much weaker conjecture is incredibly hard: is connected. Unknown except for abelian varieties. Remark 37 The calculation of possible Mumford-Tate groups of abelian varieties, or more generally Mumford-Tate groups of objects of is essentially the Hodge theoretic content of Deligne's canonical models paper in Corvallis. Remark 38 The soft general result of Zarhin gives an upper bound on possible Mumford-Tate groups and algebraic representations occurring in for any smooth projective. In the case, Zarhin showed that simple factors of are all classical groups; any nontrivial representations of a simple factor must be minuscule (the weights have only a single orbit under the Weyl group). In general, the degree controls how large the group and representations can occur. In particular, any exceptional group can't occur as Mumford-Tate groups of abelian varieties. Does even arises as for some polarized -Hodge structure ? This is at least necessary for it to be a motivic Galois group. Proposition 8 A semisimple adjoint group is a Mumford-Tate group of a polarizable -Hodge structure if and only if contains a compact maximal torus. Remark 39 So or for can't arise as Mumford-Tate groups. Any -form of can't arise (though it does arise as the Mumford-Tate group of non-projective K3 surfaces); for a generic projective K3 surface, the Mumford-Tate group is ( has a compact maximal torus if and only if is even). Let explain the case when is simple with compact maximal torus over . Write . Let be a compact maximal torus, fixed by some Cartan involution of . The Cartan involution is essential for the polarization. Namely, is an involution on satisfying the following positive condition: is positive definite. Decompose into the and eigenspaces for . Here matches up with the Lie algebra of the maximal compact subgroup . Now any yields a polarizable -Hodge structure on if and only if is a Cartan involution on . Let us write down . Choose a cocharacter such that for any compact roots and for any noncompact roots . Notice such cocharacters is in bijections with . Extend (trivial on ) to obtain Then acts on the root space by , which is 1 when is compact and when is noncompact. Now use is negative definite on and positive definite on . One knows that gives a polarization on . Using this framework, it is easy to check can't arise as . Example 19 Consider the split form of . . The two compact roots are , . After the break we will construct as a motivic Galois group via the theory of rigid local systems. This is originally due to Dettweiler-Reiter using Katz's theory. Zhiwei Yun gives an alternative proof (also for and ). We will focus on the former, since the latter needs more machinery from geometric Langlands. ## Applications of the motivated variational Hodge conjecture Example 20 The Kuga-Satake construction is "motivated", i.e., for a projective K3 surface, the attached abelian variety such that which is a priori a morphism of -Hodge structure, is indeed a motivated cycle (i.e., a morphism in ). This implies the Mumford-Tate conjecture for K3 surfaces, etc.. Example 21 (Variation of motivic Galois group in families). Take a -variational Hodge structure (e.g., , for smooth projective) with the holomorphically varying Hodge filtrations on the fibers . (In general, a homomorphic family of Hodge structures on is a local system on with a filtration by holomorphic subbundles ). How does vary for ? For example, let be a modular curve and be the universal elliptic curve. Let . At CM points , is simply a rank 2 torus over . At non-CM points, . Notice that the CM points are dense in the analytic topology. Roughly speaking, there is a generic Mumford-Tate group () and it drops on a countable union of closed analytic subvarieties (the CM points). Now let us give the motivated analogue of a refinement of this assertion. So we need a notion of a family of motivated motives. Definition 32 A family of motivated motives parametrized by (assume is smooth, ) is given by 1. smooth projective -schemes , of relative dimension , , equipped with relatively ample line bundle . 2. -linear combinations , of closed integral -subschemes of , flat over , such that lies in and is idempotent for any . 3. . We denote this family by . Theorem 14 1. Let the exceptional locus does not contain the image of a finite index subgroup of . Then is contained in a countable union of closed analytic subvarieties of . 2. (Refinement) There exists a countable collection of algebraic subvarieties such that is contained in the union of . (In Hodge theory, this continues to hold for arbitrary -polarized variational Hodge structure. This "algebraicity of the Hodge loci" is a strong evidence for the Hodge conjecture.) 3. There exists a local system of algebraic subgroups of such that 1. for any . 2. for all . 3. contains the image of a finite index subgroup of (notice the latter is a purely topological input!) 03/25/2014 ## Rigid local systems Let be a smooth projective connected curve. Let be a finite set of points. Let . For the time being, we work with the associated complex analytic spaces (so implicitly). Definition 33 A local system of -vector spaces on is a locally constant sheaf of -vector spaces. Remark 40 For , gives an -vector space and for any path , gives an isomorphism (depending only on the homotopy class of ) . So choosing a base point gives rises an equivalence between local systems on with -representations of . Example 22 The case is most interesting for our purpose. Here is a free group on generators. Question Given a local system on , when does come from geometry? By coming from geometry, we mean there exists a smooth projective family such that for some (notice itself is a local system). One necessary condition for to come from geometry is that the local monodromy at each puncture should be quasi-unipotent (some power of it is unipotent, equivalently, all its eigenvalues are roots of unity). This follows from the local monodromy theorem: Theorem 15 Any polarizable -variational Hodge structure over a punctured disc has quasi-unipotent monodromy. Remark 41 This is a hard theorem. See "Periods of integrals on algebraic manifolds III", Publ. Math. IHES 38 (1970) by Griffiths; "Variation of Hodge structure: the singularities of the period mapping" Invent. math. 22 (1973) by Schmid. Both the integral structure and the polarization are important for the theorem to be true. Remark 42 Recall that a -variational Hodge structure is a -local system on and a filtration by holomorphic subbundles of such that on each fiber one obtains a -Hodge structure with the Hodge filtration on induced from , satisfying the Griffiths transversality: . Remark 43 Though the local monodromy generators are quasi-unipotent, when is smooth projective, the global monodromy representation of is semisimple (Hodge II). This is because one gets a polarizable variational Hodge structure. The sufficient condition to come from geometry is still a total mystery. Simpson's guiding philosophy is that rigid local systems shall always come from geometry. Katz's book proves this is the case for irreducible rigid local systems on . There are several notions which you may want to call rigid local systems. Definition 34 Let be a -local system on . is physically rigid if for local system such that for any , , then . In terms of the monodromy representation: if the generators are conjugate (possibly by different matrices), then they are globally conjugate. A slight variant: Definition 35 is physically semi-rigid if there exists finitely many local systems such that if is locally isomorphic to for all (as in the previous definition), then for some . Remark 44 One can also define general notion of -rigid local system for any reductive group . For (the notion defined above), physically semi-rigid implies physically rigid. These two notions are very intuitive but extremely hard to check. The following definition provides a numerical condition and is easier to check. Definition 36 is cohomologically rigid if . Here . Notice is still a local system on (but is no longer a local system on ). Remark 45 Intuitively, being cohomologically rigid means that there is no infinitesimal deformation of with prescribed local monodromy. We shall now explain this intuition in more detail. Fix . Let be the category of local Artinian -algebras with residue field . We define the functor , such that is the set of all liftings of to . Then the familiar fact is that the tangent space Taking the -equivalence into account, and assume that is irreducible (so implies that ), we are motivated to consider the deformation functor , such that is the set of all liftings of up to -equivalence. Then So when is irreducible, measures the space of the infinitesimal deformations of . We further want the deformations with prescribed local monodromy. Let be the generator of . We now consider , sending to the set of -equivalence classes of liftings such that for , are conjugate by an element of . Then the tangent space consists of cocycles such that is conjugate to by an element of . It follows that for any , for some . Namely, Moreover, we claim that this restriction map can be identified with the edge map in the Leray spectral sequence for , So we can identify This motivates the definition of cohomologically rigid local systems. We briefly indicate why the claim is true. For any local system on , notice is the sheaf associated to the presheaf (which is if ). Covering by simply-connected opens, we know that . In a neighborhood of a puncture , we get , which glue to get . The following lemma gives a very useful numerical criterion for cohomologically rigidity. Lemma 13 Let be an irreducible local system of rank on . Then is cohomologically rigid if and only if , if and only if 03/27/2014 Proof Notice for any local system , the long exact sequence associated to gives . Also by Poincare duality, . The first equivalence then follows from the irreducibility of (i.e., ). For the second equivalence, we use the fact that for any local system on , we have the Euler characteristic formula Notice is nothing but the -invariants of , the desired equivalence follows by applying to (so . It remains to prove the Euler characteristic formula. The Leary spectral sequence for formally implies that The term gives and the term gives is (see the previous remark). But is a cyclic group, is simply the coinvariants , which has the same dimension as the invariants . So But the left hand side is equal to , as is locally constant. ¡õ Remark 46 In the algebraic setting, the lemma is true for lisse -sheaves on , as long as they are tamely ramified (i.e., is invertible in the base field, which is automatic in characteristic 0). When they are wildly ramified, more correction terms for the wild ramification are needed (known as the Grothendieck-Ogg-Shafarevich formula). Example 23 We denote the Jordan block of length with the eigenvalue by . Take , and . They give a local system on . They have Jordan forms , and , all are quasi-unipotent. It actually comes from geometry (classically known) as the local monodromies of the Legendre family Namely, it comes from the local system (e..g, one can see these matrices by Picard-Lefschetz). Moreover, it is cohomologically rigid by the previous lemma: and each . Example 24 More general classes of examples are provided by the hypergeometric local systems. These are given by such that 1. , 2. is a pseudo reflection (i.e., ). 3. , with for any . Let be the monodromy group generated by (hypergeometric group). Lemma 14 Hypergeometric local systems are irreducible. Proof If not, let be a subrepresentation and be the corresponding quotient. Since is a pseudo-reflection, we know that it must acts trivially on one of , hence on one of them, which contradicts the assumption . ¡õ Given , one can write down the explicit matrix description for the local monodromies. Theorem 16 (Levelt) Let , such that . Define , by and Define the companion matrices Then 1. , and gives a hypergeometric local system with parameters . 2. Any hypergeometric local system with parameters , is -conjugate to the one of the above form. (This is stronger than the physical rigidity because we don't need to specify the Jordan forms). Proof 1. It suffices to show that is pseudo-reflection: indeed has rank 1. 2. Given such an , Set , . Let . Then has dimension since is a pseudo-reflection). Hence has dimension at least one; let be a nonzero vector of this space. Thus . Therefore , , ..., . We claim that the span is the whole space. In fact, by Caylay-Hamilton stabilize on this span, so it must be the whole space by the irreducibility. In this basis, , , have the desired form. ¡õ Remark 47 The Jordan form of a companion matrix: when an eigenvalue has multiplicity , we obtain a Jordan block . Using this, one can check that a hypergeometric local system is cohomologically rigid. The dimension of the centralizer of is , and the dimension of the centralizer of is . Now adds to 2! Question 1. What are (the Zariski closure) of the monodromy group of hypergeometric local systems? For example, does appear? 2. Are those with roots of unity always geometric? 3. We saw that hypergeometric local systems are both physically rigid and cohomologically rigid. What is the relationship between physical and cohomological rigidity in general? 1. Not . Beukers-Heckman computed all possibilities: , , and some specific finite groups. 2. Yes, by Katz's theory. Now let us come to the third question in more detail. Proposition 9 Let be an irreducible local system on . Suppose is cohomologically rigid, then is physically rigid. Remark 48 The same result (with the same proof) works for lisse -adic sheaves (tamely ramified) in the algebraic setting. Proof Let be a local system with the same local monodromy as . Since and has the same local monodromies, the Euler characteristic formula implies that So and by Poincare duality, So at least one of the local systems , has a global section. Since is irreducible and , this global section gives an isomorphism . ¡õ For the other direction, we will use a transcendental argument. This direction is not known in the -adic setting (knowing local monodromy matrices is not enough in the -adic setting: one needs to know continuity). Proposition 10 Suppose . Let be an irreducible local system on . Suppose is physically rigid, then is cohomologically rigid. Proof We know a prior that . We need to show it is . Let be the local generators around the punctures. Suppose is given by matrices and is given by matrices . Since is physically rigid, if there exists such that , then there exists such that . We want to show that Consider the map The fiber corresponds to the local systems with the same local monodromies as . The group acts on equivariantly, where acts on the domain and codomain by In particular, acts on the fiber . Now is physically rigid if and only if acts transitively on . Therefore . But , it follows that which gives the desired inequality! ¡õ Remark 49 The implication physically rigid cohomologically rigid works for -local systems for general groups . But the converse is not true for general (on the automorphic side: multiplicity one may fail for general groups other than ). 04/01/2014 ## Perverse sheaves Theorem 17 Let be a field. For a separated finite type -scheme , we have a triangulated category (the bounded derived category of constructible -adic sheaves on ) equipped with a standard -structure such that there is an equivalence of categories For a morphism, we have adjoint pairs and . We also have adjoint pairs for . Remark 50 The category of -adic sheaves is an abelian category, whose objects are colimits of -sheaves, where runs over all finite extensions of . Here an -sheaf is a constructible -sheaves with -inverted. Remark 51 There is an analytification functor from to the corresponding analytic category . This functor is fully faithful but not essentially surjective. For example, take . Then sending the generator to does not extend to the etale fundamental group (). Nevertheless, given , for almost all , one can choose an isomorphism , such that lies in the essential image of the analytification functor. The triangulated category is defined to be the colimit of . The latter triangulated category is hard to define (it is not defined as the derived category of -sheaves, which do not have enough injectives). There are 3 approaches to define .. 1. Use the pro-etale topology introduced by Bhatt-Scholze ( becomes a genuine sheaf). 2. Taking limit is well-behaved for the stable -category version of . The triangulated limit comes for free. 3. Deligne's classical approach: replace with the full subcategory of very well-behaved complexed (these are quasi-isomorphic to bounded complexes of constructible -flat sheaves). Call this full subcategory . Then is naturally triangulated: is a distinguished triangle if is a distinguished triangle for any . Definition 37 is semi-perverse if for any , ; is perverse if and are both semi-perverse, where is the Verdier dual of . Example 25 Suppose is smooth of dimension . For lisse -sheaf on . Then is perverse (since ) but not for other shifts. In general, perverse sheaves are built out of lisse sheaves on smooth varieties. Introducing perverse sheaves allows one to define intersection cohomology for singular proper varieties satisfying the Poincare duality and purity. Another major motivation for us is the following function-sheaf dictionary. Definition 38 Take and a -sheaf on . We define for any , For example, when , produces the trace of the Frobenii on the cohomology of the fibers of the morphism . Generalizing this, for any , we define The key thing is that these functions interact nicely with the sheaf-theoretic operations. For example, 1. When is a distinguished triangle, we have . 2. . 3. For , we have for . 4. For , we have (think: is the integration over the fibers) This is essentially the Lefschetz trace formula. The moral is that if you have some classically understood operations on functions, you can mimic them at the level of sheaves. The key role of perverse sheaves that one can recover the perverse sheaves from their functions: Theorem 18 Suppose and are two semisimple perverse sheaves. Then and are isomorphic if and only if for any . Remark 52 A basic fact hinted in this theorem: the full subcategory of perverse sheaves is an abelian category and all objects have finite length. How do we produce more perverse sheaves from the "lisse on smooth" case (Example 25)? Theorem 19 1. Suppose is an affine morphism, the preserves semi-perversity (but not perversity). 2. Suppose is a quasi-finite morphism, then preserves semi-perversity (but not perversity). Corollary 19 If is both affine and quasi-finite (e.g., is an affine immersion), then both and preserve perversity. Proof Suppose is perverse, then is semi-perverse (by the previous theorem). Now (by duality). Since is perverse, is perverse (by definition), hence is also semi-perverse (by the previous theorem). ¡õ Here comes the key construction: intermediate extensions. Let be a locally closed immersion. For simplicity, let us assume that is affine, so is affine and quasi-finite. If . Then both and lie in . There is a natural map . Definition 39 Define the intermediate extension (or middle extension) (in the abelian category ). Proposition 11 1. is fully faithful. 2. . 3. preserves simple objects, injections and surjections. Theorem 20 Any simple perverse sheaf on is of the form for some smooth affine locally closed subvariety of , for some lisse sheaf on . Proof (Sketch) Define to be the closure of . Choose such that the constructible sheaves become lisse when restricted to . Take . This works. ¡õ Interesting things happen when extending to the boundary of . Example 26 Let be a smooth geometrically connected curve. For dense open and lisse on , we have (here , see Katz 2.8 or 2.9). Let and be a sheaf on , then . Namely, has no punctual sections. More generally, is perverse if and only if 1. for any ; 2. has no punctual sections; 3. is punctual. Hence the simple perverse sheaves on are either punctual or of the form for lisse on an open dense . 04/08/2014 ## The middle convolution Today we will introduce the key operation on perverse sheaves in Katz's classification of rigid local systems: the middle convolution. Example 27 The rigid local system considered in Example 23 is the sheaf of the local solutions of the Gauss hypergeometric equation. The solution has an integral representation Here the parameter determines the local monodromies. More generally, is the solution of This integral looks like namely the (additive) convolution of and . The function corresponds to the rank 1 Kummer sheaf associated to the representation . Similarly, the function corresponds to a tensor product of (translated) Kummer sheaves. So rigid local system can be expressed in terms of the convolution of simpler objects. Here is the precise construction of the convolution. Definition 40 Let be an algebraically closed field. Let be a connected smooth affine algebraic group. Let be the multiplication map. For , we can define two kinds of convolutions and . Remark 53 Even if are perverse, these two convolution may not be perverse. Remark 54 Since is affine, if is semi-perverse, then is also semi-perverse. Definition 41 Suppose such that for all , both and are perverse. We define the middle convolution to be the image of in the abelian category of perverse sheaves. Example 28 Take and . Let be the local system on associated to a nontrivial character . Let , where . Then the middle convolution makes sense: both , preserve perversity, by the following proposition. Proposition 12 Suppose . Let be irreducible such that its isomorphism class is not translation invariant. Then and both preserve perversity. Proof 1. The statement follows from the statement: because is also perverse and not translation invariant, so is perverse; taking dual implies that is perverse. 2. For , then is perverse if and only if is semi-perverse: is semi-perverse since is affine. 3. If is perverse, then the followings are equivalent: 1. is perverse for any ; 2. is perverse for any irreducible . In fact, because is an abelian category with all objects having finite length, we can induct on the length of . A distinguished triangle (with lower lengths) gives a distinguished triangle ; the long exact sequence in cohomology then implies that 4. So we reduce to the case of irreducible perverse sheaves . We now use the assumption that . Namely, we need to check that By Example 26, an irreducible perverse sheaf is either punctual or an intermediate extension . If either or is punctual, then is a translate of or , hence is perverse. So we can assume that there exists and lisse on such that and . The stalk of at a geometric point is This vanishes for since . It remains to check that for , this vanishes for at most finitely many . Now we need to use the assumption that is not translation invariant. Notice the fiber , so for , we have Since both and are lisse on , it is equal to Since both source and target are irreducible, this is zero unless there is an isomorphism . Since the right hand side does not depend on , either we win or there exists infinitely many such that there is such an isomorphism. Since these lie in the support of a constructible sheaf on a curve, the same would happen for in an open dense subset . Let , then the isomorphism class of is translation invariant under . Thus we obtain a subgroup containing , which must be the whole group, under which is translation invariant. A contradiction! ¡õ Henceforth we take . Definition 42 Let be the full subcategory of constructible -sheaves on satisfying 1. is an irreducible intermediate extension, i.e., there exists open dense on which is lisse, irreducible and . 2. is tame, i.e., the corresponding -representation is tamely ramified at the punctures . 3. has at least two singularities in . Notice if , then a) and b) implies c). (In particular, . Now we can state the main results (slightly specialized) about the middle involution. Theorem 21Fix a nontrivial tame character . Let Then 1. preserves . 2. We have composition laws if and 3. For lisse on , define the index of rigidity (so is rigid if and only if ). Then for any . 4. The local monodromies of can be computed using those of . (See Dettweiler-Reiter.) 04/10/2014 Let us explain some of the ideas of the proof without going into details. Remark 55 The key step is to show that preserves the subcategory consisting of irreducible perverse sheaves such that and preserve perversity (we will say satisfies property for short). Then and its complement can be described explicitly (e.g., ). Bare-hand calculation shows this complement is also preserved under . The first key step has a proof which works in any characteristic. But in characteristic , the approach of Fourier transform is more pleasant, which we shall now briefly discuss. For any algebraic closed field and any separated and of finite type, we define the subcategory of middle extensions consisting of , where is lisse for some . We have an operation If has characteristic and . It turns out that the category on satisfying is equivalent to via the Fourier transform. The middle convolution on then corresponds to on (as in the classical Fourier theory: the Fourier transform of the convolution is the product of the Fourier transforms). Definition 43 We now define the Fourier transform, which is a functor . Fix an additive character and denote the associated the Artin-Schrier sheaf on by . Let be the two projections . Motivated by the classical Fourier transform we define where is the pullback of via the multiplication map. Similarly define using . It turns out that and we denote it by for short. It follows that preserves since projection maps are affine and the duality switches and . Moreover, is involutive: In particular, is an auto-equivalence of . Remark 56 Now we can check that has property . Using the Fourier transform, it suffices to check that lies in . The first factor in since has property , and the second factor is in because also has property (Example 28). Hence the is in . Remark 57 Suppose is irreducible, then is also irreducible by the exactness of the Fourier transform. It is easy to see that with a rank one object is invertible on : i.e., if is rank one with lisse and with lisse, one check that is the identity map. Applying this to , we know that is again irreducible. ## Katz's classification Using Theorem 21, we can prove the Katz's classification theorem of tamely ramified cohomological rigid local systems. Besides , we also need a simpler twisting operation: If is rank 1 lisse on , we define The index of rigidity is easily seen to be preserved under . Given an irreducible tame cohomological rigid local system on , our aim is to apply a series of and 's (these are all invertible operations) to obtain a rank 1 object (which is easy to understand). Theorem 22 (Katz) Suppose . Assume is lisse on (so ) and cohomologically rigid. Then there exists a generic rank 1 lisse on and a nontrivial character such that has strictly smaller rank than . Also, we can arrange and to have local monodromies contained in the local monodromies of . So the local monodromies of is contained in the local monodromies of . In particular, if all th eigenvalues of the local monodromies of are roots of unity, then we can arrange the same for . Proof For , we write Similarly, at , we write For , we write to be the number of Jordan blocks of length in the unipotent matrix (which can be viewed as the dual partition of ). To guess what to do, we look at the rank formula (proved via Fourier transform by Katz): for any nontrivial , If we want to drop the rank, we want to maximize the number of eigenvalues 1 by twisting and then take the middle convolution with respect the that maximizes . For each , we choose such that is maximal. We form of rank 1 such that , i.e., Then has larger than for any . We replace by the resulting twist and choose such that is maximal. In order to apply Katz's rank formula, we claim that any such is nontrivial. Assume this claim is true. By the rank formula and the Euler characteristic formula, we have So to see the rank actually drops, we just need to show that For this we need the cohomological rigidity The last term can be written as By the maximality of , we know the last term is at most So the cohomological rigidity implies that which implies what we wanted since where by definition. It remains to prove the claim. Assume that is trivial, then the same argument implies that But since is irreducible, we know that , a contradiction. ¡õ 04/15/2014 Since and are both reversible operations, Corollary 20 Given a tuple of monodromies (at each ), we can apply Katz's algorithem above to determine whether this tuple actually arises as local monodromies of an irreducible cohomologically rigid local system. This solves the Deligne-Simpson problem in the cohomologically rigid irreducible case. Example 29 Consider the Dwork family over , The group acts on each fiber (where the diagonal acts trivially). Consider the local system of rank , is not rigid but there exists a rigid local system on such that . is actually hypergeometric with local monodromies: 1. regular nilpotent at , 2. a pseudo relfection at 1, 3. where is a primitive root of unity. Exercise 1 Starting with the above local monodromies, apply Katz's algorithem to reduce to the rank 1 case. ## Local systems of type Let be an algebraically closed field of characteristic not (or 2). Theorem 23 • Fix . Let such that . Then there exists an irreducible cohomologically rigid local system of rank 7 with the local monodromies: 1. at : , 2. at : , 3. at : any of the following (determined by the conditions on and ), 1. , 2. , 3. , 4. , 5. . In each case, the monodromy group (the Zariski closure of the image of the monodromy representation) is . • Let be cohomologically rigid, ramified at and have monodromy group . Then is ramified at exactly two points of . Moreover, up to permuting , is conjugate to one of the local systems above. Remark 58 There are more local systems which are wildly ramified. Proof We start with the second part. Suppose is lisse exactly on , we want to know how big is. Since is cohomologically rigid, we have We look at the table of the centralizers of conjugacy classes of in (copied from Dettweiler-Reiter), we see the largest dimension is 29. So , hence . When , then the four centralizer adds to dimension 100. We look at the table again and there are only the following three cases 1. , 2. , 3. . We can rule out all these three cases due to the necessary criterion for irreducibility: . Namely, the Euler characteristic formula tells us that or For example, case is ruled out because then the Jordan form is by the table: if we twist by the character with local monodromies at the three finite points (and at ), we then get a new local system with local monodromies , which violates the above irreducible criterion. Other cases are similar. Now . Again the cohomological rigidity implies that the sum of dimensions of the three centralizers is 51. The table implies that the possibilities are 1. , 2. , 3. , 4. , 5. , 6. , 7. . The necessary criterion of irreducibility implies that . This excludes the cases a), d), e), g). Since the monodromy group is assumed to be all of (so far we only used the monodromy group is contained in ), the (14 dimensional) adjoint representation of is also irreducible (the adjoint representation is irreducible for general simple reductive groups). Applying the irreducibility criterion gives So the sum of dimensions of the three centralizers in is less than 14. This excludes the cases b), f). The corresponding centralizers for case c) must be . The local monodromies are • at , • at , • several possibilities at . The first part the follows by checking which of these possibilities can arise using Katz's algorithm. For example, take the case at the third point . Write for the rank 1 local system with local monodromies at and at . Twisting by we get , and . Then the rank formula together with the table tells us that the rank of the middle convolution is equal to , with local monodromies , , . Twisting by and take middle convolution, we obtain local monodromies , , (rank 5),... until we get down to , , (rank 2) and (rank 1). The last rank 1 local system actually exists since . Now running the algorithm reversely proves that the original local system also exists. The final thing to do is to prove the monodromy group is actually . First notice that our monodromy representation is orthogonal. The dual representation has the same local monodromies (up to -conjugacy); since is physically rigid, this implies that . Since the dimension 7 is odd, must be orthogonal. So maps into . We use the fact that an irreducible subgroup of lies inside an -conjugate of if and only if . So we need to show that , i.e., . By the Poincare duality, this is equivalent to . We compute the Euler characteristic, • At : the is the image of of in via the 6-th symmetric power Since is the same as the number of irreducible constituents of this -representation, we know . • At : the local monodromy is semisimple and it is easy to see that the dimension (either two 's or no 's). • At : the local monodromy is the image of of via . The number of irreducible constituents of is 13. So the Euler characteristic is . Hence . Therefore our lands inside . The fact (going back to Dynkin) is that an irreducible subgroup of containing a regular unipotent element must be either or . The rules out the possibility of . ¡õ Remark 59 One reason that the same realization of is harder: an irreducible subgroup of containing a regular unipotent element can be . Another special feature about is that these rank 7 rigid local systems are also rigid when viewed as -local systems (via the adjoint representation). 04/17/2014 Remark 60 We wrap up with group theoretic consideration of the conjugacy classes of . In particular, we explain why the three conjugacy classes constructed as local monodromies actually lies inside . Take a basis of simple roots . So (simply-connected and adjoint). Take the dual basis , so . The fundamental weights are and . Using fact that ( are fundamental weights), one can find , . The 7-dimension representation has weights (the nonzero weights form a single Weyl orbit, so it is quasi-minuscule). The semisimple case is easy: gives the torus . Taking gives exactly . is the regular nilpotent orbit (the unique maximal nilpotent orbit) in . In general does not necessarily take regular nilpotents to regular nilpotents, but this is the case for . In fact, to compute where the principle -triple in goes in , just compute the pairings (since for any simple root ). It follows that the composite is the 6-th symmetric power, hence maps to the regular nilpotent class. The last case is . To any subset , let be the corresponding parabolic . For , we have two parabolic subgroups . Each has Levi subgroup with semisimple part a . To compute the image of the regular nilpotent in , just to compute , which is for . So the composite is the map as desired. (Similar, it gives when .) ## Universal rigid local systems Let be an algebraically closed field, . Fix an order of quasi-unipotence () and fix a primitive -th root of unity. Let and be a cohomologically rigid local system, lisse on with eigenvalues contained in . We will show how to produce a local system over the arithmetic configuration space of points, whose geometric fibers over are cohomologically rigid objects in lisse away from and one of which gives the original . Definition 44 Let and fix a nonzero map . The configuration space over is defined to be ( of) The universal rigid local system will live on , i.e., Notice that one can specialize to via . Theorem 24 Fix (inducing ). 1. There exists a lisse -sheaf on , which, after specializing along , recovers . 2. Let . The restriction of to any geometric fiber is a cohomologically rigid object in the corresponding category (i.e., specialization preserves index of rigidity, tameness, irreducibility, and in some sense preserves the local monodromies as ). 3. is pure of some integer weight. The characteristic polynomials of the Frobenius (when specializing to a finite field) lies in . 4. For any other prime and . There exists a lisse -sheaf on satisfying for any , the characteristic polynomials of on is equal to the characteristic polynomial of on in . In other words, we get a compatible system of -adic representations. Proof We will need to make sense of middle convolution in this relative setting to run Katz's algorithm. Admitting that there is a well-behaved notion of middle convolution on for some reasonable ring (e.g, ), we can induct on . When . Let be the local monodromy character, of order dividing ; so . Now we can spread this out by interpreting as associated to a Galois covering with Galois group . From the fixed map , we can identify , so we can view as a character of the Galois group of the covering In this way we obtain a lisse sheaf on . Then works. By Katz's algorithm, we can find of rank 1 lisse on and a nontrivial character such that has rank strictly smaller than . We can now again spread to as above and by induction we can spread out to . Now invoke the middle convolution with parameters is what we are looking for, where . In the next section, we will make sense of in a way that commutes with base change to and for specializes to the old notion of middle convolution. ¡õ 04/22/2014 ## Middle convolution with parameters Let be a normal domain, finite type over . Let be a divisor given by the equation , where are all distinct. Let be another divisor given by . Remark 61 For the application, , is union of -hyperplanes, given by ; , given by . The middle convolution is an operation where is given by . For the application, since . Definition 45 Define Then we have the following diagram where are the projections and is the difference morphism. We denote . Then the compactified projection is proper smooth. Definition 46 Let , , we define the middle convolution and the naive convolution Proposition 13 Assume either that everything is tame or has a generic point of characteristic 0. 1. and are lisse and tame. 2. Assume is pure of weight and is pure of weight . Then is mixed of weights ; is pure of weight . Even better, the middle convolution the top graded piece in the weight filtration of the naive convolution. 3. Assume or is geometrically irreducible and nonconstant. Write for short. Then unless . Proof (Ideas of the proof) a. Let us take . Then is the complement of hyperplanes in the and directions and the diagonal. Recall that is the sheaf associated to . Since the projection is trivialized with respect to the stratification , it can be computed as . It follows that the middle convolution is lisse. For a formal proof, see Katz, Sommes exponentielles, Section 4.7. b. Recall that if is an -sheaf on (a scheme of finite type over ), we say that is pure of weight if for any closed points , is pure of weight in the familiar sense. We say is mixed if it admits a filtration by subsheaves such that are all pure. Theorem 25 (Weil II) Suppose is a morphism of schemes of finite type over ( is invertible on ). If is mixed of weight on then is mixed of weight . It follows that the naive convolution is mixed of weight . The purity of the middle convolution follows from the analogous statement for curves over finite fields: Theorem 26 If is a smooth curve, lisse on pure of weight , then is pure of weight . This result is less surprising by noticing that (the source is mixed of weight and the target is mixed of weight ). To show the graded piece statement, we use the short exact sequence of sheaves which gives a long exact sequence The last term is zero since is punctual. Since is an open immersion, we know that has weights by 1.8.9 in Weil II. So we are done since -term is mixed of weight . c. We prove the ! version. We can check on geometric fibers and it suffices to show for . Notice that is lisse on and is the coinvariants under the local monodromy at . In our case, is unramified along , but is indeed ramified, and hence the coinvariants is 0. ¡õ Remark 62 We claim that Definition 46 does recover the old definition of the middle convolution (Definition 41) when is an algebraically closed field. This finalizes the description of the middle convolution algorithm in the universal context: it produces local systems on that specializes to cohomologically rigid tame irreducible local systems on for algebraically closed. Notice by definition where and . Since one can replace by (Example 26) and hence the claim follows from This is a special case of the following theorem (take , , ). Theorem 27 Suppose is affine open, is separated finite type of an algebraically closed field . is proper and is finite. Suppose such that and are perverse. Then in . Proof We have two exact sequences in , The kernel and cokernel are supported on , applying , we obtain two distinguished triangle on , four out of the six terms are perverse by assumption. One can check that is perverse by taking the long exact sequence on cohomology, hence the two distinguished triangles are indeed short exact sequences of perverse sheaves. Splicing together we obtain the result. ¡õ Remark 63 Our next goal is to show that these universal cohomologically rigid local systems have the following geometric realization: there exists a smooth family with a finite group action and an idempotent such that Moreover, for any , .
# Solve the following : Question: An aero plane has to go from point $A$ to another point $B, 500 \mathrm{~km}$ away due to $30^{\circ}$ east of north. A wind is blowing due north at a seed of $20 \mathrm{~m} / \mathrm{s}$. The air speed of the plane is $150 \mathrm{~m} / \mathrm{s}$. (a) Find the direction in which the pilot should head the plane to reach the point $B$. (b) Find the time taken by the plane to gr from $A$ to $B$. Solution: (a) In $\triangle \mathrm{ACB}$ Using sin formula $\frac{20}{\sin \phi}=\frac{150}{\sin 30^{\circ}}$ $\sin \emptyset=\frac{1}{15}$ $\emptyset=\sin ^{-1}\left(\frac{1}{15}\right)$ east of the line $A B$ (b) $\emptyset=3^{\circ} 48^{\prime}$. Angle between two vector $=30+3^{\circ} 48^{\prime}$ $R=\sqrt{A^{2}+B^{2}+2 A B \cos \theta}$ $\mathrm{R}=167 \mathrm{~m} / \mathrm{s}$ Time $=\frac{\text { distance }}{\text { speed }}=\frac{500 \times 10^{3}}{167}$ $=2994 \mathrm{sec}$ $\mathrm{T}=60 \approx 50 \mathrm{~min}$
doc: basis_cst Constant mean function Syntax B = basis_cst(X) Arguments • X matrix (n, d) where n is the number of data points and d is the dimension Outputs • B vector (n, 1) of ones
# Expected number of ratio of girls vs boys birth I have came across a question in job interview aptitude test for critical thinking. It is goes something like this: The Zorganian Republic has some very strange customs. Couples only wish to have female children as only females can inherit the family's wealth, so if they have a male child they keep having more children until they have a girl. If they have a girl, they stop having children. What is the ratio of girls to boys in Zorgania? I don't agree with the model answer given by the question writer, which is about 1:1. The justification was any birth will always have a 50% chance of being male or female. Can you convince me with a more mathematical vigorous answer of $\text{E}[G]:\text{E}[B]$ if $G$ is the number of girls and B is the number of boys in the country? - You are correct in your disagreement with the model answer because the M:F ratio of births is different from the M:F ratio of children. In real human societies, couples who wish to only have female children will likely resort to means like infanticide or foreign adoption to get rid of male children, resulting in a M:F ratio less than 1:1. –  Gabe Apr 16 '14 at 6:07 @Gabe There is no mention of infanticide in the question, it is a mathematical excercise as opposed to a gritty analysis of a real country where murder is common place. Equally the real ratio of births of boys to girls is closer to 51:49 (ignoring social factors) –  Richard Tingle Apr 16 '14 at 9:38 @MobiusPizza: No, the ratio is 1:1 no matter how many children you have! The reason China has a different ratio is due to social factors like infanticide, sex-selective abortion, and foreign adoption. –  Gabe Apr 16 '14 at 21:57 @MobiusPizza See my answer's sections 2 and 3; any rule you can come up with will still lead to a 1:1 ratio (unless you start killing babies - which sadly happens in the real world) –  Richard Tingle Apr 17 '14 at 7:50 @newmount Simulations are good, but they mean only as much as the assumptions built into them. Displaying only the code, without any explanation, makes it difficult for people to identify those assumptions. In the absence of some such justification and explanation, no amount of simulation output will address the question here. As far as the "real world" goes, anyone making that claim will have to support it with data about human births. –  whuber Apr 17 '14 at 14:49 repeat step { Every couple who is still having children has a child. Half the couples have males and half the couples have females. Those couples that have females stop having children } At each step you get an even number of males and females and the number of couples having children reduces by half (ie those that had females won't have any children in the next step) So, at any given time you have an equal number of males and females and from step to step the number of couples having children is falling by half. As more couples are created the same situation reoccurs and all other things being equal, the population will contain the same number of male and females - I think this is an excellent way of explaining the probability distribution without relying on a rigorous mathematical proof. –  LBushkin Apr 16 '14 at 2:55 What I like is that this also explains what happened to the excess girls your intuition expects: The excess girls are desired by the parents (they are the parents who try again), but those parents (on the whole) never successfully create an excess of girls. –  Ben Jackson Apr 19 '14 at 17:47 You could simplify even further by saying "repeat step { someone decides whether or not to have a child }". The rules by which they decide are completely irrelevant provided that everybody produces boys and girls independently with the same probability. It's not even necessary to assume a value for that probability, you could just say the frequency in the population will be the same as the frequency at birth. –  Steve Jessop Apr 19 '14 at 17:52 Why the downvote? If there's an issue with the answer it'd be helpful to point it out - I can't see one –  martino Oct 10 '14 at 11:43 Let $X$ be the number of boys in a family. As soon as they have a girl, they stop, so \begin{array}{| l |l | } \hline X=0 & \mbox{if the first child was a girl}\\ X=1 & \mbox{if the first child was a boy and the second was a girl}\\ X=2 & \mbox{if the first two children were boys and the third was a girl}\\ \mbox{and so on...} &\\ \hline \end{array} If $p$ is the probability that a child is a boy and if genders are independent between children, the probability that a family ends up having $k$ boys is $$\mbox{P}(X=k)=p^{k}\cdot (1-p),$$ i.e. the probability of having $k$ boys and then having a girl. The expected number of boys is $$\mbox{E}X=\sum_{k=0}^\infty kp^{k}\cdot (1-p)=\sum_{k=0}^\infty kp^k-\sum_{k=0}^\infty kp^{k+1}.$$ Noting that $$\sum_{k=0}^\infty kp^k=\sum_{k=0}^\infty (k+1)p^{k+1}$$ we get $$\sum_{k=0}^\infty kp^k-\sum_{k=0}^\infty kp^{k+1}=\sum_{k=0}^\infty (k+1)p^{k+1}-\sum_{k=0}^\infty kp^{k+1}=\sum_{k=0}^\infty p^{k+1}=p\sum_{k=0}^\infty p^{k}=\frac{p}{1-p}$$ where we used that $\sum_{k=0}^\infty p^{k}=1/(1-p)$ when $0<p<1$. If $p=1/2$, we have that $\mbox{E}X=0.5/0.5$. That is, the average family has 1 boy. We already know that all families have 1 girl, so the ratio will over time even out to be $1/1=1$. The random variable $X$ is known as a geometric random variable. - This, of course, assumes that p is the same for all families. If instead we assume that some couples are more likely to have boys than others (i.e., their p is higher) then the result changes, even if the average value of p is still 0.5. (Still, this is an excellent explanation of the basic underlying statistics.) –  Ben Hocking Apr 16 '14 at 11:47 @Ben Your comment contains a key idea. The same thing had occurred to me, so I have edited my question to include an analysis of this more realistic situation. It shows that the limiting ratio is not necessarily 1:1. –  whuber Apr 16 '14 at 14:27 @BenHocking Indeed! And as we know from both modern statistics and Laplace's classic analysis of birth ratios, $p$ is not really equal to $1/2$ anyway. :) –  MånsT Apr 16 '14 at 16:21 ### Summary The simple model that all births independently have a 50% chance of being girls is unrealistic and, as it turns out, exceptional. As soon as we consider the consequences of variation in outcomes among the population, the answer is that the girl:boy ratio can be any value not exceeding 1:1. (In reality it likely still would be close to 1:1, but that's a matter for data analysis to determine.) Because these two conflicting answers are both obtained by assuming statistical independence of birth outcomes, an appeal to independence is an insufficient explanation. Thus it appears that variation (in the chances of female births) is the key idea behind the paradox. ### Introduction A paradox occurs when we think we have good reasons to believe something but are confronted with a solid-looking argument to the contrary. A satisfactory resolution to a paradox helps us understand both what was right and what may have been wrong about both arguments. As is often the case in probability and statistics, both arguments can actually be valid: the resolution will hinge on differences among assumptions that are implicitly made. Comparing these different assumptions can help us identify which aspects of the situation lead to different answers. Identifying these aspects, I maintain, is what we should value the most. ### Assumptions As evidenced by all the answers posted so far, it is natural to assume that female births occur independently and with constant probabilities of $1/2$. It is well known that neither assumption is actually true, but it would seem that slight deviations from these assumptions should not affect the answer much. Let us see. To this end, consider the following more general and more realistic model: 1. In each family $i$ the probability of a female birth is a constant $p_i$, regardless of birth order. 2. In the absence of any stopping rule, the expected number of female births in the population should be close to the expected number of male births. 3. All birth outcomes are (statistically) independent. This is still not a fully realistic model of human births, in which the $p_i$ may vary with the age of the parents (particularly the mother). However, it is sufficiently realistic and flexible to provide a satisfactory resolution of the paradox that will apply even to more general models. ### Analysis Although it is interesting to conduct a thorough analysis of this model, the main points become apparent even when a specific, simple (but somewhat extreme) version is considered. Suppose the population has $2N$ families. In half of these the chance of a female birth is $2/3$ and in the other half the chance of a female birth is $1/3$. This clearly satisfies condition (2): the expected numbers of female and male births are the same. Consider those first $N$ families. Let us reason in terms of expectations, understanding that actual outcomes will be random and therefore will vary a little from the expectations. (The idea behind the following analysis was conveyed more briefly and simply in the original answer which appears at the very end of this post.) Let $f(N,p)$ be the expected number of female births in a population of $N$ with constant female birth probability $p$. Obviously this is proportional to $N$ and so can be written $f(N,p) = f(p)N$. Similarly, let $m(p)N$ be the expected number of male births. • The first $pN$ families produce a girl and stop. The other $(1-p)N$ families produce a boy and continue bearing children. That's $pN$ girls and $(1-p)N$ boys so far. • The remaining $(1-p)N$ families are in the same position as before: the independence assumption (3) implies that what they experience in the future is not affected by the fact their firstborn was a son. Thus, these families will produce $f(p)[(1-p)N]$ more girls and $m(p)[(1-p)N]$ more boys. Adding up the total girls and total boys and comparing to their assumed values of $f(p)N$ and $m(p)N$ gives equations $$f(p)N = pN + f(p)(1-p)N\ \text{ and }\ m(p)N = (1-p)N + m(p)(1-p)N$$ with solutions $$f(p) = 1\ \text{ and }\ m(p) = \frac{1}{p}-1.$$ The expected number of girls in the first $N$ families, with $p=2/3$, therefore is $f(2/3)N = N$ and the expected number of boys is $m(2/3)N = N/2$. The expected number of girls in the second $N$ families, with $p=1/3$, therefore is $f(1/3)N = N$ and the expected number of boys is $m(1/3)N = 2N$. The totals are $(1+1)N = 2N$ girls and $(1/2+2)N = (5/2)N$ boys. For large $N$ the expected ratio will be close to the ratio of the expectations, $$\mathbb{E}\left(\frac{\text{# girls}}{\text{# boys}}\right) \approx \frac{2N}{(5/2)N} = \frac{4}{5}.$$ The stopping rule favors boys! More generally, with half the families bearing girls independently with probability $p$ and the other half bearing boys independently with probability $1-p$, conditions (1) through (3) continue to apply and the expected ratio for large $N$ approaches $$\frac{2p(1-p)}{1 - 2p(1-p)}.$$ Depending on $p$, which of course lies between $0$ and $1$, this value can be anywhere between $0$ and $1$ (but never any larger than $1$). It attains its maximum of $1$ only when $p=1/2$. In other words, an expected girl:boy ratio of 1:1 is a special exception to the more general and realistic rule that stopping with the first girl favors more boys in the population. ### Resolution If your intuition is that stopping with the first girl ought to produce more boys in the population, then you are correct, as this example shows. In order to be correct all you need is that the probability of giving birth to a girl varies (even by just a little) among the families. The "official" answer, that the ratio should be close to 1:1, requires several unrealistic assumptions and is sensitive to them: it supposes there can be no variation among families and all births must be independent. The key idea highlighted by this analysis is that variation within the population has important consequences. Independence of births--although it is a simplifying assumption used for every analysis in this thread--does not resolve the paradox, because (depending on the other assumptions) it is consistent both with the official answer and its opposite. Note, however, that for the expected ratio to depart substantially from 1:1, we need a lot of variation among the $p_i$ in the population. If all the $p_i$ are, say, between 0.45 and 0.55, then the effects of this variation will not be very noticeable. Addressing this question of what the $p_i$ really are in a human population requires a fairly large and accurate dataset. One might use a generalized linear mixed model and test for overdispersion. If we replace gender by some other genetic expression, then we obtain a simple statistical explanation of natural selection: a rule that differentially limits the number of offspring based on their genetic makeup can systematically alter the proportions of those genes in the next generation. When the gene is not sex-linked, even a small effect will be multiplicatively propagated through successive generations and can rapidly become greatly magnified. Each child has a birth order: firstborn, second born, and so on. Assuming equal probabilities of male and female births and no correlations among the genders, the Weak Law of Large Numbers asserts there will be close to a 1:1 ratio of firstborn females to males. For the same reason there will be close to a 1:1 ratio of second born females to males, and so on. Because these ratios are constantly 1:1, the overall ratio must be 1:1 as well, regardless of what the relative frequencies of birth orders turn out to be in the population. - Interesting; this seems to be because although no rule can change the ratio from the natural ratio it can change the number of resulting children and that number of children is dependent on the natural ratio. So in your example you have two populations of parents and they are affected differently. (That said this this feels like a situation outside the scope of the implied fictional country which is more of a mathematical exercise) –  Richard Tingle Apr 17 '14 at 11:14 @Richard It might feel like that only because, for the sake of exposition, I have oversimplified. In reality one would model the population with a distribution of $p_i$ having a mean of $1/2$. Unless the variance of that distribution is zero, the same analysis implies the same conclusions, including that the expected girl:boy ratio will be strictly less than $1$. This shows that the popular conclusion (that the ratio must be 1:1) depends crucially on the no-variation assumption. I won't apologize for using mathematics to reason about this, which does not diminish the interest of the result. –  whuber Apr 17 '14 at 14:45 nor should you apologise, this is a very interesting result (I did actually think wow when I read it). I would just prefer it in the form "Original result", "More realistic situation". The way its written it feels like cheating (which is unfair because as i say it's very interesting) because I could just as easily say "Well obviously it's not 1:1 because male births are more common" (I believe due to our historical tenancies to die in armed conflict) –  Richard Tingle Apr 17 '14 at 14:54 @Richard That's a good point. I refrained from discussing more realistic versions of the question, such as changing the mean of the $p_i$ to about $0.51$ (which is unrelated to armed combat, by the way: it has a biological explanation), because the post was over-long as it is and it should be clear how to generalize its methods to that case. I would prefer to keep the focus on resolving the paradox, which is finding a natural (but perhaps overlooked) mechanism that clarifies and explains the apparent conflict among multiple seemingly-valid answers. –  whuber Apr 17 '14 at 15:00 @whuber Thanks for the informative answer. I do not understand why in your calculation you split the population into 2 families with different probability of giving birth to girls though. According to point 1 of your model assumption, the p_i should be the same for all families. So, why did you split the population into 2 kind of families? –  Mobius Pizza 2 days ago The birth of each child is an independent event with P=0.5 for a boy and P=0.5 for a girl. The other details (such as the family decisions) only distract you from this fact. The answer, then, is that the ratio is 1:1. To expound on this: imagine that instead of having children, you're flipping a fair coin (P(heads)=0.5) until you get a "heads". Let's say Family A flips the coin and gets the sequence of [tails, tails, heads]. Then Family B flips the coin and gets a tails. Now, what's the probability that the next will be heads? Still 0.5, because that's what independent means. If you were to do this with 1000 families (which means 1000 heads came up), the expected total number of tails is 1000, because each flip (event) was completely independent. Some things are not independent, such as the sequence within a family: the probability of the sequence [heads, heads] is 0, not equal to [tails, tails] (0.25). But since the question isn't asking about this, it's irrelevant. - As stated, this is incorrect. If the genders were unconditionally independent, in the long run there would be as many girl-girl sequences in births among the families as there are boy-boy-sequences. There are many of the latter and never any of the former. There is a form of independence, but it is conditional on birth order. –  whuber Apr 15 '14 at 16:39 @whuber We're not asked how many girl-girl sequences there are. Only the ratio of girls to boys. I did not state that the sequence of births by an individual mother is a series of independent events, like coin flips. Only that each birth, individually, is an independent event. –  Tim S. Apr 15 '14 at 16:42 You will need to be much clearer about that. I mentioned the sequences to demonstrate the lack of independence, so the burden is on you to state exactly in what rigorous sense "independence" applies here. –  whuber Apr 15 '14 at 16:47 @whuber The events are independent in the same way coin flips are. I've expounded on this in my answer. –  Tim S. Apr 15 '14 at 19:29 @whuber the girl-girl sequences turn up if you put all births in a line; after one couple finishs the next go in etc etc –  Richard Tingle Apr 15 '14 at 20:06 ## Couples with exactly one girl and no boys are the most common The reason this all works out is because the probability of the one scenario in which there are more girls is much larger than the scenarios where there are more boys. And the scenarios where there are lots more boys have very low probabilities. The specific way it works itself out is illustrated below NumberOfChilden Probability Girls Boys 1 0.5 1 0 2 0.25 1 1 3 0.125 1 2 4 0.0625 1 3 ... ... ... ... NumberOfChilden Probability Girls*probabilty Boys*probabilty 1 0.5 0.5 0 2 0.25 0.25 0.25 3 0.125 0.125 0.25 4 0.0625 0.0625 0.1875 5 0.03125 0.03125 0.125 ... ... ... ... n 1/2^n 1/(2^n) (n-1)/(2^n) You can pretty much see where this is going at this point, the total of the girls and boys are both going to add up to one. Expected girls from one couple=$\sum_{n=1}^\infty(\frac{1}{2^n})=1$ Expected boys from one couple=$\sum_{n=1}^\infty(\frac{n-1}{n^2})=1$ Limit solutions from wolfram ## Any birth, whatever family is it in has a 50:50 chance of being a boy or a girl This all makes intrinsic sense because (try as couples might) you can't control the probability of a specific birth being a boy or a girl. It doesn't matter whether a child is born to a couple with no children or a family of a hundred boys; the chance is 50:50 so if each individual birth has a 50:50 chance then you should always get half boys and half girls. And it doesn't matter how you shuffle the births between families; you're not going to affect that. ## This works for any1 rule For due to the 50:50 chance for any birth the ratio will end up as 1:1 for any (reasonable1) rule you can come up with. For example the similar rule below also works out even Couples stop having children when they have a girl, or have two children NumberOfChilden Probability Girls Boys 1 0.5 1 0 2 0.25 1 1 2 0.25 0 2 In this case the total expected children is more easily calculated Expected girls from one couple=$0.5\cdot1 + 0.25\cdot1 =0.75$ Expected boys from one couple=$0.25\cdot1 + 0.25\cdot2 =0.75$ 1As I said this works for any reasonable rule that could exist in the real world. An unreasonable rule would be one in which the expected children per couple was infinite. For example "Parents only stop having children when they have twice as many boys as girls", we can use the same techniques as above to show that this rule gives infinite children: NumberOfChilden Probability Girls Boys 3 0.125 1 2 6 1/64 2 4 9 1/512 3 6 3*m 1/((3m)^2 m 2m We can then find the number of parents with a finite number of children Expected number of parents with finite children=$\sum_{m=1}^\infty(\frac{1}{1/(3m)^2})=\frac{\pi^2}{54}=0.18277....$ Limit solutions from wolfram So from that we can establish that 82% of parents would have an infinite number of children; from a town planning point of view this would probably cause difficulties and shows that this condition couldn't exist in the real world. - That the births are not independent is evident by examining sequences of births: the sequence girl-girl never appears while boy-boy sequences occur often. –  whuber Apr 15 '14 at 16:45 @whuber I see your point (although arguably it is the decision to have a child at all that is dependant, rather than the outcome of the event itself) possibly it would be better to say "a future birth's probability of being a boy is independant from all past births" –  Richard Tingle Apr 15 '14 at 16:48 Yes, I think there is a way to rescue the use of independence here. But this gets--I think--to the heart of the matter, so it seems that to honor the OP's request for a "vigorous" (rigorous?) demonstration some careful reasoning about this issue is needed. –  whuber Apr 15 '14 at 16:50 @whuber To be honest that first paragraph is the handwavey bit, the further paragraphs (and specifically the limits) are supposed to be the rigourous bit –  Richard Tingle Apr 15 '14 at 16:51 No argument there--but the latter material has already been covered in the same way in answers at stats.stackexchange.com/a/93833, stats.stackexchange.com/a/93835, and stats.stackexchange.com/a/93841. –  whuber Apr 15 '14 at 16:53 Imagine tossing a fair coin until you observe a head. How many tails do you toss? $P(0 \text{ tails}) = \frac{1}{2}, P(1 \text{ tail}) = (\frac{1}{2})^2, P(2 \text{ tails}) = (\frac{1}{2})^3, ...$ The expected number of tails is easily calculated* to be 1. The number of heads is always 1. * if this is not clear to you, see 'outline of proof' here - You can also use simulation: p<-0 for (i in 1:10000){ a<-0 while(a != 1){ #Stops when having a girl a<-as.numeric(rbinom(1, 1, 0.5)) #Simulation of a new birth with probability 0.5 p=p+1 #Number of births } } (p-10000)/10000 #Ratio - Simulation results are good in that they can give us some comfort we haven't made a serious mistake in a mathematical derivation, but they are far from the rigorous demonstration requested. In particular, when rare events that contribute a lot to an expectation can occur (such as a family with 20 boys before a girl appears--which is highly unlikely to emerge in a simulation of just 10,000 families), then simulations can be unstable or even just wrong, no matter how long they are iterated. –  whuber Apr 15 '14 at 16:44 Recognizing the geometric distribution of # of boys in the family is the key step to this problem. Try: mean(rgeom(10000, 0.5)) –  AdamO Apr 15 '14 at 19:53 Mapping this out helped me better see how the ratio of the birth population (assumed to be 1:1) and the ratio of the population of children would both be 1:1. While some families would have multiple boys but only one girl, which initially led me to think there would be more boys than girls, the number of those families would not be greater than 50% and would diminish by half with each additional child, while the number of one-girl-only families would be 50%. The number of boys and girls would thus balance each other out. See the totals of 175 at the bottom. - What you got was the simplest, and a correct answer. If the probability of a newborn child being a boy is p, and children of the wrong gender are not met by unfortunate accidents, then it doesn't matter if the parents make decisions about having more children based on the gender of the child. If the number of children is N and N is large, you can expect about p * N boys. There is no need for a more complicated calculation. There are certainly other questions, like "what is the probability that the youngest child of a family with children is a boy", or "what is the probability that the oldest child of a family with children is a boy". (One of these has a simple correct answer, the other has a simple wrong answer and getting a correct answer is tricky). - Let $\text{$\Omega$={(G),(B,G),(B,B,G),$\dots$}}$ be the sample space and let $\text{X:$\Omega\longrightarrow\mathbb{R}$;$\omega\mapsto\vert\omega\vert$-1}$ be the random variable that maps each outcome, $\omega$, onto the number of boys it involves. The expected value of boys, $\text{E(X)}$, comes then down to $\text{E(X)=$\sum_{n=1}^\infty(\text{n-1})\cdot0.5^n$=1}$, Trivially, the expected value of girls is 1. So the ratio is 1, too. - It's a trick question. The ratio stays the same (1:1). The right answer is that it does not affect birth ratio, but it does affect the number of children per family with a limiting factor of an average of 2 births per family. This is the kind of question you might find on a logic test. The answer is not about birth ratio. That's a distraction. This is not a probability question, but a cognitive reasoning question. Even if you answered 1:1 ratio, you still failed the test. - I have recently edited my answer to show that the solution is not necessarily 1:1, which explicitly controverts your assertions. –  whuber Apr 16 '14 at 14:24 I read your answer. You have introduced a predicate that is not stated in the problem (variance in birth rate of females). There is nothing in the problem that asserts Zorganian Republic is representative of the human population or even humans. –  Andrew - OpenGeoCode Apr 16 '14 at 17:30 That is correct--but there equally well is nothing that justifies the oversimplified assumption that all birth probabilities are the same. Assumptions have to be made in order to provide an objective, defensible answer so at a minimum a good answer will be explicit about the assumptions it makes and provide support for those assumptions. Claiming "this is not a probability question" does not address the issues, but overlooks them entirely. –  whuber Apr 16 '14 at 17:45 @whuber - The birth ratio in this problem is an invariant. The variant in the problem is the number of births per family. The question is a distraction, it is not part of the problem. <br/> Lateral thinking, is the ability to think creatively, or “outside the box” as it is sometimes referred to in business, to use your inspiration and imagination to solve problems by looking at them from unexpected perspectives. Lateral thinking involves discarding the obvious, leaving behind traditional modes of thought, and throwing away preconceptions. [fyi> I am a principal scientist in Lab] –  Andrew - OpenGeoCode Apr 16 '14 at 19:11 You may, then, have overlooked a key point in my answer: its assumptions also keep the population-averaged chance of a female birth invariant at 1:1 (in a specific way that I hope was clearly described). I would maintain there is substantial "lateral thinking" involved in any resolution of a paradox in which assumptions are critically examined: it requires imagination and good analytical skills to see that one is making assumptions in the first place. Dismissing any question outright as a mere "trick," as you do here, would seem antithetical to promoting or celebrating such thinking. –  whuber Apr 16 '14 at 19:24 Let the random variable denoting the $i^{th}$ child in the country be $X_i$ taking on values 1 and 0 if the child is a boy or girl respectively. Assume that the marginal probability that each birth is a boy or girl is $0.5$. The expected number of boys in the country = $E[\sum_i X_i] = \sum_i E[X_i] = 0.5 n$ (where $n$ is the number of children in the country.) Similarly the expected number of girls = $E[\sum_i (1- X_i)] = \sum_i E[1-X_i] = 0.5 n$. The independence of the births is irrelevant for the calculation of expected values. Apropos @whuber's answer, if there is a variation of the marginal probability across families, the ratio becomes skewed towards boys, due to there being more children in families with higher probability of boys than families with a lower probability, thereby having an augmentative effect of the expected value sum for the boys. -
# Random Sequence : Definition of [closed] "A sequence of bits is random if there exists no Program shorter than it which can produce the same sequence." ~ Kolmogorov Q: How do the digits of Pi fall as a random sequence based on the above definition - Isn't it immediate that the digits of pi are not random? We can write a finite program to compute them. Or am I missing something? –  rghthndsd Jun 8 at 17:26 Which digits of pi? Kolmogorov talks about finite sequences –  Goldstern Jun 8 at 17:31 Please see Wikipedia on Randomness : "Randomness occurs in numbers such as log (2) and pi". Thats what i want to clear –  ARi Jun 8 at 17:32 There are many different notions of randomness. The digits of pi may satisfy some (such as "Uniform distribution"/"normality"), and fail some others. –  Goldstern Jun 8 at 17:39 So my question in generic form is : For any program/ Turing machine which generates a symbol from a finite alphabet of symbols in a finite time , and keeps doing so ad infimitum.. How can one say that sequence of symbols so generated is random Also does it make a difference if i know the Program –  ARi Jun 8 at 17:47 ## closed as off topic by Goldstern, Steven Landsburg, Benoît Kloeckner, Bruce Westbury, Douglas ZareJun 8 at 18:58 Questions on MathOverflow are expected to relate to research level mathematics within the scope defined by the community. Consider editing the question or leaving comments for improvement if you believe the question can be reworded to fit within the scope. Read more about reopening questions here.If this question can be reworded to fit the rules in the help center, please edit the question. If you were buying a random sequence from a specialized firm for you poker website, and if you were handed the first 1,000,000 digits of $\pi$, you would be entitled to go ask for a refund. Indeed, gifted players on your website could figure out the pattern and use it to win games. This intuition is formalized by the fact that arbitrarily long sequences of digits of $\pi$ can be produced by a small algorithm, so it is the opposite of random. Even if you change a few of them from time to time, it is still not very random, as you can use a program for $\pi$ and just specify a few exceptions in your program. So in short, $\pi$ is not considered a random number according to Kolmogorov complexity, and that is good. Now it is true that the digits of $\pi$ and log(2) verify some necessary conditions to be random, regarding the distribution of their digits. But these conditions are not sufficient, and Kolmogorov complexity allows us to distinguish between what seems random (like $\pi$) and what really is. EDIT: More precisions Ok I will try to clarify this a bit more, after questions in the comments. First of all, you can consider finite or infinite sequences. Randomness is only defined for infinite ones, and we rather talk of complexity for finite sequences, i.e. what is the minimal size of a program generating this sequence. The important difference is that for infinite sequences, either a program exists (and it is not random) or no program exists (and it is random). So randomness is a yes/no question. Notice that almost all infinite sequences are random, as there are countably many programs and uncountably many infinite sequences. For finite sequences, a program always exists, but it can be as long as the sequence (and then the sequence is complex), or significantly shorter (then the sequence is simpler). So complexity of finite sequences is a quantitative question. The link between the two is that a sequence is random if and only if the complexities or its successives prefixes are "maximal" in a precise sense (which is up to some additive constant). You can read more precise statements here. The beauty of this notion is that like in Church-Turing thesis, several very different definitions boil down to the same notion of randomness. So if as you say we are given just the beginning of $\pi$ up to the $k^{th}$ digit, we can just talk how it is more or less complex. In fact for $\pi$ it won't be too complex, because you can fix your program computing $\pi$ once and for all, and just change the digit you want to stop at (which takes only a logarithmic space). And once again, this is because $\pi$ is not random. Now the last important point that was already emphasized by Andreas Blass in the comments, but it does not hurt to repeat, is the following: all these definitions are mathematical, so they are completely independent of our current knowledge. Either $\pi$ (as an infinite sequence rigorously defined) is random or it is not, no matter whether we already found an algorithm for it or not. Finding an algorithm is a proof for its non-randomness, but it does not change the fact that this algorithm existed, and $\pi$ IS a non-random sequence, it does not BECOME one when we find an algorithm for it. - I don't think it's known that the digits of $\pi$ satisfy even very rudimentary necessary conditions for randomness. It seems, though, to be widely believed that $\pi$ is normal, i.e., that every finite sequence of digits occurs in $\pi$ with the same asymptotic frequency that it would have in a genuinely random number. –  Andreas Blass Jun 8 at 19:35 Yes most properties on the distributions of digits of $\pi$ are still open, so the picture is even worse. –  D K Jun 8 at 20:03 So, let me understand this, digits in Pi can not be taken as a random sequence ? –  ARi Jun 9 at 6:01 Yes that's the main point... –  D K Jun 9 at 12:08 Thanks a ton, this does clarify the issue –  ARi Jun 11 at 16:45
binarray.h 2.14 KB thomas.forbriger committed Dec 06, 2002 1 /*! \file binarray.h thomas.forbriger committed Dec 20, 2002 2 * \brief just arrays class definitions (for binary libraray contents) thomas.forbriger committed Dec 06, 2002 3 4 5 * * ---------------------------------------------------------------------------- * thomas.forbriger committed Mar 28, 2006 6 * $Id: binarray.h,v 1.5 2006-03-28 16:03:02 tforb Exp$ thomas.forbriger committed Dec 06, 2002 7 * \author Thomas Forbriger thomas.forbriger committed Dec 08, 2002 8 * \since 08/12/2002 thomas.forbriger committed Dec 06, 2002 9 * thomas.forbriger committed Dec 20, 2002 10 11 * just arrays class definitions (for binary libraray contents) * thomas.forbriger committed Mar 28, 2006 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 * ---- * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA * ---- * thomas.forbriger committed Dec 20, 2002 28 29 * This file loads explicitely instantiated template code (compiled into * libaff.a) that is presented in namespace aff::prebuilt. thomas.forbriger committed Dec 06, 2002 30 * thomas.forbriger committed Dec 20, 2002 31 * \sa \ref sec_design_binary thomas.forbriger committed Dec 08, 2002 32 * \sa aff::Array thomas.forbriger committed Dec 06, 2002 33 34 35 36 * * Copyright (c) 2002 by Thomas Forbriger (IMG Frankfurt) * * REVISIONS and CHANGES thomas.forbriger committed Dec 08, 2002 37 * - 08/12/2002 V1.0 copied from libcontxx thomas.forbriger committed Dec 06, 2002 38 39 40 41 42 * * ============================================================================ */ // include guard thomas.forbriger committed Dec 08, 2002 43 #ifndef AFF_BINARRAY_H_VERSION thomas.forbriger committed Dec 06, 2002 44 thomas.forbriger committed Dec 08, 2002 45 #define AFF_BINARRAY_H_VERSION \ thomas.forbriger committed Dec 20, 2002 46 "AFF_BINARRAY_H V1.0" thomas.forbriger committed Dec 08, 2002 47 #define AFF_BINARRAY_H_CVSID \ thomas.forbriger committed Mar 28, 2006 48 "$Id: binarray.h,v 1.5 2006-03-28 16:03:02 tforb Exp$" thomas.forbriger committed Dec 06, 2002 49 thomas.forbriger committed Dec 08, 2002 50 51 #ifndef AFF_COMPILING_LIBRARY #define AFF_NO_DEFINITIONS thomas.forbriger committed Dec 06, 2002 52 53 #endif thomas.forbriger committed Dec 08, 2002 54 /*! \def AFF_PREBUILT thomas.forbriger committed Dec 06, 2002 55 * thomas.forbriger committed Dec 20, 2002 56 57 58 * This preprocessor macro is set in aff/binarray.h. * It requests to place all declarations in aff::prebuilt to match the * contents of the \ref sec_design_binary "precompiled binary library". thomas.forbriger committed Dec 06, 2002 59 60 */ #ifndef DOXYGEN_MUST_SKIP_THIS thomas.forbriger committed Dec 08, 2002 61 #define AFF_PREBUILT thomas.forbriger committed Dec 06, 2002 62 63 #endif // DOXYGEN_MUST_SKIP_THIS thomas.forbriger committed Dec 08, 2002 64 #include thomas.forbriger committed Dec 06, 2002 65 thomas.forbriger committed Dec 08, 2002 66 #endif // AFF_BINARRAY_H_VERSION (includeguard) thomas.forbriger committed Dec 06, 2002 67 68 /* ----- END OF binarray.h ----- */
How do I put a plane on a sphere Recommended Posts Hawkblood    1018 I need some math help. The image may help you understand what I want. Basically, I want to be able to show a plane anywhere on the surface of a sphere. I have tried some methods, but I can't seem to get them to come out right. It is probably simple, but I can't get my head around it. The orientation of the plane needs to be such that "up" is always pointing toward the "north pole" of the sphere. Any ideas? Share on other sites EWClay    659 What do you want, a plane equation or a set of axes to position an object? Planes don't have an up vector, they have a normal and a distance from the origin. The normal is the normalised vector from the contact point to the centre of the sphere. The distance is the radius of the sphere minus the sphere's centre position dotted with the normal. If you want a set of axes, you have the plane normal, as described above, and the up vector. Call the plane normal z, cross with the up vector to get x, then cross z and x to get y. Edit: many matrix libraries have a 'Look at" function which would do the same thing. Share on other sites JTippetts    12970 In addition to EWClay's questions, I'd like to ask: are you trying to project the plane onto the sphere so that it wraps around, or are you just "attaching" the plane to the sphere as in your illustration? Share on other sites Nercury    812 If you make plane's pivot point the same distance from plane as sphere radius, then no matter how you rotate the plane it will always be on the sphere. If you need "up" stay the same, just don't roll the plane. Edited by Nercury Share on other sites Hawkblood    1018 Thank you all for posting. The ultimate goal is for this plane to actually be a terrain area on a planet. It won't be an actual plane, but that explaination got me the answers I needed. D3DXVECTOR3 v2(0,0,1); D3DXVec3TransformCoord(&v2,&v2,&SolarSystem.SolarObject[os].Objects[on].RotMat);//this is the planet's rotation matrix (its spin) float flong=atan2(v2.x,v2.z)+D3DX_PI/2.0f; if (flong>D3DX_PI*2.0f) flong-=D3DX_PI*2.0f; if (flong<0.0F) flong+=D3DX_PI*2.0f;//convert the transformation into an angle float PA=(GE->AutoPilot.Lat-90.0f)/180.0f*D3DX_PI;//the lat and lon are in degrees so I need to convert them float YA=(GE->AutoPilot.Lon+90.0f)/180.0f*D3DX_PI-(flong-D3DX_PI/2.0f); D3DXQUATERNION rot; D3DXQUATERNION tQ; D3DXQuaternionRotationYawPitchRoll(&tQ,-YA,PA,0);//using a quaternion works better D3DXQuaternionRotationAxis(&rot, &Camera::WORLD_XAXIS, SolarSystem.Planet[P].AxialOffset);//this is the axis offset for the planet. Not all planets have their axis straight up and down D3DXQuaternionMultiply(&tQ, &tQ, &rot); D3DXMatrixRotationQuaternion(&OM, &tQ); D3DXVECTOR3 v(0,0,1); D3DXVec3TransformCoord(&v,&v,&OM);//get the final "positional direction" //multiply the vector by the radius and that gives the actual position at the sphere's surface //use OM for the plane's orientation matrix This solution works nicely. Create an account or sign in to comment You need to be a member in order to leave a comment Create an account Sign up for a new account in our community. It's easy! Register a new account
location:  Publications → journals → CMB Abstract view # Asymptotic Dimension of Proper CAT(0) Spaces that are Homeomorphic to the Plane Published:2010-07-26 Printed: Dec 2010 • Naotsugu Chinen, Hiroshima Institute of Technology, Hiroshima 731-5193, Japan • Tetsuya Hosaka, Department of Mathematics, Faculty of Education, Utsunomiya University, Utsunomiya, 321-8505, Japan Format: HTML LaTeX MathJax PDF ## Abstract In this paper, we investigate a proper CAT(0) space $(X,d)$ that is homeomorphic to $\mathbb R^2$ and we show that the asymptotic dimension $\operatorname{asdim} (X,d)$ is equal to $2$. Keywords: asymptotic dimension, CAT(0) space, plane MSC Classifications: 20F69 - Asymptotic properties of groups 54F45 - Dimension theory [See also 55M10] 20F65 - Geometric group theory [See also 05C25, 20E08, 57Mxx]
# How do you write the equation given (-6,7); parallel to 3x + 7y = 3? Jul 11, 2017 $s : y = - \frac{3}{7} x + \frac{31}{7}$ #### Explanation: We take the line above in the form $r : y = a x + b$ We know that any parallel is $s : y = a x + c$ We choose $\left(- 6 , 7\right) \in s$ $7 y = 3 - 3 x \implies r : y = - \frac{3}{7} x + \frac{3}{7}$ $s : 7 = - \frac{3}{7} \left(- 6\right) + c$ $49 = 18 + 7 c \implies c = \frac{31}{7}$ Jul 11, 2017 $y = - \frac{3}{7} x + \frac{31}{7}$ $$ or $7 y = - 3 x + 31$ #### Explanation: Change $3 x + 7 y = 3$ to standard form of $y = m x + c$ $7 y = 3 - 3 x$ $y = - \frac{3}{7} x + \frac{3}{7}$ gradient, $m$, can be determined as $- \frac{3}{7}$ Two parallel lines would have the same gradient, in this case gradient of $- \frac{3}{7}$ You can choose to use gradient formula Gradient, $m = \frac{{y}_{1} - {y}_{2}}{{x}_{1} - {x}_{2}}$ or general formula for straight line $\left(y = m x + c\right)$ I would first be attempting it using gradient formula replace $m$ with $- \frac{3}{7}$; replaace ${x}_{1} , {x}_{2} , {y}_{1} , {y}_{2}$ with $x$ and the x-coordinate, $y$ and the y-coordinate respectively in this case, you have only 1 point given, if there is more, the x and y coordinate must be from the same point. $- \frac{3}{7} = \frac{y - 7}{x - \left(- 6\right)}$ $- \frac{3}{7} = \frac{y - 7}{x + 6}$ rearrange $- \frac{3}{7} \left(x + 6\right) = \frac{y - 7}{x + 6}$ $- \frac{3}{7} x - \frac{18}{7} = y - 7$ $y = - \frac{3}{7} x - \frac{18}{7} + 7$ $y = - \frac{3}{7} x + \frac{31}{7}$ in case you don't like fractions, multiply whole equation by 7 $7 y = - 3 x + 31$ Using general formula replace $m$ with $- \frac{3}{7}$; $x$ with the x-coordinate, $y$ with the y-coordinate in this case, you have only 1 point given, if there is more, the x and y coordinate must be from the same point. $7 = \left(- \frac{3}{7}\right) \left(- 6\right) + c$ solve for c $7 = \frac{18}{7} + c$ $c = \frac{31}{7}$ substitute $m$ and $c$ into $y = m x + c$ $y = - \frac{3}{7} x + \frac{31}{7}$ $7 y = - 3 x + 31$ Both equation for straight line you get are the same, depending on which you prefer.
A service provided by the WU Library and the WU IT-Services # The Transformed Rejection Method for Generation Random Variables, an Alternative to the Ratio of Uniforms Method Hörmann, Wolfgang and Derflinger, Gerhard (1994) The Transformed Rejection Method for Generation Random Variables, an Alternative to the Ratio of Uniforms Method. Preprint Series / Department of Applied Statistics and Data Processing, 10. Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, Vienna. Preview PDF Theoretical considerations and empirical results show that the one-dimensional quality of non-uniform random numbers is bad and the discrepancy is high when they are generated by the ratio of uniforms method combined with linear congruential generators. This observation motivates the suggestion to replace the ratio of uniforms method by transformed rejection (also called exact approximation or almost exact inversion), as the above problem does not occur for this method. Using the function $G(x) =\left( \frac(a)(1-x)+b\right)x$ with appropriate $a$ and $b$ as approximation of the inverse distribution function the transformed rejection method can be used for the same distributions as the ratio of uniforms method. The resulting algorithms for the normal, the exponential and the t-distribution are short and easy to implement. Looking at the number of uniform deviates required, at the code length and at the speed the suggested algorithms are superior to the ratio of uniforms method and compare well with other algorithms suggested in literature. (author's abstract)
# Lesson 3 Interpreting & Using Function Notation ### Lesson Narrative In this lesson, students continue to develop their ability to interpret statements in function notation in terms of a situation, including reasoning about inequalities such as $$f(a) > f(b)$$. They now have to pay closer attention to the units in which the quantities are measured to effectively interpret symbolic statements. Along the way, students practice reasoning quantitatively and abstractly (MP2) and attending to precision (MP6). Students also begin to connect statements in function notation to graphs of functions. They see each input-output pair of a function $$f$$ as a point with coordinates $$(x, f(x))$$ when $$x$$ is the input, and use information in function notation to sketch a possible graph of a function. Students’ work with graphs is expected to be informal here. In a later lesson, students will focus on identifying features of graphs more formally. ### Learning Goals Teacher Facing • Describe connections between statements that use function notation and a graph of the function. • Practice interpreting statements that use function notation and explaining (orally and in writing) their meaning in terms of a situation. • Sketch a graph of a function given statements in function notation. ### Student Facing Let’s use function notation to talk about functions. ### Student Facing • I can describe the connections between a statement in function notation and the graph of the function. • I can use function notation to efficiently represent a relationship between two quantities in a situation. • I can use statements in function notation to sketch a graph of a function. ### Glossary Entries • dependent variable A variable representing the output of a function. The equation $$y = 6-x$$ defines $$y$$ as a function of $$x$$. The variable $$x$$ is the independent variable, because you can choose any value for it. The variable $$y$$ is called the dependent variable, because it depends on $$x$$. Once you have chosen a value for $$x$$, the value of $$y$$ is determined. • function A function takes inputs from one set and assigns them to outputs from another set, assigning exactly one output to each input. • function notation Function notation is a way of writing the outputs of a function that you have given a name to. If the function is named $$f$$ and $$x$$ is an input, then $$f(x)$$ denotes the corresponding output. • independent variable A variable representing the input of a function. The equation $$y = 6-x$$ defines $$y$$ as a function of $$x$$. The variable $$x$$ is the independent variable, because you can choose any value for it. The variable $$y$$ is called the dependent variable, because it depends on $$x$$. Once you have chosen a value for $$x$$, the value of $$y$$ is determined.
# To produce an item in-house, equipment costing $250,000 must be purchased. It will have a life of... To produce an item in-house, equipment costing$250,000 must be purchased. It will have a life of 4 years and an annual cost of $80,000; each unit will cost$40 to manufacture. Buying the item externally will cost \$100 per unit. At i=12% determine the breakeven production number. EOY CF Factor PV PW EOY CF Factor PV PW
# How to wrap text around part of a figure I am looking for wrapping some text around part of a figure in the following way: Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean quis mi ut elit interdum imperdiet quis non ante. +---------------------------+ +-------------------------+ | | | | | | | | | | | | +---------------------------+ +-------------------------| (a) subfigure a (b) subfigure b +------------------------+ Sed imperdiet, sapien quis | | viverra rhoncus, tellus dui | | dictum nisl, at porta purus | | ipsum ac turpis. Fusce auctor | FIGURE | ullamcorper adipiscing. Nunc | HERE | non quam ac orci egestas con- | | sequat ut eget quam. Cras +------------------------+ blandit condimentum ornare. (c) subfigure c Curabitur aliquam, nulla sit amet iaculis tristique, mi Figure 1: demo nulla auctor magna, sit amet imperdiet ante arcu a libero. The example here How to wrap text around a subfigure? has it only for equal-sized subfigures (which, I usually do simply by putting subfloat in my wrapfigure). Is there any way to do what I am suggesting. I am on Fedora 29 which has the texlive distribution. Thanks in advance for any suggestions or pointers. The suggestion given works, but not for the subfig package (which uses subfloat that I thought was recommended over subfigure.) Here is the example text: \documentclass{article} \usepackage{graphicx} \usepackage{caption} \usepackage{lipsum} \usepackage{wrapfig} \usepackage{verbatim,subfig} \begin{document} \lipsum[1] \begin{figure}[h]\centering\ContinuedFloat* \mbox{ \subfloat[]{\label{a}\includegraphics[draft,width=0.5\textwidth]{foo.png}} \subfloat[]{\label{b}\includegraphics[draft,width=0.5\textwidth]{foo.png}}} \end{figure} \begin{wrapfigure}{r}{0.5\textwidth}\centering\ContinuedFloat \subfloat[]{\label{c}\includegraphics[draft,width=0.5\textwidth]{foo.png}} \caption{Demo} \label{fig} \end{wrapfigure} \lipsum[2] \end{document} The counter deprecates every time ContinuedFloats in used. I could perhaps add to the figure counter everytime continuedFloats is used but that does not seem kosher to me. I like clean solutions if available. • Regarding a comparison of the subfig package (used in your question) and the subcaption package (used in the answer), see for example here: tex.stackexchange.com/a/13778/134144 Feb 3, 2019 at 19:07 • This link: tex.stackexchange.com/questions/144782/… says that subfigure is deprecated, hence I switched to subfig. Is that not true? The remainder of my document uses subfloat so this would be painful. Feb 3, 2019 at 19:12 • There are three different packages, subfigure, subfig and subcaption. The first of them is deprecated. (The second one also introduces an environment called subfigure). Feb 3, 2019 at 19:41 • I thought that subfig introduces subfloat. Did not realize that it also did subfigure, but that does not address my question of the counter changing. Feb 3, 2019 at 21:05 Have you considered making use of \ContinuedFloat from the caption package (without defining a different label format for continued floats)? Code \documentclass{article} \usepackage{geometry} \usepackage{graphicx} \usepackage{caption} \usepackage{subcaption} \usepackage{lipsum} \usepackage{wrapfig} \begin{document} \lipsum[1] \begin{figure}[h]\centering\ContinuedFloat* \begin{subfigure}[b]{0.5\textwidth}\centering \includegraphics[draft]{foo.png} \caption{} \label{a} \end{subfigure}% \begin{subfigure}[b]{0.5\textwidth}\centering \includegraphics[draft]{foo.png} \caption{} \label{b} \end{subfigure} \end{figure} \begin{wrapfigure}{r}{0.5\textwidth}\centering\ContinuedFloat \begin{subfigure}{0.5\textwidth}\centering \includegraphics[draft]{foo.png} \caption{} \label{c} \end{subfigure} \caption{Demo} \label{fig} \end{wrapfigure} \lipsum[2] \end{document} Output: More info on \ContinuedFloat in the LaTeX Wikibook: 8 Figures in multiple parts. Edit: User wants a working example using the subfig package. The problem with your current example is that the first figure environment has no caption. The \phantomcaption command may be used to create a hidden caption which should clear up your issue: \documentclass{article} \usepackage{geometry} \usepackage{graphicx} \usepackage{caption} \usepackage{lipsum} \usepackage{wrapfig} \usepackage{subfig} \begin{document} \lipsum[1] \begin{figure}[h]\centering % \ContinuedFloat* % Remove this. \subfloat[][]{\includegraphics[draft]{foo.png}} \subfloat[][]{\includegraphics[draft]{foo.png}} \phantomcaption \end{figure} \begin{wrapfigure}{r}{0.5\textwidth}\ContinuedFloat\centering \subfloat[][]{\includegraphics[draft]{foo.png}} \caption{Demo} \label{fig} \end{wrapfigure} \lipsum[2] \end{document} (also, remove the \ContinuedFloat*` in the figure environment - it appears subfig does not use this - see section 2.2.3 of the subfig documentation). Relevant 8 year old question: numbering - ContinuedFloat, and Subfloat • Very interesting: I had no idea of this approach. This is a cool answer. I will wait to see if there are better answers before checking, but this does address my problem. Feb 3, 2019 at 17:03
### Math Notes Subjects #### Differential Calculus Solutions ##### Topics || Problems A 5m long ladder leans against a vertical wall 4m high. If the lower end is sliding at 1 m/sec, how fast is the tip of the ladder moving? Solution By phytagorean formula: $$5^2 = y^2 + x^2$$ At $$y = 4$$ the value of $$x$$ is $$3$$. $$0 = 2y\frac{dy}{dt} + 2x\frac{dx}{dt}$$ At $$y = 4$$, $$x = 3$$, $$\frac{dx}{dt} = 1$$ $$0 = 2(4)\frac{dy}{dt} + 2 (3)(1)$$ $$\frac{dy}{dt} = -\frac{3}{4} \text{m/s}$$. The negative sign indicates a downward movement. Thus the tip of the ladder moves at $$\frac{3}{4} \text{m/s}$$ downwards.
Polar Convex Body Let $C \subset \mathbb{R}^n$ be a convex body (i.e., full-dimensional, compact) that includes the origin in its interior. Its $polar$ convex body is defined as $C^\circ = \{ y \in \mathbb{R}^n : \langle y , x \rangle \leq 1, \forall x \in C\}$. (a) Show that $C^\circ$ is a convex body. (b) Let $C$ be the triangle with vertices $(-1,1)$, $(-1,-1)$ and $(a,0$, where $a>0$. Draw $C$ and $C^{\circ}$ as a function of the parameter $a$. (c) Let $C$ be an axis-aligned ellipse with semiaxes $a$ and $b$. What is $C^\circ$? (d) Let $C=\left\{x \in \mathbb{R}^n \, : \, ||x||_p:=\left(\sum_{i=1}^n |x_i|^p\right)^{\frac{1}{p}} \leq 1 \right\}$. Find a nice description of $C^{\circ}$. Hint: Use Hölder's inequality. ### Solution: (a) It is clear from the defining inequality that $C^\circ$ is both closed and convex. To see that it's also bounded, take $\epsilon >0$ such that $x \in C$ whenever $\|x\| \leq \epsilon$ (there exists such an $\epsilon$ because $0 \in int(C)$). If $0\neq y \in C^\circ$, then $\frac{\epsilon}{\|y\|}y \in C$ and we get $\|y\| \leq \frac{1}{\epsilon}$. We have shown that $C^\circ$ is a compact, convex set. To see that it is full-dimensional, let $M>0$ be a bound on $C$. Then whenever $\|y\| \leq \frac{1}{M}$ we have $y \in C^\circ$ since $\langle y, x \rangle \leq \|y\| \|x\| \leq \frac{1}{M}M = 1$ for all $x \in C$.-MH Remark on this argument: The polar of a disk of radius $r$ is a disk of radius $\frac{1}{r}$. This is true for every $n\in\mathbb{N}$. (c) First, let's prove a general statement: If $T: \mathbb{R}^n \to \mathbb{R}^n$ is an invertible and symmetric linear transformation, then $(T(K))^\circ = T^{-1}(K^\circ)$. This is true because $y \in T(K)^\circ \iff \langle y, Tx\rangle \leq 1, \forall x \in K \iff \langle Ty, x\rangle \leq 1, \forall x \in K \iff Ty \in K^\circ \iff y \in T^{-1}(K^\circ)$. Now notice that the given ellipse is $T(S^1)$ with $T$ defined by $T(x,y) = (ax,by)$. Since the unit ball is the polar of the unit circle, the polar of this ellipse is the convex hull of another axis-aligned ellipse of semiaxes $\frac{1}{a}, \frac{1}{b}$ -MH (b) The vertices of the triangle give the inequalities $ax \leq 1$, $-x-y \leq 1$, $-x+y \leq 1$, so the polar is a triangle with vertices $(-1,0), (1/a,1+1/a), (1/a, -1-1/a)$. (d) Let $q = p/(p-1)$ and let $D = \{x \in \mathbb{R}^n : ||x||_q \leq 1 \}$. For all $y \in D$, and $x \in C$ we have $\langle y, x \rangle \leq ||y||_q ||x||_p \leq 1$ by Hölder's inequality so $D \subset C^\circ$. For $y \notin D$, let $y' = y/||y||_q \in D$. Choose $x = (y'_1^{q/p},\ldots, y'_n^{q/p})$ which is in $C$. Then $\langle y', x \rangle = 1$ and so $\langle y, x \rangle > 1$ showing that $y \notin C^\circ$. Therefore $C^\circ = D$.
## What does Clippard add?July 28, 2015 Posted by tomflesher in Baseball, Economics. Tags: , “Tyler Clippard 2011″ by Keith Allison on Flickr The Mets acquired setup man/closer Tyler Clippard from Oakland for starting pitcher Casey Meisner. Oakland is going to eat $1 million of Clippard’s$8.3 million deal, making Clippard the Mets’ highest-paid reliever; Bartolo Colon is the only pitcher who earns more. Though Ty is arbitration-eligible this year, his yearly salary is about double Bobby Parnell‘s $3.7 million deal; for the record, Heath Bell was earning$9 million yearly in his last contract. Clippard’s contract is big, but not out of the question – his 2014 stats included a .995 WHIP and a 3.57 KBB ratio. Closing for Oakland, Tyler has a 1.19 WHIP and a 1.81 KBB. Somewhat alarming is his drop in BAbip this year – it was .255 in Washington, and only .217 this year in Oakland. That means that some of those hits are due to defense, but his walk percentage also ballooned from 8.3% to 12.6%. Of course, some of that is due to the fact that Clippard is facing American League batters, including specialized designated hitters. What the Mets know they’ll get out of Clippard is a solid reliever who can shore up what’s been a fairly lights-out bullpen, but help bridge the gap from the early innings. Yeah, yeah, Familia has blown some saves recently, but over the course of the season the Mets have 10 blown saves, which is below the National League median of 12. The Mets are also near the bottom of the league in losses by relievers – they have 9 losses in relief this year, behind only Milwaukee with 8. This will allow the Mets to go to a strong, reliable arm early, both relieving (ha!) some of the pressure on starting pitchers like Jon Niese (who’s been left in while struggling because, hey, what’s the alternative?) and preventing the Mets from needing to rely on Carlos Torres and Alex Torres. Though this leads to a higher number of pitchers per game, having a reliable endgame pipeline with Jenrry Mejia, Clippard, Bobby Parnell and Jeurys Familia makes it easier to go lights out. It will also allow the Mets to develop Hansel Robles by judiciously building him into high-pressure situations while maintaining some options behind him. ## Don’t Knock Curtis, Even if He Isn’t Knocking It Out of the ParkJuly 8, 2015 Posted by tomflesher in Baseball. Tags: , Curtis Granderson has, for some reason, developed a reputation as a streaky hitter. For example, Adam Rubin opened this article from June 27 commenting on it, although the thrust of the article was Granderson’s defensive issues. Amazin’ Avenue was justifiably a bit more nuanced, describing Curtis’s change of approach at the plate as a favorable influence on Mets scoring. What’s surprising to me is that Granderson’s hitting has been described as a ‘streak.’ Granderson’s hitting was unpredictable at the beginning of the season, certainly, but those sorts of fluctuations are natural with a small sample size. What’s visible from the time-series chart of Granderson’s first 85 games should be two things: his batting average has improved, and his hitting has been consistent if not trending upward. Some rudimentary data analysis bears that out. A time-series regression of batting average on game number shows an intercept of .148 and an increase of .0016 per game, both significant at the 99% level (showing a bad start and a slow but steady increase). However, Granderson’s hitting is coming at the expense of his OBP, which showed a 99%-significant .360 intercept and a 95%-significant decrease of .0002 each game. The fluctuation of OBP, which is almost certainly due to his high proportion of walks at the beginning of the season, is about an eighth of the increase in batting average; Curtis’ consistent production can be counted on, whether the rest of the team contributes or not. ## The Mets have the worst, but who has the best?July 7, 2015 Posted by tomflesher in Baseball. Tags: , , , , Earlier, I posted about the Mets’ anemic pinch-hitting performance this year, led by John Mayberry, Jr., whose .080 mark is the worst in the league among hitters with at least 20 plate appearances as a pinch hitter. Even more shocking is that Mayberry is seventh in the league in plate appearances as a PH. The Mets may have the worst pinch hitters in the league, but Cleveland may have the best. Cleveland’s David Murphy, who has a .333 batting average in 26 pinch-hit appearances, and Ryan Raburn, who is tied for highest OBP as a pinch hitter with .455 in 22 plate appearances, both lag behind Mayberry in appearances. (Arizona’s Cliff Pennington also has a .455 OBP in 22 plate appearances, and Washington’s Dan Uggla deserves an honorable mention for a .429 mark in 21 times at the plate.) Murphy’s monstrous batting average as a pinch hitter matches some general trends shown in his split page. Against a starter, Murphy hits a disgusting .357 the first time and an obscene .432 his second time up. His OBP during that second-appearance sweet spot is an unconscionable .476. Meanwhile, Raburn demonstrates the opposite trend, hitting uniformly better against starters his first time up: .333/.419/.593 the first time, versus .286/.333/.586 the second time. This, at least in theory, means that Raburn can hammer a pitcher the first time up and Murphy can maintain the pressure. Oh, and both Murphy and Raburn pitched on June 17th, making them part of an already unusually large Spectrum Club for 2015. ## In A PinchJuly 7, 2015 Posted by tomflesher in Baseball. Tags: , , , Much has been made of the Mets’ inability to hit, often with the tongue-in-cheek point made that Mets pitchers are hitting better than Mets pinch hitters. In fact, that’s true: Mets pitchers have made 178 plate appearances, owning a collective .165/.174/.213 slash line with a .255 BABIP, while pinch hitters get on base slightly more often but otherwise do worse. The pinch hitters have 118 plate appearances thus far, hitting .147/.248/.186 with a ,242 BABIP. Of course, a big portion of the Mets pitchers’ abysmal slugging average is Steven Matz‘ .500/.500/.667 in 6 plate appearances. Even so, the pitchers are still hitting fairly well – even without Matz, the pitchers have a higher batting average than the pinch hitters. John Mayberry, Jr., has taken the most plate appearances as a pinch hitter for the Mets. In his 30 PA, he’s hit – though I’m not sure ‘hit’ is correct – .080/.233/.080, although with a terribly unlucky .118 BABIP. Darrell Ceciliani, who was recently sent back down, had 20 plate appearances at .176/.263/.235, inflated by a .375 BABIP. The recently recalled Kirk Nieuwenhuis is 0-14 with a walk (.071 OBP) pinch hitting. Together, those 64 plate appearances make up about half of the Mets’ pinch hitting appearances. For comparison, MLB pitchers are hitting .132/.156/.163 this year collectively, while MLB pinch hitters have a collective.211/.283/.316 line. That means the Mets pitchers are decidedly above average hitters, but the thin bench is hurting their run production when it comes time to lift a pitcher for a bat. ## Logan Verrett’s Three-Inning SaveJuly 6, 2015 Posted by tomflesher in Baseball. Tags: , , During yesterday’s game, Mets reliever Logan Verrett came in to start the seventh inning during a 7-0 game. During the eighth, the Mets would add another run. Two interesting things happened. slgckgc on Flickr (Original version) UCinternational (Crop) First, Verrett made his first plate appearance in the majors. He’s a career .098/.132/.098 hitter in 56 plate appearances in the minors, so his groundout to second wasn’t a big surprise. Second, he earned a three-inning save. Those aren’t common – in fact, the last Met to do so was Raul Valdes in 2010. Valdes actually hit a double in that game. Three-inning saves are a fairly rare beast; the most in the 2000s was 35 in 2001, and in 2014 there were only 9. There have already been 10 in 2015, though, perhaps in keeping with the trend toward using strong minor league starters as bullpen arms. Matt Andriese of Tampa Bay leads the majors in three-inning saves this year (with two); Verrett is now tied for second (along with seven other pitchers). ## Why isn’t Robles the left-handed specialist?July 5, 2015 Posted by tomflesher in Baseball, Economics. Tags: , , , , “Alex Torres on April 23, 2015″ by slgckgc on Flickr, Cropped by UCinternational. In yesterday’s post, I made reference to Terry Collins‘ maddening habit of treating Alex Torres as a left-handed specialist against all better evidence. In 17 of Torres’ 33 appearances, he’s faced three batters or fewer; those numbers are similar to bridge man Hansel Robles‘ 26 appearances, in which 15 appearances have faced three batters or fewer (each has faced a maximum of eight batters). Robles’ median appearance is a full inning pitched, whereas Torres’ median was 2/3 of an inning. 19 of Torres’ appearances have come in a clean inning, whereas Robles has come in 16 times to start an inning and twice more with one batter on but 0 outs. Overall, the two pitchers are being used in very similar ways, except for one major factor: Almost 48% of the batters Alex Torres has faced are left-handed, as opposed to a hair over 38% for Hansel Robles. Against righties, Torres has a .297 OBP-against, compared to Robles’ .328, neither being much to write home about. (Closer Jeurys Familia allows a .225 OBP against right-handers and .254 against left-handers, and reliable eighth-inning dude Bobby Parnell carries .294 against righties and .222 against lefties, in a very limited sample this year.) But against lefties, Robles strictly dominates Torres. Robles has a .222 OBP allowed against right-handers, which is as good as Parnell and a smidge better than our closer. But Torres, who’s faced 59 lefties, more than anyone except Familia? Torres allows a monstrous .407 OBP when facing left-handers! .407. Four oh seven. That’s the worst platoon split of any active Mets pitcher. Not only is Alex Torres not even better facing lefties than righties, he’s so bad that Alex Torres Against Left-Handers should be sent down to keep Alex Torres Against Right-Handers on the roster! If Left-Handers Against Alex Torres were a single player, they would rank #3 in OBP in the National League, ahead of Anthony Rizzo with .405. Both Parnell and Robles are better against lefties than righties, but Parnell should be comfortable in his eighth-inning role. Why not bust out Robles against lefty-heavy lineups and see if he can keep up his difference? But for heaven’s sake, quit using Alex Torres against left-handers. ## One pitcher and two guys on the disabled listJuly 4, 2015 Posted by tomflesher in Baseball, Economics. Tags: , , This season, the Mets have been fighting against a pernicious series of injuries, mainly focused on the offense. Although we lost Jenrry Mejia, Zack Wheeler, and Jerry Blevins, we’ve also lost David Wright for much of the season and missed Daniel Murphy, Michael Cuddyer, and Juan Lagares for smaller pieces. Let’s take a look at some interesting statistics: Steven Matz leads the team in OBP (1.000) and total bases per game (4). Second to Matz in OBP is David Wright (.371); Travis d’Arnaud is second in total bases per game (2) and fifth in OBP (.338). Wright follows up with 1.75 total bases per game. In order to get to active position players, we have to go 3 deep to Lucas Duda (OBP of .358 and 1.56 TB/G) and Curtis Granderson (OBP of .348, 1.54 TB/G). In other words, of the Mets’ top 5 hitters, one is a pitcher who’s played one game, and two have spent more time on the disabled list than on the field. Argue with the choice of metric, but our best active hitter can’t touch Andrew McCutchen‘s 10th-best OBP (.370) or the total bases mark (Duda has 122, Granderson 125, and the bottom of the top 10 is a three way tie with 162 total bases involving Prince Fielder, J.D. Martinez, and Manny Machado). Of course, it could be worse: we could have Ike Davis (.322 OBP, 1.3 TB/G). (But I still like Ike.) So here’s the problem: When the Mets started off the season, they were hitting incredibly – during the first 25 games, they averaged 4.04 runs per game and allowed only 3.28. The league average this season is 4.01 runs scored to 4.11 allowed, so that was a pretty nice set of stats. But during games 26-50, those stats slid to 3.84 runs scored and 4.04 runs allowed, and in games 51-75, the Mets averaged only 3.16 runs scored to still 4.04 runs allowed. Our pitching, despite being at times inconsistent, is still better than the league, by average. Although the Mets have made some interesting moves in the bullpen, and Terry Collins‘ insistence on using Alex Torres as a left-handed specialist is maddening at times, the pitching side of the equation is okay. All the team needs is a break on the offensive side – Duda could break out. Cuddyer could stay healthy. Murphy can keep up his hitting and Wilmer Flores can continue developing. This season has been a comedy of errors offensively, but SOMETHING has to go right soon. ## Lucas Duda’s .422 OBP and Anthony Recker’s Weird Slash LineApril 28, 2015 Posted by tomflesher in Baseball. Tags: , Mets first baseman Lucas Duda has an alarming .422 OBP and .507 SLG this year, including 2 HBP and a 15/11 KBB ratio. That’s quite a bit above Lucas’ previous year OBPs – since 2011, Lucas had gotten on base .370, .329, .352, and .349 times per plate appearance, in order. That’s centered almost exactly around .350. In 20 games, Lucas has made 83 plate appearances. What are the chances that Lucas is hitting about where he did previously, but had a hot streak of facing pitchers who gave him what he needed? The standard error for an n-trial sample of a binary variable with probability p is $\sqrt{\frac{p(1-p)}{n}}$. If we assume Lucas’ ‘true’ OBP is .350, then the standard error of this 83-trial sample would be $\sqrt{\frac{.35(.65)}{83}} = \sqrt{\frac{.2275}{83}} = \sqrt{.00274} = .052$. That means about 66% of Lucas’s 83-plate-appearance streaks should be within one standard error, and about 95% should be within two. Due to the small sample size, it’s hard to be 95% sure that Lucas’ performance is due to actual improvement, but the upper bound of the 66% confidence window would be about .402. Lucas is outperforming that by about 20 basis points. Meanwhile, backup catcher, relief pitcher, and third baseman Anthony Recker is on the other side – he’s made 11 plate appearances, walking four times, striking out 4, and not yet hitting the ball. Though none of those walks are intentional, that leaves Recker with a .000/.364/.000 line – quite far off from his lifetime .194/.268/.364 BA/OBP/SLG line. No wonder Kevin Plawecki is starting. Recker’s hitting probably isn’t that much different from last year’s .200 average – we can be 95% sure he’s not likely to hit much above .300 without some extra batting practice, but otherwise it’s not unusual for a .200 hitter to have a streak of 7 at-bats with no hits, especially since he’s working walks, too. ## Jerry Blevins has some weird stats.April 27, 2015 Posted by tomflesher in Baseball. Tags: , Poor Jerry Blevins. He’s having a really rough season. I mean, there’s the obvious, in that he’s suffering from a fractured forearm that’s keeping him out of the best season he’d had yet. Blevins has pitched 5.0 innings – 15 batters up and 15 batters down (although they weren’t perfect – see below). Despite a meager career .042 platoon split, including a .025 BAbip platoon split, the Mets were using Blevins as a left-handed specialist (one right-handed batter faced in 2015), and he was rising to the occasion. Then, his pitching arm was broken by a comebacker. Blevins’ record is currently 1-0, and that one win was pretty filthy. It came on April 14, when Blevins came in to face Dee Gordon and Christian Yelich with one out and Ichiro Suzuki on third base. The Marlins trailed 5-4 in the top of the 7th, so this was technically a save opportunity for Blevins. Blevins pitched to Gordon, who grounded into a fielder’s choice, but Ichiro came around and scored on an error by second baseman Daniel Murphy. That unearned run was charged to Rafael Montero. Blevins then pitched to Yelich, who obligingly grounded into a double play and ended the top of the inning. For those keeping score at home, Blevins pitched to two batters and recorded two outs; one inherited runner scored an unearned run due to an error in the field. As a result, Blevins receives a blown save. Fortunately, the Mets scored two runs in the bottom of the 7th, and the tag team of Carlos Torres and Jeurys Familia tied up the win for Blevins. As a result, Blevins has the shame of his only win being a Vulture Win, and it even came out of an inning with no hits and no walks. At least Blevins got the win – as it happens, Burke Badenhop managed to blow a save on no runs, no hits, and no walks in 2014. Twice. ## I Still Like IkeApril 27, 2015 Posted by tomflesher in Baseball. Tags: , , 1 comment so far No, not because of his .345 batting average or .405 OBP, but because of his 0.00 ERA. The whole time Ike Davis was a Met, I would shout at the TV every time Terry Collins put in a tired reliever in a laugher that Ike needed to pitch. Ike was, after all, a starter in his freshman year at Arizona State, cobbling together 47.2 innings in 12 starts and 2 relief appearances for a 7.42 ERA. He got better, though – he spent most of his sophomore year in the outfield but still managed to make one start and six relief appearances, totalling 6.2 innings and a 1.34 ERA. In his junior year, Ike pitched in 16 games and 24 innings, going 4-1 with 4 saves and a 2.25 ERA. Ike was not a bad hurler. Buster Posey, of course, showed him up – while playing 68 games at catcher in 2007-08, Buster also made 9 relief appearances and collected 6 saves with a 1.17 ERA. It was only natural that the A’s turned to Ike to pitch the ninth inning of a 14-1 blowout on April 21, but Ike pitched a perfect inning on 9 pitches. No runs, no hits, no errors, no walks. (No strikeouts, either….) It’s a shame I waited through all those games and Ike finally pitched on the other side of the country.
Narrow your view on multiple-cursor marks The discussion in the comments of this post is great. It reveals a couple of ways to narrow your view, in a few frameworks. In particular it reveals that in multiple-cursors, all it takes is a call to mc-hide-unmatched-lines-mode. (mc-hide-unmatched-lines-mode &optional ARG) Minor mode when enabled hides all lines where no cursors (and also hum/lines-to-expand below and above) To make use of this mode press “C-‘” while multiple-cursor-mode is active. You can still edit lines while you are in mc-hide-unmatched-lines mode. To leave this mode press or “C-g” Just be sure to exit this mode before closing Emacs as it is a little confusing to return to nothing.
Out: Saturday March 19, 2016 Due: Friday March 25, 2016 at 11:59pm Work in groups of 2. ## Overview In this programming assignment, you will implement a new shading technique: "deferred" shading. To go with it, you'll also implement a post-processing filter to provide bloom around bright light sources. Running the program pa6.PA6, you should see the following window: You can use the combo boxes at the bottom of the window to change the scenes and the renderers. For the "deferred" renderer, you have the option of selecting which buffer among five to display on the screen. We hope this feature will be useful for you when debugging your shaders. We implemented forward shading for several shading models in PA1. Forward shading has one main drawback: if the scene has many overlapping objects, expensive lighting calculations are performed for all fragments, even those that will be overwritten by an object closer to the viewer. Deferred shading addresses the shortcoming as follows: 1. Instead of lighting each fragment as it is generated, the scene is first rendered into an off- screen buffer (the “g-buffer”) using simple shaders which just output material properties. Since no lighting or other computation has been done yet, overlapping objects are handled efficiently. 2. Run an “übershader” on the g-buffer to compute shading. This shader is usually called an "übershader" ("supershader") since it contains lighting code for all types of lights and materials. However, deferred shading cannot easily handle transparency or translucency. The class pa6.renderer.deferred.DeferredRenderer and pa6.renderer.deferred.DeferredMeshRenderer implements the deferred shading technique. It makes uses of the shaders located in the student/src/shaders/deferred directory. You will see that the directory contains vertex and fragment shaders for the five materials we have implemented for the forward renderer. However, only the "single color" material has been implemented, and it's your job to port the rest of the materials to the deferred shading world. This also involves editing the übershader so that it knows how to deal with other types of materials. As mentioned earlier, the shaders for each material will not compute the final fragment color, but will fill the g-buffers with information useful for computing it later. Depending on the material, this information includes the normal vector, the tangent vector, the diffuse and specular color, the ID of the material, and other material-specific parameters. DeferredRenderer uses 4 g-buffers each of whose pixels can store 4 floating points numbers, totalling 16 floating point numbers. The provided implementation of the single color material also requires that the first floating point number of the first g-buffer stores the material ID. As a result, you have 15 floating points numbers to encode all other information. We leave this encoding up to you, but it should be plenty of space. Note: You do not have to encode the eye-space position of the fragement. It is available in the position variable in the übershader. The übershader should figure out the material being shaded from the material ID and then compute the final fragment color accordingly. Since you have implemented all the materials in the forward renderer, implementing the übershader should be as simple as copying and pasting the relevant code from the forward shaders (with appropriate modifications, of course). You should check the correctness of your deferred renderer by comparing its output to that of the forward renderer. All renderings they generate should be the same. Edit: so that it renders the bloom effect when enabled. The bloom is a visual effect that aims to simulate the phenomenon in which imperfections in the optics of the eye (or of camera lenses) produces halos of light around bright objects. A nice model for this effect is described by Spencer et al.; essentially it causes the image you see to be convolved with a filter that is very sharp at the center but has long, very faint tails. When filtering most parts of the image, it will have essentially no effect, since the tails of the filter are so faint, but when something like the sun comes into the frame, the pixel values are so high that the faint tails of the filter contribute significantly to other parts of the image. The problem is, this filter is too big to work with directly: the tails should extend a large fraction of the size of the image. And worse, it is not separable. So doing a straight-up space-domain convolution is hopeless. Instead, we approximate this filter with the sum of an impulse and several Gaussian filters. Here is how we did it; the result is that the filter we will use is $0.8843 \delta(x) + 0.1 g(6.2,x) + 0.012 g(24.9,x) + 0.0027 g(81.0,x) + 0.001 g(263,x)$ where $g(\sigma, x)$ is a normalized gaussian with standard deviation $\sigma$. You'll find these weights and the standard deviations for the kernels in the bloomFilterScales and the bloomFilterStdev fields, respectively. There is a parameter BLOOM_AMP_FACTOR that you can increase from 1.0 to make the bloom more dramatic, which is fun. Then, we convolve the rendered images with 4 Gaussian kernels, each with different width, to blur it. The results of the convolutions are as follows: Blur #1 Blur #2 Blur #3 Blur #4 We scale each image by the constants stored in bloomFilterScales array and add the scaled images to the original image to produce the final image. $k_0$ $+$ $k_1$ $+$ $k_2$ $+$ $k_3$ $+$ $k_4$ $=$ Original Blur #1 Blur #2 Blur #3 Blur #4 Final The images below show the differences between the original and the final image with the bloom effect fully applied: Original image Final image Amped up bloom The first part of this task is to implement the effect directly, using the gaussian blur program you developed for PA5. You will need to use a temporary buffer to hold each blur result and then add it into the main image using additive blending. Use filter sizes that are 3 times the standard deviation. The problem with this approach is that it will be really slow. Replacing the non-separable glare filter with the sum of gaussians makes it possible to do this in a few seconds a frame by using separable filtering for each of the gaussians. But it is still just way too slow. A good way to speed up large blurs is to shrink the image, blur it, and then enlarge it back to size. If you resample the image so that is is smaller by a factor $\alpha$, then apply a gaussian of width $\alpha\sigma$, then resample back to the original size, the result will be quite a good approximation of blurring the full image by a gaussian of width $\sigma$, as long as $\alpha\sigma$ does not get too small. (We recommend keeping this effective standard deviation above 4 pixels.) You can do this in whatever way you like that produces results that look like the full-res filters but runs at full frame rate. A way to do this, reusing some machinery you have already built for previous assignments, is to shrink the image successively by powers of 2, in the same way you built the mipmap for PA4, until the size is appropriate, blur using the gaussian filter from PA5, then enlarge it again using the upsampling code from PA3. If you follow this approach, here is a recommended implementation approach: set up your program so that it shows the following when the "bloom" control is checked: • The rendered image blurred by a single gaussian (for this you just need the gaussian-blur program and some care with swapping of BufferCollections). • The rendered image with the blurred image added to it (for this you probably need a temporary buffer where you do the blurring, and you need to figure out how to enable additive blending to merge the blurred image back in). • The full model, computed slowly at full resolution (this is just wrapping a loop around the previous one). • The rendered image downsampled by a fixed factor, then upsampled again (this requires downsampling, either in a series of small steps like in the mimpap, for which the cubic B-spline reconstruction filter is sufficient as a downsampling filter, or with a new resampling program that can downsample by large factors at once). Start by using the copy program, resulting in severe aliasing, so you can clearly see what's happening, then switch to nice sampling filters. • The rendered image blurred by downsampling, blurring, and upsampling to approximate a large blur kernel. Compare it to the results of the first step to confirm all the factors are right. • The final model. Note that this task may involve the use of one or more shaders which we do not provide to you. Write your own shaders to get the job done. It also involes multi-step manipulation of the frame buffer objects and textures, and again we leave it to you to figure out how this should be done. You are free to declare new fields in the PA6Renderer class if needed be.
Chapter 103 Anemias associated with normocytic and normochromic red cells and an inappropriately low reticulocyte response (reticulocyte index <2–2.5) are hypoproliferative anemias. This category includes early iron deficiency (before hypochromic microcytic red cells develop), acute and chronic inflammation (including many malignancies), renal disease, hypometabolic states such as protein malnutrition and endocrine deficiencies, and anemias from marrow damage. Marrow damage states are discussed in Chap. 107. Hypoproliferative anemias are the most common anemias, and anemia associated with chronic inflammation is the most common of these. The anemia of inflammation, similar to iron deficiency, is related in part to abnormal iron metabolism. The anemias associated with renal disease, inflammation, cancer, and hypometabolic states are characterized by an abnormal erythropoietin response to the anemia. Iron is a critical element in the function of all cells, although the amount of iron required by individual tissues varies during development. At the same time, the body must protect itself from free iron, which is highly toxic in that it participates in chemical reactions that generate free radicals such as singlet O2 or OH. Consequently, elaborate mechanisms have evolved that allow iron to be made available for physiologic functions while at the same time conserving this element and handling it in such a way that toxicity is avoided. The major role of iron in mammals is to carry O2 as part of hemoglobin. O2 is also bound by myoglobin in muscle. Iron is a critical element in iron-containing enzymes, including the cytochrome system in mitochondria. Iron distribution in the body is shown in Table 103-1. Without iron, cells lose their capacity for electron transport and energy metabolism. In erythroid cells, hemoglobin synthesis is impaired, resulting in anemia and reduced O2 delivery to tissue. Table 103-1 Body Iron Distribution ### The Iron Cycle in Humans Figure 103-1 outlines the major pathways of internal iron exchange in humans. Iron absorbed from the diet or released from stores circulates in the plasma bound to transferrin, the iron transport protein. Transferrin is a bilobed glycoprotein with two iron binding sites. Transferrin that carries iron exists in two forms—monoferric (one iron atom) or diferric (two iron atoms). The turnover (half-clearance time) of transferrin-bound iron is very rapid—typically 60–90 min. Because almost all of the iron transported by transferrin is delivered to the erythroid marrow, the clearance time of transferrin-bound iron from the circulation is affected most by the plasma iron level and the erythroid marrow activity. When erythropoiesis is markedly stimulated, the pool of erythroid cells requiring iron increases and the clearance time of iron from the circulation decreases. The half-clearance time ... Sign in to your MyAccess Account while you are actively authenticated on this website via your institution (you will be able to tell by looking in the top right corner of any page – if you see your institution’s name, you are authenticated). You will then be able to access your institute’s content/subscription for 90 days from any location, after which you must repeat this process for continued access. Ok ## Subscription Options ### AccessMedicine Full Site: One-Year Subscription Connect to the full suite of AccessMedicine content and resources including more than 250 examination and procedural videos, patient safety modules, an extensive drug database, Q&A, Case Files, and more.
• 研究报告 • ### 樟树幼苗对干旱胁迫和种植密度的生理响应 1. (华南农业大学林学与风景园林学院, 广州 510642) • 出版日期:2017-06-10 发布日期:2017-06-10 ### Physiological responses of Cinnamomum camphora seedlings to drought stress and planting density. WANG Zhuo-min, ZHENG Xin-ying, XUE Li*#br# 1. (College of Forestry and Landscape Architecture, South China Agricultural University, Guangzhou 510642, China). • Online:2017-06-10 Published:2017-06-10 Abstract: In order to understand the mechanism of physiological response of plants to drought stress, one-year-old Cinnamomum camphora seedlings with different planting densities (10, 20, 40, 80 seedlings·m-2) were grown in drought stress environments by manual simulation, and their physiological indices were determined at 4, 8, 12, 16, 20 d after drought stress. The results showed that the water content of seedlings under drought stress decreased with the extension of drought duration and the decrease amplitude increased with increasing planting density in each drought stage. The chlorophyll content of seedlings except for the density of 10 trees·m-2 increased and then followed by a decrease with increasing drought duration, and the variation amplitude increased with increasing density. Soluble sugar, soluble protein contents and SOD activity of seedlings under different planting densities increased with increasing drought duration, whereas their proline and MDA contents tended to increase, and the variation amplitude of these indices increased with increasing density. Generally, the drought resistance of C. camphora decreased with increasing density of seedlings.
# Dataset: has_part Languages: en Multilinguality: monolingual Size Categories: 10K<n<100K Language Creators: found Annotations Creators: machine-generated # Dataset Card for [HasPart] ### Dataset Summary This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet. Text Classification / Scoring - meronyms (e.g., plant has part stem) English ## Dataset Structure ### Data Instances {'arg1': 'plant', 'arg2': 'stem', 'score': 0.9991798414303377, 'synset': ['wn.plant.n.02', 'wn.stalk.n.02'], 'wikipedia_primary_page': ['Plant']} ### Data Fields • arg1, arg2: These are the entities of the meronym, i.e., arg1 has_part arg2 • score: Meronymic score per the procedure described below • synset: Ontological classification from WordNet for the two entities • wikipedia_primary_page: Wikipedia page of the entities Note: some examples contain synset / wikipedia info for only one of the entities. ### Data Splits Single training file ## Dataset Creation Our approach to hasPart extraction has five steps: 1. Collect generic sentences from a large corpus 2. Train and apply a RoBERTa model to identify hasPart relations in those sentences 3. Normalize the entity names 4. Aggregate and filter the entries Rather than extract knowledge from arbitrary text, we extract hasPart relations from generic sentences, e.g., “Dogs have tails.”, in order to bias the process towards extractions that are general (apply to most members of a category) and salient (notable enough to write down). As a source of generic sentences, we use GenericsKB, a large repository of 3.4M standalone generics previously harvested from a Webcrawl of 1.7B sentences. ### Annotations #### Annotation process For each sentence S in GenericsKB, we identify all noun chunks in the sentence using a noun chunker (spaCy's Doc.noun chunks). Each chunk is a candidate whole or part. Then, for each possible pair, we use a RoBERTa model to classify whether a hasPart relationship exists between them. The input sentence is presented to RoBERTa as a sequence of wordpiece tokens, with the start and end of the candidate hasPart arguments identified using special tokens, e.g.: [CLS] [ARG1-B]Some pond snails[ARG1-E] have [ARG2-B]gills[ARG2-E] to breathe in water. where [ARG1/2-B/E] are special tokens denoting the argument boundaries. The [CLS] token is projected to two class labels (hasPart/notHasPart), and a softmax layer is then applied, resulting in output probabilities for the class labels. We train with cross-entropy loss. We use RoBERTa-large (24 layers), each with a hidden size of 1024, and 16 attention heads, and a total of 355M parameters. We use the pre-trained weights available with the model and further fine-tune the model parameters by training on our labeled data for 15 epochs. To train the model, we use a hand-annotated set of ∼2k examples.
# Cardinality of this set: $A=\{f: \mathbb{R} \rightarrow \mathbb{R} \text{ continuous} : f(\mathbb{Q})\subseteq\mathbb{Q}\}$ How can I show that the cardinality of this set: $A=\{f: \mathbb{R} \rightarrow \mathbb{R} \text{ continuous} : f(\mathbb{Q})\subseteq\mathbb{Q}\}$ is $2^{\aleph_{0}}$? I know that $A\subseteq \{f: \mathbb{R} \rightarrow \mathbb{R} \text{ continuous} \}$ so #$(A)\leq2^{\aleph_{0}}.$ But I don't know how to show the other inequality. Thanks a lot for your help! I know there is a post with the same question, but I don't understand the answer :( Cardinality of $A=\{f: \mathbb R \to \mathbb R , f \text{ is continuous and} f(\mathbb Q) \subset \mathbb Q\}$ • Can you please link to the post you've mentioned for context? – user61527 Dec 6 '13 at 4:33 • Here is the link math.stackexchange.com/questions/594915/… – Maxi Dec 6 '13 at 4:36 • it would help if you specify what is unclear about the other answer you mention. Dec 6 '13 at 5:11 • @Maxi: let me know if my answer seems unclear. I constructed an uncountable collection of continuous maps from $\mathbb Q \rightarrow \mathbb Q$ that extend continuously into continuous maps $\mathbb R \rightarrow \mathbb R$ Dec 6 '13 at 5:20 Idea: we construct , on each interval $[n,n+1]$ , a countably-infinite collection of linear functions (linear functions $ax+b$ , with both a,b in $\mathbb Q$) mapping $\mathbb Q$ to itself, that extend to continuous functions $f: \mathbb R \rightarrow \mathbb R$. Linear functions extend because they are uniformly-continuous on a dense subset, and uniform continuity is sufficient to extend a function from a dense subset into the whole space. Then we extend each of these functions from $[n,n+1]$ to $[n+1,n+2]$ continuously. Then, each of the $|\mathbb Q|$ functions on $[n,n+1]$ can be (linearly)extended to $[n+1,n+2]$ in $|\mathbb Q|$ ways, so that the total cardinality is $|\mathbb Q| \times |\mathbb Q| \times....$ $|\mathbb Q|$ times. Consider the integers $\mathbb Z$ . We construct an uncountable collection of linear maps from $\mathbb Q$ to $\mathbb Q$, and we use the fact that linear maps, being uniformly-continuous on the dense subset $\mathbb Q$ of $\mathbb R$, extend to a continuous map $f: \mathbb R \rightarrow \mathbb R$ .Start at, say $0$. Then, following the idea of the link, any line thru the point $0$ with rational slope maps $\mathbb Q$ to $\mathbb Q$: take $px+q$ , with $p,q$ both in $\mathbb Q$, since Rationals are closed under multiplication, then $px$ is Rational as a product of Rationals, and when we add $b$ to it we have a sum of Rationals, which is Rational. Notice this choice of line can be made in $|\mathbb Q|= \aleph_0$ ways . Now, extend the function at $x=1$ , starting at the image $a(1)+b$ , and then extend the same way from $x=2$ to $x=3$ , i.e., you defined $a'x+b'$ in $[1,2]$ to be $a'x+b'$ , with both $a',b'$ Rational. This means that each of the $|\mathbb Q|$ choices in each of the interval $[n,n+1]$ can be combined with $\mathbb Q$ choices in $[n+1, n+2]$ , for all integers $n$. So you have a total of $|\mathbb Q|\times |\mathbb Q |\times...|.....$ , all of this $|\mathbb Q|$ times, which gives you an uncountable collection of functions $f: \mathbb Q \rightarrow \mathbb Q$ , that extend to continuous functions $F: \mathbb R \rightarrow \mathbb R$, and you're done. • Thanks a lot! Now I understand! – Maxi Dec 6 '13 at 13:29 • I did another edit, I think it clarified a few things. Dec 6 '13 at 20:06 Define $T(x) = \max\{0,1-|2x|\}$, so that $T(x)$ is a continuous "tent function" supported on the interval $({-}\frac12,\frac12)$. For any subset $S\subseteq\mathbb Z$ of the integers, the function $$F_S(x) = \sum_{n\in S} T(x-n)$$ is a continuous function with a "tent" of width $1$ at every integer in $S$ and flat everywhere else. There are $2^{\aleph_0}$ subsets $S$ of $\mathbb Z$, hence $2^{\aleph_0}$ such continuous functions (they're all different, by checking their values on integers); and they all map rational numbers to rational numbers.
# Random subspaces of a tensor product (I) This is the first post in a series about a problem inside RMT QIT that I have been working on for some time now [cn2,bcn]. Since I find it to be very simple and interesting, I will present it in a series of blog notes that should be accessible to a large audience. I will also use this material to prepare the talks I will be giving this summer on this topic ;). In what follows, all vector spaces shall be assumed to be complex and are fixed constants. For a vector , the symbol denotes its ordered version, i.e. and are the same up to permutation of coordinates and . 1. Singular values of vectors in a tensor product Using the non-canonical isomorphism , one can see any vector as a matrix In this way, by using the singular value decomposition of the matrix (keep in mind that we assume ), one can write where , resp. are orthonormal families in , resp. . The vector is the singular value vector of and we shall always assume that it is ordered . It satisfies the normalization condition In particular, if is a unit vector, then , where is the probability simplex and is its ordered version. In QIT, the decomposition of above is called the Schmidt decomposition and the numbers are called the Schmidt coefficients of the pure state . 2. The singular value set of a vector subspace Consider now a subspace of dimension and define the set called the singular value subset of the subspace . Below are some examples of sets , in the case , where the simplex is two-dimensional. In all the four cases, and . In the last two pictures, one of the vectors spanning the subspace has singular values . 3. Basic properties Below is a list of very simple properties of the sets . Proposition 1. The set is a compact subset of the ordered probability simplex having the following properties: 1. Local invariance: , for unitary matrices and . 2. Monotonicity: if , then . 3. If , , then . 4. If , then . Proof: The first three statements are trivial. The last one is contained in [cmw], Proposition 6 and follows from a standard result in algebraic geometry about the dimension of the intersection of projective varieties. 4. So, what is the problem ? The question one would like to answer is the following: How does a typical look like ? In order to address this, I will introduce random subspaces in the next post future. In the next post, I look at the special case of anti-symmetric tensors. References [bcn] S. Belinschi, B. Collins and I. Nechita, Laws of large numbers for eigenvectors and eigenvalues associated to random subspaces in a tensor product, to appear in Invent. Math. [cn2] B. Collins and I. Nechita, Random quantum channels II: Entanglement of random subspaces, Rényi entropy estimates and additivity problems, Adv. in Math. 226 (2011), 1181--1201. [cmw] T. Cubitt, A. Montanaro and A. Winter, On the dimension of subspaces with bounded Schmidt rank, J. Math. Phys. 49, 022107 (2008).
## Section1.2The set of real numbers Note: 2 lectures, the extended real numbers are optional ### Subsection1.2.1The set of real numbers We finally get to the real number system. To simplify matters, instead of constructing the real number set from the rational numbers, we simply state their existence as a theorem without proof. Notice that $$\Q$$ is an ordered field. Note that also $$\N \subset \Q\text{.}$$ We saw that $$1 > 0\text{.}$$ By induction (exercise) we can prove that $$n > 0$$ for all $$n \in \N\text{.}$$ Similarly, we verify simple statements about rational numbers. For example, we proved that if $$n > 0\text{,}$$ then $$\nicefrac{1}{n} > 0\text{.}$$ Then $$m < k$$ implies $$\nicefrac{m}{n} < \nicefrac{k}{n}\text{.}$$ Let us prove one of the most basic but useful results about the real numbers. The following proposition is essentially how an analyst proves an inequality. #### Proof. If $$x > 0\text{,}$$ then $$0 < \nicefrac{x}{2} < x$$ (why?). Taking $$\epsilon = \nicefrac{x}{2}$$ obtains a contradiction. Thus $$x \leq 0\text{.}$$ Another useful version of this idea is the following equivalent statement for nonnegative numbers: If $$x \geq 0$$ is such that $$x \leq \epsilon$$ for all $$\epsilon > 0\text{,}$$ then $$x = 0\text{.}$$ And to prove that $$x \geq 0$$ in the first place, an analyst might prove that all $$x \geq -\epsilon$$ for all $$\epsilon > 0\text{.}$$ From now on, when we say $$x \geq 0$$ or $$\epsilon > 0\text{,}$$ we automatically mean that $$x \in \R$$ and $$\epsilon \in \R\text{.}$$ A related simple fact is that any time we have two real numbers $$a < b\text{,}$$ then there is another real number $$c$$ such that $$a < c < b\text{.}$$ Take, for example, $$c = \frac{a+b}{2}$$ (why?). In fact, there are infinitely many real numbers between $$a$$ and $$b\text{.}$$ We will use this fact in the next example. The most useful property of $$\R$$ for analysts is not just that it is an ordered field, but that it has the least-upper-bound property. Essentially, we want $$\Q\text{,}$$ but we also want to take suprema (and infima) willy-nilly. So what we do is take $$\Q$$ and throw in enough numbers to obtain $$\R\text{.}$$ We mentioned already that $$\R$$ contains elements that are not in $$\Q$$ because of the least-upper-bound property. Let us prove it. We saw there is no rational square root of two. The set $$\{ x \in \Q : x^2 < 2 \}$$ implies the existence of the real number $$\sqrt{2}\text{,}$$ although this fact requires a bit of work. See also Exercise 1.2.14. #### Example1.2.3. Claim: There exists a unique positive $$r \in \R$$ such that $$r^2 = 2\text{.}$$ We denote $$r$$ by $$\sqrt{2}\text{.}$$ ##### Proof. Take the set $$A := \{ x \in \R : x^2 < 2 \}\text{.}$$ We first show that $$A$$ is bounded above and nonempty. The equation $$x \geq 2$$ implies $$x^2 \geq 4$$ (see Exercise 1.1.3), so if $$x^2 < 2\text{,}$$ then $$x < 2\text{,}$$ and $$A$$ is bounded above. As $$1 \in A\text{,}$$ the set $$A$$ is nonempty. We can therefore find the supremum. Let $$r := \sup\, A\text{.}$$ We will show that $$r^2 = 2$$ by showing that $$r^2 \geq 2$$ and $$r^2 \leq 2\text{.}$$ This is the way analysts show equality, by showing two inequalities. We already know that $$r \geq 1 > 0\text{.}$$ In the following, it may seem we are pulling certain expressions out of a hat. When writing a proof such as this we would, of course, come up with the expressions only after playing around with what we wish to prove. The order in which we write the proof is not necessarily the order in which we come up with the proof. Let us first show that $$r^2 \geq 2\text{.}$$ Take a positive number $$s$$ such that $$s^2 < 2\text{.}$$ We wish to find an $$h > 0$$ such that $${(s+h)}^2 < 2\text{.}$$ As $$2-s^2 > 0\text{,}$$ we have $$\frac{2-s^2}{2s+1} > 0\text{.}$$ Choose an $$h \in \R$$ such that $$0 < h < \frac{2-s^2}{2s+1}\text{.}$$ Furthermore, assume $$h < 1\text{.}$$ Estimate, \begin{equation*} \begin{aligned} {(s+h)}^2 - s^2 & = h(2s + h) \\ & < h(2s+1) & & \quad \bigl(\text{since } h < 1\bigr) \\ & < 2-s^2 & & \quad \bigl(\text{since } h < \tfrac{2-s^2}{2s+1} \bigr) . \end{aligned} \end{equation*} Therefore, $${(s+h)}^2 < 2\text{.}$$ Hence $$s+h \in A\text{,}$$ but as $$h > 0\text{,}$$ we have $$s+h > s\text{.}$$ So $$s < r = \sup\, A\text{.}$$ As $$s$$ was an arbitrary positive number such that $$s^2 < 2\text{,}$$ it follows that $$r^2 \geq 2\text{.}$$ Now take a positive number $$s$$ such that $$s^2 > 2\text{.}$$ We wish to find an $$h > 0$$ such that $${(s-h)}^2 > 2$$ and $$s-h$$ is still positive. As $$s^2-2 > 0\text{,}$$ we have $$\frac{s^2-2}{2s} > 0\text{.}$$ Let $$h := \frac{s^2-2}{2s}\text{,}$$ and check $$s-h=s-\frac{s^2-2}{2s} = \frac{s}{2}+\frac{1}{s} > 0\text{.}$$ Estimate, \begin{equation*} \begin{aligned} s^2 - {(s-h)}^2 & = 2sh - h^2 \\ & < 2sh & & \quad \bigl( \text{since } h^2 > 0 \text{ as } h \not= 0 \bigr) \\ & = s^2-2 & & \quad \bigl( \text{since } h = \tfrac{s^2-2}{2s} \bigr) . \end{aligned} \end{equation*} By subtracting $$s^2$$ from both sides and multiplying by $$-1\text{,}$$ we find $${(s-h)}^2 > 2\text{.}$$ Therefore, $$s-h \notin A\text{.}$$ Moreover, if $$x \geq s-h\text{,}$$ then $$x^2 \geq {(s-h)}^2 > 2$$ (as $$x > 0$$ and $$s-h > 0$$) and so $$x \notin A\text{.}$$ Thus, $$s-h$$ is an upper bound for $$A\text{.}$$ However, $$s-h < s\text{,}$$ or in other words, $$s > r = \sup\, A\text{.}$$ Hence, $$r^2 \leq 2\text{.}$$ Together, $$r^2 \geq 2$$ and $$r^2 \leq 2$$ imply $$r^2 = 2\text{.}$$ The existence part is finished. We still need to handle uniqueness. Suppose $$s \in \R$$ such that $$s^2 = 2$$ and $$s > 0\text{.}$$ Thus $$s^2 = r^2\text{.}$$ However, if $$0 < s < r\text{,}$$ then $$s^2 < r^2\text{.}$$ Similarly, $$0 < r < s$$ implies $$r^2 < s^2\text{.}$$ Hence $$s = r\text{.}$$ The number $$\sqrt{2} \notin \Q\text{.}$$ The set $$\R \setminus \Q$$ is called the set of irrational numbers. We just saw that $$\R \setminus \Q$$ is nonempty. Not only is it nonempty, we will see later that it is very large indeed. Using the same technique as above, we can show that a positive real number $$x^{1/n}$$ exists for all $$n\in \N$$ and all $$x > 0\text{.}$$ That is, for each $$x > 0\text{,}$$ there exists a unique positive real number $$r$$ such that $$r^n = x\text{.}$$ The proof is left as an exercise. ### Subsection1.2.2Archimedean property As we have seen, there are plenty of real numbers in any interval. But there are also infinitely many rational numbers in any interval. The following is one of the fundamental facts about the real numbers. The two parts of the next theorem are actually equivalent, even though it may not seem like that at first sight. #### Proof. Let us prove i. Divide through by $$x\text{.}$$ Then i says that for every real number $$t:= \nicefrac{y}{x}\text{,}$$ we can find $$n \in \N$$ such that $$n > t\text{.}$$ In other words, i says that $$\N \subset \R$$ is not bounded above. Suppose for contradiction that $$\N$$ is bounded above. Let $$b := \sup \N\text{.}$$ The number $$b-1$$ cannot possibly be an upper bound for $$\N$$ as it is strictly less than $$b$$ (the least upper bound). Thus there exists an $$m \in \N$$ such that $$m > b-1\text{.}$$ Add one to obtain $$m+1 > b\text{,}$$ contradicting $$b$$ being an upper bound. Let us tackle ii. See Figure 1.2 for a picture of the idea behind the proof. First assume $$x \geq 0\text{.}$$ Note that $$y-x > 0\text{.}$$ By i, there exists an $$n \in \N$$ such that Again by i the set $$A := \{ k \in \N : k > nx \}$$ is nonempty. By the well ordering property of $$\N\text{,}$$ $$A$$ has a least element $$m\text{,}$$ and as $$m \in A\text{,}$$ then $$m > nx\text{.}$$ Divide through by $$n$$ to get $$x < \nicefrac{m}{n}\text{.}$$ As $$m$$ is the least element of $$A\text{,}$$ $$m-1 \notin A\text{.}$$ If $$m > 1\text{,}$$ then $$m-1 \in \N\text{,}$$ but $$m-1 \notin A$$ and so $$m-1 \leq nx\text{.}$$ If $$m=1\text{,}$$ then $$m-1 = 0\text{,}$$ and $$m-1 \leq nx$$ still holds as $$x \geq 0\text{.}$$ In other words, \begin{equation*} m-1 \leq nx \qquad \text{or} \qquad m \leq nx+1 . \end{equation*} On the other hand, from $$n(y-x) > 1$$ we obtain $$ny > 1+nx\text{.}$$ Hence $$ny > 1+nx \geq m\text{,}$$ and therefore $$y > \nicefrac{m}{n}\text{.}$$ Putting everything together we obtain $$x < \nicefrac{m}{n} < y\text{.}$$ So take $$r = \nicefrac{m}{n}\text{.}$$ Now assume $$x < 0\text{.}$$ If $$y > 0\text{,}$$ then just take $$r=0\text{.}$$ If $$y \leq 0\text{,}$$ then $$0 \leq -y < -x\text{,}$$ and we find a rational $$q$$ such that $$-y < q < -x\text{.}$$ Then take $$r = -q\text{.}$$ Let us state and prove a simple but useful corollary of the Archimedean property. #### Proof. Let $$A := \{ \nicefrac{1}{n} : n \in \N \}\text{.}$$ Obviously $$A$$ is not empty. Furthermore, $$\nicefrac{1}{n} > 0$$ for all $$n \in \N\text{,}$$ and so 0 is a lower bound, and $$b := \inf\, A$$ exists. As 0 is a lower bound, then $$b \geq 0\text{.}$$ Take an arbitrary $$a > 0\text{.}$$ By the Archimedean property there exists an $$n$$ such that $$na > 1\text{,}$$ or in other words $$a > \nicefrac{1}{n} \in A\text{.}$$ Therefore, $$a$$ cannot be a lower bound for $$A\text{.}$$ Hence $$b=0\text{.}$$ ### Subsection1.2.3Using supremum and infimum Suprema and infima are compatible with algebraic operations. For a set $$A \subset \R$$ and $$x \in \R$$ define \begin{equation*} \begin{aligned} x + A & := \{ x+y \in \R : y \in A \} , \\ xA & := \{ xy \in \R : y \in A \} . \end{aligned} \end{equation*} For example, if $$A = \{ 1,2,3 \}\text{,}$$ then $$5+A = \{ 6,7,8 \}$$ and $$3A = \{ 3,6,9 \}\text{.}$$ Do note that multiplying a set by a negative number switches supremum for an infimum and vice versa. Also, as the proposition implies that supremum (resp. infimum) of $$x+A$$ or $$xA$$ exists, it also implies that $$x+A$$ or $$xA$$ is nonempty and bounded above (resp. below). #### Proof. Let us only prove the first statement. The rest are left as exercises. Suppose $$b$$ is an upper bound for $$A\text{.}$$ That is, $$y \leq b$$ for all $$y \in A\text{.}$$ Then $$x+y \leq x+b$$ for all $$y \in A\text{,}$$ and so $$x+b$$ is an upper bound for $$x+A\text{.}$$ In particular, if $$b = \sup\, A\text{,}$$ then \begin{equation*} \sup (x+A) \leq x+b = x+ \sup\, A . \end{equation*} The opposite inequality is similar. If $$b$$ is an upper bound for $$x+A\text{,}$$ then $$x+y \leq b$$ for all $$y \in A$$ and so $$y \leq b-x$$ for all $$y \in A\text{.}$$ So $$b-x$$ is an upper bound for $$A\text{.}$$ If $$b = \sup (x+A)\text{,}$$ then \begin{equation*} \sup\, A \leq b-x = \sup (x+A) -x . \end{equation*} The result follows. Sometimes we need to apply supremum or infimum twice. Here is an example. #### Proof. Any $$x \in A$$ is a lower bound for $$B\text{.}$$ Therefore $$x \leq \inf\, B$$ for all $$x \in A\text{,}$$ so $$\inf\, B$$ is an upper bound for $$A\text{.}$$ Hence, $$\sup\, A \leq \inf\, B\text{.}$$ We must be careful about strict inequalities and taking suprema and infima. Note that $$x < y$$ whenever $$x \in A$$ and $$y \in B$$ still only implies $$\sup\, A \leq \inf\, B\text{,}$$ and not a strict inequality. This is an important subtle point that comes up often. For example, take $$A := \{ 0 \}$$ and take $$B := \{ \nicefrac{1}{n} : n \in \N \}\text{.}$$ Then $$0 < \nicefrac{1}{n}$$ for all $$n \in \N\text{.}$$ However, $$\sup\, A = 0$$ and $$\inf\, B = 0\text{.}$$ The proof of the following often used elementary fact is left to the reader. A similar statement holds for infima. To make using suprema and infima even easier, we may want to write $$\sup\, A$$ and $$\inf\, A$$ without worrying about $$A$$ being bounded and nonempty. We make the following natural definitions. #### Definition1.2.9. Let $$A \subset \R$$ be a set. 1. If $$A$$ is empty, then $$\sup\, A := -\infty\text{.}$$ 2. If $$A$$ is not bounded above, then $$\sup\, A := \infty\text{.}$$ 3. If $$A$$ is empty, then $$\inf\, A := \infty\text{.}$$ 4. If $$A$$ is not bounded below, then $$\inf\, A := -\infty\text{.}$$ For convenience, $$\infty$$ and $$-\infty$$ are sometimes treated as if they were numbers, except we do not allow arbitrary arithmetic with them. We make $$\R^* := \R \cup \{ -\infty , \infty\}$$ into an ordered set by letting \begin{equation*} -\infty < \infty \quad \text{and} \quad -\infty < x \quad \text{and} \quad x < \infty \quad \text{for all $x \in \R$}. \end{equation*} The set $$\R^*$$ is called the set of extended real numbers. It is possible to define some arithmetic on $$\R^*\text{.}$$ Most operations are extended in an obvious way, but we must leave $$\infty-\infty\text{,}$$ $$0 \cdot (\pm\infty)\text{,}$$ and $$\frac{\pm\infty}{\pm\infty}$$ undefined. We refrain from using this arithmetic, it leads to easy mistakes as $$\R^*$$ is not a field. Now we can take suprema and infima without fear of emptiness or unboundedness. In this book, we mostly avoid using $$\R^*$$ outside of exercises, and leave such generalizations to the interested reader. ### Subsection1.2.4Maxima and minima By Exercise 1.1.2, a finite set of numbers always has a supremum or an infimum that is contained in the set itself. In this case we usually do not use the words supremum or infimum. When a set $$A$$ of real numbers is bounded above, such that $$\sup\, A \in A\text{,}$$ then we can use the word maximum and the notation $$\max\, A$$ to denote the supremum. Similarly for infimum: When a set $$A$$ is bounded below and $$\inf\, A \in A\text{,}$$ then we can use the word minimum and the notation $$\min\, A\text{.}$$ For example, \begin{equation*} \begin{aligned} & \max \{ 1,2.4,\pi,100 \} = 100 , \\ & \min \{ 1,2.4,\pi,100 \} = 1 . \end{aligned} \end{equation*} While writing $$\sup$$ and $$\inf$$ may be technically correct in this situation, $$\max$$ and $$\min$$ are generally used to emphasize that the supremum or infimum is in the set itself. ### Subsection1.2.5Exercises #### Exercise1.2.1. Prove that if $$t > 0$$ ($$t \in \R$$), then there exists an $$n \in \N$$ such that $$\dfrac{1}{n^2} < t\text{.}$$ #### Exercise1.2.2. Prove that if $$t \geq 0$$ ($$t \in \R$$), then there exists an $$n \in \N$$ such that $$n-1 \leq t < n\text{.}$$ #### Exercise1.2.4. Let $$x, y \in \R\text{.}$$ Suppose $$x^2 + y^2 = 0\text{.}$$ Prove that $$x = 0$$ and $$y = 0\text{.}$$ #### Exercise1.2.5. Show that $$\sqrt{3}$$ is irrational. #### Exercise1.2.6. Let $$n \in \N\text{.}$$ Show that either $$\sqrt{n}$$ is either an integer or it is irrational. #### Exercise1.2.7. Prove the arithmetic-geometric mean inequality. That is, for two positive real numbers $$x,y\text{,}$$ we have \begin{equation*} \sqrt{xy} \leq \frac{x+y}{2} . \end{equation*} Furthermore, equality occurs if and only if $$x=y\text{.}$$ #### Exercise1.2.8. Show that for every pair of real numbers $$x$$ and $$y$$ such that $$x < y\text{,}$$ there exists an irrational number $$s$$ such that $$x < s < y\text{.}$$ Hint: Apply the density of $$\Q$$ to $$\dfrac{x}{\sqrt{2}}$$ and $$\dfrac{y}{\sqrt{2}}\text{.}$$ #### Exercise1.2.9. Let $$A$$ and $$B$$ be two nonempty bounded sets of real numbers. Let $$C := \{ a+b : a \in A, b \in B \}\text{.}$$ Show that $$C$$ is a bounded set and that \begin{equation*} \sup\,C = \sup\,A + \sup\,B \qquad \text{and} \qquad \inf\,C = \inf\,A + \inf\,B . \end{equation*} #### Exercise1.2.10. Let $$A$$ and $$B$$ be two nonempty bounded sets of nonnegative real numbers. Define the set $$C := \{ ab : a \in A, b \in B \}\text{.}$$ Show that $$C$$ is a bounded set and that \begin{equation*} \sup\,C = (\sup\,A )( \sup\,B) \qquad \text{and} \qquad \inf\,C = (\inf\,A )( \inf\,B). \end{equation*} #### Exercise1.2.11. (Hard)   Given $$x > 0$$ and $$n \in \N\text{,}$$ show that there exists a unique positive real number $$r$$ such that $$x = r^n\text{.}$$ Usually $$r$$ is denoted by $$x^{1/n}\text{.}$$ #### Exercise1.2.13. Prove the so-called Bernoulli's inequality 4 : If $$1+x > 0\text{,}$$ then for all $$n \in \N\text{,}$$ we have $$(1+x)^n \geq 1+nx\text{.}$$ #### Exercise1.2.14. Prove $$\sup \{ x \in \Q : x^2 < 2 \} = \sup \{ x \in \R : x^2 < 2 \}\text{.}$$ #### Exercise1.2.15. 1. Prove that given $$y \in \R\text{,}$$ we have $$\sup \{ x \in \Q : x < y \} = y\text{.}$$ 2. Let $$A \subset \Q$$ be a set that is bounded above such that whenever $$x \in A$$ and $$t \in \Q$$ with $$t < x\text{,}$$ then $$t \in A\text{.}$$ Further suppose $$\sup\, A \not\in A\text{.}$$ Show that there exists a $$y \in \R$$ such that $$A = \{ x \in \Q : x < y \}\text{.}$$ A set such as $$A$$ is called a Dedekind cut. 3. Show that there is a bijection between $$\R$$ and Dedekind cuts. Note: Dedekind used sets as in part b) in his construction of the real numbers. #### Exercise1.2.16. Prove that if $$A \subset \Z$$ is a nonempty subset bounded below, then there exists a least element in $$A\text{.}$$ Now describe why this statement would simplify the proof of Theorem 1.2.4 part ii so that you do not have to assume $$x \geq 0\text{.}$$ #### Exercise1.2.17. Let us suppose we know $$x^{1/n}$$ exists for every $$x > 0$$ and every $$n \in \N$$ (see Exercise 1.2.11 above). For integers $$p$$ and $$q > 0$$ where $$\nicefrac{p}{q}$$ is in lowest terms, define $$x^{p/q} := {(x^{1/q})}^p\text{.}$$ 1. Show that the power is well-defined even if the fraction is not in lowest terms: If $$\nicefrac{p}{q} = \nicefrac{m}{k}$$ where $$m$$ and $$k > 0$$ are integers, then $${(x^{1/q})}^p = {(x^{1/m})}^k\text{.}$$ 2. Let $$x$$ and $$y$$ be two positive numbers and $$r$$ a rational number. Assuming $$r > 0\text{,}$$ show $$x < y$$ if and only if $$x^r < y^r\text{.}$$ Then suppose $$r < 0$$ and show: $$x < y$$ if and only if $$x^r > y^r\text{.}$$ 3. Suppose $$x > 1$$ and $$r,s$$ are rational where $$r < s\text{.}$$ Show $$x^r < x^s\text{.}$$ If $$0 < x < 1$$ and $$r < s\text{,}$$ show that $$x^r > x^s\text{.}$$ Hint: Write $$r$$ and $$s$$ with the same denominator. 4. (Challenging) 6  For an irrational $$z \in \R \setminus \Q$$ and $$x > 1$$ define $$x^z := \sup \{ x^r : r \leq z, r \in \Q \}\text{,}$$ for $$x=1$$ define $$1^z = 1\text{,}$$ and for $$0 < x < 1$$ define $$x^z := \inf \{ x^r : r \leq z, r \in \Q \}\text{.}$$ Prove the two assertions of part b) for all real $$z\text{.}$$ Uniqueness is up to isomorphism, but we wish to avoid excessive use of algebra. For us, it is simply enough to assume that a set of real numbers exists. See Rudin [R2] for the construction and more details. Named after the Ancient Greek mathematician Archimedes of Syracuse 3  (c. 287 BC – c. 212 BC). This property is Axiom V from Archimedes' “On the Sphere and Cylinder” 225 BC. https://en.wikipedia.org/wiki/Archimedes Named after the Swiss mathematician Jacob Bernoulli 5  (1655–1705). https://en.wikipedia.org/wiki/Jacob_Bernoulli In Section 5.4 we will define exponential and the logarithm and define $$x^z := \exp(z \ln x)\text{.}$$ We will then have sufficient machinery to make proofs of these assertions far easier. At this point, however, we do not yet have these tools. For a higher quality printout use the PDF versions: https://www.jirka.org/ra/realanal.pdf or https://www.jirka.org/ra/realanal2.pdf
# Simplify the Expression ## 28 October 2013 This sequence interests me chiefly because at first glance it looks wrong. $e^{\ln^2 x}$ should expand to $\left(e^{\ln x}\right)^{2}$, should it not? Only, it doesn’t because of the algebraic properties of exponents. Given such context, it makes more sense and becomes more obvious why the original simplifies the way it does. I restrict $x$ and $y$ to $\mathbb{R}$ because I haven’t yet investigated whether the properties hold for the imaginary numbers.
# The sound of me fits in quite a lot When combined with a number, I form a race. But combined with my position, it sounds like distaste. Knock me over and I'm endless. But to knock me in early would be careless. I'm almost the smallest amount in here. But my value is necessary to your survival out there. You are: EIGHT When combined with a number, I form a race. A number is a FIGURE. FIGURE + EIGHT gives Figure 8 Racing. (Simultaneously hinted by OP in The Sphinx's Lair and offered by @Hugh Meyers in a comment.) But combined with my position, it sounds like distaste. Eighth letter is H. H + EIGHT ("ATE") sounds like HATE. Knock me over and I'm endless. An EIGHT sideways is ∞ But to knock me in early would be careless. The EIGHT-ball, knocked in early, loses in billiards. I'm almost the smallest amount in here. EIGHT BITS = 1 byte, essentially the smallest data object used for information "here" in SE. (From OP's comment on another answer) But my value is necessary to your survival out there. Element EIGHT on periodic table is Oxygen, which is absolutely necessary for survival "out there" in the Real World. Title: A lot of words end with -ATE, the sound of EIGHT. • While the 800m does fit it's not the one I was thinking of. Not a literal number for a hint. – n_plum Jun 27 '17 at 15:41 • Perhaps Figure-8 racing? A number is a figure. – Hugh Meyers Jun 27 '17 at 15:53 • @HughMeyers Nice - I was also fairly inspired from your riddles lately - more so on a thematic sense, as mine is far less clever than yours :) – n_plum Jun 27 '17 at 15:55 • @n_palum Cool! Nice to feel I've made a contribution. I enjoyed your riddle as well. – Hugh Meyers Jun 27 '17 at 19:00 • Those clues are so good at misdirection, I guessed "eight" almost immediately and then gave up thinking I was nowhere near. – Darren Ringer Jun 27 '17 at 20:41 8 2: H8 3: 8 on its side is the infinity symbol 5: It is one of the smaller numbers If anyone can explain the other clues that would be great, I don't have much time to ponder them currently. • Title: the word has a few homophones. – Rand al'Thor Jun 27 '17 at 15:41 • "knocked in early": the 8-ball in the game of pool. – Rand al'Thor Jun 27 '17 at 15:44 I got ninja'd twice while writing this but I'm posting it anyway. I think you are Eight When combined with a number, I form a race. Not sure, but could be the 800 metres But combined with my position, it sounds like distaste. H8 => Hate Knock me over and I'm endless. This is the clincher for me: 8 rotated 90 degrees becomes the infinity symbol ∞ But to knock me in early would be careless. In 8-ball pool, knocking in the 8-ball before the other seven loses you the game. I'm almost the smallest amount in here. 8 bits = 1 byte But my value is necessary to your survival out there. @Rubio got this one: the 8th element is oxygen, which we need to breathe Title "Eight" is the longest one-syllable number • One of your lines is actually the intended answer for a different line that both others are missing. – n_plum Jun 27 '17 at 15:40 You are: Eight When combined with a number, I form a race. The 'number' is Pi, Eighth Race Pie. But combined with my position, it sounds like distaste. The position in the alphabet is H, H8, Hate. Knock me over and I'm endless. Eight (8) 'knocked over' on its side is the infinity symbol. But to knock me in early would be careless. 'Knocking out' the 8-Ball in pool would cause you to lose. I'm almost the smallest amount in here. 8 is one of the lowest numbers. But my value is necessary to your survival out there. Oxygen has an atomic value of 8. Which is necessary for our survival. • While the race one is interesting, that is not intended, otherwise this adds nothing new from the other existing answers. – n_plum Jun 27 '17 at 15:54 • Could you elaborate on "Eighth Race Pie"? I'm not familiar with it. – F1Krazy Jun 27 '17 at 15:54 • @n_palum I thought it was a reach! – BreakingMyself Jun 27 '17 at 15:55 • @F1Krazy it's a type of chocolate pie. – BreakingMyself Jun 27 '17 at 15:55 • @BreakingMyself Ah, I see, that's quite a clever interpretation. – F1Krazy Jun 27 '17 at 15:56
# Can we actually have null curves in Minkowski space? I know that this sounds really stupid but, when I think of the Minkowski space I cannot imagine a null curve, only null lines. For me, the only possible way to have one is to change the basis of the space for one that is not orthogonal, and that don't make any practical sense for me. And almost the same goes to null surfaces... I just can't think of any other than null planes and null cones. And because of this, I also have doubts of what I think is a null curve and null surface in general relativity. • A line is a curve. – Slereah Oct 20 '15 at 11:20 • Is it just the terminology that is confusing you, i.e. the terms line and curve? If so, the term curve includes straight lines. – John Rennie Oct 20 '15 at 11:21 • No, for instance. A lot of books consider a congruence of null curves that can have some shear... If are lines, how is it possible to they to have shear? and then what is the purpose of the Newman-Penrose formalism in special relativity? – raul Oct 20 '15 at 11:26 • I'm reading about Twistors by the way... – raul Oct 20 '15 at 11:27 Regarding null curves in flat space, how about $$X(t) = (t,x,y) = (\tau, \cos(\tau), \sin(\tau)) .$$ Then $$V(t) = (\dot t, \dot x, \dot y) = (1,-\sin(\tau), \cos(\tau))$$ in which case $V^2 = 0$. Any particle moving in $\mathbb{R}^3$ along any curve with constant speed $|v|=c$ will trace a null curve in Minkowski space.
# Components of unit vectors 1. Mar 18, 2010 ### roam 1. The problem statement, all variables and given/known data A golfer takes three putts to get the ball into the hole of a level putting green. The first putt displaces the ball 2.5 m North, the second 4.8 m South-East and the third 5.7 m South-West. Express each of these three displacements in unit vector notation where i is a unit vector pointing due East and j is a unit vector pointing due North. E.g the first displacement is: $$(0)i, (2.5)j$$ 3. The attempt at a solution The first one is obvious but the other two aren't. For example for the second part the question says it is 4.8 m South-East BUT it doesn't say how many degrees due South-East. Without the angle, it is impossible for me to find the rectangular components (i.e $$4.8 sin(\theta), 4.8 cos(\theta)$$). Any suggestions? 2. Mar 18, 2010 ### rl.bhat When they say south of east or south of west the angle is always 45 degrees with east-west line. 3. Mar 20, 2010 ### roam Oh thanks! I found the three displacements in unit vector notation: $$0i, 2.5j$$ $$3.39i, -3.39j$$ $$-4.03i, -4.03j$$
What is "nascent oxygen", and how does it relate to potassium permanganate? Sep 10, 2017 $\text{Nascent oxygen}$ is a bit of an old-fashioned term...... Explanation: But you really just need to heat up $\text{potassium permanganate}$: oxygen (as oxide) is oxidized, and manganese is reduced...... $M n {O}_{4}^{-} + 4 {H}^{+} + 3 {e}^{-} \rightarrow M n {O}_{2} \left(s\right) + 2 {H}_{2} O$ $\left(i\right)$ ${O}^{2 -} \rightarrow \frac{1}{2} {O}_{2} \left(g\right) + 2 {e}^{-}$ $\left(i i\right)$ And so take $2 \times \left(i\right) + 3 \times \left(i i\right)$ $2 M n {O}_{4}^{-} + {\underbrace{8 {H}^{+} + 3 {O}^{2 -}}}_{3 {H}_{2} O + 2 {H}^{+}} + 6 {e}^{-} \rightarrow 2 M n {O}_{2} \left(s\right) + 4 {H}_{2} O + \frac{3}{2} {O}_{2} \left(g\right) + 6 {e}^{-}$ And we cancel out common reagents..... $2 M n {O}_{4}^{-} + 2 {H}^{+} \rightarrow 2 M n {O}_{2} \left(s\right) + {H}_{2} O + \frac{3}{2} {O}_{2} \left(g\right)$ Charge and mass are balanced as is required....... We could look at this another way, and consider the decomposition of (shortlived!) permanganic acid: $2 H M n {O}_{4} \rightarrow 2 M n {O}_{2} \left(s\right) + {H}_{2} O + \frac{3}{2} {O}_{2} \left(g\right)$
@Template # Previous Page We guessed (with some optimisation) a wavefunction for Benzene that was somewhat realistic. Diffusion Monte Carlo allows you to improve upon your energy estimate using the following observation. Let's define a new function, $f$ like this: $$f(\mathbf{x}) = \psi_{true} \psi_{guessed}.$$ This new function $f$, has two important things about it: • The weighted average local energy, weighted by f, taking the local energy to be the energy of the guessed wavefunction, is actually the true energy. (Subject to the guessed wavefunction having correctly placed nodes). • We can actually generate a series of points with a population density proportional to $f(\mathbf{x})$, even though we don't know what $\psi_{real}$ is. This is very helpful: We don't know what the real wavefunction is, but can generate a population of points that follows something related to it. And luckily, that something related to it has a way to get the "true" energy from it. The way to get a population of points with density $f(\mathbf{x})$ comes from the observation that if we assume that the energy of the wavefunction $\psi_{real}$ is minimal, and we use the normal Schroedinger equation to define the energy, we can derive that: $$\frac{\partial f}{\partial t} = \frac{1}{2}\nabla_i f - \nabla . \frac{\nabla \psi_{guess}}{\psi_{guess}} f + (E(\mathbf{x}) - E_{target})f,$$ here $t$ is a variable that as it increases, $f$ tends to the correct value. When interpreted as a diffusion equation, $t$ is time. This equation is simply a diffusion equation with some forces: It's exactly the same equation you would use to track suspended bacteria particles in moving air. The particles need to be able to divide, and die off too, to get the equation to match, but that isn't hard to simulate. So all we do is make an educated initial guessed wavefunction, and then run such a simulation, and that gets us a million or more particles whose distribution in space is equal to $f(\mathbf{x})$. And if we average the local energy over the positions of all these particles, we'll get the true energy. Or not. Unfortunately, there is a requirement that the positions where the guessed wavefunction is zero, are correct, and that can lead to inaccuracy. Another problem is that you need to run this for a long time before you get a completely smooth distribution of $f$. The smoothness itself doesn't matter, but it isn't good to have big regions of space with few or no particles, and that leads to random noise in the energy estimate. ## Moon Formation A Kotlin N-Body code, and lots of animations of the collision between Earth and a hypothetical Theia that people think created the moon. ## EMG Analysis A page with a javascript application where you can interact with EMG data using various filters. © Hugo2015. Session @sessionNumber
# String Landscape, De Sitter vacua and Broken Supersymmetry If we assume that the swampland conjectures, etc. regarding de sitter vacuas existence in the string / F-theory landscape turn out to be incorrect (and therefore we can assume the problem is well-posed), would all such solutions have broken supersymmetry? There are a number of obvious “difficulties” with formulating (at least for the case of generic qft on curved spacetime) particle theories on de Sitter like spaces (ex. non-trivial hamiltonian from vanishing of globally timelike killing vectors, etc.). One feature of de Sitter space, that is probably obvious but likely not trivial (at least from first glance) is the non-existence of positive conserved energy and its requirement for broken supersymmetry. Have I made an error, or is this a “just so” theorem regarding de Sitter vacua and the necessity for broken supersymmetry? • My reasoning for believing susy must be broken is as follows: if susy were not broken (with the positive energy condition in mind) the supercharge would be hermitian and pick up non zero value, and its square would be forced non-zero, and would be a conserved positive (?) bosonic value. This doesn’t make sense (unless I’m missing something) forcing susy to be broken by necessity. – alex sharma Jan 10 '20 at 0:38 • If SUSY was unbroken then we would've detected it. – redhood Apr 8 '20 at 19:23 • – Qmechanic Apr 8 '20 at 19:39 • @Qmechanic thank you, i appreciate the link. – alex sharma Apr 9 '20 at 5:22 You're right. Realistic string compactifications require four macroscopic dimensions. For supersymmetry to exist in a given background a non zero globally defined Killing spinor is required. In the d=4 case such a spinor should be (locally) the generator of the $$Spin(4,1)$$ group, the problem is that $$Spin(4,1)$$ has no Majorana representations (condition required to realize supersymmetry in a unitary way). Nevertheless, non unitary realizations of supersymmetry over de-Sitter space are possible.
# Writing a MiniC-to-MSIL compiler in F# - Part 1 - Defining the abstract syntax tree ### Introduction So we’re going to write a compiler - great. Let’s start by deciding on the language we want to compile. The standard way to do this is by defining a language grammar. The grammar will define precisely what source code our compiler will accept as valid input - and implicitly, what isn’t valid input. I chose to use a grammar I found in this paper (which looks like a university course assignment). It defines a language it calls Mini-C, which is a subset of C that omits macros, pointers and structs (and probably lots of other things). There’s enough in MiniC to making writing a compiler for it an interesting exercise. Here’s an example of Mini-C source code. By the end of this series, our compiler will be able to turn this source code into an executable .NET application: int fib(int n) { if (n == 0) return 0; if (n == 1) return 1; return fib(n - 1) + fib(n - 2); } int main(void) { return fib(10); } ### The Mini-C grammar Enough talking. Here’s the MiniC grammar, from the paper I mentioned above. It consists of multiple rules. Each rule starts with the rule name, followed by an arrow, followed by one or more tokens. Capital letters indicate a literal - i.e. BOOL means the literal string “bool”. program → decl_list decl_list → decl_list decl | decl decl → var_decl | fun_decl var_decl → type_spec IDENT ; | type_spec IDENT [ ] ; type_spec → VOID | BOOL | INT | FLOAT fun_decl → type_spec IDENT ( params ) compound_stmt params → param_list | VOID param_list → param_list , param | param param → type_spec IDENT | type_spec IDENT [ ] stmt_list → stmt_list stmt | ε stmt → expr_stmt | compound_stmt | if_stmt | while_stmt | return_stmt | break_stmt expr_stmt → expr ; | ; while_stmt → WHILE ( expr ) stmt compound_stmt → { local_decls stmt_list } local_decls → local_decls local_decl | ε local_decl → type_spec IDENT ; | type_spec IDENT [ ] ; if_stmt → IF ( expr ) stmt | IF ( expr ) stmt ELSE stmt return_stmt → RETURN ; | RETURN expr ; The following expressions are listed in order of increasing precedence: expr → IDENT = expr | IDENT [ expr ] = expr → expr OR expr → expr EQ expr | expr NE expr → expr LE expr | expr < expr | expr GE expr | expr > expr → expr AND expr → expr + expr | expr - expr → expr * expr | expr / expr | expr % expr → ! expr | - expr | + expr → ( expr ) → IDENT | IDENT [ expr ] | IDENT ( args ) | IDENT . size → BOOL_LIT | INT_LIT | FLOAT_LIT | NEW type_spec [ expr ] arg_list → arg_list , expr | expr args → arg_list | ε If that makes your eyes hurt, don’t worry. Let’s break it down, taking the var_decl rule as an example: var_decl → type_spec IDENT ; | type_spec IDENT [ ] ; And here’s a line of Mini-C code, which is valid according to the var_decl rule: bool MyVar[]; The vertical bar (|) in the middle of the var_decl rule indicates an OR relationship: source code will match the rule if it looks like the left side OR the right side. Let’s look at the right side: • First we have a type_spec. The type_spec rule is defined in the grammar just below var_decl. A type_spec can be any of the following strings: void, bool, int or float. • Then there’s an identifier. Mini-C uses the same rules as standard C for valid identifiers. • Finally, there’s a pair of square brackets: [ followed by ]. This level of precision in the grammar will be necessary when we come to write the parser. But before we can write the parser, we need to define the hierarchy of objects that the parser will store its results in. This is called the abstract syntax tree (AST). ### The Abstract Syntax Tree (AST) Now that we have a grammar for Mini-C, we need to turn it into code. The code won’t do anything yet - it’s just how we will represent source code after it has been parsed. Wikipedia defines an abstract syntax tree (AST) as … a tree representation of the abstract syntactic structure of source code written in a programming language. When you’re writing an F# program, as soon as you hear the word “tree”, you can smile. F# was built for such things. In C#, we’d probably use a hierarchy of classes, but F#’s discriminated unions provide a more elegant solution. Hopefully you’ll see that F# lets us write code that looks surprisingly similar to the grammar. You’ll find this code in Ast.fs in the Mini-C GitHub repository. Let’s write some code. I’m not going to explain F#’s syntax, because there are plenty of other places that do a better job than I could. If you’re not familiar with F#, it might look a bit weird at first, but it grows on you. type Program = Declaration list This makes Program essentially an alias for a list of declarations. (In C#, we’d write List<Declaration>.) and Declaration = | StaticVariableDeclaration of VariableDeclaration | FunctionDeclaration of FunctionDeclaration A declaration is either a static variable declaration or a function declaration. and TypeSpec = | Void | Bool | Int | Float The supported types in Mini-C are void, bool, int and float. and VariableDeclaration = | ScalarVariableDeclaration of TypeSpec * Identifier | ArrayVariableDeclaration of TypeSpec * Identifier A variable declaration can either be a scalar variable declaration or an array variable declaration. Both types of variable declaration consist of a type and an identifier. The * character is what F# uses to define tuples. In C#, we might write Tuple<TypeSpec, Identifier> (except we wouldn’t because nobody uses tuples in C#). and FunctionDeclaration = TypeSpec * Identifier * Parameters * CompoundStatement A function declaration consists of a type, an identifier, some function parameters, and a “compound statement”. We’ll see what compound statements are shortly. and Identifier = string and Parameters = VariableDeclaration list and IdentifierRef = { Identifier : string; } An identifier is simply a string. The Parameters type (as used in FunctionDeclaration) is an alias for a list of variable declarations. It’s worth noting that the VariableDeclaration type is used for both function parameters and global variable declarations. Although the grammar is a bit different in these two contexts, the actual data we need to store after parsing is the same. and Statement = | ExpressionStatement of ExpressionStatement | CompoundStatement of CompoundStatement | IfStatement of IfStatement | WhileStatement of WhileStatement | ReturnStatement of Expression option | BreakStatement Here are examples of each type of Mini-C statement: Statement Type Mini-C Code Expression Statement i = 2 + 3; Compound Statement { int j; j = 5; j = j + 1; } If Statement if (i == 2) { return 3; } While Statement while (i < 3) { i = i + 1; } Return Statement return true; Break Statement break; and ExpressionStatement = | Expression of Expression | Nop Many languages, including Mini-C, differentiate between expressions and statements. At a very high level: • Expressions evaluate to a value. • Statements do something. Expression statements are where the two come together. An expression statement is simply a statement composed of an expression. We’ll soon see what expressions are. and CompoundStatement = LocalDeclarations * Statement list and LocalDeclarations = VariableDeclaration list A compound statement is composed of a list of local variable declarations, and a list of statements. You might notice that there’s some recursion going on here: one of the Statement types is CompoundStatement, which itself is composed of a list of Statements. That simplifies AST construction, and will also make working with the AST easier later on. and IfStatement = Expression * Statement * Statement option An if statement is composed of: • an expression - i.e. the condition to test • a statement to execute if the condition evaluates to true, which could be either a single statement or a compound statement • an optional statement to execute if the condition evaluates to false - this is what you’d write in the else part of the if statement Note that adding option after a type such as Statement is roughly analogous to writing Statement? or Nullable<Statement> in C#, except that F# is more awesome and allows the use of option for reference types too. and WhileStatement = Expression * Statement While statements, at least far as the AST goes, are similar to if statements. They are composed of an expression - which is the condition to test on each time through the loop, and a statement (which again, could be a compound statement). and Expression = | ScalarAssignmentExpression of IdentifierRef * Expression | ArrayAssignmentExpression of IdentifierRef * Expression * Expression | BinaryExpression of Expression * BinaryOperator * Expression | UnaryExpression of UnaryOperator * Expression | IdentifierExpression of IdentifierRef | ArrayIdentifierExpression of IdentifierRef * Expression | FunctionCallExpression of Identifier * Arguments | ArraySizeExpression of IdentifierRef | LiteralExpression of Literal | ArrayAllocationExpression of TypeSpec * Expression In partnership with statements, expressions make up the core of most languages. Mini-C has a simple grammar, and correspondingly few expression types. Here are examples of each type of Mini-C expression: Expression Type Mini-C Code Scalar assignment i = 2 + 3 Array assignment j[0] = 2 + 3; Binary i == 2 Unary -i Identifier i Array identifier j[0] Function call myFunc(0, true) Array size j.size Literal true Array allocation new int[3] Note that many of these definitions are recursive. ScalarAssignmentExpression, for example, is composed of an IdentifierRef and an Expression. If you think about it, this makes sense - on the right hand side of an assignment, you want to be able to use any arbitrary expression, including other assignments (at least in C-like languages). and BinaryOperator = | ConditionalOr | Equal | NotEqual | LessEqual | Less | GreaterEqual | Greater | ConditionalAnd | Add | Subtract | Multiply | Divide | Modulus and UnaryOperator = | LogicalNegate | Negate | Identity Binary and unary operators are hopefully self-explanatory, and comparable to the operators in most C-based languages. and Literal = | BoolLiteral of bool | IntLiteral of int | FloatLiteral of float Mini-C supports these literals: • BoolLiteral - true or false • IntLiteral - 1, -2, etc. • FloatLiteral - 1.23, -0.5, etc. ### Until next time… And that’s it! You’ll find all that code in Ast.fs in the GitHub repository. F#’s discriminated unions are perfectly suited for defining an abstract syntax tree. Next time, we’ll build a parser capable of turning Mini-C source code into a tree of objects, using the AST we’ve defined here.
SBC provides tools to validate your Bayesian model and/or a sampling algorithm via the self-recovering property of Bayesian models. This package lets you run SBC easily and perform postprocessing and visualisations of the results to assess computational faithfulness. ## Installation To install the development version of SBC, run devtools::install_github("hyunjimoon/SBC") ## Quick tour To use SBC, you need a piece of code that generates simulated data that should match your model (a generator) and a statistical model + algorithm + algorithm parameters that can fit the model to data (a backend). SBC then lets you discover when the backend and generator don’t encode the same data generating process (up to certain limitations). For a quick example, we’ll use a simple generator producing normally-distributed data (basically y <- rnorm(N, mu, sigma)) with a backend in Stan that mismatches the generator by wrongly assuming Stan parametrizes the normal distribution via precision (i.e. it has y ~ normal(mu, 1 / sigma ^ 2)). library(SBC) gen <- SBC_example_generator("normal") # interface = "cmdstanr" or "rjags" is also supported backend_bad <- SBC_example_backend("normal_bad", interface = "rstan") Note: Using the cmdstanr interface, a small number of rejected steps will be reported. Those are false positives and do not threaten validity (they happen during warmup). This is a result of difficulties in parsing the output of cmdstanr. We are working on a resolution. You can use SBC_print_example_model("normal_bad") to inspect the model used. We generate 50 simulated datasets and perform SBC: ds <- generate_datasets(gen, n_sims = 50) results_bad <- compute_SBC(ds, backend_bad) The results then give us diagnostic plots that immediately show a problem: the distribution of SBC ranks is not uniform as witnessed by both the rank histogram and the difference between sample ECDF and the expected deviations from theoretical CDF. plot_rank_hist(results_bad) plot_ecdf_diff(results_bad) We can then run SBC with a backend that uses the correct parametrization (i.e. with y ~ normal(mu, sigma)): backend_sd <- SBC_example_backend("normal_sd", interface = "rstan") results_sd <- compute_SBC(ds, backend_sd) plot_rank_hist(results_sd) plot_ecdf_diff(results_sd) The diagnostic plots show no problems in this case. As with any other software test, we can observe clear failures, but absence of failures does not imply correctness. We can however make the SBC check more thorough by using a lot of simulations and including suitable generated quantities to guard against known limitations of vanilla SBC. ## Paralellization The examples above are very fast to compute, but in real use cases, you almost certainly want to let the computation run in parallel via the future package. library(future) plan(multisession) The package vignettes provide additional context and examples. Notably: • The main vignette has more theoretical background and instructions how to integrate your own simulation code and models with SBC. • Small model workflow discusses how SBC integrates with model implementation workflow and how you can use SBC to safely develop complex models step-by-step. Currently SBC supports cmdstanr, rstan, and brms models out of the box. With a little additional work, you can integrate SBC with any exact or approximate fitting method as shown in the Implementing backends vignette. ## FAQ How does calibration relate to prediction accuracy? Comparing the ground truth and the simulated result is a backbone of calibration and comparison target greatly affects the calibrated (i.e. trained) result, similar to reward in reinforcement learning. In this sense, if the U(a(y), theta) term is designed for prediction, the model will be calibrated to have best predictive result as possible. ## Acknowledgements Development of this package was supported by ELIXIR CZ research infrastructure project (Ministry of Youth, Education and Sports of the Czech Republic, Grant No: LM2018131) including access to computing and storage facilities.
# 2.3: Light spectroscopy Spectrophotometers measure the amount of light absorbed by a sample at a particular wavelength. The absorbance of the sample depends on the electronic structures of the molecules present in the sample. Measurements are usually made at a wavelength that is close to the absorbance maximum for the molecule of interest in the sample. The diagram below shows the elements present in a typical spectrophotometer. The light sources used in most spectrophotometers emit either ultraviolet or visible light. Light (Io) passes from a source to a monochromator, which can be adjusted to allow only light of a defined wavelength to pass through. The monochromatic (I) light then passes through a cuvette containing the sample to a detector. The spectrophotometer compares the fraction of light passing through the monochromator (I0) to the light reaching the detector (I) and computes the transmittance (T) as I/I0. Absorbance (A) is a logarithmic function of the transmittance and is calculated as: A = log10(1/T) = log10(I0/I) Spectrophotometers can express data as either % transmittance or absorbance. Most investigators prefer to collect absorbance values, because the absorbance of a compound is directly proportional to its concentration. Recall the Lambert-Beer Law, traditionally expressed as: A =$$\varepsilon$$b C where $$\varepsilon$$ is the molar extinction coefficient of a compound, b is the length of the light path through the sample, and C is the molar concentration of the compound. Cuvettes are formulated to have a 1 cm light path, and the molar extinction coefficient is expressed as L/moles-cm. Consequently, absorbance is a unitless value.
## International Conference on $p$-ADIC MATHEMATICAL PHYSICS AND ITS APPLICATIONS $p$-ADICS.2015, 07-12.09.2015, Belgrade, Serbia Zoran Rakić ### Path Integrals for Quadratic Lagrangians on $p$-Adic and Adelic Spaces Abstract Feynman's path integrals in ordinary, $p$-adic and adelic quantum mechanics are considered. The corresponding probability amplitudes ${\cal K}(x^{''},t^{''};x',t')$ for two-dimensional systems with quadratic Lagrangians are evaluated analytically and obtained expressions are generalized to any finite-dimensional spaces. These general formulas are presented in the form which is invariant under interchange of the number fields ${\mathbb R} \leftrightarrow {\mathbb Q}_p$ and ${\mathbb Q}_p \leftrightarrow {\mathbb Q}_{p'} \, ,\, p\neq p'$. According to this invariance we have that adelic path integral is a fundamental object in mathematical physics of quantum phenomena. This talk is based on joint work with Branko Dragovich, see [1]. [1] B. Dragovich and Z. Rakic, Path Integrals for Quadratic Lagrangians on $p$-Adic and Adelic Spaces'', $p$-Adic Numbers, Ultrametric Analysis and Applications \textbf{2} (4), 322--340 (2010), [arXiv:1011.6589 [math-ph]].
## Reach for the Stars B AstroPixel24 Member Posts: 3 Joined: September 16th, 2020, 9:38 am Division: B State: NJ Pronouns: He/Him/His Has thanked: 3 times Been thanked: 2 times Contact: ### Reach for the Stars B Since there isn't a question marathon for Reach for the Stars, here's one! Some easy ones to start off with-- 1. This type of nebula contains H II regions and is formed from ionizing gases. M42 (Orion Nebula) and M8 (Lagoon Nebula) are prime examples of this nebula. 2. Name this naked-eye double binary pair nicknamed "Horse and Rider". 3. New Horizons becomes the first space craft to demonstrate what astronomical phenomenon? (not really adhering to the guidelines but just some trivia) These users thanked the author AstroPixel24 for the post: RiverWalker88 (September 16th, 2020, 11:17 am) CMS '22 | HSN '26 '21 Events (Socorro): Reach For the Stars, Dynamic Planet, Density Lab States: Reach For the Stars - 1st - '20 space-egg Member Posts: 35 Joined: March 5th, 2019, 11:30 am State: IN Pronouns: She/Her/Hers Has thanked: 7 times Been thanked: 22 times ### Re: Reach for the Stars B AstroPixel24 wrote: September 16th, 2020, 9:56 am Since there isn't a question marathon for Reach for the Stars, here's one! Some easy ones to start off with-- 1. This type of nebula contains H II regions and is formed from ionizing gases. M42 (Orion Nebula) and M8 (Lagoon Nebula) are prime examples of this nebula. 2. Name this naked-eye double binary pair nicknamed "Horse and Rider". 3. New Horizons becomes the first space craft to demonstrate what astronomical phenomenon? (not really adhering to the guidelines but just some trivia) 1.Emission nebula 2.Mizar and Alcor 3. Stellar parallax. Last edited by space-egg on October 1st, 2020, 6:58 am, edited 2 times in total. the name's bond. covalent bond. 2019: solar system and potions and poisons 2020 (yikes): reach for the stars, ornithology, and meteorology thanks for all the memories (: RiverWalker88 Exalted Member Posts: 110 Joined: February 24th, 2020, 7:14 pm Division: C State: NM Pronouns: He/Him/His Has thanked: 81 times Been thanked: 156 times Contact: ### Re: Reach for the Stars B Well... I'll go ahead and resurrect. Zeta Ophiuchi is my favorite star on this list. 1. What is causing red crescent-like structure in the attached image? 2. This star is significantly brighter in infrared light than optical light. Why is this? Attachments Zets.png (383.82 KiB) Viewed 718 times Socorro High School (2021 Events: Astro, Chem Lab, Circuit Lab, Codybusters, Detector, ExDes, Machines) 2021 Socorro High Invitational Director (Thanks to those who competed and volunteered, it was fun!) RiverWalker88's Userpage (Mostly Complete) Lemonism Forever AstronomyPerson Member Posts: 2 Joined: December 5th, 2020, 3:14 pm Division: B State: WA Pronouns: She/Her/Hers Has thanked: 0 Been thanked: 0 ### Re: Reach for the Stars B RiverWalker88 wrote: December 7th, 2020, 9:56 am Well... I'll go ahead and resurrect. Zeta Ophiuchi is my favorite star on this list. 1. What is causing red crescent-like structure in the attached image? 2. This star is significantly brighter in infrared light than optical light. Why is this? a) The red crescent-like structure in the attached image is emitted by electrons trying to get back into their atoms after being pushed out by stellar wind. b) Dust blocks our view of Zeta Ophiuchi, so it looks dim in optical light. Infrared lets us see through high amounts of dust and gas, which is why we can see Zeta Ophiuchi better. AstroPixel24 Member Posts: 3 Joined: September 16th, 2020, 9:38 am Division: B State: NJ Pronouns: He/Him/His Has thanked: 3 times Been thanked: 2 times Contact: ### Re: Reach for the Stars B RiverWalker88 wrote: December 7th, 2020, 9:56 am Well... I'll go ahead and resurrect. Zeta Ophiuchi is my favorite star on this list. 1. What is causing red crescent-like structure in the attached image? 2. This star is significantly brighter in infrared light than optical light. Why is this? 1. Bow shock? Because of the high space velocity of Zeta Oph in combination with its high intrinsic brightness and its current location in a dust rich area of the galaxy. 2. The star is emitting ultraviolet radiation, which heats up the cloud on which its in. Last edited by AstroPixel24 on December 8th, 2020, 5:44 am, edited 1 time in total. CMS '22 | HSN '26 '21 Events (Socorro): Reach For the Stars, Dynamic Planet, Density Lab States: Reach For the Stars - 1st - '20 RiverWalker88 Exalted Member Posts: 110 Joined: February 24th, 2020, 7:14 pm Division: C State: NM Pronouns: He/Him/His Has thanked: 81 times Been thanked: 156 times Contact: ### Re: Reach for the Stars B AstroPixel24 wrote: December 8th, 2020, 5:44 am RiverWalker88 wrote: December 7th, 2020, 9:56 am Well... I'll go ahead and resurrect. Zeta Ophiuchi is my favorite star on this list. 1. What is causing red crescent-like structure in the attached image? 2. This star is significantly brighter in infrared light than optical light. Why is this? 1. Bow shock? Because of the high space velocity of Zeta Oph in combination with its high intrinsic brightness and its current location in a dust rich area of the galaxy. 2. The star is emitting ultraviolet radiation, which heats up the cloud on which its in. Socorro High School (2021 Events: Astro, Chem Lab, Circuit Lab, Codybusters, Detector, ExDes, Machines) 2021 Socorro High Invitational Director (Thanks to those who competed and volunteered, it was fun!) RiverWalker88's Userpage (Mostly Complete) Lemonism Forever AstroPixel24 Member Posts: 3 Joined: September 16th, 2020, 9:38 am Division: B State: NJ Pronouns: He/Him/His Has thanked: 3 times Been thanked: 2 times Contact: ### Re: Reach for the Stars B 1. The Paschen series lines lie in what ranges of wavelengths? 2. What is the radiated energy of a star, with a radius of 700.000km, and the wavelength of maximum intensity as 600nm. 3. The Lobster nebula, contains many of what type of stars? CMS '22 | HSN '26 '21 Events (Socorro): Reach For the Stars, Dynamic Planet, Density Lab States: Reach For the Stars - 1st - '20 AstronomyPerson Member Posts: 2 Joined: December 5th, 2020, 3:14 pm Division: B State: WA Pronouns: She/Her/Hers Has thanked: 0 Been thanked: 0 ### Re: Reach for the Stars B AstroPixel24 wrote: December 8th, 2020, 8:02 am 1. The Paschen series lines lie in what ranges of wavelengths? 2. What is the radiated energy of a star, with a radius of 700.000km, and the wavelength of maximum intensity as 600nm. 3. The Lobster nebula, contains many of what type of stars? 1) Infrared? 2) For this, I have no idea, but I'm gonna guess $3.09 \cdot 10^{-7} Jm^{-2}s^{-1}$ 3) Protostars? MorningCoffee Member Posts: 51 Joined: August 18th, 2020, 9:45 am Division: B State: PA Pronouns: She/Her/Hers Has thanked: 83 times Been thanked: 64 times ### Re: Reach for the Stars B Revive time! Alrighty, so my absolute favorite DSO probably ever has to be M101, also in the image below 1. What is another name for M101? 2. In the summer of 2011, what astronomical event was discovered in this galaxy? (bonus points if you know what they named it specifically!) 3. The image above is not the most detailed of this DSO to date. What was the name of the telescope that took the most detailed image of M101? 4. Around how many globular clusters is this galaxy estimated to have? 5. Why is this DSO asymmetrical? Events: Anatomy, Orni, RFTS, Heredity, and now forcefully doing Experimental and WIDI! (should I say WICI ._.)
# Ball (mathematics) (Redirected from Disk (mathematics)) A synonym for ball (in geometry or topology, and in any dimension) is disk (or disc); however, a 3-dimensional ball is generally called a ball, and a 2-dimensional ball (e.g., the interior of a circle in the plane) is generally called a disk. Contents ## Geometry In metric geometry, a ball is a set containing all points within a specified distance of a given point. ### Examples With the ordinary (Euclidean) metric, if the space is the line, the ball is an interval, and if the space is the plane, the ball is the disc inside a circle. With other metrics the shape of a ball can be different; for example, in taxicab geometry a ball is diamond-shaped. ### General definition Let M be a metric space. The (open) ball of radius r > 0 centred at a point p in M is defined as [itex]B_r(p) = \{ x \in M \mid d(x,p) < r \},[itex] where d is the distance function or metric. If the less-than symbol (<) is replaced by a less-than-or-equal-to (≤), the above definition becomes that of a closed ball: [itex]{\bar B}_r(p) = \{ x \in M \mid d(x,p) \le r \}[itex]. Note in particular that a ball (open or closed) always includes p itself, since r > 0. A (open or closed) unit ball is a ball of radius 1. In n-dimensional Euclidean space, a closed unit ball is also denoted Dn. ### Related notions Open balls with respect to a metric d form a basis for the topology induced by d (by definition). This means, among other things, that all open sets in a metric space can be written as a union of open balls. A subset of a metric space is bounded if it is contained in a ball. A set is totally bounded if given any radius, it is covered by finitely many balls of that radius. ## Topology In topology, ball has two meanings, with context governing which is meant. The term (open) ball is sometimes informally used to refer to any open set: one speaks of "a ball about the point p" when one means an open set containing p. What this set is homeomorphic to depends on the ambient space and on the open set chosen. Likewise, closed ball is sometimes used to mean the closure of such an open set. (This can be quite misleading, as e.g. in ultrametric spaces a closed ball is not the closure of the open ball with the same radius, both being simultaneously open and closed sets.) Sometimes, neighborhood (or neighbourhood) is used for this meaning of ball, although neighborhood has a more general meaning: a neighborhood of p is any set containing an open set about p, thus not in general an open set. Also (and more formally), an (open or closed) ball is a space homeomorphic to the (open or closed) Euclidean ball described above under Geometry, but perhaps lacking its metric. A ball is known by its dimension: an n-dimensional ball is called an n-ball and denoted [itex]B^n[itex] or [itex]D^n[itex]. For distinct n and m, an n-ball is not homeomorphic to an m-ball. A ball need not be smooth; if it is smooth, it need not be diffeomorphic to the Euclidean ball. • Art and Cultures • Countries of the World (http://www.academickids.com/encyclopedia/index.php/Countries) • Space and Astronomy Information • Clip Art (http://classroomclipart.com)
# cupyx.scipy.sparse.coo_matrix¶ class cupyx.scipy.sparse.coo_matrix(arg1, shape=None, dtype=None, copy=False) COOrdinate format sparse matrix. Now it has only one initializer format below: coo_matrix(S) S is another sparse matrix. It is equivalent to S.tocoo(). coo_matrix((M, N), [dtype]) It constructs an empty matrix whose shape is (M, N). Default dtype is float64. coo_matrix((data, (row, col)) All data, row and col are one-dimenaional cupy.ndarray. Parameters • arg1 – Arguments for the initializer. • shape (tuple) – Shape of a matrix. Its length must be two. • dtype – Data type. It must be an argument of numpy.dtype. • copy (bool) – If True, copies of given data are always used. Methods __len__() __iter__() arcsin() Elementwise arcsin. arcsinh() Elementwise arcsinh. arctan() Elementwise arctan. arctanh() Elementwise arctanh. asformat(format) Return this matrix in a given sparse format. Parameters format (str or None) – Format you need. asfptype() Upcasts matrix to a floating point format. When the matrix has floating point type, the method returns itself. Otherwise it makes a copy with floating point type and the same format. Returns A matrix with float type. Return type cupyx.scipy.sparse.spmatrix astype(t) Casts the array to given data type. Parameters dtype – Type specifier. Returns A copy of the array with a given type. ceil() Elementwise ceil. conj(copy=True) Element-wise complex conjugation. If the matrix is of non-complex data type and copy is False, this method does nothing and the data is not copied. Parameters copy (bool) – If True, the result is guaranteed to not share data with self. Returns The element-wise complex conjugate. Return type cupyx.scipy.sparse.spmatrix conjugate(copy=True) Element-wise complex conjugation. If the matrix is of non-complex data type and copy is False, this method does nothing and the data is not copied. Parameters copy (bool) – If True, the result is guaranteed to not share data with self. Returns The element-wise complex conjugate. Return type cupyx.scipy.sparse.spmatrix copy() Returns a copy of this matrix. No data/indices will be shared between the returned value and current matrix. count_nonzero() Returns number of non-zero entries. Note This method counts the actual number of non-zero entories, which does not include explicit zero entries. Instead nnz returns the number of entries including explicit zeros. Returns Number of non-zero entries. deg2rad() diagonal(k=0) Returns the k-th diagonal of the matrix. Parameters • k (int, optional) – Which diagonal to get, corresponding to elements • i+k] Default (a[i,) – 0 (the main diagonal). Returns The k-th diagonal. Return type cupy.ndarray dot(other) Ordinary dot product eliminate_zeros() Removes zero entories in place. expm1() Elementwise expm1. floor() Elementwise floor. get(stream=None) Returns a copy of the array on host memory. Parameters stream (cupy.cuda.Stream) – CUDA stream object. If it is given, the copy runs asynchronously. Otherwise, the copy is synchronous. Returns Copy of the array on host memory. Return type scipy.sparse.coo_matrix getH() get_shape() Returns the shape of the matrix. Returns Shape of the matrix. Return type tuple getformat() getmaxprint() getnnz(axis=None) Returns the number of stored values, including explicit zeros. log1p() Elementwise log1p. maximum(other) mean(axis=None, dtype=None, out=None) Compute the arithmetic mean along the specified axis. Parameters axis (int or None) – Axis along which the sum is computed. If it is None, it computes the average of all the elements. Select from {None, 0, 1, -2, -1}. Returns Summed array. Return type cupy.ndarray minimum(other) multiply(other) Point-wise multiplication by another matrix power(n, dtype=None) Elementwise power function. Parameters • n – Exponent. • dtype – Type specifier. rad2deg() reshape(shape, order='C') Gives a new shape to a sparse matrix without changing its data. rint() Elementwise rint. set_shape(shape) sign() Elementwise sign. sin() Elementwise sin. sinh() Elementwise sinh. sqrt() Elementwise sqrt. sum(axis=None, dtype=None, out=None) Sums the matrix elements over a given axis. Parameters • axis (int or None) – Axis along which the sum is comuted. If it is None, it computes the sum of all the elements. Select from {None, 0, 1, -2, -1}. • dtype – The type of returned matrix. If it is not specified, type of the array is used. • out (cupy.ndarray) – Output matrix. Returns Summed array. Return type cupy.ndarray sum_duplicates() Eliminate duplicate matrix entries by adding them together. Warning When sorting the indices, CuPy follows the convention of cuSPARSE, which is different from that of SciPy. Therefore, the order of the output indices may differ: >>> # 1 0 0 >>> # A = 1 1 0 >>> # 1 1 1 >>> data = cupy.array([1, 1, 1, 1, 1, 1], 'f') >>> row = cupy.array([0, 1, 1, 2, 2, 2], 'i') >>> col = cupy.array([0, 0, 1, 0, 1, 2], 'i') >>> A = cupyx.scipy.sparse.coo_matrix((data, (row, col)), ... shape=(3, 3)) >>> a = A.get() >>> A.sum_duplicates() >>> a.sum_duplicates() # a is scipy.sparse.coo_matrix >>> A.row array([0, 1, 1, 2, 2, 2], dtype=int32) >>> a.row array([0, 1, 2, 1, 2, 2], dtype=int32) >>> A.col array([0, 0, 1, 0, 1, 2], dtype=int32) >>> a.col array([0, 0, 0, 1, 1, 2], dtype=int32) Warning Calling this function might synchronize the device. tan() Elementwise tan. tanh() Elementwise tanh. toarray(order=None, out=None) Returns a dense matrix representing the same value. Parameters • order (str) – Not supported. • out – Not supported. Returns Dense array representing the same value. Return type cupy.ndarray tobsr(blocksize=None, copy=False) Convert this matrix to Block Sparse Row format. tocoo(copy=False) Converts the matrix to COOdinate format. Parameters copy (bool) – If False, it shares data arrays as much as possible. Returns Converted matrix. Return type cupyx.scipy.sparse.coo_matrix tocsc(copy=False) Converts the matrix to Compressed Sparse Column format. Parameters copy (bool) – If False, it shares data arrays as much as possible. Actually this option is ignored because all arrays in a matrix cannot be shared in coo to csc conversion. Returns Converted matrix. Return type cupyx.scipy.sparse.csc_matrix tocsr(copy=False) Converts the matrix to Compressed Sparse Row format. Parameters copy (bool) – If False, it shares data arrays as much as possible. Actually this option is ignored because all arrays in a matrix cannot be shared in coo to csr conversion. Returns Converted matrix. Return type cupyx.scipy.sparse.csr_matrix todense(order=None, out=None) Return a dense matrix representation of this matrix. todia(copy=False) Convert this matrix to sparse DIAgonal format. todok(copy=False) Convert this matrix to Dictionary Of Keys format. tolil(copy=False) Convert this matrix to LInked List format. transpose(axes=None, copy=False) Returns a transpose matrix. Parameters • axes – This option is not supported. • copy (bool) – If True, a returned matrix shares no data. Otherwise, it shared data arrays as much as possible. Returns Transpose matrix. Return type cupyx.scipy.sparse.spmatrix trunc() Elementwise trunc. __eq__(other) Return self==value. __ne__(other) Return self!=value. __lt__(other) Return self<value. __le__(other) Return self<=value. __gt__(other) Return self>value. __ge__(other) Return self>=value. __nonzero__() __bool__() Attributes A Dense ndarray representation of this matrix. This property is equivalent to toarray() method. H T device CUDA device on which this array resides. dtype Data type of the matrix. format = 'coo' ndim nnz shape size
+0 # Square root between +1 180 4 which number is between 6 1/8 and square root of 64 Aug 28, 2019 #1 +6046 +1 $$6\frac 1 8 < 7 < 8=\sqrt{64}$$ . Aug 28, 2019 #2 +1692 0 If you mean whole number, the answer is 7. If you mean any sort of number, the answer is: there are an infinite number of numbers. Aug 28, 2019 #3 0 7 1/16   or   7.0625 Aug 28, 2019 #4 +8854 +2 which number is between 6 1/8 and square root of 64 $$\(this\ number\in\{\mathbb R|\ 6\frac{1}{8}$$ Latex does not work again. this number ∈ { ℝ| 6 1/8 < this number < 8} ! Aug 28, 2019 edited by asinus  Aug 28, 2019 edited by asinus  Aug 28, 2019
# Symmetric Matrix over a finite field of Characteristic 2 Let $$M$$ be a $$n$$ by $$n$$ symmetric matrix over a finite field of Characteristic 2. Suppose that the entries in the diagonal of $$M$$ are all zero, and $$n$$ is an odd number. I found that the rank of $$M$$ is at most $$n-1$$. Is my observation true? How do we prove it? Thanks • I believe this has something to do with symplectic bilinear forms in characteristic two having an even rank. See for example Keith Conrad's notes. – Jyrki Lahtonen Jun 2 at 20:05 Yes, this is true. In general, over any field, if $$M$$ is a skew-symmetric matrix with a zero diagonal (i.e. if it represents an alternating bilinear form), the rank of $$M$$ must be even. Suppose $$M\ne0$$. By a simultaneous permutation of the rows and columns of $$M$$, we may assume that $$c_1:=m_{21}=m_{12}\ne0$$. So, we may write $$M=\pmatrix{R&-Y^T\\ Y&Z},\ \text{ where }\ R=c_1\pmatrix{0&-1\\ 1&0}$$ and $$Z$$ is a symmetric matrix with a zero diagonal. Thus $$M$$ is congruent to $$R\oplus S$$, where $$S=Z+YR^{-1}Y^T$$ is the Schur complement of $$R$$ in $$M$$. Since $$Z$$ and $$R^{-1}$$ represent alternating bilinear forms, so must $$S$$. Therefore, we may proceed recursively and $$M$$ will eventually be congruent to a matrix of the form $$c_1R\oplus c_2R\oplus\cdots\oplus c_kR\oplus0$$. Hence its rank is even.
Coulomb's Law (Electric Force) Video Lessons Concept Problem: Four equal positive point charges q are fixed to the corners of a horizontal square. A fifth positive point charge Q is on a free particle of mass m positioned directly above the center of the square, at a height equal to the length d of one side of the square. Refer to the figure. Determine the magnitude of q in terms of defined quantities, the Coulomb constant, and g, if the system is in equilibrium. FREE Expert Solution Coulomb's law: $\overline{){{\mathbf{F}}}_{{\mathbf{E}}}{\mathbf{=}}\frac{\mathbf{k}\mathbf{q}\mathbf{Q}}{{\mathbf{r}}^{\mathbf{2}}}}$ From the figure, each charge produces a vertical component. The vertical components are in the same axis as the weight. The weight is directed downward while the repulsive electric force is directed upward. FE makes the hypotenuse. To solve for the opposite (vertical component of the electric force) we'll use the 'sin' trigonometric ratio. opp = hyp•sinθ ΣF = 4FE•sinθ - mg = 0 96% (487 ratings) Problem Details Four equal positive point charges q are fixed to the corners of a horizontal square. A fifth positive point charge Q is on a free particle of mass m positioned directly above the center of the square, at a height equal to the length d of one side of the square. Refer to the figure. Determine the magnitude of q in terms of defined quantities, the Coulomb constant, and g, if the system is in equilibrium.
# Importance of 'smallness' in a category, and functor categories I feel like, having spent a little time doing category theory now, this is probably a silly question, but I keep coming up to many things (definitions, examples etc.) where smallness is required. I continually fail to see why this is: I can see why smallness (or local smallness) is a useful property, but often not why it is necessary, assuming it is. For example, the following definition of the category of presheaves http://ncatlab.org/nlab/show/category+of+presheaves requires $\mathcal{C}$ to be a small category to define the functor category $[\mathcal{C}^{\text{ op}}, \bf{Set}]$. A number of exercises I've attempted, such as "Let $\mathcal{C}$ be a small category and A abelian. Show that the functor category $[\mathcal{C}, A]$ is abelian." "Let $\mathcal{C}$ be a category such that, for each object $c$, the slice category $\mathcal{C}\,/c$ is equivalent to a small category, even though $\mathcal{C}$ may not be small. Show that the functor category $[\mathcal{C}^{\text{ op}}, \bf{Set}]$ is an elementary topos." require smallness as an assumption. There are plenty of other examples which I can probably dredge up if needs be. It seems like generally a lot of these requirements somehow involve functor categories. Is there some much more basic definition (something extremely fundamental like functor or natural transformation) which requires a set rather than a class somewhere, which might be cropping up and causing all of these instances? Obviously for adjointness you require a bijection between morphisms $FA \to B$ and $A \to GB$, but although bijections are usually between sets I expect you can probably define one safely between classes too in a similar manner. I know the hom-functor requires $\mathcal{C}$ to be locally small before you can define the functor $\mathcal{C} \to \bf{Set}$, since of course the map wouldn't necessarily take things into $\bf{Set}$ otherwise, and this all spills over into the Yoneda lemma, but is that really the cause of all these smallness requirements? Often we're talking about a collection of all possible functors, rather than specifying just one, so is it generally just the case that when dealing with a functor we need a category to be sufficiently small so that we know we definitely get sets coming out the other side? Or is there more going on here that I've failed to notice? Why is smallness and local smallness frequently so important? The first exercise I suggested wasn't for functors mapping onto $\bf{Set}$, after all. Obviously you aren't going to know every single example of a time I've noticed a smallness assumption and to be honest I've probably forgotten many of them, but any general thoughts you could provide on the matter would be very well received. As a guide, I've completed a fairly in-depth first course in Category Theory and am currently undertaking one in Topos Theory, if that helps to gauge the level of complexity I'd probably have come across. - Go do that exercise: it will illuminate where the smallness assumptions are necessary. In general, if $\mathcal{C}$ is a large category, the functor category $[\mathcal{C}, \textbf{Set}]$ will fail to be a topos. –  Zhen Lin Feb 4 '12 at 22:50 Yes, I was very careful to use the phrase 'exercises I've attempted', because I'm struggling with that very exercise as we speak! I will struggle on and see if I can fathom what's happening; I see you were attempting the same thing this time last year. –  Spyam Feb 4 '12 at 23:15 There is no set (nor class I think) of all functions from a class to Set, so the category of presheaves $[\mathcal C^{op},\text{Set}]$ does not exist if $\mathcal C$ is a proper class, i.e. not small. The problem is that if I looked for a contradiction I could probably find it, if I only try to reason correctly then I will probably not reach wrong conclusion just forgetting to assume $\mathcal C$ small, but I could be unlucky, and we must maintain that kind of standards to prevent degenerations. I think it is a very small price to pay to keep mathematics a clean pleasing place. –  plm May 16 '12 at 10:38 @ZhenLin, How can you even define that functor category if C is not small? You must be able to talk of formulas defining class functions, but you can't in first-order ZF. –  plm May 16 '12 at 10:44 @plm: Typically when one is confronted with such problems in category theory, one moves to either NBG, MK, or ZFC+U. –  Zhen Lin May 16 '12 at 10:53 My impression (as an outsider) is that the smallness assumptions are not considered to be important at all inside category theory itself. They are there only to make sure the results can be formalized in standard set theory, and don't encode any particular intuitive insight. In most (probably all) concrete applications of the results, it is easy to argue that all of the categories involved are small enough to make things work anyway. Like dicatorships traditionally call themselves the Democratic Republic of $X$, because countries are generally expected to want to be democracies, category theorists dutifully keep track of their smallness assumptions, because mathematical disciplines are generally expected to want to be expressible in ZFC. But it's not as if any particular attention is paid to this internally. - Nice, analogy :) –  Norbert Feb 4 '12 at 20:52 Smallness conditions in category theory appear in several situations. One situation is to assure that certain constructions exist. This includes the most elementary cases of (as you mentioned) the forming of functor categories but also for the constructions of adjoints (i.e., the solution set condition in the Freyd adjoint functor theorem, with a huge emphasis here on the word 'set'). A more advanced application of a categorical construction involving set conditions is in homotopical algebra. Establishing a Quillen model structure can be very hard. It is simplified enormously (yet typically remains hard) to construct a Quillen model structure by means of a cofibrant generation. There one needs to provide sets (and not just classes) of certain arrows with certain properties. If these sets exists the model structure is guaranteed. If such classes of arrows are proper classes and not sets then a model structure is not guaranteed. Another aspect of smallness conditions is to assure that certain categorical conditions do not force degeneration. For instance, any category that admits all products, not just set indexed products, is known to be a poset. That is why one usually considers small complete (and small cocomplete) categories. To conclude, smallness in category theory plays a crucial role in different ways. For some of these aspects a tacit assumption of Grothendieck universes is sufficient to hide all the size issues under the carpet and happily go on with your business. In other situations size issues play a very important role that can't be 'pushed away to a higher universe'. -
# Math Help - Improper Riemann intergal 1. ## Improper Riemann intergal Here is a question we had on an assignment a while back, the damn thing was only worth 2 marks yet it took the guy the whole lecture to explain the answer. dumb. But anyway, here's the question and the solution. What I want to know is whether there is an easier way to do this. I don't understand the N(b) bit in the solution and frankly for a 2 mark question it was far too easy to get half/zero marks considering the guy is the most stringent marker I've ever encountered and the solution is fairly long. Show that, $\lim_{b \to \infty} \int_0^b \frac{\sin(x)}{x} dx = \lim_{N \to \infty} \int_0^{\pi} \frac{\sin((N + 1/2)x)}{x} dx$. I thought it would just be a simple, let $x = (N+1/2)y$ and go from there but apparently not... Solution is in attachment.
The experimentally determined pressure drops due to a sudden contraction in two-phase flow in round pipes is given as a function of system pressure, flowing mixture quality, contraction area ratio, and mass velocity. The theoretical equation derived for the resulting pressure drop is $ΔPc=G322gcρ¯[1−σ2+K¯TPC]$ where KTPC is a parameter independent of mass velocity. This parameter was evaluated for three different two-phase flow models. It is shown that the fog-flow (homogeneous) model gives the best correlation of data over the whole range of conditions studied. The range of pressures studied was 200–500 psia; the area ratios varied from 0.144 to 0.398; the mass velocity varied from 0.52 × 106 to 4.82 × 106 lb/hr-ft2. The fluid used in this study was water. This content is only available via PDF.
# Different types of Questions not many though 1. Dec 3, 2004 ### hackeract an elevator of mass M is suspended from a vertical cable. When the elevator is accelerating downward with an acceleration magnitude of 5.8 m/s^2, the tension in the cable is 3644N, what is the mass of the elevator? T - mg = ma T = m(g + a) T/(g+ a) = m Or i was thinking 3644N = Kg/M/s^2, you could just divide by 5.8 and the M/s^2 cancel out, leaving M which is = 628kg .. what is the moons orbital speed? Mass Earth 6 x 10^24 kg Mass Moon 7.36 x 10^22 kg Mooms orbital radius 3.84 x 10^8 for this i was thinking a = v²/r, and plug that into Newton's second law.. but i dont think thats right A 24kg mass sits on a frictionless plane. Calculate the force required to give an acceleration of 5 m/s^2 up the plane (mass = 24kg, Angle=37) and a speeding motorist passes a stopped police car. At the moment he passes the police car begins accelerating at a constant rate of 4.4 m/s². the motorist unaware that he is being chased, continues at constant speed until the police car catches him 12s later. how fast was the motorist going? 2. Dec 3, 2004 ### jdstokes You're close in the first question, except the acceleration is down, meaning that the net force is negative. $T - mg = - ma$ so $m = \frac{T}{g-a}$. The simplest way to find the orbital speed is to equate the gravitational force $\frac{GMm}{r^2}$ with the centripetal force, so you get $v=\sqrt{\frac{GM}{r}}$. Identify all the forces acting on the block (there are 3) and decompose them into perpendicular components. You'll find it easier if you work in a perpendicular coordinate system where the $x$-axis is inclined 37 degrees from the horizontal, since this coordinate system has one of its axes oriented in the direction of the block's motion. For last question, write down the positions of the objects as functions of time, using the police car as the origin. Equate the the positions and solve for time. Last edited: Dec 3, 2004 3. Dec 3, 2004 ### Tide The fact that the Moon's mass was provided suggests you might want to consider that the center of mass of the Earth-Moon system may not coincide with the center of the Earth! 4. Dec 3, 2004 ### hackeract Im still confused on the orbital question.. would it look something like v= √(6.67 x 10^-11)(7.36 x 10^22) _________________________ 1740km im not sure what exactly what information to plug in... and it sucks for the last 2 questions we havn't covered how to find them like that... this is only first tri however..
+0 0 51 4 (x+5)/9 +5 = (7x-10)/6 and (x+5)/10 +5 = (7x-10)/6 Guest Nov 20, 2017 edited by Guest  Nov 20, 2017 #2 +1 Here is the first one: Solve for x: (x + 5)/9 + 5 = (7 x - 10)/6 Put each term in (x + 5)/9 + 5 over the common denominator 9: (x + 5)/9 + 5 = 45/9 + (x + 5)/9: 45/9 + (x + 5)/9 = (7 x - 10)/6 45/9 + (x + 5)/9 = ((x + 5) + 45)/9: (x + 5 + 45)/9 = (7 x - 10)/6 Grouping like terms, x + 5 + 45 = x + (45 + 5): (x + (45 + 5))/9 = (7 x - 10)/6 45 + 5 = 50: (x + 50)/9 = (7 x - 10)/6 Multiply both sides by 18: (18 (x + 50))/9 = (18 (7 x - 10))/6 18/9 = (9×2)/9 = 2: 2 (x + 50) = (18 (7 x - 10))/6 18/6 = (6×3)/6 = 3: 2 (x + 50) = 3 (7 x - 10) Expand out terms of the left hand side: 2 x + 100 = 3 (7 x - 10) Expand out terms of the right hand side: 2 x + 100 = 21 x - 30 Subtract 21 x from both sides: (2 x - 21 x) + 100 = (21 x - 21 x) - 30 2 x - 21 x = -19 x: -19 x + 100 = (21 x - 21 x) - 30 21 x - 21 x = 0: 100 - 19 x = -30 Subtract 100 from both sides: (100 - 100) - 19 x = -100 - 30 100 - 100 = 0: -19 x = -30 - 100 -30 - 100 = -130: -19 x = -130 Divide both sides of -19 x = -130 by -19: (-19 x)/(-19) = (-130)/(-19) (-19)/(-19) = 1: x = (-130)/(-19) Multiply numerator and denominator of (-130)/(-19) by -1: x = 130/19 Here is the second one: Solve for x: (x + 5)/10 + 5 = (7 x - 10)/6 Put each term in (x + 5)/10 + 5 over the common denominator 10: (x + 5)/10 + 5 = 50/10 + (x + 5)/10: 50/10 + (x + 5)/10 = (7 x - 10)/6 50/10 + (x + 5)/10 = ((x + 5) + 50)/10: (x + 5 + 50)/10 = (7 x - 10)/6 Grouping like terms, x + 5 + 50 = x + (50 + 5): (x + (50 + 5))/10 = (7 x - 10)/6 50 + 5 = 55: (x + 55)/10 = (7 x - 10)/6 Multiply both sides by 30: (30 (x + 55))/10 = (30 (7 x - 10))/6 30/10 = (10×3)/10 = 3: 3 (x + 55) = (30 (7 x - 10))/6 30/6 = (6×5)/6 = 5: 3 (x + 55) = 5 (7 x - 10) Expand out terms of the left hand side: 3 x + 165 = 5 (7 x - 10) Expand out terms of the right hand side: 3 x + 165 = 35 x - 50 Subtract 35 x from both sides: (3 x - 35 x) + 165 = (35 x - 35 x) - 50 3 x - 35 x = -32 x: -32 x + 165 = (35 x - 35 x) - 50 35 x - 35 x = 0: 165 - 32 x = -50 Subtract 165 from both sides: (165 - 165) - 32 x = -165 - 50 165 - 165 = 0: -32 x = -50 - 165 -50 - 165 = -215: -32 x = -215 Divide both sides of -32 x = -215 by -32: (-32 x)/(-32) = (-215)/(-32) (-32)/(-32) = 1: x = (-215)/(-32) Multiply numerator and denominator of (-215)/(-32) by -1: x = 215/32 Guest Nov 20, 2017 Sort: #1 +129 +1 Step one is simplifying the equatation! (x+5/9)+5 = (7x-10/6) x + 5/9 + 5 = 7x − 10/6 x + 5/9 + 5 = 7x + −5/3 x + 50/9 = 7x + −5/3 Step two is subtracting 7x from both sides −6x + 50/9 −7x = −5/3 Step three is subtracting 50/9 from both sides −6x = −65/9 Step four is dividing both sides by −6 −6x/−6 =−65/9/−6 x = 65/54 Now the second one! First we simplify the equatation. x + 1/2 + 5 = 7x + −5/3 x + 11/2 = 7x + −5/3 Step two is to subtract 7x from both sides −6x + 11/2 = −5/3 Step three is subtracting 11/2 from both sides (−6x + 11/2 − 11/2 = −5/3 − 11/2) −6x = −43/6 Now we divide both sides by −6 −6x/−6 = −43/6/−6 x = 43/36 HeyxJacq  Nov 20, 2017 #2 +1 Here is the first one: Solve for x: (x + 5)/9 + 5 = (7 x - 10)/6 Put each term in (x + 5)/9 + 5 over the common denominator 9: (x + 5)/9 + 5 = 45/9 + (x + 5)/9: 45/9 + (x + 5)/9 = (7 x - 10)/6 45/9 + (x + 5)/9 = ((x + 5) + 45)/9: (x + 5 + 45)/9 = (7 x - 10)/6 Grouping like terms, x + 5 + 45 = x + (45 + 5): (x + (45 + 5))/9 = (7 x - 10)/6 45 + 5 = 50: (x + 50)/9 = (7 x - 10)/6 Multiply both sides by 18: (18 (x + 50))/9 = (18 (7 x - 10))/6 18/9 = (9×2)/9 = 2: 2 (x + 50) = (18 (7 x - 10))/6 18/6 = (6×3)/6 = 3: 2 (x + 50) = 3 (7 x - 10) Expand out terms of the left hand side: 2 x + 100 = 3 (7 x - 10) Expand out terms of the right hand side: 2 x + 100 = 21 x - 30 Subtract 21 x from both sides: (2 x - 21 x) + 100 = (21 x - 21 x) - 30 2 x - 21 x = -19 x: -19 x + 100 = (21 x - 21 x) - 30 21 x - 21 x = 0: 100 - 19 x = -30 Subtract 100 from both sides: (100 - 100) - 19 x = -100 - 30 100 - 100 = 0: -19 x = -30 - 100 -30 - 100 = -130: -19 x = -130 Divide both sides of -19 x = -130 by -19: (-19 x)/(-19) = (-130)/(-19) (-19)/(-19) = 1: x = (-130)/(-19) Multiply numerator and denominator of (-130)/(-19) by -1: x = 130/19 Here is the second one: Solve for x: (x + 5)/10 + 5 = (7 x - 10)/6 Put each term in (x + 5)/10 + 5 over the common denominator 10: (x + 5)/10 + 5 = 50/10 + (x + 5)/10: 50/10 + (x + 5)/10 = (7 x - 10)/6 50/10 + (x + 5)/10 = ((x + 5) + 50)/10: (x + 5 + 50)/10 = (7 x - 10)/6 Grouping like terms, x + 5 + 50 = x + (50 + 5): (x + (50 + 5))/10 = (7 x - 10)/6 50 + 5 = 55: (x + 55)/10 = (7 x - 10)/6 Multiply both sides by 30: (30 (x + 55))/10 = (30 (7 x - 10))/6 30/10 = (10×3)/10 = 3: 3 (x + 55) = (30 (7 x - 10))/6 30/6 = (6×5)/6 = 5: 3 (x + 55) = 5 (7 x - 10) Expand out terms of the left hand side: 3 x + 165 = 5 (7 x - 10) Expand out terms of the right hand side: 3 x + 165 = 35 x - 50 Subtract 35 x from both sides: (3 x - 35 x) + 165 = (35 x - 35 x) - 50 3 x - 35 x = -32 x: -32 x + 165 = (35 x - 35 x) - 50 35 x - 35 x = 0: 165 - 32 x = -50 Subtract 165 from both sides: (165 - 165) - 32 x = -165 - 50 165 - 165 = 0: -32 x = -50 - 165 -50 - 165 = -215: -32 x = -215 Divide both sides of -32 x = -215 by -32: (-32 x)/(-32) = (-215)/(-32) (-32)/(-32) = 1: x = (-215)/(-32) Multiply numerator and denominator of (-215)/(-32) by -1: x = 215/32 Guest Nov 20, 2017 #4 0 (x+5)/10 +5 = (7x-10)/6 (x+55)/10 =(7x -10)/6 6x + 330 =70x - 100 6x-70x =-100 -330 -64x =-430 x =-430/-64 x =215/32 Guest Nov 20, 2017 #3 +91235 +1 Here is another presentation for what our guest did. Thanks guest :) $$\frac{(x+5)}{9} +5 =\frac{ (7x-10)}{6}$$ The lowest common denominator is 18 so multiply every term by 18 That way you will get rid of all the fractions and it will be much easier. $$\frac{18(x+5)}{9} +18*5 =\frac{18 (7x-10)}{6}\\ \frac{2(x+5)}{1} +90 =\frac{3 (7x-10)}{1}\\ 2(x+5)+90 =3 (7x-10)\\ 2x+10+90 =21x-30\\ 2x+100 =21x-30\\ 130 =19x\\ x=\frac{130}{19}=6\frac{16}{19}$$ Melody  Nov 20, 2017 3 Online Users We use cookies to personalise content and ads, to provide social media features and to analyse our traffic. We also share information about your use of our site with our social media, advertising and analytics partners.  See details
11 February, 2014 # Extended time measurement in MATLAB When developing in MATLAB, it could be crucial to measure execution time of the whole program and the sub-functions. This helps us to identify the bottlenecks in our algorithm and improve code performance. Each computer has a high-precision internal timer, which lets us to measure elapsed time of microsecond accuracy. In MATLAB tic and toc are functions for interfacing this timer: they can be used in to ways. ## Stopwatch mode This mode can be imagined as a simple stopwatch: tic (re)starts the time measurement while toc measures the elapsed time from the last call of tic. In addition we can save the result to a variable. See this simple example below: tic % start stopwatchtic % restart stopwatchM = magic(20); % a magic square toc % print the elapsed timeN = magic(20); % a magic squareelapsedTime = toc % save the elapsed time The output is: Elapsed time is 0.001544 seconds.elapsedTime = 0.0019290 ## Timestamp mode To understand how it really works, use timestamp to note the current value of the internal high-precision timer. In this mode we can save timestamps and calculate the difference between them. To save the current timestamp, simply give the value of tic to a variable: timeStamp = tic timeStamp = tic timeStamp = tic The output is: timeStamp = 1392109925334548 timeStamp = 1392109925334627 timeStamp = 1392109925334638 The values are slightly increasing. To measure the elapsed time in this mode, simply give a timestamp parameter to toc: the calculated interval will be the difference between the current and the given timestamps. See the example: startM = tic; % save the current timestamp M = magic(20); % a magic square startN = tic; % save the current timestamp N = magic(20); % a magic square toc(startM) % print the elapsed time from startM elapsedTime = toc(startN) % save the elapsed time from startN The output is: Elapsed time is 0.0015521 seconds. elapsedTime = 0.00198708 This method gives us more flexibility when measuring time intervals. Please take care when using stopwatch and timestamp modes together, since they can not be mixed. Calling toc without any input parameter must be preceeded by calling tic without giving its value to any variable, just as in stopwatch mode. ## Average runtime measurement It is not enough to measure the runtime of a function only once, because some warm-up time is needed to load the code into the memory. In addition the runtime can be dependant of the current processes on the computer too. To measure a more valid value, we shall run the function multiple times and calculate an average, as the following code example shows: runTime = 0; % variable to store runtime N = 100; % average of 100 runs for run = 1 : N tic; % start stopwatch M = magic(20); % generate a magic square runTime = runTime + toc; % increase runtime end runTime = runTime / N % calculate average runtime This way we will get more accurate results.
# [tex-live] A bug in the fonts Emil Hedevang Lohse [email protected] 31 Aug 2002 22:43:40 +0200 Well, it is probably not a real bug but it annoys me nonetheless. First a quote from Donald Knuth himself (http://www-cs-faculty.stanford.edu/~knuth/cm.html): Many characters were improved in 1992, notably the arrows, which now are darker and have larger arrowheads, so that they don't disappear so easily after xeroxing. But most of the changes are rather subtle compared to the dramatic improvement in the lowercase delta. In fact, the old delta was so ugly, I couldn't stand to write papers using that symbol; now I can't stand to read papers that still do use it. The arrows of the Computer Modern fonts in the TeX Live 7 distribution seem to be the old, thin, and ugly ones from before 1992. Here is some TeX code. % The delta is right, but the arrow is wrong. Compare with "Computers % and Typesetting", "Computer Modern Typefaces" pp. 148 and 464. (My % book is from the Millennium Boxed Set and has ISBN 0-201-13446-2). $\delta\rightarrow$ \end When I have run the code through pdfeTeX, the log file contains This is pdfeTeX, Version 3.14159-1.00b-pretest-20020211-2.1 (Web2C 7.3.7x) (format=pdfetex 2002.7.31) 31 AUG 2002 22:38 entering extended mode **ah.tex (./ah.tex{/home/TeX/texmf-var/pdftex/config/pdftex.cfg} [1{/home/TeX/texmf-var/ dvips/config/pdftex.map}] )</home/TeX/texmf/fonts/type1/bluesky/cm/cmr10.pfb></ home/TeX/texmf/fonts/type1/bluesky/cm/cmsy10.pfb></home/TeX/texmf/fonts/type1/b luesky/cm/cmmi10.pfb> Output written on ah.pdf (1 page, 7385 bytes). You can find the generated pdf file at http://home.imf.au.dk/emil/ah.pdf I get the same delta and the same arrow if I run the code through TeX as I do with pdfeTeX. I hope this is just a customization problem, and I hope also that you can help me. Regards, Emil Hedevang Lohse -- Emil Hedevang Lohse <http://home.imf.au.dk/emil/> Alle spørgsmål er lige dumme. Og spørgsmålet "Kan ænder flyve?" er ikke dumt.
# Geometry of curves on the sphere Let P be a finite set of points on the unit sphere $S^2$ such that • for every $p\in P$, there exists a closed curve $\gamma_p \subset S^2$ which has a self intersection at $p$ and passes through $-p$. Moreover, every plane passing through $p$ and the origin intersect $\gamma_p$ in at most 2 points, excluding $-p$ (see the graph below by Neil Strickland). • At the self intersection point $p$, $\gamma_p$ is orthogonal to itself. • $\gamma_{p_1} \cap \gamma_{p_2} \subset P$ for every $p_1,p_2 \in P$ • If $A_i \subset S^2$ be such that $\partial A_i \subset \gamma_{p_i}$, $p_i\in P$, i=1,2, then neither $A_1 \subset A_2$ nor $A_2 \subset A_1$. • $P \subset \gamma_p$ for all $p\in P$. Can we prove that $P$ has at most one element? If not, what is the upper bound on the number of elements of $P$? • Please write couple of words on motivation. – Anton Petrunin Sep 5 '17 at 20:51 • Is it OK to assume that $\gamma_p$ is smooth? – Anton Petrunin Sep 5 '17 at 23:01 • Yes, $\gamma_p$ is indeed smooth. – User4966 Sep 5 '17 at 23:24 • I find it hard to imagine even two of these curves satisfying all assumptions. Could you draw a picture with two or three curves, please? – Sebastian Goette Sep 6 '17 at 16:26 • Yes, it's difficult to draw a picture. Indeed if P has more than one element , then it should have at least 4 element. – User4966 Sep 7 '17 at 5:07 This is just a comment, really. One can write a nice formula for a family of curves of the indicated type, as follows. Let $p$, $q$ and $r$ be an orthonormal basis of $\mathbb{R}^3$, and suppose that $a>0$. Put $$C = \{x\in S^2: (q.x)\,(r.x)=a((q-r).x)(1 - p.x)\}$$ This can be parametrised as $$\gamma(t) = \frac{(a^2(c-s)^2-s^2c^2)p+2asc(c-s)(cq+sr)}{a^2(c-s)^2+s^2c^2},$$ where $s=\sin(t/2)$ and $c=\cos(t/2)$. This passes through $p$ with derivative $q/a$ at $t=0$ and through $p$ again with derivative $r/a$ at $t=\pi$. It also passes through $-p$ with derivative $-2a(q+r)$ at $t=\pi/2$. Maple code is as follows: g := unapply(subs({s = sin(t/2),c=cos(t/2)},((a^2*( c-s)^2-s^2*c^2) *~
Limited access A $200$-turn coil of wire is situated in a $5.0 \text{ T}$ magnetic field, such that the coil's plane is perpendicular to the direction of the field. The cross-sectional area of the coil is $0.3\text{ m}^{2}$. What is the magnitude of the induced emf if the strength of the field is made to increase at a rate of $0.2 \text{ T/s}$? A $0.06\text{ V}$ B $1.5\text{ V}$ C $12\text{ V}$ D $300\text{ V}$ Select an assignment template
Construct the equation of a linear vector field if the phase portrait is given I already know that since that $0$ is a repulsor singular point over $x$-axis and $-x$ also works as a repulsor point in opposite direction. Can anyone help me out by finding the equation of this vector field given the phase portrait on picture? Thanks. • There seem to be more information given on the paper. Could you provide also that information. It seems that the arrows are vertical at $y=-\mu x$, is that true? – mickep Feb 17 '15 at 19:31 Let $A$ be the matrix of this vector field, so $$\begin{bmatrix} \dot x \\ \dot y \end{bmatrix} = A \begin{bmatrix} x \\ y \end{bmatrix}$$ There isn't enough on the photograph to tell so let me make some additional assumptions. 1. $A \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix}1 \\ 0 \end{bmatrix}$. If that's not true, $A \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ is at least a positive multiple of $\begin{bmatrix}1 \\ 0 \end{bmatrix}$, but let's keep it simple. 2. $A \begin{bmatrix} 1 \\ -\mu \end{bmatrix} = \begin{bmatrix}0 \\ -1 \end{bmatrix}$. Again, this is from the picture, modulo positive multiples. If that's true then \begin{align*} A \begin{bmatrix} 0 \\ 1 \end{bmatrix} &= A \left(\begin{bmatrix} -1/\mu \\ 1 \end{bmatrix} + \begin{bmatrix} 1/\mu \\ 0 \end{bmatrix} \right)\\ &= \begin{bmatrix} 0 \\ 1/\mu \end{bmatrix} + \begin{bmatrix} 1/\mu \\ 0 \end{bmatrix} = \begin{bmatrix} 1/\mu \\ 1/\mu \end{bmatrix} \end{align*} So from this we know that $$A = \begin{bmatrix} 1 & 1/\mu \\ 0 & 1/\mu \end{bmatrix}$$
dc.contributor.author Ourmières-Bonafos T. en_US dc.contributor.author Pankrashkin K. en_US dc.contributor.author Pizzichillo F. en_US dc.date.accessioned 2017-09-16T07:44:35Z dc.date.available 2017-09-16T07:44:35Z dc.date.issued 2017 dc.identifier.issn 0022-247X dc.identifier.uri http://hdl.handle.net/20.500.11824/731 dc.description.abstract We investigate the spectrum of three-dimensional Schr\"odinger operators with $\delta$-interactions of constant strength supported on circular cones. As shown in earlier works, such operators have infinitely many eigenvalues below the threshold of the essential spectrum. We focus on spectral properties for sharp cones, that is when the cone aperture goes to zero, and we describe the asymptotic behavior of the eigenvalues and of the eigenvalue counting function. A part of the results are given in terms of numerical constants appearing as solutions of transcendental equations involving modified Bessel functions. en_US dc.format application/pdf en_US dc.language.iso eng en_US dc.publisher Journal of Mathematical Analysis and Applications en_US dc.relation info:eu-repo/grantAgreement/EC/H2020/669689 en_US dc.relation ES/1PE/SEV-2013-0323 en_US dc.relation ES/1PE/MTM2014-53145-P en_US dc.relation EUS/BERC/BERC.2014-2017 en_US dc.rights info:eu-repo/semantics/openAccess en_US dc.rights.uri http://creativecommons.org/licenses/by-nc-sa/3.0/es/ en_US dc.subject Schrödinger operator en_US dc.subject $\delta$-interaction en_US dc.subject conical surface en_US dc.subject eigenvalue en_US dc.subject asymptotic analysis en_US dc.title Spectral asymptotics for $\delta$-interactions on sharp cones en_US dc.type info:eu-repo/semantics/article en_US  ### This item appears in the following Collection(s) Except where otherwise noted, this item's license is described as info:eu-repo/semantics/openAccess
# Question #ce59d Jan 24, 2015 An element's mass number represents the number of protons and neutrons it has in its nucleus. A quick example: Helium's mass number is 4, meaning that the number of protons plus the number of neutrons it has in its nucleus is 4. Since helium has an atomic number of 2, which expresses the number of protons it has in its nucleus, you can deduce that it will have $\text{mass number - [atomic number](http://socratic.org/chemistry/a-first-introduction-to-matter/atomic-number)" = 4 -2 = "2 neutrons}$ Another example of an element's mass number is carbon Notice that the $\text{_6^12"C}$ isotope has a mass number of 12, meaning it has 6 neutrons and 6 protons in its nucleus, while the $\text{_6^14"C}$ isotpe has a mass number of 14, which means it will have 8 neutrons and 6 protons in its nucleus.
Feeds: Posts ## How to select an auto? Recently, I bought a second hand auto in Germany. As a foreigner, there were some non-trivial steps. In this post, I will share my experience about how I had selected my auto.  Here, I discuss the criteria. In the next post, I show the procedure in a step-by-step way. If you want to find an auto, you have to check following information for each candidate. The first set “Essentials” are very important. For instance, if you are goring to buy an auto which has only a few month till next technical inspection, you may find yourself in a trouble of changing a katalysator (about 1000 Euro). The fuel consumption is also very important consider the current price of  one litter Benzin (about 1.5 Euro). So I tried to avoid old autos and big engines. And the last critical parameter for me was the mileage. Essentials: —————— Erstzulassung (EZ)   or age Vorbesitzer     or number of previous owners Kraftstoffverbrauch  or fuel consumption Kilometerstand  or mileage HU-Prüfung (TÜV)  or date of next inspection Hubraum (cc)   or motor capacity Motorleistung (kW/PS)   or power Feinstaubplakette or emissions sticker Unfallfrei? or any accident? Klimaanlage und / oder Sitzheizung?  or Air conditioning system/ seat heating? Benzin/Diesel? So I narrowed down the search to small autos, not too old, and with as little  mileagespossible AND without accident. Such autos are usually cheap as new, so if you find a good second hand one, it is like an ideal choice. In addition, they are not old, so use new technologies, e.g., for better performance of engine. Done with Essential parameters, I the concentrated on other information: Information: —————– Zylinder Gänge Schaltgetriebe oder Automatik? Sitzplätze Leergewicht CO2-Emissionen Zustand Mechanik? Zustand Elektronik? metalic color? And finally, you have to see which equipments are on the auto. That can balance the price significantly. Since many years, ABS is mandatory in Germany. Hence all autos do have it. Also almost all autos do have a central locking system. In contrast, things like ESP or board computer only appears in modern and not too cheap autos (e.g., Toyota Yaris). You can in principle run an auto without any of the following except for central locking (Zentralverriegelung). And remember, having such a central locking is also mandatory. Equipments: —————– ABS Airbag ESP (electronic stabilization system) Elektr. Fensterheber  (electric  windows) Klima Servolenkung Zentralverriegelung Wegfahrsperre sunroof, leather cover for seats You can add other things to this list but they are not really important for autos in cheap range. Hope this was useful for you, it was helpful for me. ## Text color in Beamer presentations Beamer is a LaTeX class for creating slides for presentations. It supports both pdfLaTeX and LaTeX + dvips. The name is taken from the German word Beamer, a pseudo-anglicism for video projector. One can make beautiful and structured presentation using all power of LaTex. You can install it in Linux distributions from your package manager. There are a few text color available by default like green, blue, and red. To have a variety of text colors in your beamer presentation, simply include \usepackage{xcolor} I suggest the following: \definecolor{olive}{rgb}{0.3, 0.4, .1} \definecolor{fore}{RGB}{249,242,215} \definecolor{back}{RGB}{51,51,51} \definecolor{title}{RGB}{255,0,90} \definecolor{dgreen}{rgb}{0.,0.6,0.} \definecolor{gold}{rgb}{1.,0.84,0.} \definecolor{JungleGreen}{cmyk}{0.99,0,0.52,0} \definecolor{BlueGreen}{cmyk}{0.85,0,0.33,0} \definecolor{RawSienna}{cmyk}{0,0.72,1,0.45} \definecolor{Magenta}{cmyk}{0,1,0,0} You can find many more on the web. A sample output is like this: The above colors was created using this simple frame: \begin{frame} \textcolor{blue}{blue} \textcolor{green}{green} \textcolor{yellow}{yellow} \textcolor{orange}{orange} \textcolor{red}{red} \textcolor{violet}{violet} \newline \textcolor{BlueGreen}{bluegreen} \textcolor{dgreen}{dgreen} \textcolor{olive}{olive} \textcolor{title}{title} \textcolor{Magenta}{magenta} \newline \textcolor{gold}{gold} \textcolor{darkyellow}{darkyellow} \textcolor{RawSienna}{rawsienna} \end{frame} good luck ! ## Squeeze: KDE was broken after update A few days ago, I think Nov 27, 2010, I have updated my Debian Squeeze box (currently frozen, will be the next stable soon). It installed a new kernel and updated my Nvidia graphics driver. I rebooted and tried to go into KDE 4.4, my default desktop. It refused ! I tried Gnome and it was fine. Sound, network, … all fine. so what was the problem? A few times restart did not help. I went to Gnome and started the KDE setting in terminal: $systemsettings In appearance, there was an error message, complaining about the graphics driver (libGL.so.1). I checked if my graphics module works properly:$glxinfo |grep rendering if it returns like direct rendering: Yes then, it is fine. Otherwise, I have to fix the problem. Note that if this command does not work for you, run the following: # apt-get install mesa-utils It turned out that that is the problem. I fixed that in two steps: 1) remove all other Nvidia libs than the standard one (/usr/lib/libGLcore.so.180.29 in my case). 2) re-install Nvidia driver using #apt-get install –reinstall libgl1-nvidia-glx nvidia-glx nvidia-kernel-dkms KDE works fine now.
## Friday, April 17, 2015 ### Macro prices are sticky, not micro prices Two not very sticky prices ... David Glasner, great as always: While I am not hostile to the idea of price stickiness — one of the most popular posts I have written being an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input, and most output, prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness ... The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. ... Now it’s pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, that a price-adjustment rule that raises price when there’s an excess demand and reduces price when there’s an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn’t there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. ... This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists. Calvo pricing is an ad hoc attempt to model an entropic force with a microeconomic effect (see here and here). As I commented below his post, assuming ignorance of this process is actually the first step ... if equilibrium is the most likely state, then it can be achieved by random processes: Another way out of requiring sticky micro prices is that if there are millions of prices, it is simply unlikely that the millions of (non-sticky) adjustments will happen in a way that brings aggregate demand into equilibrium with aggregate supply. Imagine that each price is a stochastic process, moving up or down +/- 1 unit per time interval according to the forces in that specific market. If you have two markets and assume ignorance of the specific market forces, there are 2^n with n = 2 or 4 total possibilities {+1, +1}, {+1 -1}, {-1, +1}, {-1 -1} The most likely possibility is no net total movement (the “price level” stays the same) — present in 2 of those choices: {+1 -1}, {-1, +1}. However with two markets, the error is ~1/sqrt(n) = 0.7 or 70%. Now if you have 1000 prices, you have 2^1000 possibilities. The most common possibility is still no net movement, but in this case the error (assuming all possibilities are equal) is ~1/sqrt(n) = 0.03 or 3%. In a real market with millions of prices, this is ~ 0.1% or smaller. In this model, there are no sticky individual prices — every price moves up or down in every time step. However, the aggregate price p = Σ p_i moves a fraction of a percent. Now the process is not necessarily stochastic — humans are making decisions in their markets, but those decisions are likely so complicated (and dependent e.g. on their expectations of others expectations) that they could appear stochastic at the macro level. This also gives us a mechanism to find the equilibrium price vector — if the price is the most likely (maximum entropy) price though “dither” — individuals feeling around for local entropy gradients (i.e. “unlikely conditions” … you see a price that is out of the ordinary on the low side, you buy). This process only works if the equilibrium price vector is the maximum entropy (most likely) price vector consistent with macro observations like nominal output or employment. http://informationtransfereconomics.blogspot.com/2015/03/entropy-and-walrasian-auctioneer.html 1. Despite the frequent small changes, it seems to me that there is a clear price that the potato chips and spaghetti keep returning to. Maybe the prices aren't sticky in the fact that they change very rapidly, but that the firms only re-optimize every ~100 weeks. Even though the prices aren't technically "sticky", I'm pretty this could lead to some nominal rigidity. In terms of microfoundations, couldn't the data be replicated by simply adding a stochastic element to calvo pricing so that $$p_t = \theta \bar p_t + (1-\theta)p_{t-1} + \epsilon_t$$ where epsilon is a "sale" shock that occurs each period? (I'm not sure having the stochastic element would even be relevant to business cycle fluctuations anyway). 1. P.S. sorry about the equation, I couldn't find your settings for mathjax by using Chrome's debug menu 2. Hi John, I think this line gets at what I am trying to say: Even though the prices aren't technically "sticky", I'm pretty [sure] this could lead to some nominal rigidity. That's what I am saying -- it would lead to nominal rigidity if they re-optimized every 100 days ... in fact, it would lead to nominal rigidity if they re-optimized every day. Nominal rigidity is a macro phenomenon that is independent of the micro behavior (as long as no firm dominates prices or there is a 'representative firm'). I have some pictures of what the macro/micro four scenarios (sticky/sticky, sticky/flexible, flexible/sticky and flexible/flexible) here: http://informationtransfereconomics.blogspot.com/2015/04/micro-stickiness-versus-macro-stickiness.html And I agree, I'm sure you could add a Calvo mechanism that produces the exact micro fluctuations we see in the graphs above and lead to macro prices being sticky. My main point is that this isn't necessary ... sticky prices come for free (without any micro mechanism) if you aggregate using a maximum entropy framework. PS You have the mathjax right I just don't always add the js to each page depending on what platform I'm using to write -- and I had some issues trying to put it in the template. 3. With regards to mathjax, putting the javascript in the post template does nothing, but if you paste the javascript under the first head section of the HTML for the whole site it should work. Just make sure to avoid numbering your equations or mathjax will number your entire blog which I'm sure you can imagine is a pain.
This following article was published on the Career Services Informer (CSI), the official career blog of Simon Fraser University (SFU).  I have been fortunate to be a guest blogger for the CSI since I was an undergraduate student at SFU, and you can read all of my recent articles as an alumnus here. As most students return to school in the upcoming semester, their academic studies and back-to-school logistics may be their top priorities.   However, if you want to pursue graduate studies or professional programs like medicine or law, then there are some important deadlines that are fast approaching, and they all involve time-consuming efforts to meet them. Now is a good time to tackle these deadlines and put forth your best effort while you are free of the burdens of exams and papers that await you later in the fall semester. Image Courtesy of Melburnian at Wikimedia Speaking from experience, these applications are very long and tiring, and they will take a lot of thought, planning, writing and re-writing. They also require a lot of coordination to get the necessary documents, like your transcripts and letters of recommendation from professors who can attest to your academic accomplishments and research potential.  Plan ahead for them accordingly, and consider using the Career Services Centre to help you with drafting your curriculum vitae, your statements of interest, and any interview preparation. ## Odds and Probability: Commonly Misused Terms in Statistics – An Illustrative Example in Baseball Yesterday, all 15 home teams in Major League Baseball won on the same day – the first such occurrence in history.  CTV News published an article written by Mike Fitzpatrick from The Associated Press that reported on this event.  The article states, “Viewing every game as a 50-50 proposition independent of all others, STATS figured the odds of a home sweep on a night with a full major league schedule was 1 in 32,768.”  (Emphases added) Screenshot captured at 5:35 pm Vancouver time on Wednesday, August 12, 2015. Out of curiosity, I wanted to reproduce this result.  This event is an intersection of 15 independent Bernoulli random variables, all with the probability of the home team winning being 0.5. $P[(\text{Winner}_1 = \text{Home Team}_1) \cap (\text{Winner}_2 = \text{Home Team}_2) \cap \ldots \cap (\text{Winner}_{15}= \text{Home Team}_{15})]$ Since all 15 games are assumed to be mutually independent, the probability of all 15 home teams winning is just $P(\text{All 15 Home Teams Win}) = \prod_{n = 1}^{15} P(\text{Winner}_i = \text{Home Team}_i)$ $P(\text{All 15 Home Teams Win}) = 0.5^{15} = 0.00003051757$ Now, let’s connect this probability to odds. It is important to note that • odds is only applicable to Bernoulli random variables (i.e. binary events) • odds is the ratio of the probability of success to the probability of failure For our example, $\text{Odds}(\text{All 15 Home Teams Win}) = P(\text{All 15 Home Teams Win}) \ \div \ P(\text{At least 1 Home Team Loses})$ $\text{Odds}(\text{All 15 Home Teams Win}) = 0.00003051757 \div (1 - 0.00003051757)$ $\text{Odds}(\text{All 15 Home Teams Win}) = 0.0000305185$ The above article states that the odds is 1 in 32,768.  The fraction 1/32768 is equal to 0.00003051757, which is NOT the odds as I just calculated.  Instead, 0.00003051757 is the probability of all 15 home teams winning.  Thus, the article incorrectly states 0.00003051757 as the odds rather than the probability. This is an example of a common confusion between probability and odds that the media and the general public often make.  Probability and odds are two different concepts and are calculated differently, and my calculations above illustrate their differences.  Thus, exercise caution when reading statements about probability and odds, and make sure that the communicator of such statements knows exactly how they are calculated and which one is more applicable. ## Analytical Chemistry Lesson of the Day – Linearity in Method Validation and Quality Assurance In analytical chemistry, the quantity of interest is often estimated from a calibration line.  A technique or instrument generates the analytical response for the quantity of interest, so a calibration line is constructed from generating multiple responses from multiple standard samples of known quantities.  Linearity refers to how well a plot of the analytical response versus the quantity of interest follows a straight line.  If this relationship holds, then an analytical response can be generated from a sample containing an unknown quantity, and the calibration line can be used to estimate the unknown quantity with a confidence interval. Note that this concept of “linear” is different from the “linear” in “linear regression” in statistics. This is the the second blog post in a series of Chemistry Lessons of the Day on method validation in analytical chemistry.  Read the previous post on specificity, and stay tuned for future posts!
LaTeX Issues Fixed, Hopefully A quick administrative note. For people reading this blog via RSS readers, there've been problems with LaTeX equations for a couple of months. I've set up a local tex server and tweak the configurations. Hopefully, now everyone will be able to see the math stuff. As a test, . If you still get an error box instead of an equation, please drop me an email. • I can verify that I see the equation now where I previously only saw the error box. • Infophile says: • Looks like a fucken equation to me! Does pi really equal thatte? • MarkCC says: Yup. • Yes, this is known as "Leibniz's series" although the actual discovery dates to a few centuries before. Unfortunately, as pretty as the series is, it converges much too slowly to be usable for actually calculating digits of Pi. • la23ng says: Same here. I used to get only errors but now I see a infinite series that represents pi. • Spencer Bliven says: Thanks, Mark! • Manuel Moe G says:
# Structural breaks, stationarity and time series modelling This is a simplified version of my problem... Say I have two time series ($X$ and $Y$) and I know that $Y_t$ is somehow dependent on $X_t$ but not on $X_{t-k}$ for any $k > 1$. Ultimately I want to have a model describing the relationship between $Y$ and $X$. My objective with this model is to describe past values of the system, not to do forecasting. It seems however that the series have a structural break. Following the work of Kim and Perron I tested each series for unit roots and found none - but I did find breaks. Just for clarification, in this test I'm assuming each series can be described as: $$\text{SERIES}_t = \begin{cases} a + b*t + u_t \;,\;t \leq t_{break} \\ (a + a_{break}) + (b+b_{break})*t + u_t \;,\; t > t_{break} \end{cases}$$ where $a, a_{break}, b, b_{break}$ are constants (which can be $=0$) and $u_t$ is a (potentially ARIMA) noise term. The test checks if the noise $u_t$ has an unit root. Say the results are that $X$ has a break in the mean, $Y$ has a break in the trend and both series are stationary. My question is, how should I model / regress time series that have structural breaks? Since I found breaks while testing for unit roots, should this somehow be included in the model? The time of the break is different for each series, and I have no idea of how to check the validity / significance of such model anyway. Would it make any sense to try a regression with autocorrelated errors (such as in Hyndman and Athanasopoulos) even though there is evidence of breaks in the series? (just out of curiosity... if I don't assume the possibility of structural breaks in the series and use standard KPSS or ADF-GLS tests to check for unit roots, the results are quite confusing: both tests reject the null - so KPSS results in an unit root while ADF-GLS results in stationarity) (also, I know there's a lot of discussion out there about whether structural breaks make sense or not after all, but assume that in this case there's strong visual evidence of a change... you can imagine $X$ is a step function + noise and $Y$ follows an increasing linear trend before the break and switches to fluctuations around a constant after the break) • After more reading, I found that Chow tests can be used to check for structural breaks in regression models. However I'm not sure if / how that would help since I would need to check for stationarity in the first place anyway (before doing the regression). Then if I believe the stationarity test I would know where the potential breaks are for $X$ and $Y$ (so another test would be redundant...). I believe I must check if my series are stationary before trying to model them, is this correct? – arroba Jul 14 '15 at 7:39 • Just a thought for the model - a hidden markov model where the time-series parameters were linked to the hidden markov state. I'd encourage you to question how much additional value there is by modeling the entire system vs finding the change and modeling subsets where things are more stable. – Ben Ogorek Nov 22 '15 at 2:56 • I would say structural changes make a lot of sense in economic data! If you know they contain a break then you need to use unit root tests that can accommodate this as the ADF test is biased towards the null of a unit root in case of breaks. You say you find evidence of breaks but not unit roots. The solution is simply to model the break with a broken trend, i.e. add a trend for each period. You can also try to estimate the model and do a recursive estimation to see if the estimates are stable over time. – Plissken Oct 21 '16 at 21:55 • Please add a graph of your series or post your data. It makes it easier for us to help you. Also, do the breaks coincide with major economic events? Its always a good idea to check the economic calendar to see if the breaks coincide with some major event. This motivates the modelling approach as well. – Plissken Oct 21 '16 at 21:56 • Please see stats.stackexchange.com/questions/251480/… which discusses practical ways to identify the cause for the symptom of "structural breaks" . @BenOgorek is on the right track with his sage comment. – IrishStat Dec 25 '16 at 13:01 You can estimate this using the strucchange R package with a simple linear regression of y given x. In your case the slope coefficient equals $b$ before the break, and $b + b_{break}$ after the break. bp.mod <- breakpoints(y ~ x, breaks = 1) # specify 1, also automated possible
DiffBind: 0 consensus peaks for dba.peakset: New bug / undocumented change introduced between version 2.14 and 3.4.11 2 0 Entering edit mode chrarnold • 0 @chrarnold-7603 Last seen 4 weeks ago Germany We have a data pipeline that served us well for years that includes DiffBind, which generates consensus peak files with different minOverlap values for a set of peak files. The code used in DiffBind is very simple, see below, and worked flawlessly so far. We however recently updated our Singularity image, and we now work with a newer DiffBind package version. Unfortunately, the new(er) versions either introduce a new bug (see below) or at least an undocumented and hard to understand behaviour that results in 0 consensus peak sets. I have reproducible code that runs just fine when running with DIffBind 2.14, but results in 0 consensus peaks with a newer DiffBind version. Can someone help? I can provide the rds file of the dba object if needed for both R versions (they seem to be a bit different, as I get an error when I try to lrun a DiffBind v3 object within DiffBind v2. > R.version.string [1] "R version 3.6.2 (2019-12-12)" > packageVersion("DiffBind") [1] ‘2.14.0’ dba.peakset(dba, minOverlap = 1) 9 Samples, 33498 sites in matrix: ID Caller Intervals 1 1 narrow 18881 2 2 narrow 15370 3 3 narrow 16446 4 4 narrow 13283 5 5 narrow 20127 6 6 narrow 15987 7 7 narrow 21918 8 8 narrow 15457 9 1-2-3-4-5-6-7-8 narrow 33498 > R.version.string [1] "R version 4.1.2 (2021-11-01)" >packageVersion("DiffBind") [1] ‘3.4.11’ dba.peakset(dba, minOverlap = 1) 9 Samples, 22954 sites in matrix (33498 total): 1 1 narrow 18881 NA 2 2 narrow 15370 NA 3 3 narrow 16446 NA 4 4 narrow 13283 NA 5 5 narrow 20127 NA 6 6 narrow 15987 NA 7 7 narrow 21918 NA 8 8 narrow 15457 NA 9 9 0 0 The problem here is the 9th and last row, which (used to) stores the consensus peakset. It is 0 for the new version. An rds version of the dba object for reproducing the issue can be downloaded here: Hyperlink removed, as issue is solved DiffBind • 153 views 0 Entering edit mode Rory Stark ★ 4.4k @rory-stark-5741 Last seen 13 days ago CRUK, Cambridge, UK The object in the .rds file, sampleMetaData.df, is already a DBA object, with eight samples: > sampleMetaData.df <- readRDS("~/Downloads/DiffBind_issue_dba.obj_R4.1.2.rds") 8 Samples, 22954 sites in matrix (33498 total): ID Intervals 1 1 18881 2 2 15370 3 3 16446 4 4 13283 5 5 20127 6 6 15987 7 7 21918 8 8 15457 However you can not pass in a DBA object as a sample sheet: > mydba <- dba(sampleSheet = sampleMetaData.df) > Error in 1:nrow(samples) : argument of length 0 Using it directly as a DBA object, I can reproduce what you are seeing. I'll take you word that it worked previously to generate a consensus peakset, but that is not how it is documented. As specified, the minOverlap parameter only comes into effect when either a) the consensus parameter is specified > dba.peakset(sampleMetaData.df, minOverlap = 1, consensus=DBA_CALLER) 9 Samples, 33498 sites in matrix: ID Intervals 1 1 18881 2 2 15370 3 3 16446 4 4 13283 5 5 20127 6 6 15987 7 7 21918 8 8 15457 9 ALL 33498 or b) when the peaks parameter is specified as a vector of sample numbers: > dba.peakset(sampleMetaData.df, minOverlap = 1, peaks=1:8) 9 Samples, 33498 sites in matrix: ID Intervals 1 1 18881 2 2 15370 3 3 16446 4 4 13283 5 5 20127 6 6 15987 7 7 21918 8 8 15457 9 1-2-3-4-5-6-7-8 33498 The latter case gives you what you want. It is possible that the behavior has changed between DiffBind_2 and DiffBind_3 (which is one of the reasons for a major version bump), however it is currently working as documented (with the behavior of dba.peakset() being undefined when minOverlap is specified but not either consensus or peaks). 0 Entering edit mode chrarnold • 0 @chrarnold-7603 Last seen 4 weeks ago Germany Thanks a lot for your quick reply, it is very appreciated! Indeed, with your 2 suggested modifications, I can again reproduce the results from before, perfect. A couple of clarifications and suggestions: • the sampleMetadata.df I have in my code is indeed a data frame, I didnt mention this explicitly enough but it was never a dba object. This is why I shared the dba object so you can reproduce. • The documentation for ?dba.peakset says for minOverlap: "the minimum number of peaksets a peak must be in to be included when adding a consensus peakset. When retrieving, if the peaks parameter is a vector (logical mask or vector of peakset numbers), a binding matrix will be retrieved including all peaks in at least this many peaksets. If minOverlap is between zero and one, peak will be included from at least this proportion of peaksets.". It doesnt mention anywhere here that this is only valid when either peaks or consensus is set, and it worked definitely like this for DiffBind v. 2.14. Maybe clarifying this in the R help then would be a good idea. If the behavior is undefined, it should rather throw an error and not fail silently as it does now - because 0 could be a "real" result, but often it is not. . Again, thanks a lot for clarifying this here! 0 Entering edit mode I agree the documentation could be clearer -- it took me a while to understand what it said so I could address your issue!
A critical function for the planar brownian convex hull Séminaire de probabilités de Strasbourg, Volume 26 (1992), p. 107-112 @article{SPS_1992__26__107_0, author = {Mountford, Thomas}, title = {A critical function for the planar brownian convex hull}, journal = {S\'eminaire de probabilit\'es de Strasbourg}, publisher = {Springer - Lecture Notes in Mathematics}, volume = {26}, year = {1992}, pages = {107-112}, zbl = {0765.60028}, mrnumber = {1231987}, language = {en}, url = {http://www.numdam.org/item/SPS_1992__26__107_0} } Mountford, Thomas S. A critical function for the planar brownian convex hull. Séminaire de probabilités de Strasbourg, Volume 26 (1992) pp. 107-112. http://www.numdam.org/item/SPS_1992__26__107_0/ Burdzy,K. and San Martin,J. (1989) Curvature of the convex hull of planar Brownian motion near its minimum point. Stochastic Processes and their Applications, 33, 89-103. | MR 1027110 | Zbl 0696.60041 Cranston,M. Hsu,P. and March,P. (1989) Smoothness of the Convex Hull of planar Brownian Motion Annals of Probability 17, 1, 144-150. | MR 972777 | Zbl 0678.60073 Evans,S. (1985) On the Hausdorff dimension of Brownian cone points. Math. Proc. Camb. Philos. Soc., 98, 343-353. | MR 795899 | Zbl 0583.60078 Ito,K. and Mckean,H. (1965) Diffusion Processes and their Sample Paths. New York, Springer. | Zbl 0127.09503
# Correction to the scalar propagator - derivative coupling Given the scalar field Lagrangian $$\mathscr{L}=\frac{1}{2}e^{-\lambda\phi}\partial_\mu\phi\partial^\mu\phi,$$ evaluate the order $$\lambda^2$$ correction to the propagator. At that order in $$\lambda$$, the Lagrangian is $$\mathscr{L}=\frac{1}{2}\left(\partial_\mu\phi\right)^2 - \frac{\lambda}{2}\phi\left(\partial_\mu\phi\right)^2 + \frac{\lambda^2}{4}\phi^2 \left(\partial_\mu\phi\right)^2 + \mathcal{O}\left(\lambda^3\right).$$ The vertices are: Since $$\phi$$s are indistinguishable and because of the derivative coupling, Feynman rules for the vertices should be: 1. $$-i\lambda\left(k_1 k_2 + k_1 k_3 + k_2 k_3\right)$$ 2. $$i\lambda^2 \left(k_1 k_2 + k_1 k_3 + k_1 k_4 + k_2 k_3 + k_2 k_4 + k_3 k_4\right)$$ At order $$\lambda$$, there's nothing. At order $$\lambda^2$$ there are contributions from the tadpole diagram with a $$\phi^2 \left(\partial_\mu\phi\right)^2$$ vertex and from the diagram with two $$\phi\left(\partial_\mu\phi\right)^2$$ vertices. Is it right? Or am I missing something? Are the Feynman rules for the vertices correct? It seems to me that your Lagrangian is just a free Lagrangian in disguise. Start from $$\mathcal L =\frac{1}{2} (\partial_\mu \phi)^2$$ and do a field redefinition $$\phi(x) \to \frac{2}{\lambda} e^{-\frac{\lambda \phi(x)}{2}}$$ With this you find back your Lagrangian. Field redefinitions don't change correlation functions, so whatever you are going to compute with your Lagrangian will be identical to a free Lagrangian and thus there is no correction to the propagator. • This is an exam problem, so I get confused knowing it's trivial. How can I see that field redefinitions don't change correlation functions and propagators? Do I need path integral? – Vincenzo Ventriglia Sep 27 '18 at 7:35 • From path integrals it"s really trivial to see that correlations functions are invariant, since the field $\phi$ is just an integrated variable. Without path integrals I don't know if there is a quick way to see it. – FrodCube Sep 27 '18 at 10:15
How do you simplify the expression (3^2s^3)^6 using the properties? Apr 27, 2017 $531441 {s}^{18}$ Explanation: When you open the bracket, you multiply each of the powers in the bracket with those outside ${\left({3}^{2} {s}^{3}\right)}^{6}$=$\left({3}^{2 \cdot 6} {s}^{3 \cdot 6}\right)$ ${3}^{12} {s}^{18}$=$531441 {s}^{18}$
Next: . About wavelets Up: Supervised classification for textured Previous: . Introduction # . Classification Partition, level set approach The image is considered as a function (where is an open subset of ) We denote . The collection of open sets forms a partition of if and only if , and if      Ø We denote the boundary of (except points belonging also to ), and the interface between and (see figure 1). In order to get a functional formulation rather than a set formulation, we suppose that for each there exists a lipschitz function such that: is thus completely determined by . Regularization In our equations, there will appear some Dirac and Heaviside distributions and . In order that all the expressions we write have a mathematical meaning, we use the classical regular approximations of these distributions (see figure 2): Figure 3 shows how the regions are defined by these distributions and the level sets. Functional Our functional will have three terms: 1) A partition term: (2.1) 2) A regularization term: (2.2) In practice, we seek to minimize: (2.3) 3) A data term: (2.4) The functional we want to minimize is the sum of the three previous terms: (2.5) Next: . About wavelets Up: Supervised classification for textured Previous: . Introduction Jean-Francois Aujol 2002-12-03
Discussion Last Post Replies Views Forum Rounding BrendanC July 21st, 2018 09:27 PM by skipjack 3 58 Elementary Math EXP(x) approximation in old 1980's computer ROM jpcohet July 21st, 2018 08:51 PM by SDK 1 37 Pre-Calculus trigonometric functions shaharhada July 21st, 2018 06:43 PM by topsquark 7 214 Geometry Can Someone Please Help Me?! pleasehelp101 July 21st, 2018 06:36 PM by Denis 1 46 Real Analysis Equations solvable only by directly providing the... Loren July 21st, 2018 04:50 PM by greg1313 15 565 Complex Analysis Complex Analysis - Help needed Harrisu July 21st, 2018 04:48 PM by greg1313 1 34 Complex Analysis please help me understand this.. equation/ cord plane MichaelGD3 July 21st, 2018 02:38 PM by studiot 4 53 Algebra intersecting two matrices problem zollen July 21st, 2018 11:14 AM by romsek 4 187 Linear Algebra Riemanns Sum Problem MathsKid007 July 21st, 2018 11:08 AM by Country Boy 1 123 Calculus Having problems with math..here's why! Matt C July 21st, 2018 10:37 AM by topsquark 32 500 Math Need to calculate the length of an arc Saibaton July 21st, 2018 07:59 AM by studiot 8 241 Geometry Number Sequence 3mE Alamar July 21st, 2018 05:17 AM by Denis 4 130 Math Isomorphism mona123 July 20th, 2018 03:53 PM by cjem 42 952 Abstract Algebra question on odd numbered composites KenE July 20th, 2018 06:45 AM by Collag3n 28 1,764 Math Why my approach to prove this trigonometric identity... Chemist116 July 20th, 2018 12:01 AM by greg1313 10 368 Trigonometry Primes and Factoring penrose July 19th, 2018 05:03 PM by penrose 33 1,735 Number Theory What is name of this wire? EBTERTTBT July 19th, 2018 09:36 AM by Country Boy 2 133 Computer Science Converting angle in inches to degrees paulm July 19th, 2018 06:45 AM by Burt 7 279 Trigonometry Several digit number Vs. Set of numbers terminology Fishman July 19th, 2018 04:04 AM by Country Boy 2 89 Elementary Math Given a velocity vector and a point, find the angle... TheRealJosh July 19th, 2018 03:58 AM by Country Boy 1 172 Pre-Calculus Prime number question idontknow July 18th, 2018 06:19 AM by JeffM1 12 360 Number Theory completing the square shaharhada July 18th, 2018 06:03 AM by rudimt 2 137 Algebra Creating chaos through colors justintimmer July 18th, 2018 05:53 AM by rudimt 8 353 Physics projecting a square on an equirectangular map fifthFunction July 17th, 2018 06:43 AM by fifthFunction 0 68 Topology Description blackberry July 17th, 2018 04:45 AM by Denis 2 140 Academic Guidance Prove the eigenvalues $\lambda$ of \$\lambda \phi_j(x)=... Chloesannon July 16th, 2018 12:52 PM by mathman 1 132 Math Prime Number Sequence ma1975 July 16th, 2018 12:46 PM by KenE 10 308 Number Theory Negative residues matqkks July 16th, 2018 12:14 PM by skipjack 1 135 Number Theory reading comprehension shaharhada July 16th, 2018 10:20 AM by skipjack 1 73 Algebra HVAC finding the value of BETA help lhdwce2018 July 16th, 2018 10:13 AM by skipjack 1 83 Algebra empty set shaharhada July 16th, 2018 09:27 AM by Country Boy 7 284 Math 1/998001 goes way deeper than expected skipjack July 16th, 2018 04:41 AM by alan2here 1 147 Math pressure gradient in sphere edwardone333 July 16th, 2018 03:12 AM by weirddave 1 79 Geometry Find the number of possible choices for x and y when... Physicslog July 16th, 2018 01:49 AM by 1ucid 3 256 Number Theory Fun with World Cup Group Play jks July 15th, 2018 03:43 PM by jks 1 160 Elementary Math 1/9801 and even deeper examples alan2here July 15th, 2018 11:05 AM by alan2here 0 75 Math Mathematical GIF-animations for students kubrikov July 15th, 2018 07:12 AM by alan2here 1 192 Math Question About a 1-Tailed vs. 2-Tailed Test EvanJ July 15th, 2018 03:44 AM by EvanJ 4 135 Probability and Statistics vector geometry chandan gowda July 15th, 2018 02:27 AM by chandan gowda 2 131 Geometry Calculate probability of two polygons cheerful July 14th, 2018 10:24 PM by cheerful 2 198 Applied Math Why can't I solve an equation this way? hansi July 14th, 2018 03:07 AM by hansi 10 265 Elementary Math How to solve this confusing question? Ganesh Ujwal July 14th, 2018 02:08 AM by skipjack 14 334 Elementary Math Lebesgue integration - Riemann integration shaharhada July 13th, 2018 09:01 AM by Micrm@ss 3 180 Calculus You know that P(10)=30, P'(10)=0.4, and P''(10)=0.0008. mathaway July 13th, 2018 03:16 AM by Country Boy 4 199 Calculus By exploring what knowledge of Maths can I understand... rubis July 12th, 2018 07:37 PM by Maschke 2 164 Applied Math Urgent help needed! ArusaWaseem1784 July 12th, 2018 12:57 PM by romsek 6 189 Math new at forum alexandrosst July 12th, 2018 10:12 AM by romsek 1 157 New Users My new YouTube channel about recreational math Mircode July 12th, 2018 09:29 AM by Mircode 4 497 New Users What was the ratio of the speed of the man and the... Ganesh Ujwal July 12th, 2018 09:17 AM by Denis 3 163 Elementary Math Why is the following inequality correct? Mathmatizer July 12th, 2018 08:24 AM by Mathmatizer 2 127 Calculus
# networkx.generators.random_graphs.watts_strogatz_graph¶ watts_strogatz_graph(n, k, p, seed=None)[source] Returns a Watts–Strogatz small-world graph. Parameters • n (int) – The number of nodes • k (int) – Each node is joined with its k nearest neighbors in a ring topology. • p (float) – The probability of rewiring each edge • seed (integer, random_state, or None (default)) – Indicator of random number generation state. See Randomness. Notes First create a ring over $$n$$ nodes 1. Then each node in the ring is joined to its $$k$$ nearest neighbors (or $$k - 1$$ neighbors if $$k$$ is odd). Then shortcuts are created by replacing some edges as follows: for each edge $$(u, v)$$ in the underlying “$$n$$-ring with $$k$$ nearest neighbors” with probability $$p$$ replace it with a new edge $$(u, w)$$ with uniformly random choice of existing node $$w$$. In contrast with newman_watts_strogatz_graph(), the random rewiring does not increase the number of edges. The rewired graph is not guaranteed to be connected as in connected_watts_strogatz_graph(). References 1 Duncan J. Watts and Steven H. Strogatz, Collective dynamics of small-world networks, Nature, 393, pp. 440–442, 1998.
# Hartshorne Exercise III.8.4(c) Let $Y$ be a noetherian scheme, and let $\mathcal E$ be a locally free $\mathcal O_Y$-module of rank $n+1$, with $n\ge 1$. Let $X=\mathbb P(\mathcal E)$ [the projective bundle over $\mathcal E$], with corresponding invertible sheaf $O_X(1)$ and projection morphism $\pi:X\rightarrow Y$. Exercise III.8.4(c) in Hartshorne says Now show, for any $l\in\mathbb Z$, that $$R^n\pi_*(O(l))\cong \pi_*(O(-l-n-1))^\vee \otimes (\wedge^{n+1} \mathcal E)^\vee.$$ I interpret "Now show" to mean 'Use the previous parts of the exercise to help you show this', so it is worth noting that in part (a), we showed that $R^i\pi_*(O(l))$ vanishes for most values of $i$ and $l$, and in (b), $$R^n\pi_*((\pi^*\wedge^{n+1} \mathcal E)(-n-1))\cong \mathcal O_Y.$$ Also, $\omega_{X/Y}\cong (\pi^*\wedge^{n+1} \mathcal E)(-n-1)$. We also have the projection formula $$R^if_*(\mathcal F \otimes f^* \mathcal E)\cong R^if_*(\mathcal F)\otimes \mathcal E.$$ Using this formula, we have $$R^n\pi_*(\mathcal O(l))\otimes (\wedge^{n+1} \mathcal E) \cong R^n \pi_*(\pi^*(\wedge^{n+1} \mathcal E) \ \otimes \mathcal O (l))$$ Twisting by $0$ inside the parentheses by adding and subtracting $n+1$ shows this is equal to $$R^n\pi_*(\omega_{X/Y} \otimes O(n+l+1)).$$ Now, using Proposition III.8.1, it seems we might be done if had some version of Serre duality for $X/Y$ and used this on an appropriate affine cover of the base and patched things together. However, Hartshorne is very careful to not use things he hasn't proved previously, and he has only proved Serre duality for projective schemes over a field. We have also not used the fact that $$R^n\pi_*((\pi^*\wedge^{n+1} \mathcal E)(-n-1))\cong \mathcal O_Y,$$ which seems important and useful. I conclude that I am probably approaching this the wrong way, and that some different method is needed to solve the problem with only the tools Hartshorne has developed so far. On the other hand, I don't know how I would obtain an expression of the form $$\pi_*(O(-l-n-1))^\vee$$ without a duality theorem. How should one proceed here? • We know Serre duality for $\mathcal O(d)$ on $\mathbb P^n$ over any Noetherian ring $A$ (Theorem III.5.1). Locally, $\mathbb P(\mathscr E)$ is of this form (e.g. on an affine cover that trivialises $\mathscr E$). – Remy Nov 29 '15 at 3:56 • @Remy Ah, I'm being silly. Thanks. And the fact that we can glue the isomorphisms on these affine open sets follows from the fact that the duality mapping commutes with restriction, right? (If you post your comment as an answer, I would be happy to accept it.) – user4571 Nov 29 '15 at 4:50 Hartshorne does prove Serre duality for $\mathbb P^n$ over any Noetherian ring (Theorem III.5.1). Locally, $\mathbb P(\mathscr E)$ is of this form (e.g. on an affine cover that trivialises $\mathscr E$). To check that the duality commutes with restriction (on $\operatorname{Spec} A$), observe that the pairing $$H^0(\mathbb P^n, \mathcal O(d)) \times H^n(\mathbb P^n, \mathcal O(-d-n-1) \to H^n(\mathbb P^n, \mathcal O(-n-1)) = A$$ is defined in terms of a Čech complex whose formation 'does not depend on $A$'.
# SOCR EduMaterials Activities General CI Experiment ## Summary There are two types of parameter estimates – point-based and interval-based estimates. Point-estimates refer to unique quantitative estimates of various parameters. Interval-estimates represent ranges of plausible values for the parameters of interest. There are different algorithmic approaches, prior assumptions and principals for computing data-driven parameter estimates. Both point and interval estimates depend on the distribution of the process of interest, the available computational resources and other criteria that may be desirable (Stewarty 1999) – e.g., biasness and robustness of the estimates. Accurate, robust and efficient parameter estimation is critical in making inference about observable experiments, summarizing process characteristics and prediction of experimental behaviors. This activity demonstrates the usage and functionality of SOCR General Confidence Interval Applet. This applet is complementary to the SOCR Simple Confidence Interval Applet and its corresponding activity. ## Goals The aims of this activity are to: • demonstrate the theory behind the use of interval-based estimates of parameters, • illustrate various confidence intervals construction recipes • draw parallels between the construction algorithms and intuitive meaning of confidence intervals • present a new technology-enhanced approach for understanding and utilizing confidence intervals for various applications. ## Motivational example A 2005 study proposing a new computational brain atlas for Alzheimer’s disease (Mega et al., 2005) investigated the mean volumetric characteristics and the spectra of shapes and sizes of different cortical and subcortical brain regions for Alzheimer’s patients, individuals with minor cognitive impairment and asymptomatic subjects. This study estimated a number of centrality and variability parameters for these three populations. Based on these point- and interval-estimates, the study analyzed a number of digital scans to derive criteria for imaging-based classification of subjects based on the intensities of their 3D brain scans. Their results enabled a number of subsequent inference studies that quantified the effects of subject demographics (e.g., education level, familial history, APOE allele, etc.), stage of the disease and the efficacy of new drug treatments targeting Alzheimer’s disease. The Figure to the right illustrates the shape, center and distribution parameters for the 3D geometric structure of the right hippocampus in the Alzheimer’s disease brain atlas. New imaging data can then be coregistered and compared relative to the amount of anatomical variability encoded in this atlas. This enables automated, efficient and quantitative inference on large number of brain volumes. Examples of point and interval estimates computed in this atlas framework include the mean-intensity and mean shape location, and the standard deviation of intensities and the mean deviation of shape. ## Activity ### Confidence intervals (CI) for the population mean μ of normal population with known population variance σ2 Let $X_1, X_2, \cdots, X_n$ be a random sample from N(μ,σ). We know that $\bar X \sim N(\mu, \frac{\sigma}{\sqrt{n}})$. Therefore, $P\left(-z_{\frac{\alpha}{2}} \le \frac{\bar X - \mu}{\frac{\sigma}{\sqrt{n}}} \le z_{\frac{\alpha}{2}} \right)=1-\alpha,$ where $-z_{\frac{\alpha}{2}}$ and $z_{\frac{\alpha}{2}}$ are defined as shown in the figure below: The area 1 − α is called confidence level. Usually, the choices for confidence levels are the following: 1 − α $z_{\frac{\alpha}{2}}$ 0.90 1.645 0.95 1.960 0.98 2.325 0.99 2.575 The expression above can be written as: $P\left(\bar x -z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \le \mu \le \bar x + z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \right)=1-\alpha.$ We say that we are 1 − α confident that the mean μ falls in the interval $\bar x \pm z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}}$. ### Example 1 Suppose that the length of iron rods from a certain factory follows the normal distribution with known standard deviation $\sigma=0.2\ m$ but unknown mean μ. Construct a 95% confidence interval for the population mean μ if a random sample of n=16 of these iron rods has sample mean $\bar x=6 \ m$. We solve this problem by using our CI recipe $6 \pm 1.96 \frac{0.2}{\sqrt{16}}$ $6 \pm 0.098$ $5.902 \le \mu \le 6.098.$ ### Sample size determination for a given length of the confidence interval Find the sample size n needed when we want the width of the confidence interval to be $\pm E$ with confidence level 1 − α. #### Solution In the expression $\bar x \pm z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}}$ the width of the confidence interval is given by $z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}}$ (also called margin of error). We want this width to be equal to E. Therefore, $E=z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \Rightarrow n=\left(\frac{z_{\frac{\alpha}{2}} \sigma}{E}\right)^2.$ ### Example 2 Following our first example above, suppose that we want the entire width of the confidence interval to be equal to $0.05 \ m$. Find the sample size n needed. $n=\left(\frac{1.96 \times 0.2}{0.025}\right)^2=245.9 \Rightarrow n \approx 246.$ ## Introduction of the SOCR Confidence Interval Applet To access the SOCR applet on confidence intervals go to http://socr.ucla.edu/htmls/exp/Confidence_Interval_Experiment_General.html. To select the type and parameters of the specific confidence interval of interest click on the Confidence Interval button on the top -- this will open a new pop-up window as shown below: A confidence interval of interest can be selected from the drop-down list under CI Settings. In this case, we selected Mean - Population Variance Known. In the same pop-up window, under SOCR Distributions, the drop-down menu offers a list of all the available distributions of SOCR. These distributions are the same as the ones included in the SOCR Distributions applet. Once the desired distribution is selected, its parameters can be chosen numerically or via the sliders. In this example we select: normal distribution with mean 5 and standard deviation 2, sample size (number of observations selected from the distribution) is 20, the confidence level (1 − α = 0.95), and the number of intervals to be constructed is 50 (see screenshot below). Note: Make sure to hit enter after you enter any of the parameters above. To run the SOCR CI simulation, go back to the applet in the main browser window. We can run the experiment once, by clicking on the Step button, or many times by clicking on the Run button. The number of experiments can be controlled by the value of the Number of Experiments variable (10, 100, 1,000, 10,000, or continuously). In the screenshot above we observe the following: • The shape of the distribution that was selected (in this case Normal). • The observations selected from the distribution for the construction of each of the 50 intervals shown in blue on the top-left graph panel. • The confidence intervals shown as red line segments on the bottom-left panel. • The green dots represent instances of confidence intervals that do not include the parameter (in this case population mean of 5). • All the parameters and simulation results are summarized on the right panel of the applet. ### Practice Run the same experiment using sample sizes of 20, 30, 40, 50 with the same confidence level (1 − alpha = 0.95). What are your observations and conclusions? ## Confidence intervals for the population mean μ with known population variance σ2 From the central limit theorem we know that when the sample size is large (usually $n \ge 30$) the distribution of the sample mean $\bar X$ approximately follows $\bar X \sim N(\mu, \frac{\sigma}{\sqrt{n}})$. Therefore, the confidence interval for the population mean μ is approximately given by the expression we previously discussed: $P\left(\bar x -z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \le \mu \le \bar x + z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \right) \approx 1-\alpha.$ The mean μ falls in the interval $\bar x \pm z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}}$. Also, the sample size determination is given by the same formula: $E=z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \Rightarrow n=\left(\frac{z_{\frac{\alpha}{2}} \sigma}{E}\right)^2.$ ### Example 3 A sample of size n=50 is taken from the production of light bulbs at a certain factory. The sample mean of the lifetime of these 50 light bulbs is found to be $\bar x = 1,570$ hours. Assume that the population standard deviation is σ = 120 hours. • Construct a 95% confidence interval for μ. • Construct a 99% confidence interval for μ. • What sample size is needed so that the length of the interval is 30 hours with 95% confidence? ## An empirical investigation Two dice are rolled and the sum X of the two numbers that occurred is recorded. The probability distribution of X is as follows: X 2 3 4 5 6 7 8 9 10 11 12 P(X) 1/36 2/36 3/36 4/36 5/36 6/36 5/36 4/36 3/36 2/36 1/36 This distribution has mean μ = 7 and standard deviation σ = 2.42. We take 100 samples of size n=50 each from this distribution and compute for each sample the sample mean $\bar x$. Pretend now that we only know that σ = 2.42, and that μ is unknown. We are going to use these 100 sample means to construct 100 confidence intervals. each one with 95% confidence level for the true population mean μ. Here are the results: Sample $\bar x$ 95% CI for μ: $\bar x - 1.96 \frac{2.42}{\sqrt {50}} \le \mu \le \bar x + 1.96 \frac{2.42}{\sqrt {50}}$ Is μ = 7 included? 1 6.9 $6.23\leq \mu\leq 7.57$ YES 2 6.3 $5.63\leq\mu\leq 6.97$ NO 3 6.58 $5.91\leq\mu\leq 7.25$ YES 4 6.54 $5.87\leq\mu\leq 7.21$ YES 5 6.7 $6.03\leq\mu\leq 7.37$ YES 6 6.58 $5.91\leq\mu\leq 7.25$ YES 7 7.2 $6.53\leq\mu\leq 7.87$ YES 8 7.62 $6.95\leq\mu\leq 8.29$ YES 9 6.94 $6.27\leq\mu\leq 7.61$ YES 10 7.36 $6.69\leq\mu\leq 8.03$ YES 11 7.06 $6.39\leq\mu\leq 7.73$ YES 12 7.08 $6.41\leq\mu\leq 7.75$ YES 13 7.42 $6.75\leq\mu\leq 8.09$ YES 14 7.42 $6.75\leq\mu\leq 8.09$ YES 15 6.8 $6.13\leq\mu\leq 7.47$ YES 16 6.94 $6.27\leq\mu\leq 7.61$ YES 17 7.2 $6.53\leq\mu\leq 7.87$ YES 18 6.7 $6.03\leq\mu\leq 7.37$ YES 19 7.1 $6.43\leq\mu\leq 7.77$ YES 20 7.04 $6.37\leq\mu\leq 7.71$ YES 21 6.98 $6.31\leq\mu\leq 7.65$ YES 22 7.18 $6.51\leq\mu\leq 7.85$ YES 23 6.8 $6.13\leq\mu\leq 7.47$ YES 24 6.94 $6.27\leq\mu\leq 7.61$ YES 25 8.1 $7.43\leq\mu\leq 8.77$ NO 26 7 $6.33\leq\mu\leq 7.67$ YES 27 7.06 $6.39\leq\mu\leq 7.73$ YES 28 6.82 $6.15\leq\mu\leq 7.49$ YES 29 6.96 $6.29\leq\mu\leq 7.63$ YES 30 7.46 $6.79\leq\mu\leq 8.13$ YES 31 7.04 $6.37\leq\mu\leq 7.71$ YES 32 7.06 $6.39\leq\mu\leq 7.73$ YES 33 7.06 $6.39\leq\mu\leq 7.73$ YES 34 6.8 $6.13\leq\mu\leq 7.47$ YES 35 7.12 $6.45\leq\mu\leq 7.79$ YES 36 7.18 $6.51\leq\mu\leq 7.85$ YES 37 7.08 $6.41\leq\mu\leq 7.75$ YES 38 7.24 $6.57\leq\mu\leq 7.91$ YES 39 6.82 $6.15\leq\mu\leq 7.49$ YES 40 7.26 $6.59\leq\mu\leq 7.93$ YES 41 7.34 $6.67\leq\mu\leq 8.01$ YES 42 6.62 $5.95\leq\mu\leq 7.29$ YES 43 7.1 $6.43\leq\mu\leq 7.77$ YES 44 6.98 $6.31\leq\mu\leq 7.65$ YES 45 6.98 $6.31\leq\mu\leq 7.65$ YES 46 7.06 $6.39\leq\mu\leq 7.73$ YES 47 7.14 $6.47\leq\mu\leq 7.81$ YES 48 7.5 $6.83\leq\mu\leq 8.17$ YES 49 7.08 $6.41\leq\mu\leq 7.75$ YES 50 7.32 $6.65\leq\mu\leq 7.99$ YES 51 6.54 $5.87\leq\mu\leq 7.21$ YES 52 7.14 $6.47\leq\mu\leq 7.81$ YES 53 6.64 $5.97\leq\mu\leq 7.31$ YES 54 7.46 $6.79\leq\mu\leq 8.13$ YES 55 7.34 $6.67\leq\mu\leq 8.01$ YES 56 7.28 $6.61\leq\mu\leq 7.95$ YES 57 6.56 $5.89\leq\mu\leq 7.23$ YES 58 7.72 $7.05\leq\mu\leq 8.39$ NO 59 6.66 $5.99\leq\mu\leq 7.33$ YES 60 6.8 $6.13\leq\mu\leq 7.47$ YES 61 7.08 $6.41\leq\mu\leq 7.75$ YES 62 6.58 $5.91\leq\mu\leq 7.25$ YES 63 7.3 $6.63\leq\mu\leq 7.97$ YES 64 7.1 $6.43\leq\mu\leq 7.77$ YES 65 6.68 $6.01\leq\mu\leq 7.35$ YES 66 6.98 $6.31\leq\mu\leq 7.65$ YES 67 6.94 $6.27\leq\mu\leq 7.61$ YES 68 6.78 $6.11\leq\mu\leq 7.45$ YES 69 7.2 $6.53\leq\mu\leq 7.87$ YES 70 6.9 $6.23\leq\mu\leq 7.57$ YES 71 6.42 $5.75\leq\mu\leq 7.09$ YES 72 6.48 $5.81\leq\mu\leq 7.15$ YES 73 7.12 $6.45\leq\mu\leq 7.79$ YES 74 6.9 $6.23\leq\mu\leq 7.57$ YES 75 7.24 $6.57\leq\mu\leq 7.91$ YES 76 6.6 $5.93\leq\mu\leq 7.27$ YES 77 7.28 $6.61\leq\mu\leq 7.95$ YES 78 7.18 $6.51\leq\mu\leq 7.85$ YES 79 6.76 $6.09\leq\mu\leq 7.43$ YES 80 7.06 $6.39\leq\mu\leq 7.73$ YES 81 7 $6.33\leq\mu\leq 7.67$ YES 82 7.08 $6.41\leq\mu\leq 7.75$ YES 83 7.18 $6.51\leq\mu\leq 7.85$ YES 84 7.26 $6.59\leq\mu\leq 7.93$ YES 85 6.88 $6.21\leq\mu\leq 7.55$ YES 86 6.28 $5.61\leq\mu\leq 6.95$ NO 87 7.06 $6.39\leq\mu\leq 7.73$ YES 88 6.66 $5.99\leq\mu\leq 7.33$ YES 89 7.18 $6.51\leq\mu\leq 7.85$ YES 90 6.86 $6.19\leq\mu\leq 7.53$ YES 91 6.96 $6.29\leq\mu\leq 7.63$ YES 92 7.26 $6.59\leq\mu\leq 7.93$ YES 93 6.68 $6.01\leq\mu\leq 7.35$ YES 94 6.76 $6.09\leq\mu\leq 7.43$ YES 95 7.3 $6.63\leq\mu\leq 7.97$ YES 96 7.04 $6.37\leq\mu\leq 7.71$ YES 97 7.34 $6.67\leq\mu\leq 8.01$ YES 98 6.72 $6.05\leq\mu\leq 7.39$ YES 99 6.64 $5.97\leq\mu\leq 7.31$ YES 100 7.3 $6.63\leq\mu\leq 7.97$ YES We observe that four confidence intervals among the 100 that we constructed fail to include the true population mean μ = 7 (about 5%). ### Example 4 For this example, we will select the Exponential distribution with λ = 5 (mean of 1/5 = 0.2), sample size 60, confidence level 0.95, and number of intervals 50. These settings, along with the results of the simulations are shown below ## Confidence intervals for the population mean of normal distribution when the population variance σ2 is unknown Let $X_1, X_2, \cdots, X_n$ be a random sample from N(μ,σ2). It is known that $\frac{\bar X - \mu}{\frac{s}{\sqrt{n}}} \sim t_{n-1}$. Therefore, $P\left(-t_{\frac{\alpha}{2}; n-1} \le \frac{\bar X - \mu}{\frac{s}{\sqrt{n}}} \le t_{\frac{\alpha}{2}; n-1} \right)=1-\alpha,$ where $-t_{\frac{\alpha}{2};n-1}$ and $t_{\frac{\alpha}{2};n-1}$ are defined as follows: As before, the area 1 − α is called the confidence level. The values of $t_{\frac{\alpha}{2};n-1}$ can be found from: 1 − α n $t_{\frac{\alpha}{2};n-1}$ 0.90 13 1.782 0.95 21 2.086 0.98 31 2.457 0.99 61 2.660 • Note: The sample standard deviation is computed as follows: $s=\sqrt{\frac{\sum_{i=1}^{n} (x_i-\bar x)^2}{n-1}}$ or using the shortcut formula. $s=\sqrt{\frac{1}{n-1}\left[\sum_{i=1}^{n} x_i^2 - \frac{(\sum_{i=1}^{n} x_i)^2}{n}\right]}$ After some rearranging the expression above can be written as: $P\left(\bar x -t_{\frac{\alpha}{2};n-1} \frac{s}{\sqrt{n}} \le \mu \le \bar x + t_{\frac{\alpha}{2};n-1} \frac{s}{\sqrt{n}} \right)=1-\alpha$ We say that we are 1 − α confident that μ falls in the interval: $\bar x \pm t_{\frac{\alpha}{2};n-1} \frac{s}{\sqrt{n}}.$ ### Example 3 The daily production of a chemical product last week in tons was: 785, 805, 790, 793, and 802. • Construct a 95% confidence interval for the population mean μ. • What assumptions are necessary? ### SOCR investigation For this case, we will select the normal distribution with mean 5 and standard deviation 2, sample size of 25, number of intervals 50, and confidence level 0.95. These settings and simulation results are shown below: We observe that the length of the confidence interval differs for all the intervals because the margin of error is computed using the sample standard deviation. ## Confidence interval for the population proportion p Let $Y_1, Y_2, \cdots, Y_n$ be a random sample from the Bernoulli distribution with probability of success p. To construct a confidence interval for p the following result is used based on the normal approximation: $\frac{X-np}{\sqrt{np(1-p)}} \sim N(0,1),$ where $X=\sum_{i=1}^n{Y_i}$ is the total number of successes in the n experiments. Therefore, $P\left(-z_{\frac{\alpha}{2}} \le \frac{X-np}{\sqrt{np(1-p)}} \le z_{\frac{\alpha}{2}} \right)=1-\alpha,$ where $-z_{\frac{\alpha}{2}}$ and $z_{\frac{\alpha}{2}}$ defined as above. After rearranging we get: $P\left(\frac{X}{n} - z_{\frac{\alpha}{2}} \sqrt{\frac{p(1-p)}{n}} \le p \le \frac{X}{n} + z_{\frac{\alpha}{2}} \sqrt{\frac{p(1-p)}{n}}\right)=1-\alpha.$ The ratio $\frac{x}{n}$ is the point estimate of the population p and it is denoted with $\hat p=\frac{x}{n}$. The problem with this interval is that the unknown p appears also at the end points of the interval. As an approximation we can simply replace p with its estimate $\hat p=\frac{x}{n}$. Finally the confidence interval is given by: $P\left(\hat p - z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p(1-\hat p)}{n}} \le p \le \hat p + z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p(1-\hat p)}{n}}\right)=1-\alpha.$ We say that we are 1 − α confident that p falls in $\hat p \pm z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p(1-\hat p)}{n}}.$ ## Calculating sample sizes The basic problem we will address now is how to determine the sample size needed so that the resulting confidence interval will have a fixed margin of error E with confidence level 1 − α. In the expression $\hat p \pm z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p(1-\hat p)}{n}}$, the width of the confidence interval is given by the margin of error $z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p(1-\hat p)}{n}}$. We can simply solve for n: $E=z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p(1-\hat p)}{n}} \Rightarrow n = \frac {z_{\frac{\alpha}{2}}^2\hat p (1-\hat p)}{E^2}.$ However, the value of $\hat p$ is not known because we have not observed our sample yet. If we use $\hat p=0.5$, we will obtain the largest possible sample size. Of course, if we have an idea about its value (from another study, etc.) we can use it. ### Example 6 At a survey poll before the elections candidate A receives the support of 650 voters in a sample of 1,200 voters. • Construct a 95% confidence interval for the population proportion p that supports candidate A. • Find the sample size needed so that the margin of error will be $\pm 0.01$ with confidence level 95%. ### Another formula for the confidence interval for the population proportion p Another way to solve for p is presented below: $P\left(-z_{\frac{\alpha}{2}} \le \frac{X-np}{\sqrt{np(1-p)}} \le z_{\frac{\alpha}{2}} \right)=1-\alpha$ $P\left(-z_{\frac{\alpha}{2}} \le \frac{\frac{X}{n}-p}{\sqrt{\frac{p(1-p)}{n}}} \le z_{\frac{\alpha}{2}} \right)=1-\alpha$ $P\left(\frac{|\hat p - p|}{\sqrt{\frac{p(1-p)}{n}}} \le z_{\frac{\alpha}{2}} \right) =1-\alpha$ $P\left(\frac{(\hat p - p)^2}{\frac{p(1-p)}{n}} \le z_{\frac{\alpha}{2}}^2 \right) =1-\alpha$ We obtain a quadratic expression in terms of p: $(\hat p - p)^2 - z_{\frac{\alpha}{2}}^2 \frac{p(1-p)}{n} \le 0$ $(1+\frac{z_{\frac{\alpha}{2}}^2}{n})p^2 - (2\hat p + \frac{z_{\frac{\alpha}{2}}^2}{n})p + \hat p^2 = 0$ Solving for p we get the following confidence interval: $\frac{\hat p +\frac{z_{\frac{\alpha}{2}}^2}{2n} \pm z_{\frac{\alpha}{2}} \sqrt{\frac{\hat p(1-\hat p)}{n}+\frac{z_{\frac{\alpha}{2}}^2}{4n^2}}} {1+\frac{z_{\frac{\alpha}{2}}^2}{n}}.$ When n is large, this interval is the same as before. ## Exact confidence interval for p The first interval for proportions above (normal approximation) produces intervals that are too narrow when the sample size is small. The coverage is below 1 − α. The following exact method (or Clopper-Pearson) improves the low coverage of the normal approximation confidence interval. The exact confidence interval however has higher coverage than 1 − α. $\left[1+\frac{n-x+1}{xF_{1-\frac{\alpha}{2};2x,2(n-x+1)}}\right]^{-1} < p < \left[1+\frac{n-x}{(x+1)F_{\frac{\alpha}{2};2(x+1),2(n-x)}}\right]^{-1},$ where, x is the number of successes among n trials, and Fa,b,c is the a quantile of the F distribution with numerator degrees of freedom b and denominator degrees of freedom c. ## Confidence interval for the population variance σ2 of the normal distribution Again, let $X_1, X_2, \cdots, X_n$ random sample from N(μ,σ2). It is known that $\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1}$. Therefore, $P\left(\chi^2_{\frac{\alpha}{2}; n-1} \le \frac{(n-1)S^2}{\sigma^2} \le \chi^2_{1-\frac{\alpha}{2}; n-1} \right)=1-\alpha,$ where $\chi^2_{\frac{\alpha}{2};n-1}$ and $\chi^2_{1-\frac{\alpha}{2};n-1}$ are defined as follows: As with the T-distribution, the values of $\chi^2_{\frac{\alpha}{2};n-1}$ and $\chi^2_{1-\frac{\alpha}{2};n-1}$ may be found from: 1 − α n $\chi^2_{\frac{\alpha}{2};n-1}$ $\chi^2_{1-\frac{\alpha}{2};n-1}$ 0.90 4 0.352 7.81 0.95 16 6.26 27.49 0.98 25 10.86 42.98 0.99 41 20.71 66.77 If we rearrange the inequality above we get: $P\left(\frac{(n-1)s^2}{\chi_{1-\frac{\alpha}{2};n-1}^2} \le \sigma^2 \le \frac{(n-1)s^2}{\chi_{\frac{\alpha}{2};n-1}^2}\right)=1-\alpha.$ We say that we are 1 − α confident that the population variance σ2 falls in the interval: $\left[\frac{(n-1)s^2}{\chi_{1-\frac{\alpha}{2};n-1}^2}, \frac{(n-1)s^2}{\chi_{\frac{\alpha}{2};n-1}^2}\right]$ • Comment: When the sample size n is large, the $\chi^2_{n-1}$ distribution can be approximated by $N(n-1, \sqrt{2(n-1)})$. Therefore, in such situations, the confidence interval for the variance can be computed as follows: $\frac{s^2}{1+z_{\frac{\alpha}{2}}\sqrt{\frac{2}{n-2}}} \le \sigma^2 \le \frac{s^2}{1-z_{\frac{\alpha}{2}}\sqrt{\frac{2}{n-2}}}.$ ### Example 7 A precision instrument is guaranteed to read accurately to within 2 units. A sample of 4 instrument readings on the same object yielded the measurements 353, 351, 351, and 355. Find a 90% confidence interval for the population variance. Assume that these observations were selected from a population that follows the normal distribution. ## SOCR investigation Using the SOCR confidence intervals applet, we run the following simulation experiment: normal distribution with mean 5 and standard deviation 2, sample size 30, confidence intervals 50, and confidence level 0.95. However, if the population is not normal, the coverage is poor and this can be seen with the following SOCR example. Consider the exponential distribution with λ = 2 (variance is σ2 = 0.25). If we use the confidence interval based on the χ2 distribution, as described above, we obtain the following results (first with sample size 30 and then sample size 300). We observe that regardless of the sample size the 95% CI(σ2) coverage is poor. In these situations, (sampling from non-normal populations) an asymptotically distribution-free confidence interval for the variance can be obtained using the following large sample theory result: $\sqrt{n}(s^2-\sigma^2) \rightarrow N\left(0, \mu_4-\sigma^4\right),$ That is, $\frac{\sqrt{n}(s^2-\sigma^2)}{ \sqrt{\mu_4-\sigma^4}} \rightarrow N(0,1),$ where, μ4 = E(X − μ)4 is the fourth moment of the distribution. Of course, μ4 is unknown and will be estimated by the fourth sample moment $\mu_4=\frac{1}{n}\sum_{i=1}^n(X_i-\bar X)^4$. The confidence interval for the population variance is computed as follows: $s^2 - z_{\frac{\alpha}{2}} \frac{\sqrt{m_4-s^4}}{\sqrt{n}} \le \sigma^2 \le s^2 + z_{\frac{\alpha}{2}} \frac{\sqrt{m_4-s^4}}{\sqrt{n}}.$ Using the SOCR CI Applet, with exponential distribution (λ = 2), sample size 300, number of intervals 50, and confidence level 0.95, we see that the coverage of this interval is approximately 95%. The 95% CI(σ2) coverage for the intervals constructed using the method of asymptotic distribution-free intervals is much closer to 95%. ## Confidence intervals for the population parameters of a distribution based on the asymptotic properties of maximum likelihood estimates To construct confidence intervals for a parameter of some distribution, the following method can be used based on the large sample theory of maximum likelihood estimates. As the sample size n increases it can be shown that the maximum likelihood estimate $\hat \theta$ of a parameter θ follows approximately the normal distribution with mean θ and variance equal to the lower bound of the Cramer-Rao inequality. $\hat \theta \sim N\left(\theta, \frac{1}{nI(\theta)}\right)$ where $\sqrt{\frac{1}{nI(\theta)}}$ is the lower bound of the Cramer-Rao inequality. Because I(θ) (Fisher's information) is a function of the unknown parameter θ we replace θ with its maximum likelihood estimate $\hat \theta$ to get $I(\hat \theta)$. Since, $Z=\frac{\hat \theta - \theta}{\sqrt{\frac{1}{nI(\hat \theta)}}}$, we can write $P(-z_{\frac{\alpha}{2}} \le Z \le z_{\frac{\alpha}{2}}).$ If we replace Z with $Z=\frac{\hat \theta - \theta}{\sqrt{\frac{1}{nI(\hat \theta)}}}$, we get $P\left(-z_{\frac{\alpha}{2}} \le \frac{\hat \theta - \theta}{\sqrt{\frac{1}{nI(\hat \theta)}}} \le z_{\frac{\alpha}{2}}\right).$ Therefore, $P\left(\hat \theta -z_{\frac{\alpha}{2}} \sqrt{\frac{1}{nI(\hat \theta)}} \le \theta \le \hat \theta + z_{\frac{\alpha}{2}} \sqrt{\frac{1}{nI(\hat \theta)}} \right).$ Thus, we are 1 − α confident that θ falls in the interval $\hat \theta \pm z_{\frac{\alpha}{2}} \sqrt{\frac{1}{nI(\hat \theta)}}.$ ### Example 8 Use the result above to construct a confidence interval for the Poisson parameter λ. Let $X_1, X_2, \cdots, X_n$ be independent and identically distributed random variables from a Poisson distribution with parameter λ. We know that the maximum likelihood estimate of λ is $\hat \lambda=\bar x$. We need to find the lower bound of the Cramer-Rao inequality: $f(x)=\frac{\lambda e^{-\lambda x}}{x!} \Rightarrow lnf(x) = xln\lambda - \lambda -lnx!$. Let's find the first and second derivatives w.r.t. λ. $\frac{\partial {lnf(x)}}{\partial \lambda}=\frac{x}{\lambda}-1,$ $\frac{\partial^2{lnf(x)}}{\partial \lambda^2}=-\frac{x}{\lambda^2}.$ Therefore, $\frac{1}{-nE\left(\frac{\partial^2 lnf(x)}{\partial \lambda^2}\right)}=\frac{1}{-nE(-\frac{X}{\lambda^2})}= \frac{\lambda^2}{\lambda n}=\frac{\lambda}{n}$. When n is large, $\hat \lambda$ follows approximately $\hat \lambda \sim N\left(\lambda, \sqrt{\frac{\lambda}{n}}\right)$. Because λ is unknown, we replace it with its MLE estimate $\hat \lambda$: $\hat \lambda \sim N\left(\bar X, \sqrt{\frac{\bar X}{n}}\right).$ Therefore, the confidence interval for λ is: $\bar X \pm z_{\frac{\alpha}{2}} \sqrt{\frac{\bar X}{n}}.$ ### Application The number of pine trees at a certain forest follows the Poisson distribution with unknown parameter λ per acre. A random sample of size n=50 acres is selected and the number of pine trees in each acre is counted. Here are the results: 7 4 5 3 1 5 7 6 4 3 2 6 6 9 2 3 3 7 2 5 5 4 4 8 8 7 2 6 3 5 0 5 8 9 3 4 5 4 6 1 0 5 4 6 3 6 9 5 7 6 The sample mean is $\bar x=4.76$. Therefore, a 95% confidence interval for the parameter λ is $4.76 \pm 1.96 \sqrt{\frac{4.76}{50}}$ That is, $4.76 \pm 0.31.$ Therefore $4.15 \le \lambda \le 5.34$. ### Exponential distribution Verify that for the parameter λ of the exponential distribution the confidence interval obtained by this method is given as follows: $\frac{1}{\bar x} \pm z_{\frac{\alpha}{2}} \sqrt{\frac{1}{n \bar x^2}}.$ The following SOCR simulations refer to ## References • Mega, M., Dinov, I., Thompson, P., Manese, M., Lindshield, C., Moussai, J., Tran, N., Olsen, K., Felix, J., Zoumalan, C., Woods, R., Toga, A., and Mazziotta, J. (2005). Automated brain tissue assessment in the elderly and demented population: Construction and validation of a sub-volume probabilistic brain atlas. NeuroImage, 26(4), 1009-1018. • Stewarty, C. (1999). Robust Parameter Estimation in Computer Vision. SIAM Review, 41(3), 513–537. • Wolfram, S. (2002). A New Kind of Science, Wolfram Media Inc. • Agresti A, Coull A. “Approximate is Better than Exact for Interval Estimation of Binomial Proportions” American Statistician (1998). • Sauro, J., Lewis, J. R., Estimating Completion Rates From Small Samples Using Binomial Confidence Intervals: Comparisons and Recommendations, Proceedings of the Human Factor AND Ergonomics Society 49th Annual Meeting (2005). • Hogg, R. V., Tanis, E. A., Probability and Statistical Inference, 3rd Edition, Macmillan (1988). • Ferguson, T., S., A Course in Large Sample Theory, Chapman & Hall (1996). • John Rice, Mathematical Statistics and Data Analysis, Third Edition, Duxbury Press (2006).
# Volume 14 - 1991 ### 1. Proof of some conjectures on the mean-value of Titchmarsh series-II In this paper, we give lower bounds for $\int_0^H \vert F(it)\vert^k\,dt$, where $k=1$ or $2$ and $F(s)$ is a Dirichlet series of a certain kind. Since the conditions on $F(s)$ are relaxed, the bounds are somewhat smaller than those obtained previously. ### 2. On the zeros of a class of generalised Dirichlet series-VIII In an earlier paper (Part VII, with the same title as the present paper) we proved results on the lower bound for the number of zeros of generalised Dirichlet series $F(s)= \sum_{n=1}^{\infty} a_n\lambda^{-s}_n$ in regions of the type $\sigma\geq\frac{1}{2}-c/\log\log T$. In the present paper, the assumptions on the function $F(s)$ are more restrictive but the conclusions about the zeros are stronger in two respects: the lower bound for $\sigma$ can be taken closer to $\frac{1}{2}-C(\log\log T)^{\frac{3}{2}}(\log T)^{-\frac{1}{2}}$ and the lower bound for the number of zeros is something like $T/\log\log T$ instead of the earlier bound $>\!\!\!>T^{1-\varepsilon}$. ### 3. On the zeros of a class of generalised Dirichlet series-IX In the present paper, the assumptions on the function $F(s)$ are more restrictive but the conclusions about the zeros are stronger in two respects: the lower bound for $\sigma$ can be taken closer to $\frac{1}{2}-C(\log\log T)(\log T)^{-1}$ and the lower bound for the number of zeros is like $T/\log\log\log T$.
/ spv / 题库 / # Description In this problem, you have to analyze a particular sorting algorithm. The algorithm processes a sequence of n distinct integers by swapping two adjacent sequence elements until the sequence is sorted in ascending order. For the input sequence 9, 1, 0, 5, 4 Ultra-QuickSort produces the output 0, 1, 4, 5, 9 Your task is to determine how many swap operations Ultra-QuickSort needs to perform in order to sort a given input sequence. # Input The input contains several test cases. Every test case begins with a line that contains a single integer $$n < 5\times10^5$$ : the length of the input sequence. Each of the the following $$n$$ lines contains a single integer $$0 \le a_i < 10^9$$, the i-th input sequence element. Input is terminated by a sequence of length $$n = 0$$. This sequence must not be processed. # Output For every input sequence, your program prints a single line containing an integer number op, the minimum number of swap operations necessary to sort the given input sequence. # Samples ## Sample #1 ### Input 5 9 1 0 5 4 3 1 2 3 0 ### Output 6 0 ID 1020 9 (无) (无) 9 4 44%
This is an archived post. You won't be able to vote or comment. [–] 34 points35 points  (22 children) Equality for Eq a => (Nat -> Bool) -> a. I was even more surprised to learn it works just fine in ML too. [–][S] 3 points4 points  (10 children) That is the most amazing thing I've ever read. [–] 3 points4 points  (9 children) Can anyone tell me if I understand this correctly? I hope my explanation isn't too terse. I had a more thorough one typed up but then my comp froze before I could post. 1. A function on an infinite sequence of 1's and 0's that terminates must then operate on only a finite subsequence of the input (I call this the relevant subinput). 2. By evaluating the function across all relevant subinputs, we can completely characterize it. 3. The author writes some code that exploits lazy evaluation to automatically evaluate a function across all its relevant subinputs and no more. 4. By evaluating 2 functions across their relevant subinputs, we can determine equality across all such subinputs, and by extension equality across all possible infinite sequences of 1's and 0's. This is how exhaustive search of the infinite domain is possible. 5. The author then writes some code that essentially keeps track of the maximum depth of the infinite sequence lazy evaluation had to investigate in order to prove that a function equals itself. This is the modulus of continuity of the function. 6. He then does some code tricks I don't really understand to make these operations really, really fast. So that's what I think is going on. The reason we can do this with the Cantor set is that it is compact (bounded by 1,1,1.. and 0,0,0..) and the integers are not. By doesn't the searchability of input depend on the function being investigated? Couldn't we establish the equality of a function Int->Int if that function is \a->1? I don't understand how this is different than the modulus of 0 example the author provides. Private messages welcome if the sender so desires. [–] 2 points3 points  (6 children) The author writes some code that exploits lazy evaluation This code has nothing to do with lazy evaluation and works just fine with strict evaluation. [–] 0 points1 point  (5 children) How so? The comments mention lazy lists explicitly and the code for find and forsome uses recursion that would cause a sgack overflow if executed strictly. And there are calls to f a where a is an infinite sequence. Sure, you could implement the idea in a strict language, but doing so would require some data structures to delay computation on the tails of infinite sequences (or pehaps to carefully compress constant tails of those sequences to finite representations), which is something to do with lazy evaluation. [–] 1 point2 points  (4 children) How so? Because I translated the code directly into OCaml and it worked fine with no stack overflow. And there are calls to f a where a is an infinite sequence. There are no infinite sequences as such. Only functions (whose evaluation is arguably lazy, even in a strict language). [–] 0 points1 point  (3 children) OK, but that gets into the hazy boundary when talking lazy vs strict, of you are willing to accept a casual big-tent definition of "having something to do with laziness" to mean "correct program termination does depend on some appropriate evaluation strategy". But I am being deliberately and perhaps unfairly informal with my terminology (because I am not a genius/expert and I am (mentally) lazy) . It is totally fair to say, as you proved in your OCaml port, that it's not specific to Haskell's special and extreme brand of laziness, but a port to Python or Java would require a lot of evaluation-management code, and (I suspect) a port to a Lisp would require some use of delay/force or special forms or extra explicit ways of preventing some eager evaluations. [–] 1 point2 points  (2 children) I'm quite sure it would work perfectly fine in Lisp without force or delay, and I'm pretty sure it would work fine with a direct translation to Python or Java or C (even replacing Integer with Int would probably work well enough for illustration purposes). I recommend you try it. [–] 0 points1 point * (1 child) Because I trust you, I am going to transcribe to Python, making as few changes as possible to the form of the original. I'll be back. Edit: First cut: # forsome p = p(find(\a -> p a))) def forsome(p): p(find_i(lambda a: p(a))) def find_i (p): if forsome (lambda a: p(zero |q| a)): return zero |q| find_i(lambda a: p( zero |q| a)) else: return one |q| find_i(lambda a: p( one |q| a)) File "./infinite-search.py", line 74, in find_i if forsome (lambda a: p(zero |q| a)): File "./infinite-search.py", line 71, in forsome def forsome(p): p(find_i(lambda a: p(a))) RuntimeError: maximum recursion depth exceeded I trust there's a away to fix it (and I might have a silly bug, not a "laziness"-related issue), but I believe it would involve yield or adding some lambdas ("delay") and explicit evaluations thereof ("force)... If anyone can post an obvious fix, without dredging through all my code, I'd like to see it. For anyone curious, my Python code is here: http://pastebin.com/KMUtKuMJ [–] 0 points1 point * (0 children) Here is a fixed version: http://pastebin.com/QS7b2N8Q Unfortunately it was so impossibly slow that I changed the example f, g, and h functions to be simpler. None the less, it does work. I'm not a python programmer, so I'm sure things can be refined in order to make the code both more idiomatic and faster. Edit: http://pastebin.com/8Fwumf1E is somewhat faster. [–] 1 point2 points  (0 children) [–] 0 points1 point * (0 children) Note: You mean Integer, the infinite type, not Int, the finite type. The "computability" requirement is the trick. I think a constant function on Integer could be compared to another computable for equality. I am not convinced that a seemingly innocuous function like \x -> 2*x meets the compatability requirement, since it requires examining an unbounded number of digits to find the highest set bit. The examples in the article all explicitly confine themselves to considering and manipulating a bounded set of bit positions, whose bound depends only on the function, not the input. Disclaimer : I am having trouble believing what I wrote below is true. Equality computation would work, I believe, but for the modulus of continuity to be well defined, you would have to use an unintuitive notion of "computable function", because you don't "narrow" your search to a smaller range of integers , because a set of sequences with a common prefix would match in low order bits (not high order bits) , so the deeper you search, the wider the distance between numbers you examine. For example, 10000... =1, 10100...=5, 101100...=13 You have to write numbers with low order bits first in order to give a finite representation of each number. If you interpreted the sequences as binary expansions to the right of the decimal instead of to the left of the decimal, then you are representing the unit interval and modulus of continuity is meaningful (a simpler case than the Cantor space). The weird thing about Integer is that computable functions are those that only modify low order bits and leave the infinitely high end untouched, which is less satisfying than functions on compact sets, which modify high order bits and leave infinitely small end untouched. [–] 2 points3 points  (1 child) He mentions The next version of the definition of Haskell will have a built-in type of natural numbers Is he talking about where you use type fams to inductively define One, Two, ... for type safety in vector lengths or physical dimensions? Or are they just adding UInt? [–] 2 points3 points  (0 children) Since this was from 2007, I think he was just guessing (or looking at HaskellPrime). [–][S] 1 point2 points  (6 children) There aren't any countable types that aren't convertible to Nat -> Bool, are there? [–] 0 points1 point  (4 children) I'm pretty sure that all countable types are convertible to some Nat -> Bool. After all, if they're countable, they're iso to Nat, and thus to iso to any non-finite subset of Nat, and thus to any Nat -> Bool that is true of infinitely many nats. Just define an enumerator for your type, and line up the n'th enumerated element with the n'th Nat that the predicate holds of! For instance, lets just take [()], we have many isos to Nat -> Bool, for instance iso1: [] ~> 0 [()] ~> 1 [(),()] ~> 2 ... iso2 [] ~> 0 [()] ~> 2 [(),()] ~> 4 ... iso3 [] ~> 1 [()] ~> 3 [(),()] ~> 5 ... iso4 [] ~> 2 [()] ~> 3 [(),()] ~> 5 ... Any countably finite type type is iso to some Nat -> Bool too, in infinitely many boring ways, but the obvious one is the iso that sends n :: Nat to (n ==) :: Nat -> Bool. [–][S] 1 point2 points  (3 children) ... which was exactly what I was thinking! So shouldn't a sufficiently lazy iso allow us to use the tricks described in that article on other types? [–] 0 points1 point  (2 children) It has to be an isomorphism though; otherwise you wouldn't be able to convert back from Nat->Bool that you found. [–] 0 points1 point  (1 child) It is an iso tho! Every predicate that's true of finitely many nats will iso back to a type with that many inhabitants. The trick is that the types in Haskell aren't friendly enough. In Agda, I think the following will work: zero-or-one : Nat -> Bool zero-or-one 0 = true zero-or-one 1 = true zero-or-one _ = false data Ext (p : Nat -> Bool) : Set where ev : (n : Nat) -> T (p n) -> Ext p fwd : Bool -> Ext zero-or-one fwd true = ev 0 tt fwd false = ev 1 tt bwd : Ext zero-or-one -> Bool bwd (ev 0 tt) = true bwd (ev 1 tt) = true bwd (ev (suc (suc n)) ()) {- bwd (fwd true) = bwd (ev 0 tt) = true bwd (fwd false) = bwd (ev 1 tt) = false fwd (bwd (ev 0 tt)) = fwd true = ev 0 tt fwd (bwd (ev 1 tt)) = fwd false = ev 1 tt -} So because we're working with boolean tests, not true Agda predicates (which are of type X -> Set) we use a decidable extension type Ext which is inhabited only when the predicate is true of the number you're trying to apply it to (e.g. 0 and 1 here). Then we just show that Bool and Ext zero-or-one are isomorphic. If instead we just use actual predicates: zero-or-one' : Nat -> Set zero-or-one' 0 = Top zero-or-one' 1 = Top zero-or-one' n = Bot data Ext' (p : Nat -> Set) : Set where <_> : (n : Nat) -> {_ : p n} -> Ext' p fwd' : Bool -> Ext' zero-or-one' fwd' true = < 0 > fwd' false = < 1 > bwd' : Ext' zero-or-one' -> Bool bwd' < 0 > = true bwd' < 1 > = false bwd' < suc (suc ()) > {- bwd' (fwd' true) = bwd' < 0 > = true bwd' (fwd' false) = bwd' < 1 > = false fwd' (bwd' < 0 >) = fwd' true = < 0 > fwd' (bwd' < 1 >) = fwd' false = < 1 > -} We had to use an extension type only because predicates and types aren't the same thing in Agda, but that's not a problem for showing isomorphism, it just means we have to be really explicit about the fact that we're thinking of (the extensions of) predicates as types. [–] 0 points1 point  (0 children) There may be an isomorphism between Integer and "functions of Nat -> Bool that are only true on finitely many nats", but I don't think you can have a total computable isomorphism between Integer and the whole Nat -> Bool, precisely between when going from Nat -> Bool to Integer you would have to terminate even on inputs that have an infinite basis. (The set of computable Nat -> Bool function may be countable, but that is a meta-theorem that should not be computable.) I realize this must be obvious to you, but your reply would let one think that there is an isomorphism with the whole Nat -> Bool. One could probably express "is true on finitely many nats" as a dependent/refinement type, but that's another story. [–] 0 points1 point  (0 children) Beware of your intuition when thinking about this surprising result. (Nat -> Bool) -> Integer has decidable equality, but Integer -> Integer does not. [–][S] 0 points1 point  (1 child) Actually, wouldn't the find fail for something that searches for a Cantor with at least 1 one-bit? [–] 0 points1 point * (0 children) There is no computable predicate that tests if a stream contains at least 1 bit. [–] 24 points25 points  (3 children) a classic: > fibs = 1 : 1 : zipWith (+) fibs (tail fibs) [–] 2 points3 points  (0 children) I always liked that example, which indirectly lead to my discovery of corecursive queues, which is among the cleverest bits of Haskell code I've ever come up with. [–] 1 point2 points  (0 children) One of the reasons I started learning haskell. Magic. [–] 0 points1 point  (0 children) Should be 0,1,1. I got it wrong on here the other day, too. [–] 21 points22 points  (2 children) I've always liked powerSet = filterM $const [True, False] [–] 8 points9 points (1 child) [–] 1 point2 points (0 children) That was fantastic :) [–] 5 points6 points (4 children) Type-safe observable sharing. The fact that it's possible is both scary and awesome. [–] 3 points4 points (2 children) I was a little disappointed after reading that. It comes down to the fact that sharing is observable when using StableNames, which isn't all that surprising. [–] 1 point2 points (1 child) Well, yes. The implications are the key: You can do a hell a lot of clever things with snatching the fixpoints out of a deeply embedded DSL. Turns Haskell into lisp. [–] 2 points3 points (0 children) That is true. I can now, finally, checkpoint my structure with lots of sharing when using acid-state. Thanks. [–] 0 points1 point (0 children) Where do they actually define reifyGraph? e: Ah, they cleverly hid it in the section labeled 'Implementation'. Tricky! [–] 5 points6 points * (1 child) This page has a bunch of fun pieces of code. Blow your mind (Haskell Wiki) One line list of primes, with Data.List otherPrimes = nubBy (((>1).).gcd) [2..] One line Pascal triangle pascal = iterate (\row -> zipWith (+) ([0] ++ row) (row ++ [0])) [1] Also, this is one of my favorites. For a deeper treatment see From Löb's Theorem to Spreadsheet Evaluation. -- spreadsheet magic -- requires import Control.Monad.Instances let loeb x = fmap ($ loeb x) x in loeb [ (!!5), const 3, liftM2 (+) (!!0) (!!1), (*2) . (!!2), length, const 17] [–] 2 points3 points  (0 children) I don't think it's the most clever piece of Haskell code I know (that would have to fall to things like whole libraries (like 'ad'), or Oleg's stuff), but here's a nice small snippet that can be posted on reddit: primes = 2 : 3 : (minus [5, 7 ..] nonprimes) nonprimes = foldr1 f [[p*p, p*p+2*p ..] | p <- tail primes] where f (x:xt) ys = x : (union xt ys) Where 'minus' is just list-set difference, 'union' is list-set merge. All of this is on here) edit: Which, in hindsight, looks like a copy and paste affair from the haskellwiki [–] 4 points5 points  (4 children) As a relatively new haskeller, I might be easily impressed, but I find this simple function clever, even elegant, especially with functions as first-class objects. It might not be the most clever, but at least in the top 10. It's even short-circuiting! if' :: Bool -> a -> a -> a if' True x _ = x if' False _ y = y [–] 1 point2 points  (0 children) FYI if' is the catamorphism for Bool. That is, if we pass in the two constructors, we get an identity: \x -> if' x True False == x. In order to be more consistent with existing folds (either, maybe, foldr), the value on which to fold appears last in the argument list. Therefore: if' :: a -> a -> Bool -> a [–] 0 points1 point  (1 child) I seem to remember a discussion where some people argued that if/then/else shouldn't be built into the language, since it's so easy to define it as part of the language. In fact, I think there's a GHC extension to make if not be a keyword, so you can define it yourself, the way you did, without calling it if'. [–] 1 point2 points  (0 children) Yep. The RebindableSyntax extension re-writes if x then y else z as ifThenElse x y z, using whichever ifThenElse is in scope. Also, the types can be different (e.g. the condition doesn't have to be Bool), as long as the resulting function call type checks. [–] 0 points1 point  (0 children) I have used this to branch in quad trees: ifQuad True True thenQuad _ _ _ = thenQuad [–] 1 point2 points  (2 children) Well, I was pretty impressed by this when I saw this: import Control.Monad powerSet :: [a] -> [[a]] powerSet = filterM (\_ -> [True, False]) [–] 1 point2 points  (1 child) (\_ -> [True, False]) is the same as const [True, False] [–] 0 points1 point  (0 children) and it's also already mentioned above :) [–] 5 points6 points  (15 children) The cleverest code of all, in any language, is code which is easiest to understand, because it doesn't try to be too clever. [–][S] 2 points3 points  (1 child) Code can be very clever and very easy to understand at the same time. [–] 4 points5 points  (0 children) Sure, but when it is easy to understand, although it might have taken a lot of cleverness to write, it looks almost effortless. For example Norvig's sudoku program. [–] 2 points3 points  (4 children) "What's the most clever piece of Perl code you know?" makes the pejorative nature of "clever" more clear. [–] 5 points6 points  (3 children) For that meaning of clever, uncyclopedia has a few "nice" ones. This one, for example: fix$(<$>)<$>(:)<*>((<$>((:[])<$>))(=<<)<$>(*)<$>(*2))$1 [–] 2 points3 points  (2 children) What is that doing? [–] 2 points3 points  (1 child) > take 30 $fix$(<$>)<$>(:)<*>((<$>((:[])<$>))(=<<)<$>(*)<$>(*2))$1 [1,2,4,8,16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,65536,131072,262144,524288,1048576,2097152,4194304,8388608,16777216,33554432,67108864,134217728,268435456,536870912] [–] 0 points1 point (0 children) Oh, duh. The numbers flew by so quickly that by the time my terminal had so much as painted a frame we were up to the point where the commas were easy to miss and it looked like it was just spewing numbers. [–] 3 points4 points (2 children) The problem is that understanding is relative--a seasoned Haskeller with experience using the List monad will, for example, find code using functions like foldM or filterM with the list monad more readable than the alternative while somebody unfamiliar with it would be confused. Similarly, somebody with a math background would find very different code easy to read than somebody with an imperative programming background. Also, there are plenty of cases when really clever nigh-unreadable code is warranted--take a look at any heavily optimized programs. [–] 0 points1 point (1 child) I wish library writers would always leave a commented version of unoptimized code in Haddock, alongside the optimized strict hard-"Core" annotated magic. [–] 1 point2 points (0 children) This is actually a brilliant use case for QuickCheck--just assert that your unoptimized version returns the same results (or something close for floating point math) as the optimized version. This acts both as a fairly thorough test of your optimized algorithm--assuming the trivial version is tested and works, of course!--and as self-enforcing documentation. [–] 0 points1 point (1 child) surely that sentence contradicts itself..? [–] 0 points1 point (0 children) Simplicity is the ultimate sophistication. [–] 0 points1 point (0 children) Every party needs a pooper... [–] 0 points1 point (0 children) Not a one-liner, but code that makes use of the laziness properties of haskell's Array are pretty clever http://brandon.si/code/fun-with-lazy-arrays-the-lz77-algorithm/ [–] 2 points3 points (13 children) I love haskell but this perlish attitude about syntax really irks me. [–] 0 points1 point (11 children) What do you mean? [–] 3 points4 points (10 children) The language is wonderful but most haskeller tend to write as if the code is not meant to be read. All the shortcuts (like$) and their too-clever-for-their-own-good compositions are really hurting and reminds me of perl. Yep, I know what they mean. But I don't code all day long in haskell and after 8 hours in another language, I tend to mix them up and make stupid mistakes. If you live and breath Haskell all day long, that's not a problem. But it puts a silly barrier to entry for beginners/new teammember. [–] 1 point2 points  (6 children) The language is wonderful but most haskeller tend to write as if the code is not meant to be read. All the shortcuts (like $) and their too-clever-for-their-own-good compositions are really hurting and reminds me of perl. I think it is largely a matter of how you model functional concepts in your brain. Most of the composition, point-free style and things like$ make perfect sense if you really treat it as a declarative language where functions are tangible things to work with. Many tend to think imperatively about programming, and I can understand things become quite cryptic. I don't think mis-reading is very common, but admittedly you sometimes have to think a little to understand things. [–] 1 point2 points  (4 children) I grok the concepts but clever/complex/long compositions are often a bane to readability. IMHO, almost each step would need a comment to make it quickly readable. It's rarely done and one liner are common. ( the choice to use symbols instead of words just make them even more cryptic ). It's not a problem with the language but with the community who consider that the guy reading the code have all the time in the world to sort it out. Quite the opposite of the Python community where readability is praised. [–] 3 points4 points  (2 children) On the contrary, readability is also praised in the Haskell community. We prefer this: foo <$> bar <*> baz over this: fooVar = None barVar = bar() if barVar: bazVar = baz() if bazVar: fooVar = foo(barVar, bazVar) Wise use of combinators and composition can drastically improve readability. They are, of course, to be used with good taste. You'll often see slightly bad taste on irc or reddit simply because Haskellers like to exercise their ability to use said combinators, but that doesn't mean that such code is praised in a production environment. [–] 0 points1 point (1 child) I think your comparison is rather unfair. Wagnerius complains about overuse of combinators with symbolic names, e.g. writing foo <$> bar <*> baz liftA2 foo bar baz [–] 0 points1 point  (0 children) Haskell would benefit from structural layout and markup to help visualize higher-order expression structure, but is currently limited to the current language spec. A coloring, indenting and wireframing layout algorithm could help (think HTML alight diagramming as syntax highlighting on steroids, as semantic highlighting). liftA2 the isn't more readable to all -- sure precedence is easier to see, but that is a Pyrrhic victory when you lose visibility into the fact that foo, bar, and baz have very important parallel structure, and aren't just 3 function arguments. [–][S] 2 points3 points  (0 children) The point wasn't to create unreadable clever code, just to make clever code. [–] 0 points1 point  (0 children) Haskell, being pure and concise and general and rewrite-rule friendly, is a fantastic candidate. "intentional programming" where an IDE heavily augments and transforms code for display, even interactively, to unpack tight code and specialize general code and hide performance annotations and other such translations down the cleverness stack. It is totally doable, since Haskell is so transformation friendly and strongly typed. [–] 1 point2 points * (2 children) I don't think $ is a shortcut. Maybe it's because I come from a CS/Math background, but a super-common and useful piece of notation is 'this expression extends as far to the right as possible' as a way of eliminating parentheses reading problems. I find f$ g x y z a lot more readable than f (g x y z) especially when x/y/z may also have their own sub-expressions. From an actual piece of Haskell code: allTrees f a = Node a $map (allTrees f)$ f a compared to allTrees f a = Node a (map (allTrees f) (f a)) Similar, but I find the $ version more readable. As expressions become more complex that readability helps. [–] 0 points1 point (1 child) I think you got it, the shortcut is obvious for you, because you have the right profile, but very inconvenient for the casual/stupid/tired reader. And after a few years in development, I am deeply convinced that we are often this kind of reader in a professional (software) production environment : there is too little time and too much to do. This is a multi-dimensional problem, and while these shortcuts hurt readability and therefore productivity, the type system pushes in the other direction. Comments, documentation and compilation times are also in the mix. I am not sure of the final balance but the shortcuts do irks me as a negative influence. And, personally, I'll take a bit more verbosity to get much more readability. Your example is just fine to me. [–] 1 point2 points (0 children) It's a small example. It's like trying to read lisp where a statement ends with )))))); how can you tell if it meant ))))) or ))))))) instead? $ means: go as far to the right as possible, and no further--it's kind of like a semicolon in a sentence. Yes, it takes a little bit of getting used to, but learning to read any language takes some getting used to. It's still a better notation than requiring tons of balanced parentheses. Alternatively, you come up with potentially irrelevant names for all subexpressions, and then you add to the cognitive load of needing to check if the name is re-used anywhere else: allTrees transition startState = node where nextStates = transition startState subTrees = map (allTrees transition) nextStates node = Node startState subTrees For such a simple function, this seems overkill. Here's another simple example: putStrLn $concat [ "It took you ", show (numGuesses gs), " guesses." ] Once you see the $, you don't have to look for a matching close parenthesis. Instead, you immediately know "the argument to putStrLn is the entire rest of the line" [–][S] 0 points1 point  (0 children) Isn't it (.) . (.)?
Label $\alpha$ $A$ $d$ $N$ $\chi$ $\mu$ $\nu$ $w$ prim $\epsilon$ $r$ First zero Origin 1-1048-1048.523-r0-0-0 $4.86$ $4.86$ $1$ $2^{3} \cdot 131$ 1048.523 $0$ $$0 1 0 0.852513 Dirichlet character $$\chi_{1048} (523, \cdot)$$ 1-1048-1048.261-r1-0-0 112. 112. 1 2^{3} \cdot 131 1048.261 1$$ $0$ $1$ $0$ $0.378081$ Dirichlet character $$\chi_{1048} (261, \cdot)$$
# How do you find the slope of the line passing through the points (0, 5) and (-2, -1)? Jan 23, 2016 $y = 3 x + 5$ #### Explanation: The equation of line is: $y = m x + b$ where: $m = \frac{{y}_{2} - {y}_{1}}{{x}_{2} - {x}_{1}} = \frac{5 - \left(- 1\right)}{0 - \left(- 2\right)} = \frac{6}{2} = 3$ Note it is evident that b = 5; Why? Now we know then: $y = 3 x + b$ Pick any point say (0,5); y = 3(0) + 5; b = 5
# Analogues of sensitivity and specificity for continuous outcomes How can I calculate the sensitivity and specificity (or analogous measures) of a continuous diagnostic test in predicting a continuous outcome (e.g., blood pressure) without dichotomizing the outcome? Any ideas? It appears that researchers have done this using mixed effects modeling (see link below), but I'm not familiar with their use of the technique: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3026390/ By the way, I'm most familiar with R, so it would be ideal for the implementation you suggest to be accompanied with an R function (but it's okay if not). Thank you in advance for any suggestions! As the question is still not answered, here are my 2ct: I think here are two different topics mixed into this question: How can I calculate the sensitivity and specificity (or analogous measures) of a continuous diagnostic test in predicting a continuous outcome (e.g., blood pressure) without dichotomizing the outcome? I take it that you want to measure the performance of the model. The model predicts continuous (metric) outcome from some kind of input (happens to be metric in your example as well, but that doesn't really matter here). This is a regression scenario, not a classification. So you better look for performance measures for regression models, sensitivity and specificity are not what you are looking for*. Some regression problems have a "natural" grouping into presence and absence of something, which gives a link to classification. For that you may have a bimodal distribution: lots of cases with absence, and a metric distribution of values for cases of presence. For example, think of a substance that contaminates some product. Many of the product samples will not contain the contaminant, but for those that do, a range of concentrations is observed. However, this is not the case for your example of blood pressure (absence of blood pressure is not a sensible concept here). I'd even guess that blood pressures come in a unimodal distribution. All that points to a regression problem without close link to classification. * With the caveat that both words are used in analytical chemistry for regression (calibration), but with a different meaning: there, the sensitivity is the slope of the calibration/regression function, and specific sometimes means that the method is completely selective, that is it is insensitive to other substances than the analyte, and no cross-sensitivities occur. A. D. McNaught und A. Wilkinson, eds.: Compendium of Chemical Terminology (the “Gold Book”). Blackwell Scientific, 1997. ISBN: 0-9678550-9-8. DOI: doi:10.1351/ goldbook. URL: http://goldbook.iupac.org/. Analogues of sensitivity and specificity for continuous outcomes On the other hand, if the underlying nature of the problem is a classification, you may nevertheless find yourself describing it better by a regression: • the regression describes a degree of belonging to the classes (as in fuzzy sets). • the regression models (posterior) probability of beloning to the classes (as in logistic regression) • your cases can be described as mixtures of the pure classes (very close to "normal" regression, the contamination example above) For these cases it does make sense to extend the concepts behind sensitivity and specificity to "continuous outcome classifiers". The basic idea is to weight each case according to its degree of belonging to the class in question. For sensitivity and specificity that refers to the reference label, for the predictive values to the predicted class memberships. It turns out that this leads to a very close link to regression-type performance measures. We recently described this in C. Beleites, R. Salzer and V. Sergo: Validation of Soft Classification Models using Partial Class Memberships: An Extended Concept of Sensitivity & Co. applied to Grading of Astrocytoma Tissues Chemom. Intell. Lab. Syst., 122 (2013), 12 - 22. Again, the blood pressure example IMHO is not adequately described as classification problem. However, you may still want to read the paper - I think the formulation of the reference values there will make clear that blood pressure is not sensibly described in a way that is suitable for classification. (If you formulate a continuous degree of "high blood pressure" that would itself be a model, and a different one from the problem you describe.) I had only a quick glance at the paper you linked, but if I understood correctly the authors use thresholds (dichotomize) for both modeling strategies: for the continuous prediction is further processed: a prediction interval is calculated and compared to some threshold. In the end, they have a dichotomous prediction, and generate the ROC by varying the specification for the interval. As you specify that you want to avoid this, the paper doesn't seem to be overly relevant. • Sounds as if the methods in that paper are highly problematic. The original question was not answered because the question implies the use of questionable methodology. – Frank Harrell Jul 31 '13 at 11:50 • @FrankHarrell: yes, at least the naming of the method in the paper is misleading as the final prediction is not continuous. But I understood the mentioning of the paper as showing that the OP did look into literature, and thought that the paper may help - which it doesn't. – cbeleites Jul 31 '13 at 18:53 Trying to do this with continuous variables will expose the severe problems with backwards time-order measures even in the binary case (i.e., predicting X from Y in general). • It doesn't have to be with backwards time-order measures. I'm just looking for an analogue of sensitivity and specificity in the case where the outcome or dependent variable is continuous. – itpetersen Feb 1 '13 at 14:03 • The analog would have to be backwards, i.e., involve the distribution of $X$ given $Y$. Otherwise it is not analogous. – Frank Harrell Feb 1 '13 at 14:13 • You are right; you have stated a very reasonable goal that is in the correct time order. The sensitivity analog would be to predict prenatal testosterone exposure from future degree of aggression. To answer your question I would use a generalized $R^2$ measure, generalized ROC area (i.e. Somers' $D_{xy}$ rank correlation between predicted and observed), and the histogram of predicted aggression degree - the larger the variety the more discrimination the predictions. – Frank Harrell Feb 1 '13 at 16:11 • That's the reason for generalized in what I wrote. $D_{xy}$ is proportional to generalized ROC area which reduces to ordinary ROC area if $Y$ is binary. It is an unfortunate accident of nature that concordance probability can be derived from sensitivity and specificity. Just think of the more general (and non-backward-looking) concordance probability between $X$ and $Y$ ($c$-index; equals ROC area in binary $Y$ case). Concordance probability (easily described for continuous $Y$) or ROC area can be used without creating an ROC curve at all. No need to predict the past from the future. – Frank Harrell Feb 1 '13 at 16:59 • My R Hmisc and rms packages handle this. In Hmisc see the rcorr.cens function. – Frank Harrell Feb 1 '13 at 20:59 Taken loosely, sensitivity means the ability to respond to something if it's present, and specificity means the ability to suppress responding when it's absent. For continuous variables, sensitivity corresponds to the slope of the regression of the obtained measures on the true values of the variable being measured, and specificity corresponds to the standard error of measurement (i.e., the standard deviation of the obtained measures when the quantity being measured does not vary). EDIT, responding to comments by Frank Harrell and cbeleites. I was trying to give conceptual analogs of sensitivity and specificity. For continuous variables, the basic idea of sensitivity is that if two objects (or the same object at different times or under different conditions, etc) differ on the variable we are trying to measure, then our obtained measures should also differ, with bigger true differences leading to bigger measured differences. The regression of any variable, say $Y$, on any other, say $X$, is simply the conditional expected value, $\mathrm{E}\,Y|X$, treated as a function of $X$. The sensitivity of $Y$ to $X$ is the slope of that function -- i.e., its derivative with respect to $X$ -- evaluated at whatever values of $X$ are of interest, and possibly averaged with weights that reflect the relative importance or frequency of occurrence of different $X$-values. The basic idea of specificity is the converse of sensitivity: if $Y$ has high specificity and there are no true differences on $X$ then all our measured $Y$-values should be the same, regardless of whatever differences there may be among the objects on variables other than $X$; $Y$ should not respond to those differences. When $X$ is constant, higher variability among the $Y$-values implies lower specificity. The conditional standard deviation -- i.e., the s.d. of $Y|X$, again treated as a function of $X$ -- is an inverse measure of specificity. The ratio of the conditional slope over the conditional s.d. is a signal-to-noise ratio, and its square is referred to in psychometrics as the information function. • That does not sound quite right. Specificity does not involve variances and sens. and spec. only apply to binary quantities. – Frank Harrell Jul 31 '13 at 17:05 • @Ray: the meaning/definition of both terms for regression (chemical calibration) is so different from their meaning in classification that this should IMHO more emphasized. I've never heard about your definition of specificity, though. Can you give a reference? – cbeleites Jul 31 '13 at 18:54
Open problem Let $A$ and $B$ be infinite sets. Characterize the set of all coatoms of the lattice $\mathsf{FCD}(A;B)$ of funcoids from $A$ to $B$. Particularly, is this set empty? Is $\mathsf{FCD}(A;B)$ a coatomic lattice? coatomistic lattice?
# Prime Factorization I'm doing an Objective-C project and I need to do some prime factorization and so I came up with the following algorithm which is 99% C, except for the array part which was easiest to implement in Objective-C because it dynamically allocates new memory for the elements as they are simply added. The Objective-C part is pretty self-explanatory. I'm mainly looking for feedback on the algorithm, not necessarily the coding style and such, but I will gladly feedback on something I can improve on! On my machine with O3 optimizations, I get just under two seconds on factorizing 1234567890. I really don't know if that is good or bad. One thing I would like to add to it is a table for previously calculated factorizations, when on large numbers you will most likely encounter numbers more than once (e.g. 4, 6, etc.). I could maybe add to the function that if it's even, add a 2 to the factors and don't do the division/prime check which may improve performance. //Declare an array NSMutableArray *factors; bool isPrime(long n) { //Check if even or ends in a five if ((n % 2 == 0 && n > 2) || (n > 5 && n % 10 == 5)) { return 0; } if (n == 2) return 1; //set upper bound to square root long u = sqrt(n); long c = 0; for (long i = 3; i <= u; i+=2) if (n % i == 0) { c++; break; } return !c; } void factorize(long n) { //If prime return if (isPrime(n)) { return; } for (long i = n/2; i > 1; i--) { if (n % i) continue; //Check if indivisible long d = n/i; bool iP = isPrime(i), dP = isPrime(d); if (iP) else factorize(i); if (dP) else factorize(d); return; } } int main(int argc, const char * argv[]) { factors = [NSMutableArray array]; long n = 1234567890; factorize(n); NSLog(@"%@", factors); return 0; } • Why bother optimizing a slow algorithm when you can just replace it with another (easy to implement) algorithm that will instantly factor all 64-bit integers in under few thousand or so simple iterations? Work smart, not hard - en.wikipedia.org/wiki/Pollard%27s_rho_algorithm – Thomas Sep 5 '14 at 13:54 • @awashburn This is prime factorization, not primality testing.. – Thomas Sep 5 '14 at 14:08 • @Thomas I'd still argue against a stochastic algorithm... – recursion.ninja Sep 5 '14 at 15:18 • @awashburn The algorithm I linked is deterministic, and randomizing it doesn't really gain anything. Most nontrivial factorization algorithms are probabilistic though, that's just the reality of things, for 64 bit inputs though I find deterministic Miller-Rabin and rho (after trial division by the first few primes for efficiency) is as fast as it gets. Although I haven't tried ECM, but I suspect it has too much overhead in this case. – Thomas Sep 5 '14 at 15:30 • @Thomas But the function you linked to uses stochastic principals to define a partial (non-total) function. Your right it is deterministic, still wouldn't recommend a partial function... – recursion.ninja Sep 5 '14 at 15:36 Your algorithm tries to find the largest factors first. This is ineffective because you have to test each possible factor whether it is a prime number or not. It is much more effective to start with the smallest factor. If you divide the number by each factor found this way then you will get only prime factors, and the prime testing function isPrime() becomes obsolete. You start with dividing by 2 and then test possible odd factors in increasing order. (If you have a pre-computed list of primes then this can be further improved.) On my computer this reduced the runtime for the factorization of 1234567890 from 1.6 to 0.00005 seconds. In addition, a global factors variable is no longer necessary with this change, and your method could look like this: NSMutableArray * primeFactorization(long n) { NSMutableArray *factors = [NSMutableArray array]; // Divide by 2: while (n > 1 && n % 2 == 0) { n /= 2; } // Divide by 3, 5, 7, ... // // i is a possible *smallest* factor of the (remaining) number n. // If i * i > n then n is either 1 or a prime number. for (long i = 3; i * i <= n; i += 2) { while (n > 1 && n % i == 0) { n /= i; } } if (n > 1) { // Append last prime factor: } return factors; } int main(int argc, const char * argv[]) { long n = 1234567890; NSDate *startTime = [NSDate date]; NSArray *factors = primeFactorization(n); NSLog(@"time: %f", -[startTime timeIntervalSinceNow]); NSLog(@"%@", factors); return 0; } • Since you don't use argc & argv, I would put void in their spot. At least, that is what I would do in C. Other than that, very good advice: +1 – syb0rg Sep 5 '14 at 15:14 Feedback on the algorithm //Check if even or ends in a five if ((n % 2 == 0 && n > 2) || (n > 5 && n % 10 == 5)) { return 0; } The part which checks if your number is divided by 5 makes very little optimization, since it's checked on the second iteration of the loop which follow. for (long i = 3; i <= u; i+=2) if (n % i == 0) { c++; break; } I suggest creating a cache of all prime numbers up to, say, 100000, and rewrite the loop like this: for (long i = 0; i < primeCacheSize; i++) if (n % primeCache[i] == 0) { return 1; } for (long i = 100001; i <= u; i++) if (n % i == 0) { return 1; } return 0; This is a bit strange: //If prime return if (isPrime(n)) { return; } for (long i = n/2; i > 1; i--) { if (n % i) continue; //Check if indivisible How will this code work on 208907 * 208907 * 208907 (9117147423118643)? Your function isPrime will determine that n is divided by 208907, return false, and you'll start iterating from 9117147423118643 / 2 downwards. Looks like an infinite (and useless) loop. The solution I may suggest: make your isPrime function return the number itself if n is prime or number's divider if it isn't. for (long i = 0; i < primeCacheSize; i++) if (n % primeCache[i] == 0) { return primeCache[i]; } for (long i = 100001; i <= u; i++) if (n % i == 0) { return i; } return n; And rewrite factorize like that: void factorize(long n) { //If prime return while (n > 1) { divider = isPrime(n); n /= divider; } • I'd limit the prime cache loops with u as well to avoid unnecessary checks: i < primeCacheSize && i <= u. – DarkDust Sep 5 '14 at 10:53 • Also, with the suggestion of having isPrime return the divider: the function name isPrime is now misleading as I'd expect a boolean value to be returned, so its name should be changed to something like firstPrimeOf or whatever. Also, I'd recommend using a NSMutableSet instead of an NSMutableArray for factors. – DarkDust Sep 5 '14 at 10:56 • A note on your prime cache. if you have 10000 primes cached, and the number is not in the cache and the number is greater then the upper limit of the cache, you should not start the next loop' – recursion.ninja Sep 5 '14 at 14:10 • If the number is not in the cache, you should start the second loop not at i=cacheSize+1 but at i=cache[cacheSize]+2 (+2 so it's an odd number), and the loop should increment by i+=2. This ensures that you are only checking odd numbers. – recursion.ninja Sep 5 '14 at 14:11 Given that this answer is marked with the tag, and the user quite appropriately points out that the code is 99% , I decided that it might be worthwhile to demonstrate how simple the C solution to this problem is. There are only two aspects of Objective-C the user uses. 1. NSMutableArray. This is convenient and it handles the resizing of the array for us. However, it's important to note that there's nothing magical about how an NSMutableArray resizes. Objective-C mutable arrays are resized in the same way as a C-array. 2. NSNumber. We don't actually see NSNumber anywhere directly, but that's what the @() syntax is doing--creating NSNumber objects. And why? Because NSArray can not store primitive data types. It must store pointers. Unless the actual intent is to use this array in some other Objective-C code where the NSNumber objects will actually be easier to deal with, there's not a particularly good reason to use Objective-C here, except perhaps being uncomfortable in working with C-style arrays. So, here's my solution to this problem in C. As a starting point, I used Martin R's answer, which actually also is very similar to the algorithm I used for a slightly different problem here. Please note, the following code snippet includes the code from Martin's answer so that the two solutions can be compared side-by-side in case he ever edits/removes his answer. NSMutableArray * primeFactorization(long n) { NSMutableArray *factors = [NSMutableArray array]; // Divide by 2: while (n > 1 && n % 2 == 0) { n /= 2; } // Divide by 3, 5, 7, ... // // i is a possible *smallest* factor of the (remaining) number n. // If i * i > n then n is either 1 or a prime number. for (long i = 3; i * i <= n; i += 2) { while (n > 1 && n % i == 0) { n /= i; } } if (n > 1) { // Append last prime factor: } return factors; } My approach in straight C: long * cPrimeFactorization(long n, long *factorCount) { long currentSize = 2; long currentIndex = 0; long *factors = malloc(sizeof(long) * currentSize); while (n > 1 && n % 2 == 0) { factors[currentIndex++] = 2; if (currentIndex >= currentSize) { currentSize *= 2; long *reallocFactors = realloc(factors, currentSize * sizeof(long)); if (reallocFactors) { factors = reallocFactors; } else { printf("realloc failed"); free(factors); return NULL; } } n /= 2; } for (long i = 3; i * i <= n; i += 2) { while (n > 1 && n % i == 0) { factors[currentIndex++] = i; if (currentIndex >= currentSize) { currentSize *= 2; long *reallocFactors = realloc(factors, currentSize * sizeof(long)); if (reallocFactors) { factors = reallocFactors; } else { printf("realloc failed"); free(factors); return NULL; } } n /= i; } } if (n > 1) { factors[currentIndex++] = n; } *factorCount = currentIndex; return factors; } Running them side by side: int main(int argc, const char * argv[]) { long n = 1234567890; NSDate *startTime = [NSDate date]; NSArray *oFactors = primeFactorization(n); NSLog(@"time (Objective-C): %f", -[startTime timeIntervalSinceNow]); startTime = [NSDate date]; long factorCount; long * cFactors = cPrimeFactorization(n, &factorCount); NSLog(@"time (C): %f", -[startTime timeIntervalSinceNow]); NSLog(@"%@", oFactors); for (long i = 0; i < factorCount; ++i) { printf("%li\n", cFactors[i]); } return 0; } For the sake of benchmarking consistency, I just used the Objective-C approach (because Martin already had it and I didn't feel like looking up a C approach). I tried a handful of different numbers and ran them all multiple times. The C approach runs faster in every case. More than speed, the C approach actually takes less total memory than the Objective-C approach. Not only does the Objective-C array have more overhead than the C array, (each index is the same size, 8-bytes), but the Objective-C array is an index of pointers to objects. So the Objective-C array is already slightly bigger than the C array, plus there's several magnitudes more memory space for holding all of the NSNumber objects out on the heap as well. And that we have to spend so much time creating these NSNumber objects out on the stack is part of the reason the Objective-C solution is slower. It's also slower because passing messages to objects (the addObject: we call several times) is very slow compared to being able to directly insert a value in at a memory location (which is what we do with the C-array). Please note, I'm definitely an Objective-C programmer, and I am not a C programmer in the slightest. My C code almost certainly has room for improvement I'd imagine... but as Objective-C programmers, we must always keep in mind that pure-C should always be an option. As a note, this speed (and memory) difference becomes particularly noticeable with very large powers of two. For example, with 2^49th, which is 562,949,953,421,312, the Objective-C solution takes almost 3 times as long on my computer. • With regards to printf("realloc failed");, the caller should handle reporting errors. – Corbin Sep 5 '14 at 23:40 • Yeah, probably. I'm not familiar with C-standards or what should be done in a case like this... I just know it's possible for realloc to fail... – nhgrif Sep 5 '14 at 23:47 • @nhgrif: My last comment was wrong and I have deleted it. But it would be interesting if - after you added the necessary allocation to the while (n > 1 && n % 2 == 0) loop - the difference is still particularly noticeable for powers of two. – Martin R Sep 7 '14 at 6:04 There are potential problems with double sqrt(double) bool isPrime(long n) { ... long u = sqrt(n); ... for (long i = 3; i <= u; i+=2) 1. Conversion from long to double may lose precision. Although double typically can represent every value of an int54_t, long could be wider such as 64 bits. 2. A good sqrt() will return the exact value for a perfect square, but not all sqrt() behave so nicely. Code should be prepared for something like sqrt(121) --> 10.99999. 3. Since the for() loop has no trouble iterating a little pass ideal, recommend a simple solution for both above issues: Increase the upper limit by 1. long u = sqrt(n); u++;
# Change the order in ModernCV banking When using classic theme vs using banking theme, the classic highlights the job title whereas banking highlights the employer. How do I make it so that banking highlights the job title instead of the employer, and comes in the order of job title and then employer like in classic theme. The MWE is the following: (change the style from classic to banking for the change in the order of job title and employer} \documentclass[11pt,a4paper,sans]{moderncv} \moderncvstyle{classic} %\moderncvstyle{banking} \moderncvcolor{blue} \usepackage[scale=0.75]{geometry} \name{John}{Doe} \title{Resumé title} \phone[mobile]{+1~(234)~567~890} \phone[fixed]{+2~(345)~678~901} \phone[fax]{+3~(456)~789~012} \begin{document} \section{Experience} \subsection{Vocational} \cventry{year--year}{Job title}{Employer}{City}{}{General description no longer than 1--2 lines.\newline{}% } \end{document} - The obvious solution is to swap all the field entries around. I can swap all the fields around to get the effect I want but I have to change all the entries. Maybe there is a simpler one-line solution? –  Mark Sep 1 '14 at 0:17 \renewcommand*{\cventry}[7][.25em]{ \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r}% {\bfseries #3\ifthenelse{\equal{#6}{}}{}{, #6}} & {\bfseries #2}\\% {\itshape #4} & {\itshape #5}\\% \end{tabular*}% \ifx&#7&% \else{\\\vbox{\small#7}}\fi% Code: \documentclass[12pt,letterpaper,sans]{moderncv} \moderncvstyle{banking} \moderncvcolor{blue} \usepackage[scale=0.85]{geometry} \usepackage{multicol} \firstname{John} \familyname{Doe} \title{Banking Executive} \phone[mobile]{+1~(234)~567~890} \phone[fixed]{+2~(345)~678~901} \phone[fax]{+3~(456)~789~012} \email{Email} \social[github]{github} \quote{Some quote} \renewcommand*{\cventry}[7][.25em]{ \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r}% {\bfseries #3\ifthenelse{\equal{#6}{}}{}{, #6}} & {\bfseries #2}\\% {\itshape #4} & {\itshape #5}\\% \end{tabular*}% \ifx&#7&% \else{\\\vbox{\small#7}}\fi% \begin{document} \makecvtitle \section{Experience} \subsection{Vocational} \cventry{Jan/2010 -- Feb/2012}{Jobtitle}{Employer}{City}{}{General description no longer than 1--2 lines} \end{document} - Thank you. Works great! –  Mark Sep 1 '14 at 0:36 You should just update \cventry to suit your needs. This is \cventry from the banking style: \renewcommand*{\cventry}[7][.25em]{ \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r}% {\bfseries #4} & {\bfseries #5} \\% {\itshape #3\ifthenelse{\equal{#6}{}}{}{, #6}} & {\itshape #2}\\% \end{tabular*}% \ifx&#7&% \else{\\\vbox{\small#7}}\fi% Here is \cventry from the classic style: \renewcommand*{\cventry}[7][.25em]{% \cvitem[#1]{#2}{% {\bfseries#3}% \ifthenelse{\equal{#4}{}}{}{, {\slshape#4}}% \ifthenelse{\equal{#5}{}}{}{, #5}% \ifthenelse{\equal{#6}{}}{}{, #6}% .\strut% \ifx&#7&% \else{\newline{}\begin{minipage}[t]{\linewidth}\small#7\end{minipage}}\fi}} Adjustments are possible via xpatch due to the optional argument of \cventry: \documentclass[11pt,a4paper,sans]{moderncv} \moderncvstyle{banking} \moderncvcolor{blue} \usepackage[scale=0.75]{geometry} \name{John}{Doe} \title{Resumé title} \phone[mobile]{+1~(234)~567~890} \phone[fixed]{+2~(345)~678~901} \phone[fax]{+3~(456)~789~012} \newcommand{\employerfont}{\slshape} \newcommand{\jobfont}{\bfseries} \usepackage{xpatch}% http://ctan.org/pkg/xpatch % \xpatchcmd{<cmd>}{<search>}{<replace>}{<success>}{<failure>} \xpatchcmd{\cventry}{\bfseries #4}{\jobfont #3}{}{}% Swap Employer for Job \xpatchcmd{\cventry}{\itshape #3}{\employerfont #4}{}{}% Swap Job for Employer \begin{document} \section{Experience} \subsection{Vocational} \cventry{year--year}{Job title}{Employer}{City}{}{General description no longer than 1--2 lines.\newline{}% } \end{document} -
# What is the relation between electromagnetic wave and photon? At the end of this nice video, she says that electromagnetic wave is a chain reaction of electric and magnetic fields creating each other so the chain of wave moves forward. I wonder where the photon is in this explanation. What is the relation between electromagnetic wave and photon? - Please see my answer here. You can understand Willis Lamb's frustration and the waves and normal modes describe the electromagnetic field. Photons are then the changes of number state of each normal mode - they are like the discrete "communications" the whole EM field has with the other quantum fields of the World that make up "empty space". One can reinterpret this statement as Maxwell's equations being the propagation equation for a lone "photon", but only in terms of propagation equations for the mean of electric and magnetic field .... – WetSavannaAnimal aka Rod Vance Dec 19 '13 at 0:48 ...observables when the EM field is in a superposition of $n=1$ Fock states (so it is "one photon propagating"). – WetSavannaAnimal aka Rod Vance Dec 19 '13 at 0:49 Both the wave theory of light and the particle theory of light are approximations to a deeper theory called Quantum Electrodynamics (QED for short). Light is not a wave nor a particle but instead it is an excitation in a quantum field. QED is a complicated theory, so while it is possible to do calculations directly in QED we often find it simpler to use an approximation. The wave theory of light is often a good approximation when we are looking at how light propagates, and the particle theory of light is often a good approximation when we are looking at how light interacts i.e. exchanges energy with something else. So it isn't really possible to answer the question where the photon is in this explanation. In general if you're looking at a system, like the one in the video, where the wave theory is a good description of light you'll find the photon theory to be a poor description of light, and vice versa. The two ways of looking at light are complementary. For example if you look at the experiment described in Anna's answer (which is one of the seminal experiments in understanding diffraction!) the wave theory gives us a good description of how the light travels through the Young's slits and creates the interference pattern, but it cannot describe how the light interacts with the photomultiplier used to record the image. By contrast the photon theory gives us a good explanation of how the light interacts with the photomultiplier but cannot describe how it travelled through the slits and formed the diffraction pattern. - This is news because all QM teachers told me that photons abstractions, proposed by QED, which is more exact than wave discription. However, this should not stop us from figuring out how two are related. Actually quanta = particles. – Val Dec 20 '13 at 18:09 @Val The way we actually calculate things in QED is with a perturbative expansion that involves photons. The underlying exact theory is one of several completely quantum fields. – Kevin Driscoll Dec 20 '13 at 19:26 There is a sense in which the classical description of light is retrieved as the classical limit of a coherent state of photons. I would say that this would be an appropriate answer to "where is the photon in the classical wave theory of light?" – Prahar May 4 at 18:01 @Prahar Yes, but you just said it yourself - that's not the reality. That's just "how it fits in the models"- it doesn't help you outside of the constraints of the models, and that's exactly what the OP is asking here. In the classical wave theory of light... there's no photons. Not one per wave, not "infinite amounts" per wave, just no photons, period. – Luaan May 5 at 11:32 In this link there exists a mathematical explanation of how an ensemble of photons of frequency $\nu$ and energy $E=h\nu$ end up building coherently the classical electromagnetic wave of frequency $\nu$. It is not simple to follow if one does not have the mathematical background. Conceptually watching the build up of interference fringes from single photons in a two slit experiment might give you an intuition of how even though light is composed of individual elementary particles, photons, the classical wave pattern emerges when the ensemble becomes large. Figure 1. Single-photon camera recording of photons from a double slit illuminated by very weak laser light. Left to right: single frame, superposition of 200, 1’000, and 500’000 frames. - In 1995 Willis Lamb published a provocative article with the title "Anti-photon", Appl. Phys. B 60, 77-84 (1995). As Lamb was one of the great pioneers of 20th century physics it is not easy to dismiss him as an old crank. He writes in the introductory paragraph: The photon concepts as used by a high percentage of the laser community have no scientific justification. It is now about thirty-five years after the making of the first laser. The sooner an appropriate reformulation of our educational processes can be made, the better. There is a lot to talk about the wave-particle duality in discussion of quantum mechanics. This may be necessary for those who are unwilling or unable to acquire an understanding of the theory. However, this concept is even more pointlessly introduced in discussions of problems in the quantum theory or radiation. Here the normal mode waves of a purely classical electrodynamics appear, and for each normal mode there is an equivalent pseudosimple harmonic-oscillator particle which may then have a wave function whose argument is the corresponding normal-mode amplitude. Note that the particle is not a photon. One might rather think of a multiplicity of two distinct wave concepts and a particle concept for each normal mode of the radiation field. However, such concepts are really not useful or appropriate. The "Complementarity Principle" and the notion of wave-particle duality were introduced by N. Bohr in 1927. They reflect the fact that he mostly dealt with theoretical and philosophical concepts, and left the detailed work to postdoctoral assistants. It is very likely that Bohr never, by himself, made a significant quantum-mechanical calculation after the formulation of quantum mechanics in 1925-1926. It is high time to give up the use of the word "photon", and of a bad concept which will shortly be a century old. Radiation does not consist of particles, and the classical, i.e., non-quantum, limit of QTR is described by Maxwell's equations for the electromagnetic fields, which do not involve particles. Talking about radiation in terms of particles is like using such ubiquitous phrases as "You know" or "I mean" which are very much to be heard in some cultures. For a friend of Charlie Brown, it might serve as a kind of security blanket. - What are photons? Photons get emitted every time when a body has a temperature higher 0 Kelvin (the absolute zero temperature). All bodies, surrounding us (except black holes) at any time radiate. They emit radiation into the surrounding as well as the receive radiation from the surrounding. Max Planck was the physicist who found out that this radiation has to be emitted in small portions, later called quanta and even later called photons. Making some changes in the imagination of how electrons are distributed around the nucleus, it was concluded that electrons get disturbed by incoming photons, by this way gain energy and give back this energy by the emission of photons. Photons not only get emitted from electrons. The nucleus, if well disturbed, emits photons too. Such radiations are called X-rays and gamma rays. EM radiation is the sum of all emitted photons from the involved electrons, protons and neutrons of a body. All bodies emit infrared radiation, beginning with approx. 500°C they emit visible light, first glowing in red and then shining brighter and brighter. There are some methods to stimulate the emission of EM radiation. It was found out that beside the re-emission of photons there is a second possibility to generate EM radiation. Every time, an electron is accelerated, it emits photons. This explanation helps to understand what happens in the glow filament of an electric bulb. The electrons at the filament are not moving straight forwards, they bump together and running zig-zag. By this accelerations they lose energy and this energy is emitted as photons. Most of this photons are infrared photons, and some of this photons are in the range of the visible light. In a fluorescent tube the electrons get accelerated with higher energy and they emit ultraviolet photons (which get converted into visible light by the fluorescent coating of the glass). Higher energy (with higher velocity) electrons reach the nucleus and the nucleus emits X-rays. As long as the introduced energy is continuous, not one is able to measure an oscillation of EM radiation. What are EM waves? Using a wave generator it is possible to create oscillating EM radiation. Such radiations are called radio waves. It was found out that a modified LC circuit in unit with a wave generator is able to radiate and that it is possible to filter out such a modulated radiation (of a certain frequency) from the surrounding noisy EM radiation. So the wave generator has a double function. The generator has to accelerate forward and backward the electrons inside the antenna rod and by this the photons of the radio wave get emitted and the generator makes it possible to modulate this EM radiation with a carrier frequency. It has to be underlined that the frequency of the emitted photons are in the IR range and sometime in the X-ray range. There is an optimal ratio between the length of the antenna rod and the frequency of the wave generator. But of course one can change the length of the rod or one can change the frequency of generator. To conclude from the length of the antenna rod to the wavelength of the emitted photons is nonsense. What is the wave characteristic of the photon? Since the electrons in an antenna rod are accelerated more or less at the same time, they emit photons simultaneous. The EM radiation of an antenna is measurable and it was found out that the nearfield of an antenna has two components, an electric field component and a magnetic field component. This two components get converted in each other, the induce each other. Sometimes the emitted energy is in the electric field component and otherwise the energy is in the magnetic field component. So why not conclude from the overall picture to the nature of the involved photons? They are the constituents which make the radio wave. -
# Does some Lucas sequence contain infinitely many primes? Does some nontrivial Lucas sequence contain infinitely many primes? The Mersenne numbers $$M_n=2^n-1:n$$ not necessarily prime are a Lucas sequence with recurrence relation $$x_{n+1}=2x_n+1$$. It's an open problem how many Mersenne numbers are prime and we know neither whether $$0\%$$ or $$100\%$$ are prime (asymptotically speaking). There are also similar sequences of repunits base $$n$$ with some nice maths surrounding them. There are Lucas sequences having primes up to some point and then no more primes, such as the sequence with the relation $$x_{n+1}=4x_n+1$$ given by $$1,5,21,85,341,\ldots$$ for which it can be shown that there are no more primes beyond $$5$$. We can also find sequences having no primes at all such as the sequence with the same relation but starting at $$8$$, given by $$8,33,133,533,\ldots$$ - and in fact it is true for any sequence for which $$3x_0+1$$ is a square that it has no primes - so we can say there are infinitely many Lucas sequences having no primes. The obvious case to ask is whether infinitely many of the Fibonacci numbers are prime - and this is another open problem. Is it known, or is it possible to show, that there is some (nontrivial) Lucas sequence (identifiable or otherwise), having infinitely many primes, or that there is none? • @vadim123 I could've sworn I put "nontrivial" in the question! I must've edited it out - I've put it back in, sorry. Nov 3 '18 at 19:21 • Also, Lucas sequences are normally (a) second-order; and (b) nonhomogeneous. Neither of the two examples given satisfy those criteria. Nov 3 '18 at 19:22 • @vadim123 I can't find any reference of what you mean there. By 2nd order does this mean expressing $U_{n+2}$ in terms of $U_n$ and $U_{n-1}$ as done here: math.stackexchange.com/questions/2705983 rather than expressing $U_{n+1}$? Nov 4 '18 at 12:07 • Nov 4 '18 at 22:15 • @vadim123 thanks, plenty to digest there, I'll work through it. I'd come across characteristic polynomials before and wanted to understand better too. It's a long shot but do you know of some obvious link between these characteristic polynomials and the Cantor pairing function? My particular interest is whether a certain form of power series may be the limit of polynomial pairing (tuple) functions as the degree approaches infinity. Nov 4 '18 at 22:32 Although it has been conjectured, and may be widely believed that most Lucas sequences contain infinitely many primes, we have yet to find even one that does. There are, however, nontrivial such sequences that that have been shown to contain no primes at all! Richard Guy gives several examples in section A3 of his 2nd edition of Unsolved Problems in Number Theory.'' The Lehmer and companion Lehmer sequences are generalizations of the Lucas and companion Lucas sequences (of which, respectively, the sequence of Fibonacci numbers and the sequence of Lucas numbers are members of) are not currently known to have produced a sequence where it has been shown to contain infinitely many primes, albeit the conjecture seems to be many if not most do according to Richard Guy who uses the term $$$$Lucas-Lehmer sequences'' though his book is the only place I recall where that term is specifically used.
# Kahneman and Tversky Decision experiments contradict Neumann-Morgensterns utility theory [duplicate] I want to know the reason why Kahneman and Tversky Decision experiments contradict Neumann-Morgensterns utility theory. Could anyone please ellaborate this to me? Thanks. • You need to show some effort, we are not a homework-doing machine. Which experiments are you referring to? Have you at least read some Wikipedia articles on the topic before asking the question? – Oliv Mar 30 '17 at 10:41 • @Oliv Thanks. I agree with you. I appreciate your involvement in this conference. Actually I got the solution for this problem here "economics.stackexchange.com/questions/95/…" – UserAb Mar 30 '17 at 10:55 I will write out some generalized comments here because I am incredibly interested in the contributions of experimental economics to our understanding of economic man. Further, I am not convinced entirely that this is a homework question. The real thing here is that I think many other people - especially undergrads possibly interested in pursuing graduate school - might find the answer both interesting and compelling. So, here is a brief outline of what could be a more complete answer: Foremost, I recommend you think carefully about the axioms of expected utility theory. In particular, think about the independence axiom. As a reminder, the independence axiom is as follows: Assuming that $X,Y,Z \in \mathbb{R}$ are lotteries and $p\in(0,1)$ is a generic probability: $$X \succ Y, \implies pX + (1-p)Z \succ pY (1-p)Z$$ This axiom is most famously contradicted by the Allais Paradox. Khaneman and Tversky introduced Prospect Theory, which in very basic terms says that humans think differently about losses than they do gains. Further, they point out that people weight small and large gains/losses different when making decisions. They posit that economic man's value function is defined on deviations from a reference point, that this value function is concave for gains and convex for losses, and that this value function is steeper for losses that for gains. Graphically: Within that paper linking to prospect theory, you can read about the experiments of K&T. Several of their experiments are demonstrations of the Allais Paradox. And the reason that this contradicts expected utility theory (EUT) is very simple: they are showing that humans systematically violate one of the three axioms of EUT when making economic decisions. The work of these men was seminal. However, they themselves have updated this original paper by introducing cumulative prospect theory. Other theorists and behavioral economists have noted the inability of prospect theory to explain behavioral regularities observed in the lab.
# American Institute of Mathematical Sciences • Previous Article Limit behaviour of the minimal solution of a BSDE with singular terminal condition in the non Markovian setting • PUQR Home • This Issue • Next Article Upper risk bounds in internal factor models with constrained specification sets January  2020, 5: 2 doi: 10.1186/s41546-020-00044-z ## Moderate deviation for maximum likelihood estimators from single server queues P. G. Department of Statistics, Sambalpur University, Odisha, India Received  February 26, 2019 Published  March 2020 Consider a single server queueing model which is observed over a continuous time interval (0,T], where T is determined by a suitable stopping rule. Let θ be the unknown parameter for the arrival process and $\hat {\theta }_{T}$ be the maximum likelihood estimator of θ. The main goal of this paper is to obtain a moderate deviation result of the maximum likelihood estimator for the single server queueing model under certain regular conditions. Citation: Saroja Kumar Singh. Moderate deviation for maximum likelihood estimators from single server queues. Probability, Uncertainty and Quantitative Risk, 2020, 5 (0) : 2-. doi: 10.1186/s41546-020-00044-z ##### References: [1] Acharya, S.K. (1999). On normal approximation for Maximum likelihood estimation from single server queues, Queueing Syst. 31, 207–216. [2] Acharya, S.K. and S.K. Singh. (2019). Asymptotic properties of maximum likelihood estimators from single server queues: A martingale approach, Commun. Stat. Theory Methods 48, 3549–3557. [3] Basawa, I.V. and N.U. Prabhu. (1981). Estimation in single server queues, Naval. Res. Logist. Quart. 28, 475–487. [4] Basawa, I.V. and N.U. Prabhu. (1988). Large sample inference from single server queues, Queueing Syst. 3, 289–304. [5] Billingsley, P. (1961). Statistical Inference for Markov Processes, The University of Chicago Press, Chicago. [6] Clarke, A.B. (1957). Maximum likelihood estimates in a simple queue, Ann. Math. Statist 28, 1036–1040. [7] Cox, D.R. (1965). Some problems of statistical analysis connected with congestion (W.L. Smith and W. B. Wilkinson, eds.), University of North Carolina Press, Chapel Hill. [8] Dembo, A. and O. Zeitouni. (1998). Large deviation Techniques and Applications, 2nd edn, Springer, New York. [9] Ellis, R.S. (1984). Large deviations for a general class of random vectors, Ann. Probab. 12, 1–12. [10] Gärtner, J. (1977). On large deviations from the invariant measure, Theory Probab. Appl. 22, 24–39. [11] Gao, F. (2001). Moderate deviations for the maximum likelihood estimator, Stat. Probab. Lett. 55, 345– 352. [12] Goyal, T.L. and C.M. Harris. (1972). Maximum likelihood estimation for queues with state dependent service, Sankhya Ser. A 34, 65–80. [13] Hall, P. and C.C. Heyde. (1980). Martingale Limit Theory and Applications, Academic Press, New York. [14] Miao, Y. and Y.-X. Chen. (2010). Note on moderate deviations for the maximum likelihood estimator, Acta Appl. Math. 110, 863–869. [15] Miao, Y. and Y. Wang. (2014). Moderate deviation principle for maximum likelihood estimator, Statistics 48, 766–777. [16] Singh, S.K. and S.K. Acharya. (2019). Equivalence between Bayes and the maximum likelihood estimator in M/M/1 queue, Commun. Stat.–Theory Methods 48, 4780–4793. [17] Wolff, R.W. (1965). Problems of statistical inference for birth and death queueing models, Oper. Res. 13, 243–357. [18] Xiao, Z. and L. Liu. (2006). Moderate deviations of maximum likelihood estimator for independent not identically distributed case, Stat. Probab. Lett. 76, 1056–1064. show all references ##### References: [1] Acharya, S.K. (1999). On normal approximation for Maximum likelihood estimation from single server queues, Queueing Syst. 31, 207–216. [2] Acharya, S.K. and S.K. Singh. (2019). Asymptotic properties of maximum likelihood estimators from single server queues: A martingale approach, Commun. Stat. Theory Methods 48, 3549–3557. [3] Basawa, I.V. and N.U. Prabhu. (1981). Estimation in single server queues, Naval. Res. Logist. Quart. 28, 475–487. [4] Basawa, I.V. and N.U. Prabhu. (1988). Large sample inference from single server queues, Queueing Syst. 3, 289–304. [5] Billingsley, P. (1961). Statistical Inference for Markov Processes, The University of Chicago Press, Chicago. [6] Clarke, A.B. (1957). Maximum likelihood estimates in a simple queue, Ann. Math. Statist 28, 1036–1040. [7] Cox, D.R. (1965). Some problems of statistical analysis connected with congestion (W.L. Smith and W. B. Wilkinson, eds.), University of North Carolina Press, Chapel Hill. [8] Dembo, A. and O. Zeitouni. (1998). Large deviation Techniques and Applications, 2nd edn, Springer, New York. [9] Ellis, R.S. (1984). Large deviations for a general class of random vectors, Ann. Probab. 12, 1–12. [10] Gärtner, J. (1977). On large deviations from the invariant measure, Theory Probab. Appl. 22, 24–39. [11] Gao, F. (2001). Moderate deviations for the maximum likelihood estimator, Stat. Probab. Lett. 55, 345– 352. [12] Goyal, T.L. and C.M. Harris. (1972). Maximum likelihood estimation for queues with state dependent service, Sankhya Ser. A 34, 65–80. [13] Hall, P. and C.C. Heyde. (1980). Martingale Limit Theory and Applications, Academic Press, New York. [14] Miao, Y. and Y.-X. Chen. (2010). Note on moderate deviations for the maximum likelihood estimator, Acta Appl. Math. 110, 863–869. [15] Miao, Y. and Y. Wang. (2014). Moderate deviation principle for maximum likelihood estimator, Statistics 48, 766–777. [16] Singh, S.K. and S.K. Acharya. (2019). Equivalence between Bayes and the maximum likelihood estimator in M/M/1 queue, Commun. Stat.–Theory Methods 48, 4780–4793. [17] Wolff, R.W. (1965). Problems of statistical inference for birth and death queueing models, Oper. Res. 13, 243–357. [18] Xiao, Z. and L. Liu. (2006). Moderate deviations of maximum likelihood estimator for independent not identically distributed case, Stat. Probab. Lett. 76, 1056–1064. [1] Shan Gao, Jinting Wang. On a discrete-time GI$^X$/Geo/1/N-G queue with randomized working vacations and at most $J$ vacations. Journal of Industrial and Management Optimization, 2015, 11 (3) : 779-806. doi: 10.3934/jimo.2015.11.779 [2] Sujit Kumar Samanta, Rakesh Nandi. Analysis of $GI^{[X]}/D$-$MSP/1/\infty$ queue using $RG$-factorization. Journal of Industrial and Management Optimization, 2021, 17 (2) : 549-573. doi: 10.3934/jimo.2019123 [3] Tatsuaki Kimura, Hiroyuki Masuyama, Yutaka Takahashi. Light-tailed asymptotics of GI/G/1-type Markov chains. Journal of Industrial and Management Optimization, 2017, 13 (4) : 2093-2146. doi: 10.3934/jimo.2017033 [4] Sheng Zhu, Jinting Wang. Strategic behavior and optimal strategies in an M/G/1 queue with Bernoulli vacations. Journal of Industrial and Management Optimization, 2018, 14 (4) : 1297-1322. doi: 10.3934/jimo.2018008 [5] A. Guillin, R. Liptser. Examples of moderate deviation principle for diffusion processes. Discrete and Continuous Dynamical Systems - B, 2006, 6 (4) : 803-828. doi: 10.3934/dcdsb.2006.6.803 [6] Nicolas Rougerie. On two properties of the Fisher information. Kinetic and Related Models, 2021, 14 (1) : 77-88. doi: 10.3934/krm.2020049 [7] Panpan Ren, Shen Wang. Moderate deviation principles for unbounded additive functionals of distribution dependent SDEs. Communications on Pure and Applied Analysis, 2021, 20 (9) : 3129-3142. doi: 10.3934/cpaa.2021099 [8] Zsolt Saffer, Wuyi Yue. A dual tandem queueing system with GI service time at the first queue. Journal of Industrial and Management Optimization, 2014, 10 (1) : 167-192. doi: 10.3934/jimo.2014.10.167 [9] Veena Goswami, Gopinath Panda. Customers' joining behavior in an unobservable GI/Geo/m queue. Numerical Algebra, Control and Optimization, 2021  doi: 10.3934/naco.2021059 [10] Biao Xu, Xiuli Xu, Zhong Yao. Equilibrium and optimal balking strategies for low-priority customers in the M/G/1 queue with two classes of customers and preemptive priority. Journal of Industrial and Management Optimization, 2019, 15 (4) : 1599-1615. doi: 10.3934/jimo.2018113 [11] Jie Huang, Xiaoping Yang, Yunmei Chen. A fast algorithm for global minimization of maximum likelihood based on ultrasound image segmentation. Inverse Problems and Imaging, 2011, 5 (3) : 645-657. doi: 10.3934/ipi.2011.5.645 [12] Yanqing Liu, Jiyuan Tao, Huan Zhang, Xianchao Xiu, Lingchen Kong. Fused LASSO penalized least absolute deviation estimator for high dimensional linear regression. Numerical Algebra, Control and Optimization, 2018, 8 (1) : 97-117. doi: 10.3934/naco.2018006 [13] Torsten Lindström. Discrete models and Fisher's maximum principle in ecology. Conference Publications, 2003, 2003 (Special) : 571-579. doi: 10.3934/proc.2003.2003.571 [14] Shaojun Lan, Yinghui Tang, Miaomiao Yu. System capacity optimization design and optimal threshold $N^{*}$ for a $GEO/G/1$ discrete-time queue with single server vacation and under the control of Min($N, V$)-policy. Journal of Industrial and Management Optimization, 2016, 12 (4) : 1435-1464. doi: 10.3934/jimo.2016.12.1435 [15] Shaojun Lan, Yinghui Tang. Performance analysis of a discrete-time $Geo/G/1$ retrial queue with non-preemptive priority, working vacations and vacation interruption. Journal of Industrial and Management Optimization, 2019, 15 (3) : 1421-1446. doi: 10.3934/jimo.2018102 [16] Jonathan Zinsl. The gradient flow of a generalized Fisher information functional with respect to modified Wasserstein distances. Discrete and Continuous Dynamical Systems - S, 2017, 10 (4) : 919-933. doi: 10.3934/dcdss.2017047 [17] Ricardo J. Alonso, Véronique Bagland, Bertrand Lods. Uniform estimates on the Fisher information for solutions to Boltzmann and Landau equations. Kinetic and Related Models, 2019, 12 (5) : 1163-1183. doi: 10.3934/krm.2019044 [18] Yan Wang, Yanxiang Zhao, Lei Wang, Aimin Song, Yanping Ma. Stochastic maximum principle for partial information optimal investment and dividend problem of an insurer. Journal of Industrial and Management Optimization, 2018, 14 (2) : 653-671. doi: 10.3934/jimo.2017067 [19] Dequan Yue, Wuyi Yue, Gang Xu. Analysis of customers' impatience in an M/M/1 queue with working vacations. Journal of Industrial and Management Optimization, 2012, 8 (4) : 895-908. doi: 10.3934/jimo.2012.8.895 [20] Fadia Bekkal-Brikci, Giovanna Chiorino, Khalid Boushaba. G1/S transition and cell population dynamics. Networks and Heterogeneous Media, 2009, 4 (1) : 67-90. doi: 10.3934/nhm.2009.4.67 Impact Factor:
# Experiments in WordPress Routing 06 Testing multisite compatibility. ## Can I Multisite? WordPress has been offering the possibility to set up a multisite network for quite some time now and I find myself using such a feature in some client projects. Losing this ability to an improvised and makeshift routing system would break the deal. To test the capabilities of this solution I’ve set up a multisite network made of two sites: 1. wp.dev - the main site 2. subsite.wp.dev - the only subsite in the network For this test I’ve taken the subdomain approach to the multisite installation to keep things “easy” in these first steps. In the code I’ve set up just one route (/titles) that should show me the titles of the posts of the site I’m visiting. My code is relying on WordPress built in discrimination of the site and is not making any effort to discriminate one site from the other /** * Plugin Name: theAverageDev Routes * Plugin URI: http://theAverageDev.com * Description: Routing for WordPress * Version: 1.0 * Author: theAverageDev * Author URI: http://theAverageDev.com */ /** * Parse request */ function tad_routes_do_parse_request( $continue, WP$wp, $extra_query_vars ) { respond( '/titles', function () {$out = "<h2>Site: %s</h2><h3>Post titles</h3><ul>%s</ul>"; $posts = get_posts(); echo sprintf($out, get_bloginfo( 'title' ), implode( '', array_map( function ( $post ) { return "<li>{$post->post_title}</li>"; }, $posts ) ) ); } ); dispatch_or_continue(); return$continue; } Visiting the address wp.dev/titles and subsite.wp.dev/titles will yield the expected titles ## Sub-directories Switching to a sub-directory based multisite installation complicates things a bit. The sub-directory is part of the path and I expect the unchanged code above to work on the main site, its address did not change after all, and simply not match on the subsite and yield the 404 template. As expected the results are as follows For the code to work on the subsite too I will have to take the subfolder path into account and register an ad-hoc route. Luckily the klein52 library packs the with methods that’s perfect for the purpose. /** * Plugin Name: theAverageDev Routes * Plugin URI: http://theAverageDev.com * Description: Routing for WordPress * Version: 1.0 * Author: theAverageDev * Author URI: http://theAverageDev.com */ function common_routes() { respond( '/titles', function () { $out = "<h2>Site: %s</h2><h3>Post titles</h3><ul>%s</ul>";$posts = get_posts(); echo sprintf( $out, get_bloginfo( 'title' ), implode( '', array_map( function ($post ) { return "<li>{$post->post_title}</li>"; },$posts ) ) ); } ); } /** * Parse request */ function tad_routes_do_parse_request( $continue, WP$wp, $extra_query_vars ) { with( '/subsite', function () { common_routes(); } ); common_routes(); dispatch_or_continue(); return$continue; }
# Scalar Product PhysicsScalars and Vectors #### Class 11th Physics - Elasticity 6 Lectures 1 hours #### Class 11th Physics - Oscillations 12 Lectures 2 hours #### Class 11th Physics - Waves 14 Lectures 3 hours ## Introduction The scalar products are essential in order to understand the motion of vector quantities in real-time. These products can be subtracted or added at the same time with a proper algebraic formula for understanding the scalar quantity of different objects. These types of products are commonly denoted as the inner products or dot products based on their characteristics to outline scalar multiplication through dots. ## Definition of Scalar Product The scalar products is defined as the products of magnitude that consists of two vectors cosine angle between different vectors. Scalar products mostly involve the sum of the products for the corresponding entries along with two sequences of numbers. These products have various applications in terms of mechanics, engineering and geometry (Cavaglia et al. 2019). In simple words, the scalar products can be defined as the products of magnitude that has two different vectors and one angle between them. Figure 1: Scalar Products example 1 For example, if $\mathrm{\vec{a}}$ and $\mathrm{\vec{b}}$ are two non-zero vectors that have a certain magnitude of |a| and |b| with an angle of θ, then the algebraic operation for the solar product will be "$\mathrm{\vec{a}.\vec{b} \:= \:|a| |b| \:cos\: θ}$. Here, the expression denotes 0 ≤ θ ≤ according to the rule of scalar products. Here, either a or b is equal to 0, especially when θ is not defined within the equation. Under this circumstance, both a and b in this scalar product appear as equal to 0 (Sciencedirect, 2022). So, if the value of vector a and vector b equals 0 and then the value of vector a.b also stands at the value of 0. Figure 2: Scalar Products example 2 In the case of the second example, it can be seen that if two vectors, “$\mathrm{\vec{a}}$ and “$\mathrm{\vec{b}}$ are drawn to the θ of the scalar product, the scalar product can appear as $\mathrm{\vec{a}.\vec{b} \:= \:|a| |b| \:cos\: θ}$. Here, the |a| is the magnitude of vector a, the |b| signifies the modulus of vector b and the θ represents the angle between vector a and vector b (Mathcentre, 2022). Due to the symbolic representation of the scalar products as a dot, it is further denoted as dot products while using them in real-time. Components like area, volume, work, energy, pressure, mass, density, time and distance are prime examples of scalar products. ## Matrix representation of Scalar Products The representation of scalar products on a matrix can be done using two different patterns, the column matrix and the row matrix. The unit vectors of scalar products can be ordinarily spatial with different vectors in the column matrices. Here, the matrix involves components like x, y and z to transpose the vectors in the row matrices (Pei & Terras, 2021). For example, if A and B are two different vectors present in the matrix of scalar products, then both the matrix will collaboratively deliver only one number at a time as a result of the addition and subtraction of two vectors within the scalar products. $$\mathrm{(A_X | A_Y | A_Z)\begin{pmatrix}B_X \B_Y \B_Z \\end{pmatrix} \:=\: A_X B_X+A_Y B_Y+A_Z B_Z \:=\: \vec{A}. \: \vec{B}}$$ The single number that is commonly extracted from the scalar matrix, generally involves the sum of the products, especially while corresponding to the spatial components of more than one vector at a time. Hence, the matrix representation of scalar products is represented with an illustrious process that is used in the multiplication of matrix within a certain element. Here, the sum of the products is presented in a column and a row format with a certain given number within the scalar products in real-time. ## Characteristics of Scalar products Scalar products have a few distinctive characteristics that make them different from the vector products in real-time. The primary characteristic of the scalar products, are commutatively and distributivity. Here the calculation of the scalar properties is done in a different order starting from vector b to vector a, exactly opposite from the general scalar products. Apart from that, this type of product generally follows the distributive law that can be implied in three consecutive vectors, a, b and c. Figure 3: Scalar products characteristics The scalar products can be defined based on their magnitude only along with the algebraic solutions related to the addition and subtraction of vectors in real-time. Another characteristic of scalar products is the manual perpendicular standing of the vectors at a time, especially when the value of two vectors stands on 0 only (Thefactfactor, 2022). Lastly, the square of the magnitudes of the vector stands equally with the self-product of a vector according to the commutative law of scalar products. ## Conclusion Scalar products are commutative products along with the incorporation of equal-length number sequences. These products generally return only one number after taking two vectors at a time. Most importantly, the scalar products can be subtracted and added from one another on the basis of algebraic equations in terms of mathematical representation. Moreover, the scalar products are the communicative products that have direct involvement in the alteration of vector components in real-time. ## FAQs Q1. What is the algebraic formula of Scalar products? Ans. The algebraic formula of scalar products is |a| |b| cos θ. The formula is dependent on two vectors, vector a and vector b and follows the magnitude of |a| and |b|. Q2. Which law do the scalar products follow? Ans. The scalar products generally follow two distinctive rules, the commutative law and the distributive law. The distribution of these products follows the approach related to addition whereas the commutative law follows the commutative approach. Q3. What are some examples of scalar products in real-time? Ans. Scalar products have real-time usages starting from deciding routes and searching the routes in a particular place. Apart from that, there is major usage of these products in the fields of calculating the Pythagoras theorem in real life. Updated on 13-Oct-2022 11:19:47
# What is the origin of the word “Keccak”? Where does the word or acronym Keccak come from? 1. Guido Bertoni, Joan Daemen, Michael Peeters, and Gilles Van Assche. Keccak sponge function family main document. Submission to NIST (updated), 2009. 2. "NIST Selects Winner of Secure Hash Algorithm (SHA-3) Competition". NIST 10/2/2012. - I've wondered about this myself so I'll upvote. Although this question is not about crypto per se, I can't find anything wrong with it as per the FAQ. I assume you've already searched the documents you cite for terms such as Keccak stands for, the name of the algorithm... etc.? – rath Aug 25 '13 at 15:16 Note that given the many unpronounceable names of the hash methods in the competition that I'm glad we will just continue to call Keccak SHA-3 from now on. – Maarten Bodewes Aug 25 '13 at 17:50
# This blog is about noteworthy pivot points about Java Concurrent Framework Back to Java old days there were wait()/notify() which is error prone, while from Java 5.0 there was Concurrent framework being introduced, this page list some pivot points. # CountDownLatch • CountDownLatch in Java is a kind of synchronizer which allows one Thread to wait for one or more Threads before starts processing. • You can also implement same functionality using wait and notify mechanism in Java but it requires lot of code and getting it write in first attempt is tricky, With CountDownLatch it can be done in just few lines. • One of the disadvantage of CountDownLatch is that its not reusable once count reaches to zero you can not use CountDownLatch any more, but don’t worry Java concurrency API has another concurrent utility called CyclicBarrier for such requirements. ## When to use CountDownLatch Classical example of using CountDownLatch in Java is any server side core Java application which uses services architecture, where multiple services is provided by multiple threads and application can not start processing until all services have started successfully as shown in our CountDownLatch example. ## Summary • Main Thread wait on Latch by calling CountDownLatch.await() method while other thread calls CountDownLatch.countDown() to inform that they have completed. # CyclicBarrier • there is different you can not reuse CountDownLatch once the count reaches zero while you can reuse CyclicBarrier by calling reset() method which resets Barrier to its initial State. What it implies that CountDownLatch is a good for one-time events like application start-up time and CyclicBarrier can be used to in case of the recurrent event e.g. concurrently calculating a solution of the big problem etc. • a simple example of CyclicBarrier in Java on which we initialize CyclicBarrier with 3 parties, means in order to cross barrier, 3 thread needs to call await() method. each thread calls await method in short duration but they don’t proceed until all 3 threads reached the barrier, once all thread reach the barrier, barrier gets broker and each thread started their execution from that point. • Sample can be found at CyclicBarrierDemo.java ## Use cases: • To implement multi player game which can not begin until all player has joined. • Perform lenghty calculation by breaking it into smaller individual tasks. In general, to implement Map-Reduce technique. • CyclicBarrier can perform a completion task once all thread reaches to the barrier, This can be provided while creating CyclicBarrier. • If CyclicBarrier is initialized with 3 parties means 3 thread needs to call await method to break the barrier. • The thread will block on await() until all parties reach to the barrier, another thread interrupt or await timed out. • CyclicBarrier.reset() put Barrier on its initial state, other thread which is waiting or not yet reached barrier will terminate with java.util.concurrent.BrokenBarrierException. • ThreadLocal in Java is another way to achieve thread-safety apart from writing immutable classes. • ThreadLocal in Java is a different way to achieve thread-safety, it doesn’t address synchronization requirement, instead it eliminates sharing by providing explicitly copy of Object to each thread. • Since Object is no more shared there is no requirement of Synchronization which can improve scalability and performance of application. • One of the classic example of ThreadLocal is sharing SimpleDateForamt. Since SimpleDateFormat is not thread safe, having a global formatter may not work but having per Thread formatter will certainly work. but it can be source of severe memory leak and java.lang.OutOfMemoryError if not used carefully. so avoid until you don’t have any other option. # Semaphone • Semaphore provides two main method acquire() and release() for getting permits and releasing permits. acquire() method blocks until permit is available. • Semaphore provides both blocking method as well as unblocking method to acquire permits. This Java concurrency tutorial focus on a very simple example of Binary Semaphore and demonstrate how mutual exclusion can be achieved using Semaphore in Java. ## Binary Semaphone a Counting semaphore with one permit is known as binary semaphore because it has only two state permit available or permit unavailable. Binary semaphore can be used to implement mutual exclusion or critical section where only one thread is allowed to execute. Thread will wait on acquire() until Thread inside critical section release permit by calling release() on semaphore. ## Scenarios usage 1. To implement better Database connection pool which will block if no more connection is available instead of failing and handover Connection as soon as its available. 2. To put a bound on collection classes. by using semaphore you can implement bounded collection whose bound is specified by counting semaphore. 3. That’s all on Counting semaphore example in Java. Semaphore is real nice concurrent utility which can greatly simply design and implementation of bounded resource pool. Java 5 has added several useful concurrent utility and deserve a better attention than casual look. # Race condition • Race conditions occurs when two thread operate on same object without proper synchronization and there operation interleaves on each other. Classical example of Race condition is incrementing a counter since increment is not an atomic operation and can be further divided into three steps like read, update and write. if two threads tries to increment count at same time and if they read same value because of interleaving of read operation of one thread to update operation of another thread, one count will be lost when one thread overwrite increment done by other thread. • I found that two code patterns namely “check and act” and “read modify write” can suffer race condition if not synchronized properly. • classical example of “check and act” race condition in Java is getInstance() method of Singleton Class, • put if absent scenario. consider below code if(!hashtable.contains(key)){ hashtable.put(key,value); } ## Fix race condition: -In order to fix this race condition in Java you need to wrap this code inside synchronized block which makes them atomic together because no thread can go inside synchronized block if one thread is already there. • IllegalMonitorStateException in Java which will occur if we don’t call wait (), notify () or notifyAll () method from synchronized context. • Any potential race condition between wait and notify method in Java ## details: • A thread is essentialy a subdivision of a process, or LWP: lightweight process. • Crucially, each process has its own memory space. • A thread is a subdivision that shares the memory space of its parent process. • Threads belonging to a process usually share a few other key resources as well, such as their working directory, environment variables, file handles etc. • On the other hand, each thread has its own private stack and registers, including program counter. program counter (PC) register keeps track of the current instruction executing at any moment. That is like a pointer to the current instruction in sequence of instructions in a program. • Method area: In general, method area is a logical part of heap area. But that is left to the JVM implementers to decide. Method area has per class structures and fields. Nothing but static fields and structures. • Depending on the OS, threads may have some other private resources too, such as thread-local storage (effectively, a way of referring to “variable number X”, where each thread has its own private value of X). ## Wait & Notify • Since wait method is not defined in Thread class, you cannot simply call Thread.wait(), that won’t work but since many Java developers are used to calling Thread.sleep() they try the same thing with wait() method and stuck. • You need to call wait() method on the object which is shared between two threads, in producer-consumer problem its the queue which is shared between producer and consumer threads. synchronized(lock){ while(!someCondition){ lock.wait(); } } ## Tips • Always call wait(), notify() and notifyAll() methods from synchronized method or synchronized block otherwise JVM will throw IllegalMonitorStateException. • Always call wait and notify method from a loop and never from if() block, because loop test waiting condition before and after sleeping and handles notification even if waiting for the condition is not changed. • Always call wait in shared object e.g. shared queue in this example. • Prefer notifyAll() over notify() method due to reasons given in this article. ## Fork-Join • Fork/join tasks is “pure” in-memory algorithms in which no I/O operations come into picture.it is based on a work-stealing algorithm. • Java’s most attractive part is it makes things easier and easier. • its really challenging where several threads are working together to accomplish a large task so again java has tried to make things easy and simplifies this concurrency using Executors and Thread Queue. • it work on divide and conquer algorithm and create sub-tasks and communicate with each other to complete. • New fork-join executor framework has been created which is responsible for creating one new task object which is again responsible for creating new sub-task object and waiting for sub-task to be completed.internally it maintains a thread pool and executor assign pending task to this thread pool to complete when one task is waiting for another task to complete. whole Idea of fork-join framework is to leverage multiple processors of advanced machine. • This static method is essentially used to notify the system that the current thread is willing to “give up the CPU” for a while. The general idea is that: The thread scheduler will select a different thread to run instead of the current one. However, the details of how yielding is implemented by the thread scheduler differ from platform to platform. In general, you shouldn’t rely on it behaving in a particular way. Things that differ include: • when, after yielding, the thread will get an opportunity to run again; • whether or not the thread foregoes its remaining quantum. ### Windows • In the Hotspot implementation, the way that Thread.yield() works has changed between Java 5 and Java 6. • In Java 5, Thread.yield() calls the Windows API call Sleep(0). This has the special effect of clearing the current thread’s quantum and putting it to the end of the queue for its priority level. In other words, all runnable threads of the same priority (and those of greater priority) will get a chance to run before the yielded thread is next given CPU time. When it is eventually re-scheduled, it will come back with a full quantum, but doesn’t “carry over” any of the remaining quantum from the time of yielding. This behaviour is a little different from a non-zero sleep where the sleeping thread generally loses 1 quantum value (in effect, 1/3 of a 10 or 15ms tick). • In Java 6, this behaviour was changed. The Hotspot VM now implements Thread.yield() using the Windows SwitchToThread() API call. This call makes the current thread give up its current timeslice, but not its entire quantum. This means that depending on the priorities of other threads, the yielding thread can be scheduled back in one interrupt period later. ### Linux • Under Linux, Hotspot simply calls sched_yield(). The consequences of this call are a little different, and possibly more severe than under Windows: • a yielded thread will not get another slice of CPU until all other threads have had a slice of CPU; • (at least in kernel 2.6.8 onwards), the fact that the thread has yielded is implicitly taken into account by the scheduler’s heuristics on its recent CPU allocation — thus, implicitly, a thread that has yielded could be given more CPU when scheduled in the future. ### When to use yield()? • I would say practically never. Its behaviour isn’t standardly defined and there are generally better ways to perform the tasks that you might want to perform with yield(): • if you’re trying to use only a portion of the CPU, you can do this in a more controllable way by estimating how much CPU the thread has used in its last chunk of processing, then sleeping for some amount of time to compensate: see the sleep() method; • if you’re waiting for a process or resource to complete or become available, there are more efficient ways to accomplish this, such as by using join() to wait for another thread to complete, using the wait/notify mechanism to allow one thread to signal to another that a task is complete, or ideally by using one of the Java 5 concurrency constructs such as a Semaphore or blocking queue. • thread scheduler, part of the OS (usually) that is responsible for sharing the available CPUs out between the various threads. How exactly the scheduler works depends on the individual platform, but various modern operating systems (notably Windows and Linux) use largely similar techniques that we’ll describe here. • Note that we’ll continue to talk about a single thread scheduler. On multiprocessor systems, there is generally some kind of scheduler per processor, which then need to be coordinated in some way. • Across platforms, thread scheduling tends to be based on at least the following criteria: • a priority, or in fact usually multiple “priority” settings that we’ll discuss below; • a quantum, or number of allocated timeslices of CPU, which essentially determines the amount of CPU time a thread is allotted before it is forced to yield the CPU to another thread of the same or lower priority (the system will keep track of the remaining quantum at any given time, plus its default quantum, which could depend on thread type and/or system configuration); • a state, notably “runnable” vs “waiting”; • metrics about the behaviour of threads, such as recent CPU usage or the time since it last ran (i.e. had a share of CPU), or the fact that it has “just received an event it was waiting for”. • Most systems use what we might dub priority-based round-robin scheduling to some extent. The general principles are: • a thread of higher priority (which is a function of base and local priorities) will preempt a thread of lower priority; • otherwise, threads of equal priority will essentially take turns at getting an allocated slice or quantum of CPU; • there are a few extra “tweaks” to make things work. ### States Depending on the system, there are various states that a thread can be in. Probably the two most interesting are: • runnable, which essentially means “ready to consume CPU”; being runnable is generally the minimum requirement for a thread to actually be scheduled on to a CPU; • waiting, meaning that the thread currently cannot continue as it is waiting for a resource such as a lock or I/O, for memory to be paged in, for a signal from another thread, or simply for a period of time to elapse (sleep). Other states include terminated, which means the thread’s code has finished running but not all of the thread’s resources have been cleared up, and a new state, in which the thread has been created, but not all resources necessary for it to be runnable have been created. ### Quanta and clock ticks • Each thread has a quantum, which is effectively how long it is allowed to keep hold of the CPU if: • it remains runnable; • the scheduler determines that no other thread needs to run on that CPU instead. • Thread quanta are generally defined in terms of some number of clock ticks. If it doesn’t otherwise cease to be runnable, the scheduler decides whether to preempt the currently running thread every clock tick. As a rough guide: • a clock tick is typically 10-15 ms under Windows; under Linux, it is 1ms (kernel 2.6.8 onwards); • a quantum is usually a small number of clock ticks, depending on the OS: either 2, 6 or 12 clock ticks on Windows, depending on whether Windows is running in “server” mode: Windows mode Foreground process Non-foreground process Normal 6 ticks 2 ticks Server 12 ticks 12 ticks between 10-200 clock ticks (i.e. 10-200 ms) under Linux, though some granularity is introduced in the calculation— see below. a thread is usually allowed to “save up” unused quantum, up to some limit and granularity. • In Windows, a thread’s quantum allocation is fairly stable. In Linux, on the other hand, a thread’s quantum is dynamically adjusted when it is scheduled, depending partly on heuristics about its recent resource usage and partly on a nice value #### Switching and scheduling algorithms • At key moments, the thread scheduler considers whether to switch the thread that is currently running on a CPU. These key moments are usually: • periodically, via an interrupt routine, the scheduler will consider whether the currently running thread on each CPU has reached the end of its allotted quantum; • at any time, a currently running thread could cease to be runnable (e.g. by needing to wait, reaching the end of its execution or being forcibly killed); • when some other attribute of the thread changes (e.g. its priority or processor affinity4) which means that which threads are running needs to be re-assessed. • At these decision points, the scheduler’s job is essentially to decide, of all the runnable threads, which are the most appropriate to actually be running on the available CPUs. Potentially, this is quite a complex task. But we don’t want the scheduler to waste too much time deciding “what to do next”. So in practice, a few simple heuristics are used each time the scheduler needs to decide which thread to let run next: • there’s usually a fast path for determining that the currently running thread is still the most appropriate one to continue running (e.g. storing a bitmask of which priorities have runnable threads, so the scheduler can quickly determine that there’s none of a higher priority than that currently running); • if there is a runnable thread of higher priority than the currently running one, then the higher priority one will be scheduled in3; • if a thread is “preempted” in this way, it is generally allowed to keep its remaining quantum and continue running when the higher-priority thread is scheduled out again; • when a thread’s quantum runs out, the thread is “put to the back of the queue” of runnable threads with the given priority and if there’s no queued (runnable) thread of higher priority, then next thread of the same priority will be scheduled in; • at the end of its quantum, if there’s “nothing better to run”, then a thread could immediately get a new quantum and continue running; • a thread typically gets a temporary boost to its quantum and/or priority at strategic points. • Quantum and priority boosting Both Windows and Linux (kernel 2.6.8 onwards) implement temporary boosting. Strategic points at which a thread may be given a “boost” include: • when it has just finished waiting for a lock/signal or I/O5; • when it has not run for a long time (in Windows, this appears to be a simple priority boost after a certain time; in Linux, there is an ongoing calculation based on the thread’s nice value and its recent resource usage); • when a GUI event occurs; • while it owns the focussed window (recent versions of Windows give threads of the owning process a larger quantum; earlier versions give them a priority boost). #### Context switching • context switching. Roughly speaking, this is the procedure that takes place when the system switches between threads running on the available CPUs. • the thread scheduler must actually manage the various thread structures and make decisions about which thread to schedule next where, and every time the thread running on a CPU actually changes— often referred to as a context switch • switching between threads of different processes (that is, switching to a thread that belongs to a different process from the one last running on that CPU) will carry a higher cost, since the address-to-memory mappings must be changed, and the contents of the cache almost certainly will be irrelevant to the next process. • Context switches appear to typically have a cost somewhere between 1 and 10 microseconds (i.e. between a thousandth and a hundredth of a millisecond) between the fastest and slowest cases (same-process threads with little memory contention vs different processes). So the following are acceptable: 1 nanoseconds is billionth of one second, 1 microsecond is millionth of one second, 1 millisecond is thousandth of one second ##### What causes too many slow context switches in Java? • Every time we deliberately change a thread’s status or attributes (e.g. by sleeping, waiting on an object, changing the thread’s priority etc), we will cause a context switch. But usually we don’t do those things so many times in a second to matter. Typically, the cause of excessive context switching comes from contention on shared resources, particularly synchronized locks: • rarely, a single object very frequently synchronized on could become a bottleneck; • more frequently, a complex application has several different objects that are each synchronized on with moderate frequency, but overall, threads find it difficult to make progress because they keep hitting different contended locks at regular intervals. ##### Avoiding contention and context switches in Java • Firstly, before hacking with your code, a first course of action is upgrading your JVM, particularly if you are not yet using Java 6. Most new Java JVM releases have come with improved synchronization optimisation. • Then, a high-level solution to avoiding synchronized lock contention is generally to use the various classes from the Java 5 concurrency framework (see the java.util.concurrent package). For example, instead of using a HashMap with appropriate synchronization, a ConcurrentHashMap can easily double the throughput with 4 threads and treble it with 8 threads (see the aforementioned link for some ConcurrentHashMap performance measurements). A replacement to synchronized with often better concurrency is offered with various explicit lock classes (such as ReentrantLock). • Lower-priority threads are given CPU when all higher priority threads are waiting (or otherwise unable to run) at that given moment. • Thread priority isn’t very meaningful when all threads are competing for CPU. • The number should lie in the range of two constants MIN_PRIORITY and MAX_PRIORITY defined on Thread, and will typically reference NORM_PRIORITY, the default priority of a thread if we don’t set it to anything else. • For example, to give a thread a priority that is “half way between normal and maximum”, we could call: thr.setPriority((Thread.MAX_PRIORITY - Thread.NORM_PRIORITY) / 2); • depending on your OS and VM version, Thread.setPriority() may actually do nothing at all (see below for details); • what thread priorities mean to the thread scheduler differs from scheduler to scheduler, and may not be what you intuitively presume; in particular: Priority may not indicate “share of the CPU”. As we’ll see below, it turns out that “priority” is more or less an indication of CPU distribution on UNIX systems, but not under Windows. • thread priorities are usually a combination of “global” and “local” priority settings, and Java’s setPriority() method typically works only on the local priority— in other words, you can’t set priorities across the entire range possible (this is actually a form of protection— you generally don’t want, say, the mouse pointer thread or a thread handling audio data to be preempted by some random user thread); • the number of distinct priorities available differs from system to system, but Java defines 10 (numbered 1-10 inclusive), so you could end up with threads that have different priorities under one OS, but the same priority (and hence unexpected behaviour) on another; • most operating systems’ thread schedulers actually perform temporary manipulations to thread priorities at strategic points (e.g. when a thread receives an event or I/O it was waiting for), and often “the OS knows best”; trying to manually manipulate priorities could just interfere with this system; • your application doesn’t generally know what threads are running in other processes, so the effect on the overall system of changing the priority of a thread may be hard to predict. So you might find, for example, that your low-priority thread designed to “run sporadically in the background” hardly runs at all due to a virus dection program running at a slightly higher (but still ‘lower-than-normal’) priority, and that the performance unpredictably varies depending on which antivirus program your customer is using. Of course, effects like these will always happen to some extent or other on modern systems. ## Thread scheduling implications in Java • the granularity and responsiveness of the Thread.sleep() method is largely determined by the scheduler’s interrupt period and by how quickly the slept thread becomes the “chosen” thread again; • the precise function of the setPriority() method depends on the specific OS’s interpretation of priority (and which underlying API call Java actually uses when several are available): for more information, see the more detailed section on thread priority; • the behaviour of the Thread.yield() method is similarly determined by what particuar underlying API calls do, and which is actually chosen by the VM implementation. • Although our introduction to threading focussed on how to create a thread, it turns out that it isn’t appropriate to create a brand new thread just for a very small task. Threads are actually quite a “coarse-grained” unit of execution, for reasons that are hopefully becoming clear from the previous sections. • creating and tearing down threads isn’t free: there’ll be some CPU overhead each time we do so; • there may be some moderate limit on the number of threads that can be created, determined by the resources that a thread needs to have allocated (if a process has 2GB of address space, and each thread as 512K of stack, that means a maximum of a few thousands threads per process). • In applications such as servers that need to continually execute short, multithreaded tasks, the usual way to avoid the overhead of repeated thread creation is to create a thread pool. # Dinnig Philosophers problem • The problem was designed to illustrate the challenges of avoiding deadlock, a system state in which no progress is possible. To see that a proper solution to this problem is not obvious, consider a proposal in which each philosopher is instructed to behave as follows: 1. think until the left fork is available; when it is, pick it up; 2. think until the right fork is available; when it is, pick it up; 3. when both forks are held, eat for a fixed amount of time; 4. then, put the right fork down; 5. then, put the left fork down; 6. repeat from the beginning. • This attempted solution fails because it allows the system to reach a deadlock state, in which no progress is possible. This is a state in which each philosopher has picked up the fork to the left, and is waiting for the fork to the right to become available, vice versa. With the given instructions, this state can be reached, and when it is reached, the philosophers will eternally wait for each other to release a fork • Resource starvation might also occur independently of deadlock if a particular philosopher is unable to acquire both forks because of a timing problem. For example, there might be a rule that the philosophers put down a fork after waiting ten minutes for the other fork to become available and wait a further ten minutes before making their next attempt. • This scheme eliminates the possibility of deadlock (the system can always advance to a different state) but still suffers from the problem of livelock. If all five philosophers appear in the dining room at exactly the same time and each picks up the left fork at the same time the philosophers will wait ten minutes until they all put their forks down and then wait a further ten minutes before they all pick them up again. ## Solutions ### Arbitrator solution Another approach is to guarantee that a philosopher can only pick up both forks or none by introducing an arbitrator, e.g., a waiter. In order to pick up the forks, a philosopher must ask permission of the waiter. The waiter gives permission to only one philosopher at a time until the philosopher has picked up both of their forks. Putting down a fork is always allowed. The waiter can be implemented as a mutex. In addition to introducing a new central entity (the waiter), this approach can result in reduced parallelism. if a philosopher is eating and one of their neighbors is requesting the forks, all other philosophers must wait until this request has been fulfilled even if forks for them are still available. # Queue ## What is the difference between poll() and remove() method of Queue interface? (answer) • Though both poll() and remove() method from Queue is used to remove the object and returns the head of the queue, there is a subtle difference between them. If Queue is empty() then a call to remove() method will throw Exception, while a call to poll() method returns null. ## What is the difference between fail-fast and fail-safe Iterators? • Fail-fast Iterators throws ConcurrentModificationException when one Thread is iterating over collection object and other thread structurally modify Collection either by adding, removing or modifying objects on underlying collection. They are called fail-fast because they try to immediately throw Exception when they encounter failure. On the other hand fail-safe Iterators works on copy of collection instead of original collection ## To remove entry from collection • you need to use Iterator’s remove() method. This method removes current element from Iterator’s perspective. If you use Collection’s or List’s remove() method during iteration then your code will throw ConcurrentModificationException. That’s why it’s advised to use Iterator remove() method to remove objects from Collection. ## What is the difference between Synchronized Collection and Concurrent Collection? • One Significant difference is that **Concurrent Collections has better performance than synchronized Collection ** because they lock only a portion of Map to achieve concurrency and Synchronization. ## When do you use ConcurrentHashMap in Java • ConcurrentHashMap is better suited for situation where you have multiple readers and one Writer or fewer writers since Map gets locked only during the write operation. If you have an equal number of reader and writer than ConcurrentHashMap will perform in the line of Hashtable or synchronized HashMap. ## Sorting collections • Sorting is implemented using Comparable and Comparator in Java and when you call Collections.sort() it gets sorted based on the natural order specified in compareTo() method while Collections.sort(Comparator) will sort objects based on compare() method of Comparator. ## Hashmap vs Hasset • HashSet implements java.util.Set interface and that’s why only contains unique elements, while HashMap allows duplicate values. In fact, HashSet is actually implemented on top of java.util.HashMap. ## What is NavigableMap in Java • NavigableMap Map was added in Java 1.6, it adds navigation capability to Map data structure. It provides methods like lowerKey() to get keys which is less than specified key, floorKey() to return keys which is less than or equal to specified key, ceilingKey() to get keys which is greater than or equal to specified key and higherKey() to return keys which is greater specified key from a Map. It also provide similar methods to get entries e.g. lowerEntry(), floorEntry(), ceilingEntry() and higherEntry(). Apart from navigation methods, it also provides utilities to create sub-Map e.g. creating a Map from entries of an exsiting Map like tailMap, headMap and subMap. headMap() method returns a NavigableMap whose keys are less than specified, tailMap() returns a NavigableMap whose keys are greater than the specified and subMap() gives a NavigableMap between a range, specified by toKey to fromKey ## Array vs ArrayList • Array is fixed length data structure, once created you can not change it’s length. On the other hand, ArrayList is dynamic, it automatically allocate a new array and copies content of old array, when it resize. • Another reason of using ArrayList over Array is support of Generics. ## Can we replace Hashtable with ConcurrentHashMap? • Since Hashtable locks whole Map instead of a portion of Map, compound operations like if(Hashtable.get(key) == null) put(key, value) works in Hashtable but not in concurrentHashMap. instead of this use putIfAbsent() method of ConcurrentHashMap ## What is CopyOnWriteArrayList, how it is different than ArrayList and Vector • CopyOnWriteArrayList is new List implementation introduced in Java 1.5 which provides better concurrent access than Synchronized List. better concurrency is achieved by Copying ArrayList over each write and replace with original instead of locking. Also CopyOnWriteArrayList doesn’t throw any ConcurrentModification Exception. Its different than ArrayList because its thread-safe and ArrayList is not thread-safe and it’s different than Vector in terms of Concurrency. CopyOnWriteArrayList provides better Concurrency by reducing contention among readers and writers. ## Why ListIterator has added() method but Iterator doesn’t or Why to add() method is declared in ListIterator and not on Iterator. (answer) • ListIterator has added() method because of its ability to traverse or iterate in both direction of the collection. it maintains two pointers in terms of previous and next call and in a position to add a new element without affecting current iteration. ## What is BlockingQueue, how it is different than other collection classes? (answer) • BlockingQueue is a Queue implementation available in java.util.concurrent package. It’s one of the concurrent Collection class added on Java 1.5, main difference between BlockingQueue and other collection classes is that apart from storage, it also provides flow control. It can be used in inter-thread communication and also provides built-in thread-safety by using happens-before guarantee. You can use BlockingQueue to solve Producer Consumer problem, which is what is needed in most of concurrent applications. ## You have thread T1, T2 and T3, how will you ensure that thread T2 run after T1 and thread T3 run after T2 • To use join method. # Happen before • In computer science, the happened-before relation (denoted: → {\displaystyle \to ;} \to ;) is a relation between the result of two events, such that if one event should happen before another event, the result must reflect that, even if those events are in reality executed out of order (usually to optimize program flow). • In Java specifically, a happens-before relationship is a guarantee that memory written to by statement A is visible to statement B, that is, that statement A completes its write before statement B starts its read # Concurrent framework • The advantage of using Callable over Runnable is that Callable can explicitly return a value. • Executors are a big step forward compared to plain old threads because executors ease the management of concurrent tasks. • Some types of algorithms exist that require tasks to create subtasks and communicate with each other to complete. Those are the “divide and conquer” algorithms, which are also referred to as “map and reduce,” in reference to the eponymous functions in functional languages. • The fork/join framework added to the java.util.concurrent package in Java SE 7 through Doug Lea’s efforts fills that gap. The Java SE 5 and Java SE 6 versions of java.util.concurrent helped in dealing with concurrency, and the additions in Java SE 7 help with parallelism. • First and foremost, fork/join tasks should operate as “pure” in-memory algorithms in which no I/O operations come into play. Also, communication between tasks through shared state should be avoided as much as possible, because that implies that locking might have to be performed. • The core addition is a new ForkJoinPool executor that is dedicated to running instances implementing ForkJoinTask. ForkJoinTask objects support the creation of subtasks plus waiting for the subtasks to complete. With those clear semantics, the executor is able to dispatch tasks among its internal threads pool by “stealing” jobs when a task is waiting for another task to complete and there are pending tasks to be run. • ForkJoinTask objects feature two specific methods: • The fork() method allows a ForkJoinTask to be planned for asynchronous execution. This allows a new ForkJoinTask to be launched from an existing one. • In turn, the join() method allows a ForkJoinTask to wait for the completion of another one. • There are two types of ForkJoinTask specializations: • Instances of RecursiveAction represent executions that do not yield a return value. • In contrast, instances of RecursiveTask yield return values. In general, RecursiveTask is preferred because most divide-and-conquer algorithms return a value from a computation over a data set. • The fork and join principle consists of two steps which are performed recursively. These two steps are the fork step and the join step. • A task that uses the fork and join principle can fork (split) itself into smaller subtasks which can be executed concurrently. This is illustrated in the diagram below: • By splitting itself up into subtasks, each subtask can be executed in parallel by different CPUs, or different threads on the same CPU. • The limit for when it makes sense to fork a task into subtasks is also called a threshold. It is up to each task to decide on a sensible threshold. It depends very much on the kind of work being done. • Once the subtasks have finished executing, the task may join (merge) all the results into one result. • Of course, not all types of tasks may return a result. If the tasks do not return a result then a task just waits for its subtasks to complete. No result merging takes place then. • The ForkJoinPool is a special thread pool which is designed to work well with fork-and-join task splitting. The ForkJoinPool located in the java.util.concurrent package, so the full class name is java.util.concurrent.ForkJoinPool. • You create a ForkJoinPool using its constructor. As a parameter to the ForkJoinPool constructor you pass the indicated level of parallelism you desire. • The parallelism level indicates how many threads or CPUs you want to work concurrently on on tasks passed to the ForkJoinPool. • You submit tasks to a ForkJoinPool similarly to how you submit tasks to an ExecutorService. You can submit two types of tasks. A task that does not return any result (an “action”), and a task which does return a result (a “task”). ## Fork/Join framework details • When you call fork method on ForkJoinTask, program will call “pushTask” asynchronously of ForkJoinWorkerThread, and then return result right away. final void pushTask(ForkJoinTask t) { if ((q = queue) != null) { // ignore if queue removed long u = (((s = queueTop) & (m = q.length - 1)) << ASHIFT) + ABASE; UNSAFE.putOrderedObject(q, u, t); queueTop = s + 1; // or use putOrderedInt if ((s -= queueBase) <= 2) pool.signalWork(); else if (s == m) growQueue(); } } • “join” method main functionality is blocking current thread and wait for resutls. public final V join() { if (doJoin() != NORMAL) return reportResult(); else return getRawResult(); } private V reportResult() { int s; Throwable ex; if ((s = status) == CANCELLED) throw new CancellationException(); if (s == EXCEPTIONAL && (ex = getThrowableException()) != null) UNSAFE.throwException(ex); return getRawResult(); } • When do call doJoin(), you can get status of curent thread. There are 4 status: • NORMAL: completed • CANCELLED • SIGNAL • EXCEPTIONAL • The method of doJoin() private int doJoin() { if ((s = status) < 0) return s; try { completed = exec(); } catch (Throwable rex) { return setExceptionalCompletion(rex); } if (completed) return setCompletion(NORMAL); } } else return externalAwaitDone(); } If a SocketUsingTask is cancelled through its Future, the socket is closed and the As of Java 6, ExecutorService implementations can override newTaskFor in AbstractExecutorService to control instantiation of the Future corresponding to a submitted Callable or Runnable. The default implementation just creates a new FutureTask, as shown in Listing 6.12. protected <T> RunnableFuture<T> newTaskFor(Callable<T> task) { } • As with any other encapsulated object, thread ownership is not transitive: the application may own the service and the service may own the worker threads, but the application doesn’t own the worker threads and therefore should not attempt to stop them directly. Instead, the service should provide lifecycle methods for shutting itself down that also shut down the owned threads; then the application can shut down the service, and the service can shut down the threads. Executor- Service provides the shutdown and shutdownNow methods; other thread-owning services should provide a similar shutdown mechanism. ## Log service implemented by blocking queue • If you are logging multiple lines as part of a single log message, you may need to use additional client-side locking to prevent undesirable interleaving of output from multiple threads. If two threads logged multiline stack traces to the same stream with one println call per line, the results would be interleaved unpredictably, and could easily look like one large but meaningless stack trace. public class LogWriter { private final BlockingQueue<String> queue; private final LoggerThread logger; public LogWriter(Writer writer) { } public void start() { logger.start(); } public void log(String msg) throws InterruptedException { queue.put(msg); } ... public void run() { try { while (true) writer.println(queue.take()); } catch(InterruptedException ignored) { } finally { writer.close(); } } } } ### Stop logging • However, this approach has race conditions that make it unreliable. The implementation of log is a check-then-act sequence: producers could observe that the service has not yet been shut down but still queue messages after the shutdown, again with the risk that the producer might get blocked in log and never become unblocked. There are tricks that reduce the likelihood of this (like having the consumer wait several seconds before declaring the queue drained), but these do not change the fundamental problem, merely the likelihood that it will cause a failure. public void log(String msg) throws InterruptedException { if (!shutdownRequested) queue.put(msg); else throw new IllegalStateException("logger is shut down"); } • The way to provide reliable shutdown for LogWriter is to fix the race con- dition, which means making the submission of a new log message atomic. But we don’t want to hold a lock while trying to enqueue the message, since put could block. Instead, we can atomically check for shutdown and conditionally increment a counter to “reserve” the right to submit a message, as shown in Log- Service in Listing 7.15. ### Delegate shutdown to high level service public class LogService { private final ExecutorService exec = newSingleThreadExecutor(); ... public void start() { } public void stop() throws InterruptedException { try { exec.shutdown(); exec.awaitTermination(TIMEOUT, UNIT); } finally { writer.close(); } } public void log(String msg) { try { } } } catch (RejectedExecutionException ignored) { } • It can even delegate to one shot Executor, OneShotExecutionService.java import java.util.Set; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; /** * Created by todzhang on 2017/1/30. * If a method needs to process a batch of tasks and does not return * until all the tasks are finished, it can simplify service lifecycle management * by using a private Executor whose lifetime is bounded by that method. * * * The checkMail method in Listing checks for new mail in parallel * on a number of hosts. It creates a private executor and submits * a task for each host: it then shuts down the executor and waits * for termination, which occurs when all */ public class OneShotExecutionService { boolean checkMail(Set<String> hosts, long timeout, TimeUnit unit) throws InterruptedException{ final AtomicBoolean hasNewMail=new AtomicBoolean(false); try { for (final String host : hosts ) { exec.execute(new Runnable() { @Override public void run() { if (checkMail(host)) { hasNewMail.set(true); } } }); } } finally{ exec.shutdown(); exec.awaitTermination(timeout,unit); } return hasNewMail.get(); } boolean checkMail(String host){ return true; } } • When an ExecutorService is shut down abruptly with shutdownNow, it attempts to cancel the tasks currently in progress and returns a list of tasks that were sub- mitted but never started so that they can be logged or saved for later processing. Detailed logic can be found at CancelledTaskTrackingExecutor.java # JVM shutdown • The JVM can shut down in either an orderly or abrupt manner. An orderly shut- down is initiated when the last “normal” (nondaemon) thread terminates, some- one calls System.exit, or by other platform-specific means (such as sending a SIGINT or hitting Ctrl-C). While this is the standard and preferred way for the JVM to shut down, it can also be shut down abruptly by calling Runtime.halt or by killing the JVM process through the operating system (such as sending a SIGKILL). ## Shutdown hooks • In an orderly shutdown, the JVM first starts all registered shutdown hooks. Shutdown hooks are unstarted threads that are registered with Runtime.addShutdownHook. The JVM makes no guarantees on the order in which shutdown hooks are started. If any application threads (daemon or nondaemon) are still running at shutdown time, they continue to run concurrently with the shutdown process. • When all shutdown hooks have completed, the JVM may choose to run finalizers if runFinalizersOnExit is true, • and then halts. • The JVM makes no attempt to stop or interrupt any application threads that are still running at shutdown time; they are abruptly terminated when the JVM eventually halts. If the shutdown hooks or finalizers don’t complete, then the orderly shutdown process “hangs” and the JVM must be shut down abruptly. In an abrupt shutdown, the JVM is not required to do anything other than halt the JVM; shutdown hooks will not run. • Shutdown hooks should be thread-safe: they must use synchronization when accessing shared data and should be careful to avoid deadlock, just like any other concurrent code. Further, they should not make assumptions about the state of the application (such as whether other services have shut down already or all normal threads have completed) or about why the JVM is shutting down, and must therefore be coded extremely defensively. • Finally, they should exit as quickly as possible, since their existence delays JVM termination at a time when the user may be expecting the JVM to terminate quickly. • Shutdown hooks can be used for service or application cleanup, such as deleting temporary files or cleaning up resources that are not automatically cleaned up by the OS. Listing 7.26 shows how LogService in Listing 7.16 could register a shutdown hook from its start method to ensure the log file is closed on exit. • Because shutdown hooks all run concurrently, closing the log file could cause trouble for other shutdown hooks who want to use the logger. To avoid this problem, shutdown hooks should not rely on services that can be shut down by the application or other shutdown hooks. One way to accomplish this is to use a single shutdown hook for all services, rather than one for each service, and have it call a series of shutdown actions. This ensures that shutdown actions execute sequentially in a single thread, thus avoiding the possibility of race conditions or deadlock between shutdown actions. This technique can be used whether or not you use shutdown hooks; executing shutdown actions sequentially rather than concurrently eliminates many potential sources of failure. public void start() { public void run() { try { LogService.this.stop(); } catch (InterruptedException ignored) {} } }); } • Normal threads and daemon threads differ only in what happens when they exit. When a thread exits, the JVM performs an inventory of running threads, and if the only threads that are left are daemon threads, it initiates an orderly shutdown. When the JVM halts, any remaining daemon threads are abandoned— finally blocks are not executed, stacks are not unwound—the JVM just exits. • Daemon threads should be used sparingly—few processing activities can be safely abandoned at any time with no cleanup. In particular, it is dangerous to use daemon threads for tasks that might perform any sort of I/O. Daemon threads are best saved for “housekeeping” tasks, such as a background thread that periodically removes expired entries from an in-memory cache. Daemon threads are not a good substitute for properly managing the life- cycle of services within an application. ### Finalizer • Finalizers offer no guarantees on when or even if they run, and they impose a significant performance cost on objects with nontrivial finalizers. They are also extremely difficult to write correctly.9 In most cases, the combination of finally blocks and explicit close methods does a better job of resource management than finalizers; the sole exception is when you need to manage objects that hold resources acquired by native methods. • Java does not provide a preemptive mechanism for cancelling activities or terminating threads. Instead, it provides a cooperative interruption mechanism that can be used to facilitate cancellation, but it is up to you to construct protocols for cancellation and use them consistently. Using FutureTask and the Executor framework simplifies building cancellable tasks and services. • Thread pools work best when tasks are homogeneous and independent. Mix- ing long-running and short-running tasks risks “clogging” the pool unless it is very large; submitting tasks that depend on other tasks risks deadlock unless the pool is unbounded. Fortunately, requests in typical network-based server applications—web servers, mail servers, file servers—usually meet these guide- lines. • Some tasks have characteristics that require or preclude a specific exe- cution policy. Tasks that depend on other tasks require that the thread pool be large enough that tasks are never queued or rejected; tasks that exploit thread confinement require sequential execution. Document these requirements so that future maintainers do not undermine safety or live- ness by substituting an incompatible execution policy. • In a single-threaded executor, a task that submits another task to the same executor and waits for its result will always deadlock. • The same thing can happen in larger thread pools if all threads are executing tasks that are blocked waiting for other tasks still on the work queue. This is called thread starvation deadlock, and can occur whenever a pool task initiates an unbounded blocking wait for some resource or condition that can succeed only through the action of another pool task, such as waiting for the return value or side effect of another task, unless you can guarantee that the pool is large enough. Whenever you submit to an Executor tasks that are not independent, be aware of the possibility of thread starvation deadlock, and document any pool sizing or configuration constraints in the code or configuration file where the Executor is configured. Future<String> header,footer; String body=renderBody(); • Thread pools can have responsiveness problems if tasks can block for extended periods of time, even if deadlock is not a possibility. A thread pool can become clogged with long-running tasks, increasing the service time even for short tasks. If the pool size is too small relative to the expected steady-state number of long- running tasks, eventually all the pool threads will be running long-running tasks and responsiveness will suffer. • One technique that can mitigate the ill effects of long-running tasks is for tasks to use timed resource waits instead of unbounded waits. Most blocking methods in the plaform libraries come in both untimed and timed versions, such as Thread.join, BlockingQueue.put, CountDownLatch.await, and Selector.sel- ect. If the wait times out, you can mark the task as failed and abort it or requeue it for execution later. This guarantees that each task eventually makes progress towards either successful or failed completion, freeing up threads for tasks that might complete more quickly. If a thread pool is frequently full of blocked tasks, this may also be a sign that the pool is too small. • The ideal size for a thread pool depends on the types of tasks that will be submitted and the characteristics of the deployment system. Thread pool sizes should rarely be hard-coded; instead pool sizes should be provided by a configuration mechanism or computed dynamically by consulting Runtime.availableProcessors. • If you have different categories of tasks with very different behaviors, consider using multiple thread pools so each can be tuned according to its workload. • The optimal pool size for keeping the processors at the desired utilization is: Nthreads=Ncpu∗Ucpu∗ (1+((W/C) Ncpu: Number of CPU Ucpu: target CPU utilization , 0<Ucpu<1 W/C: ratio of wait time to compute time public ThreadPoolExecutor(int corePoolSize,int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue,ThreadFactory threadFactory,RejectedExecutionHandler handler){...} 1. corePoolSize is the target size, the implementation attempts to maintain the pool at this size when there are no tasks to execute. and will not create more threads than this unless the work queue is full. When a ThreadPoolExecutor is initially created, the core threads are not started immediately, but instead as tasks are submitted. Unless you call prestartAllCoreThreads 2. The maximum pool size is the upper bound on how many threads can be active at once. 3. A thread that has been idel for longer than the keep-alive time becomes a candidate for reaping and can be terminated if the current pool size exceed the core size. • By tuning the core pool size and keep-alive times, you can encourage the pool to reclaim resources used by otherwise idle threads, making them available for more useful work. (Like everything else, this is a tradeoff: reaping idle threads incurs additional latency due to thread creation if threads must later be created when demand increases.) • The newFixedThreadPool factory sets both the core pool size and the maxi- mum pool size to the requested pool size, creating the effect of infinite timeout; • the newCachedThreadPool factory sets the maximum pool size to Integer.MAX_VALUE and the core pool size to zero with a timeout of one minute, creating the effect of an infinitely expandable thread pool that will contract again when demand decreases. • Other combinations are possible using the explicit ThreadPool- Executor constructor. • ThreadPoolExecutor allows you to supply a BlockingQueue to hold tasks awaiting execution. There are three basic approaches to task queueing: un- bounded queue, bounded queue, and synchronous handoff. The choice of queue interacts with other configuration parameters such as pool size. • The default for newFixedThreadPool and newSingleThreadExecutor is to use an unbounded LinkedBlockingQueue. Tasks will queue up if all worker threads are busy, but the queue could grow without bound if the tasks keep arriving faster than they can be executed. • A more stable resource management strategy is to use a bounded queue, such as an ArrayBlockingQueue or a bounded LinkedBlockingQueue or Priority- BlockingQueue. Bounded queues help prevent resource exhaustion but introduce the question of what to do with new tasks when the queue is full. (There are a number of possible saturation policies for addressing this problem; • Using a FIFO queue like LinkedBlockingQueue or ArrayBlockingQueue causes tasks to be started in the order in which they arrived. For more con- trol over task execution order, you can use a PriorityBlockingQueue, which orders tasks according to priority. Priority can be defined by natural order (if tasks implement Comparable) or by a Comparator. • The newCachedThreadPool factory is a good default choice for an Executor, providing better queuing performance than a fixed thread pool.5 A fixed size thread pool is a good choice when you need to limit the number of concurrent tasks for resource-management purposes, as in a server application that accepts requests from network clients and would otherwise be vulnerable to overload. # Saturation policies • When a bounded work queue fills up, the saturation policy comes into play. The saturation policy for a ThreadPoolExecutor can be modified by calling setRejectedExecutionHandler. • Several implementations of RejectedExecutionHandler are provided, each implementing a different saturation policy: AbortPolicy, CallerRunsPolicy, DiscardPolicy, and DiscardOldestPolicy. • The default policy, abort, causes execute to throw the unchecked Rejected- ExecutionException; the caller can catch this exception and implement its own overflow handling as it sees fit. The discard policy silently discards the newly submitted task if it cannot be queued for execution; the discard-oldest policy discards the task that would otherwise be executed next and tries to resubmit the new task. (If the work queue is a priority queue, this discards the highest-priority element, so the combination of a discard-oldest saturation policy and a priority queue is not a good one.) • The caller-runs policy implements a form of throttling that neither discards tasks nor throws an exception, but instead tries to slow down the flow of new tasks by pushing some of the work back to the caller. It executes the newly submitted task not in a pool thread, but in the thread that calls execute. If we modified our WebServer example to use a bounded queue and the caller-runs policy, after all the pool threads were occupied and the work queue filled up the next task would be executed in the main thread during the call to execute. • There are a number of reasons to use a custom thread factory. You might want to specify an UncaughtExceptionHandler for pool threads, or instantiate an instance of a custom Thread class, such as one that performs debug logging. public interface ThreadFactory{ } • BoundedExecutor.java is using semaphore and Executor for bounded executor service. • MyExtendedThreadPool.java implemented beforeExecute, afterExecute, etc method to add statistics, such as log and timing for each operations in the thread pool ## Process sequential processing to parallel void processSequentially(List<Element> elements) { for (Element e : elements) process(e); } void processInParallel(Executor exec, List<Element> elements) { for (final Element e : elements) exec.execute(new Runnable() { public void run() { process(e); } }); } • If you want to submit a set of tasks and wait for them all to complete, you can use ExecutorService.invokeAll; to retrieve the results as they become available, you can use a CompletionService. • There is often a tension between safety and liveness. We use locking to ensure thread safety, but indiscriminate use of locking can cause lock-ordering deadlocks. Similarly, we use thread pools and semaphores to bound resource consumption, but failure to understand the activities being bounded can cause resource deadlocks. Java applications do not recover from deadlock, so it is worthwhile to ensure that your design precludes the conditions that could cause it. • When a thread holds a lock forever, other threads attempting to acquire that lock will block forever waiting. When thread A holds lock L and tries to acquire lock M, but at the same time thread B holds M and tries to acquire L, both threads will wait forever. This situation is the simplest case of deadlock (or deadly embrace), • Database systems are designed to detect and recover from deadlock. A trans- action may acquire many locks, and locks are held until the transaction commits. So it is quite possible, and in fact not uncommon, for two transactions to deadlock. Without intervention, they would wait forever (holding locks that are probably re- quired by other transactions as well). But the database server is not going to let this happen. When it detects that a set of transactions is deadlocked (which it does by searching the is-waiting-for graph for cycles), it picks a victim and aborts that transaction. This releases the locks held by the victim, allowing the other transactions to proceed. The application can then retry the aborted transaction, which may be able to complete now that any competing transactions have com- pleted. • A program will be free of lock-ordering deadlocks if all threads acquire the locks they need in a fixed global order. ## To break deadlock by ensuring lock order • uses System.identityHashCode to induce a lock ordering. It involves a few extra lines of code, but eliminates the possibility of deadlock. public static native int identityHashCode(Object x); • In the rare case that two objects have the same hash code, we must use an arbitrary means of ordering the lock acquisitions, and this reintroduces the pos- sibility of deadlock. To prevent inconsistent lock ordering in this case, a third “tie breaking” lock is used. By acquiring the tie-breaking lock before acquiring either Account lock, we ensure that only one thread at a time performs the risky task of acquiring two locks in an arbitrary order, eliminating the possibility of deadlock (so long as this mechanism is used consistently). If hash collisions were common, this technique might become a concurrency bottleneck (just as having a single, program-wide lock would), but because hash collisions with System.identity- HashCode are vanishingly infrequent, this technique provides that last bit of safety at little cost. • two locks are acquired by two threads in different orders, risking deadlock. • Calling a method with no locks held is called an open call [CPJ 2.4.1.3], and classes that rely on open calls are more well-behaved and composable than classes that make calls with locks held. Using open calls to avoid deadlock is analogous to using encapsulation to provide thread safety: while one can certainly construct a thread-safe program without any encapsulation, the thread safety analysis of a program that makes effective use of encapsulation is far easier than that of one that does not. • A program that never acquires more than one lock at a time cannot experience lock-ordering deadlock. Of course, this is not always practical, but if you can get away with it, it’s a lot less work. If you must acquire multiple locks, lock ordering must be a part of your design: try to minimize the number of potential locking interactions, and follow and document a lock-ordering protocol for locks that may be acquired together. • In programs that use fine-grained locking, audit your code for deadlock free- dom using a two-part strategy: first, identify where multiple locks could be ac- quired (try to make this a small set), and then perform a global analysis of all such instances to ensure that lock ordering is consistent across your entire pro- gram. Using open calls wherever possible simplifies this analysis substantially. With no non-open calls, finding instances where multiple locks are acquired is fairly easy, either by code review or by automated bytecode or source code anal- ysis. ## Timed lock attempts • Another technique for detecting and recovering from deadlocks is to use the timed tryLock feature of the explicit Lock classes (see Chapter 13) instead of intrinsic locking. Where intrinsic locks wait forever if they cannot acquire the lock, explicit locks let you specify a timeout after which tryLock returns failure. • There are two threads trying to accquire two locks in different orders Java stack information for the threads listed above: "ApplicationServerThread ": at MumbleDBConnection.remove_statement - waiting to lock <0x650f7f30> (a MumbleDBConnection) at MumbleDBStatement.close - locked <0x6024ffb0> (a MumbleDBCallableStatement) ...
# Distorted metal coordination geometry after relaxation (SetupMetalMover was used, fold tree and constraints were set manually) 3 posts / 0 new Distorted metal coordination geometry after relaxation (SetupMetalMover was used, fold tree and constraints were set manually) #1 Hello, I am trying to relax Zn containing peptides like zinc fingers, but always got distorted geometries of the coordination site and much higher scores after the relax. Still, the rest of the peptide looks nice. To detect the Zn coordination the SetupMetalsMover() was used. Because the fold tree still contains a jump between the first residue and the metal (in the documentation of Rosetta it was said that the function should set up the fold tree correctly by itself), the fold tree was set manually to connect the Zn with one coordinating residue. Also, the distance and angle constraints were set with a cst-file. FastRelax() was used with the standard ref2015 score function for relaxing. An ensemble of ten outputs was created and stored in pdb-files. The files (the cst file was uploaded as txt) and the used code are attached. All ten runs lead to similar results (therefore only one pdb output file is attached). It seems like that Rosetta tried to maximize the distances between the His and the Zn. Then I also changed the radius of the Zn from 1.09 to 0.5 in the atom_properties.txt and the mm_atom_properties.txt in the pyrosetta\database\chemicals folder or set the charge to 0 in the ZN.params file, hoping to minimize the maybe detected clash between the Zn and the N of His, but the results stayed identical. I surely have overlooked something important and would like to ask, if someone can help. #Start from pyrosetta import * init() #Metal detection from pyrosetta.rosetta.protocols.simple_moves import SetupMetalsMover metaldetector = SetupMetalsMover() #Fold Tree ft = FoldTree() #Scorefunction from pyrosetta.teaching import get_fa_scorefxn scorefxn = get_fa_scorefxn() #Constraints from pyrosetta.rosetta.protocols.constraint_movers import ConstraintSetMover constraints = ConstraintSetMover() constraints.constraint_file('constr_ZnF.cst') #Relax function from rosetta.protocols.relax import FastRelax relax = FastRelax() relax.set_scorefxn(scorefxn) relax.constrain_relax_to_start_coords(True) relax.constrain_coords(True) #Generation of ensemble E_relax=[] for i in range(1,11): #Create the pose from PDB-File ZnF_Pose = pose_from_pdb('5znf.clean.pdb') #Using the SetupMetalMover metaldetector.apply(ZnF_Pose) #Set the correct fold tree ZnF_Pose.fold_tree(ft) #Set constraints constraints.apply(ZnF_Pose) #Relax relax.apply(ZnF_Pose) #Calculate score of relaxed pose and store it the list E_relax E_relax.append(scorefxn(ZnF_Pose)) #Create PDB-File from relaxed pose ZnF_Pose.dump_pdb('ZnF_Pose_relax_{}.pdb'.format(i)) #Mean Score, standard deviation and standard error mean_E_rel = sum(E_relax)/len(E_relax) A = [] for i in range(0,10): a = (E_relax[i]-mean_E_rel)**2 A.append(a) std_E_rel = (sum(A) / (len(E_relax)-1))**(0.5) SEM_E_rel = std_E_rel/((len(E_relax))**(0.5)) #Score of the starting structure ZnF_Pose_start = pose_from_pdb('5znf.clean.pdb') metaldetector.apply(ZnF_Pose_start) ZnF_Pose_start.fold_tree(ft) constraints.apply(ZnF_Pose_start) start_E = scorefxn(ZnF_Pose_start) ZnF_Pose_start.dump_pdb('ZnF_pose_start.pdb') #Outputfile with score of starting structure, mean score of ensemble and score of each relaxed structure txtfile = 'ZnF_FastRelax_score.txt' myfile = open(txtfile, 'w') myfile.write('start_E = {}\nmean_E_rel = {} ; Std = {} ; SEM = {}\n\nE_rel\n'.format(start_E, mean_E_rel, std_E_rel, SEM_E_rel)) for i in range(0,len(E_relax)): myfile.write('{}\n'.format(E_relax[i])) myfile.close() AttachmentSize 39.61 KB 46.33 KB 446 bytes Category: Post Situation: Tue, 2020-01-21 11:18 TLP You need to do your relaxation with a scorefunction that has the metalbinding_constraint scoreterm turned on (given a nonzero weight).  Reweight this scoreterm to, say, 1.0 in the scorefxn object with scorefxn.set_weight( core.scoring.metalbinding_constraint, 1.0 ). Tue, 2020-01-21 12:16 vmulligan Thank you very much, it helped a lot! (Sorry for my delayed answer) I used scorefxn.set_weight(pyrosetta.rosetta.core.scoring.ScoreType.metalbinding_constraint, 1.0) and now the coordination geometry is normal again. Fri, 2020-04-03 10:19 TLP
# A particle is constrained to move along the positive x-axis under the influence of a force... ###### Question: A particle is constrained to move along the positive x-axis under the influence of a force whose potential energy is U(x) = U_0(2 cos x/a - x/a) where U_0 and a are positive constants. Plot U versus x. A simple hand sketch is fine. Find the equilibrium point(s). For each equilibrium point, determine whether the equilibrium is stable or unstable. #### Similar Solved Questions ##### Help Se & Ext Gabs Corporation uses the FO method in its process costing system. The... Help Se & Ext Gabs Corporation uses the FO method in its process costing system. The Grinding Department started the month with 17,000 units in its beginning work in process inventory that were 60% complete with respect to conversion costs. An additional 66.000 units were transferred in from the... ##### Core: 0 of 1 pt 7 of 20 (4 complete) 4.39 Find a polynomial function of... core: 0 of 1 pt 7 of 20 (4 complete) 4.39 Find a polynomial function of lowest degree with rational coefficients that has the given numbers as some of its zeros. 2-i, 3 f(x) =... ##### Roblem 5.60 onts)ing a smaeverapicans alcohol, a CO,p-rich vapor containi sieve-tray be recovered by Pnts) When... roblem 5.60 onts)ing a smaeverapicans alcohol, a CO,p-rich vapor containi sieve-tray be recovered by Pnts) When mo is fermented to produce a liquor containing ethyl mall mount of ethyl alcohol is evolved. The alcohol can absorption with n n with water in a sieve-tray tower. For the following conditi... ##### 1. Annenbaum Corporation uses the weighted-average method in its process costing system. This month, the beginning... 1. Annenbaum Corporation uses the weighted-average method in its process costing system. This month, the beginning inventory in the first processing department consisted of 400 units. The costs and percentage completion of these units in beginning inventory were: Cost Percent Complete Materials cost... ##### What is the equation of the line that has a slope of 3 and goes through (0, -4)? What is the equation of the line that has a slope of 3 and goes through (0, -4)?... A coupon bond paying semiannual interest is reported as having an ask price of 105% of its $1,000 par value. If the last interest payment was made 60 days ago and the coupone rate is 496, what is the invoice price of the bond? Assume there are 182 days between coupon payments. Round your answer to t... 1 answer ##### Two masses, m and m2, are joined by a massless string. A force Fis applied vertically... Two masses, m and m2, are joined by a massless string. A force Fis applied vertically to the upper mass m2. Express you answers in terms of F, m, m2, and required constants. a)(4p) What is the magnitude of the acceleration of mass mı? b)(4p) What is the magnitude of tension in the string connec... 1 answer ##### Firmicutes refers to? 1. a) Cute bugs with firm bodies 2. b) A phylum of low... Firmicutes refers to? 1. a) Cute bugs with firm bodies 2. b) A phylum of low G-C, Gram positive bacteria, that includes genera such as Staphylococcus 3. c) A type of virus 4. d) Protozoa that have a cyst stage 5. e) A group of fungi that includes molds and yeasts... 1 answer ##### Consider the titration of 100.0 mL of 0.100 M methylamine (CH3NH2) with 0.500 M HNO3. Calculate... Consider the titration of 100.0 mL of 0.100 M methylamine (CH3NH2) with 0.500 M HNO3. Calculate the pH at the following volumes of acid added. For CH3NH3+, pKa = 10.632 (a) Find the equivalence point volume. (b) 0 mL (c) 9.0 mL (d) 10.0 mL (e) 20.0 mL (f) 30.0 mL... 1 answer ##### How do you convert 7/25 into a percent and decimal? How do you convert 7/25 into a percent and decimal?... 2 answers ##### This can be C This can be C. OR D. couldn't it be? A second hand statement? Dang! or even B.Keep running this through my headWhich one of the following statements is an example of hearsay?A. I saw the doctor administer the injection.B. I checked the patient’s blood pressure three times.C. The report sai... 1 answer ##### 71 Determine the slope (in atm/K) of a solid- liquid coexistence line (in a P-T phase... 71 Determine the slope (in atm/K) of a solid- liquid coexistence line (in a P-T phase diagram) at 273.15 K, given that AHfus = 6.01 kJ /mol, Vi 0.0180 L/mol, and Vs 0.0196 L/mol. Make sure your slope has appropriate units for the type of phase diagram we are dealing with (i.e. it should only have pr... 1 answer ##### AaBbce AaBbcc AaBb AaBbce AaB Aabbccc AaBbce dabbel daBbc AaBbce AaBbc Aabe 1 Normal T No... AaBbce AaBbcc AaBb AaBbce AaB Aabbccc AaBbce dabbel daBbc AaBbce AaBbc Aabe 1 Normal T No Spac... Heading 1 Heading 2 Title Subtitle Subtle Em... Emphasis Intense Strong Quote Intense Paragraph Styles E3-13 Below are transactions for Hurricane Company during 2018. a. On October 1, 2018, Hurricane le... 1 answer ##### The financing activities section of the statement of cash flows includes paying dividends and paying off... The financing activities section of the statement of cash flows includes paying dividends and paying off loans. True O False... 1 answer ##### Suppose a simple random sample of siren. 200 is obtained from a population whose size o... Suppose a simple random sample of siren. 200 is obtained from a population whose size o N 30,000 and whose population proportion with a specified chap 6 Complete parts though how A. Approximately normal because n <0.05N and nipit - p) 10. OB. Not normal becausen 0.06 and p1-p) < 10 OC. Not nor... 1 answer ##### 1) What is the difference between Tricare and ChampVA? 2) If Jill's covered surgery cost$10,000... 1) What is the difference between Tricare and ChampVA? 2) If Jill's covered surgery cost $10,000 and her out-of-pocket maximum was$1000, how much would Jill's surgery end up costing her? 3)Jack had knee surgery. His physician charged 5000 His insurance allowed 4000 Jack has a 1000 deductibl...
# Find the other root of quadratic when $b^2-4ac = 0$? According to many sources, the fundamental theorem of algebra states that every polynomial of degree $$n$$ has exactly n roots. But where's the other root when $$b^2-4ac = 0$$? What's the other root of $$4x^2 - 32x + 64$$, for example? (the real root is 4). • I just want to say that you're right: the given polynomial has exactly one root, not two. When counting roots with multiplicity, the polynomial has two roots (agreeing with its degree, as per the fundamental theorem of algebra), but when counting actual numbers that make the expression $0$, there is exactly one. – Theo Bendit Apr 24 at 5:54 • The most bare-bones statement of the Fundamental Thm of Algebra is that a non-constant polynomial with complex coeffs has at least one complex root; ie, we can "peel off" a factor: $p(z)=(z-r)q(z)$, where $q$ has lower degree than $p$. ... Now, we can continue, peeling-off factors until "$q(z)$" becomes constant; clearly, that'll take degree-of-$p$ steps altogether. Because we can encounter the same root/factor more than once in the process, the number of steps need not match the number of distinct roots. That's why it's important to say that the degree counts roots "with multiplicity". – Blue Apr 24 at 13:14 When $$b^{2} - 4ac = 0$$, the quadratic formula becomes $$-\frac{b}{2a}$$. You are wondering about the other root. This is where the concept of repeated roots/multiplicity would come in. The second root is equal to the first root, that's why you get only one value from the quadratic formula. In your case, you have $$4x^{2} - 32x + 64 = 0$$. Notice that the left hand side can be factored into $$4(x - 4)^{2} = 0$$. In that case, by applying the zero property of multiplication, you get \begin{align*}x - 4 &= 0 &\qquad x - 4 &= 0 \\ x &= 4 & \qquad x &= 4.\end{align*} You can see that the root $$x = 4$$ is repeated twice. Or, just by looking at the expression, you know that $$(x - 4)$$ is a factor which gives a root of $$x = 4$$. In your case, it is squared $$(x - 4)^{2}$$, that's why the number of roots increased, although the roots are the same. The fundamental theorem of algebra requires roots to be counted with multiplicity. In conjunction with the factor theorem it implies that every univariate polynomial with complex coefficients can be broken into linear factors (of the form $$x-a$$, corresponding to root $$a$$); the multiple roots are the ones appearing more than once. Here $$4x^2-32x+64=4(x-4)^2$$ and the root $$4$$ appears twice; it is a double root. I believe the root is $$4$$ but occurs twice. This occurs when you can factor the expression as $$(x-4)^2=0$$. In this case, the discriminant of the quadratic is zero. So there will only be one solution. The root $$x=4$$ is a repeated root. It can easily be proven that it is repeated if we solve the equation by completing the square: $$4x^2-32x+64 = 0$$ $$(2x-8)^2 = 0$$ Take the square root on both sides: $$2x-8 = ±\sqrt{0}$$ $$2x-8 = ±0$$ $$2x-8 = 0 \lor 2x-8 = -0$$ By moving the 8 to the other side, it's easy to see that the first root is the solution of the linear equation $$2x = 8+0$$ and the second root is the solution of the linear equation $$2x = 8-0$$ $$2x = 8+0 \Rightarrow 2x = 8 \Rightarrow x = 4$$ $$2x = 8-0 \Rightarrow 2x = 8 \Rightarrow x = 4$$ Both roots are 4 so the root is repeated. This is true for every quadratic polynomial where the discriminant is equal to zero.
## Problem 1 Recreate this figure using the tools in ggplot2. The data come from the Zachos et al 2001 deep sea oxygen isotope dataset. Make a line graph of the o18 values against time, and include rectangular overlays that show the geological epochs of the Cenozoic. Use these approximate start dates for the geological epochs: epoch = c("Paleocene", "Eocene", "Oligocene", "Miocene", "Pliocene", "Pleistocene", "Holocene") start = c(65, 58.8, 33.9, 23.03, 5.5, 1.8, 0.01) Note: The original dataset is noisy. Use a convolution filter with the filter() function to reduce noise in the dataset. If you have dplyr loaded, then you will get funny errors when you try to use this function, because you will be inadvertently using the dplyr::filter() function. Be explicit that you want the stats::filter() function, from the stats package. Note one more thing: This is a good example of a case in which you need separate dataframes for separate layers.
# Arithmetic-geometric means for hyperelliptic curves and Calabi-Yau varieties - Mathematics > Algebraic Geometry Arithmetic-geometric means for hyperelliptic curves and Calabi-Yau varieties - Mathematics > Algebraic Geometry - Descarga este documento en PDF. Documentación en PDF para descargar gratis. Disponible también para leer online. Abstract: In this paper, we define a generalized arithmetic-geometric mean $\mu g$among $2^g$ terms motivated by $2\tau$-formulas of theta constants. By usingThomae-s formula, we give two expressions of $\mu g$ when initial terms satisfysome conditions. One is given in terms of period integrals of a hyperellipticcurve $C$ of genus $g$. The other is by a period integral of a certainCalabi-Yau $g$-fold given as a double cover of the $g$-dimensional projectivespace $\mathbf{P}^g$. Autor: Keiji Matsumoto, Tomohide Terasoma Fuente: https://arxiv.org/
## Introduction Cardiovascular diseases (CVD) are a major public health burden1. Prognostic CVD prediction models allow identifying individuals at high risk that are eligible for lifestyle interventions and preventive treatment by estimating individual CVD risk. Their development is largely focussed on applications in clinical settings to support treatment decisions as for example with the Systematic COronary Risk Evaluation (SCORE) and the Pooled Cohort Equations (PCE)2,3,4,5. However, as these evaluations require information from physical examinations (blood pressure) and blood tests (cholesterol), application of these scores is unfeasible in most physician-independent settings like self-assessment of individuals, health education campaigns, and step-wise screening procedures including a non-clinical stage. The few available non-clinical models to be used independently of physical examinations are limited in terms of study design, originating from case–control studies or high-risk cohorts6,7; short follow-ups and lack of equations to calculate absolute risks6,7; the endpoints, predicting only myocardial infarction (MI) or stroke7,8; or inclusion of dietary predictors on a nutrient level requiring assessment of a large variety of individual foods, thus hampering the applicability in practice6,9. We only identified one model allowing large-scale estimation of individual CVD risk based on non-clinical parameters10. However, despite established risk associations, the score does not include potentially informative dietary information11. Moreover, overlap in risk factor profiles of CVD and type 2 diabetes (T2D) offers the potential for combined risk assessment with only minor deviations in the required predictors, including dietary parameters. The German Diabetes Risk Score (GDRS) is a multiply validated non-clinical score to predict T2D and its extension for CVD risk prediction would enable simultaneous quantification of individual CVD and T2D risk in non-clinical settings12. Thus, we aimed to develop and externally validate a non-clinical risk score to predict 10-year CVD risk based on shared predictors with the GDRS and to compare its performance to the identified non-clinical and established clinical CVD risk scores. Furthermore, we developed a clinical extension with routinely available clinical predictors for step-wise screening approaches. ## Results Descriptive comparison of the unimputed and imputed data, including the proportion of missingness, is presented in the supplement (Supplementary Table (ST) 1). The median follow-up time in the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam was 11.35 years (interquartile range (IQR) 1.38). Both samples contained proportionally more women than men (female EPIC-Potsdam: 61.6%; EPIC-Heidelberg: 54.6%) and the median age at baseline was 50 years (Table 1). Prevalence of self-reported hypertension was higher in Potsdam (31.8%) compared to Heidelberg (27.2%), while the proportion of participants reporting a family history of CVD was higher in Heidelberg (52.8%, Potsdam: 37.1%), as well as current heavy smoking (≥ 20 units/day) at baseline (Potsdam 5.7%, Heidelberg 9.5%). ### Score derivation The final non-clinical model included the predictors age, gender, waist circumference, smoking status, self-reported hypertension and T2D, CVD family history, and consumption of whole grain, red meat, coffee, high energy soft drinks, and plant oil. The clinical model additionally contained systolic and diastolic blood pressure, total and HDL cholesterol. The proportional hazards assumption was fulfilled for all included predictors. The supremum test for functional form was only significant for ‘CVD points’ in the clinical model. However, subsequent examination of the according restricted cubic splines did not indicate strong deviations from a linear function (Supplementary Figure (SF) 1). Estimates derived by using Cox proportional hazards regression and the Fine and Gray model were overall comparable. However, comparison of the model performance indicated slightly better calibration of absolute risks by the Fine and Gray model compared to the Cox model in the upper risk range (SF2). As a consequence, we proceeded with the competing risk approach. Adding statistically significant interaction terms or squared terms as well as deriving gender-specific equations of the Fine and Gray models did not improve overall performance relevantly (SF3). The final parameters used for absolute risk calculation based on the competing risk model are depicted in Table 2 (example calculation: Supplementary Note (SN) 1). ### Performance in EPIC-Potsdam and EPIC-Heidelberg #### Discrimination Competing risk-adjusted C-indices indicated good discrimination of both developed models in EPIC-Potsdam (non-clinical: 0.786, 95% confidence interval (95%CI) 0.736–0.832; clinical 0.796, 0.746–0.841) and EPIC-Heidelberg (non-clinical: 0.762, 0.715–0.807; clinical: 0.769, 0.721–0.813). The categorical Net-Reclassification-Improvement (NRI) suggested only slight improvement of risk category assignment by additional clinical parameters (NRI EPIC-Potsdam: 0.015, 95%CI − 0.028 to 0.057; EPIC-Heidelberg 0.078, 0.041–0.116). Sensitivity and specificity in both cohorts are shown in the ST2. As an example, when using a cut-off of 5% predicted risk in EPIC-Heidelberg, sensitivity and specificity were 48.8% and 83.4% for the non-clinical and 53.3% and 81.9% for the clinical score. Comparison of the performance with established risk scores demonstrated that the two derived equations reached the highest C-indices in EPIC-Potsdam (e.g., Framingham CVD Risk Score (FRS) with blood lipids 0.781, 0.730–0.828) (Fig. 1). In EPIC-Heidelberg, C-indices were overall slightly lower than in EPIC-Potsdam. The C-index of the non-clinical score ranged among the highest, comparable to established clinical scores (e.g., FRS with blood lipids 0.764, 0.717–0.809), while the derived clinical score still showed the highest C-index. C-indices of the non-clinical chronic metabolic disease (CMD) score were considerably lower in EPIC-Potsdam (0.738, 0.685–0.789) and EPIC-Heidelberg (0.722, 0.672–0.769). #### Calibration The derived scores were well calibrated for the majority of individuals in the lower nine deciles of predicted risk while they slightly overestimated risk in the highest decile of predicted risk (Fig. 2). Expected-to-observed ratios were 1.17 (95%CI 1.08–1.27) for the non-clinical and 1.13 (1.04–1.22) for the clinical score in EPIC-Potsdam and 1.05 (0.97–1.13) and 1.11 (1.03–1.20) in EPIC-Heidelberg, respectively. Calibration plots suggested slight overestimation of risk by the recalibrated PCE (Fig. 2) and substantial overestimation by both FRS (not shown). ### Subgroup and sensitivity analyses Subgroup analyses indicated that C-indices were consistently higher for women compared to men and for MI compared to stroke for both derived scores in EPIC-Potsdam and EPIC-Heidelberg (SF4). Calibration plots showed better calibration of the scores for women than men, with a more pronounced overestimation of risk for the higher decile groups of predicted risk in men (SF5). Additional appraisal of CVD mortality discrimination resulted in higher C-indices for the derived scores than for SCORE in both cohorts (C-index EPIC-Heidelberg non-clinical: 0.774, 95%CI 0.525–0.960; clinical: 0.763, 0.513–0.954; SCORE: 0.740, 0.486–0.939). However, due to the limited number of fatal cases, estimates were imprecise. ## Discussion We derived and externally validated a non-clinical risk score predicting 10-year CVD risk with superior or comparable performance to established clinical CVD risk scores. Additional clinical parameters only slightly improved discrimination. Our results suggest that estimation of 10-year CVD risk based on the selected and easily obtainable non-clinical CVD risk factor information is feasible without loss of predictive accuracy compared to clinical models. Other external validations of the CMD Score showed acceptable to good discrimination in an Iranian (areas under the receiver operating characteristic curve (AUC): men 0.71, 95%CI 0.66–0.75; women 0.81, 0.76–0.85) and an Australian population (AUC: men 0.82, 0.77–0.86; women 0.88, 0.83–0.94) which is comparable or higher than in our samples13,14. Two meta-analyses, one based on 86 prospective studies, concluded that the PCE discriminates relatively well (C-index 0.723, 0.719–0.727) and reported a prediction interval (men 0.70, 0.60–0.79; women 0.74, 0.63–0.83) covering the observations from our study samples15,16. A pooled analysis of two other German population-based cohort studies showed a C-index (0.76, 0.73–0.79) comparable to our findings17. For the FRS including blood lipids, a meta-analysis of prospective studies reported a C-index of 0.719 (0.715–0.723), which is lower than in our cohorts16. For SCORE, the same meta-analysis suggested relatively good discrimination for all CVD events (C-index 0.719, 0.715–0.723) and better discrimination for fatal events only (C-index 0.758, 0.752–0.763)16. SCORE showed higher discriminatory ability in our samples when including all cases, while discrimination for fatal events was comparable. The PROCAM score for MI showed lower discrimination in other European validation studies than in our sample, with AUCs ranging from 0.55 to 0.7418. The PCE endpoint definition (MI or coronary heart disease death, or fatal or non-fatal stroke) is largely comparable to our definition. While the PCE was well calibrated in a German sample after recalibration, our study still suggests slight overestimation of the recalibrated equation17. This could be related to deviations in the documented CVD incidence as a result of actual incidence differences in the studied populations and/or differences in the case identification and ascertainment procedure, potentially leading to systematically fewer or more identified cases. Additional inclusion of heart failure and angina in the FRS endpoint definition might explain the strong overestimation of risk detected in our samples. Despite minor heterogeneity across individual validation studies potentially related to deviations in the population characteristics and covariate structure19, these findings indicate that the established clinical CVD models performed mostly comparable or better in EPIC-Potsdam and EPIC-Heidelberg compared to other studies. This suggests that underestimation of the performance in our samples is unlikely. Several features of our approach are worth mentioning. Firstly, and most importantly, the developed non-clinical risk score extends individual risk prediction to prevention settings that are not covered by existing clinical risk scores without loss of predictive precision. These include self-assessment of individuals, health education campaigns, and step-wise screening procedures with a non-clinical stage. Secondly, the inclusion of selected GDRS parameters (age, waist circumference, smoking status, self-reported hypertension, consumption of whole grains, red meat, and coffee) in the non-clinical score allows simultaneous risk assessment of CVD and T2D with only a few additional parameters. Thirdly, our non-clinical score contains several lifestyle risk factors, modifiable and easily to be obtained, including dietary information. As effect sizes and directions of the modifiable predictors are in line with previous evidence (compare ST3), the score plausibly supports health behaviour recommendations, pointing out potential ways to reduce CVD risk, for example, by choice of a healthy diet or reducing waist circumference. Inclusion of behavioural over clinical parameters emphasises the role of primary lifestyle prevention rather than focussing on (medicinal) treatment of clinical parameters such as blood lipids or blood pressure, frequently used for CVD risk prediction, as potential consequences of adverse health behaviour. This is supported by our results showing that the investigated clinical parameters don’t provide much predictive information beyond our non-clinical predictors. There are several strengths to our study. We based our analyses on physician-verified cases, reducing false-positive case assignment to a minimum. The application of the World Health Organization (WHO) Monitoring trends and determinants in cardiovascular disease (MONICA) criteria in the derivation cohort facilitates reproduction in other cohorts based on a standardised outcome definition. Harmonised data collection and procession methods between the EPIC centres in Potsdam and Heidelberg enabled us to fully rebuild the prediction model for external validation without regression or substitution of predictors that could be unavailable in other cohorts. Relevant sample sizes and case numbers in both cohorts (events per variable EPIC-Potsdam: non-clinical model 40.2, clinical model 136.8; events EPIC-Heidelberg n = 692) allowed the derivation of robust estimates, to perform sensitivity analyses, and to examine the performance in subgroups20,21. However, there are some limitations. Firstly, due to the case-cohort design, the proportion of missingness was high for most biomarkers. However, it has been shown that multiple imputation is a valid approach to handle missing data for absolute risk estimations22. Secondly, we used the non-clinical score points as one predictor for the clinical score instead of individually modelling its risk factors. This approach may have diminished performance improvement. However, post-hoc re-estimation of the clinical model including the non-clinical risk factors individually showed that C-index increased only by 0.001, suggesting negligible loss of discriminatory ability. Thirdly, heterogeneous outcome definitions of the composite endpoint CVD may have hampered performance comparison with other risk scores, especially calibration. Finally, as we developed and validated our scores in German adults, generalisability to other populations with differences in case-mix and deviations in predictor and outcome assessment remains unclear. To conclude, we developed and externally validated a non-clinical risk score predicting 10-year CVD risk based on shared predictors with a validated T2D risk score with comparable or superior performance to established clinical CVD risk scores. It can be used independently of physical examinations and includes a variety of modifiable risk factors supporting both, risk assessment and subsequent counselling for preventive lifestyle modifications, e.g., through an online calculator. The models will be implemented in the online tool of the GDRS (https://drs.dife.de/) and a paper questionnaire will be developed. ## Methods ### Study population Analyses were based on the EPIC-Potsdam and EPIC-Heidelberg cohorts consisting of 27,548 and 25,540 participants recruited in the areas of Potsdam (age mainly 35–65 years, 60.4% female) and Heidelberg (age 35–66 years, 53.3% female). The data was collected from 1994 to 2012. Detailed information on recruitment and follow-up procedures is described elsewhere23,24. For baseline assessment, participants underwent physical examinations and blood sample drawing by trained medical personnel. Information on lifestyle, sociodemographic characteristics, and health status were documented with validated questionnaires and during face-to-face interviews. Participants were actively re-contacted every 2–3 years for follow-up information by sending questionnaires and phone calls if required. Additionally, passive follow-up sources like registry linkage or information of death certificates were used. Response rates ranged from 90 to 96% per follow-up round23. In both cohorts, participants with prevalent CVD, non-verifiable, silent events, stroke cases with prior brain cancer, meninges, or leukaemia, and with missing follow-up information were excluded. Exclusively in EPIC-Potsdam, we excluded individuals with ‘possible’ events according to the WHO MONICA criteria. Exclusively in EPIC-Heidelberg, we excluded participants with events only indicated by a death certificate but without further sources suggesting an event. The analysis sample in EPIC-Potsdam contained 25,993 participants for the full follow-up, including 684 overall CVD cases (fatal n = 82), 383 myocardial infarctions (MI), and 315 stroke cases and after 10 years 584 overall CVD (fatal n = 70), 324 MI, and 269 stroke cases. Non-CVD death was documented for 2312 participants (8.9%) during the full follow-up and 847 participants (3.3%) within the first 10 years. The respective analysis sample in EPIC-Heidelberg contained 23,529 participants, including 692 overall CVD (fatal n = 87), 370 MI and 345 stroke cases after 10 years of follow-up (details: SF6). Non-CVD death was documented for 2596 participants (11.0%) during the full follow-up and 1074 participants (4.6%) during the first 10 years of follow-up. The studies were approved by the Ethical Committee of the State of Brandenburg and the Heidelberg University Hospital, Germany, and were carried out according to The Code of Ethics of the World Medical Association (Declaration of Helsinki). Participants gave written informed consent for participation. ### Assessment of predictors Self-reported information on smoking, diet, prevalent hypertension and T2D, and medication was collected at baseline via questionnaires. Daily food consumption was assessed with self-administered semi-quantitative Food Frequency Questionnaires including photographs of portion sizes to estimate intake, summarised into food groups, and translated to portions per day as described elsewhere (overview of selected food groups and included dietary items: ST4)25. Waist circumference, systolic and diastolic blood pressure were measured by trained personnel at baseline examination (details: SN2). Biomarker measurements were performed in the established case-cohorts, consisting of a randomly drawn sample (subcohorts: Potsdam n = 2500; Heidelberg n = 2739) of participants who provided blood samples at baseline and incident cases of the according disease (case-cohorts: SF7, SN3, ST5; biomarker measurements: SN2)26. Family history of MI and stroke was collected at the 5th follow-up via questionnaires and summarised to parental and sibling history of CVD. ### Case ascertainment Incident CVD was defined as all incident cases of non-fatal and fatal MI and stroke (International Statistical Classification of Diseases and Related Health Problems, Tenth revision (ICD-10) codes: I21 acute MI, I63.0–I63.9 ischemic stroke, I61.0–I61.9 intracerebral haemorrhage, I60.0–I60.9 subarachnoid haemorrhage, I64.0–I64.9 unspecified stroke). In both cohorts, events were systematically detected via self-report of a diagnosis, information of death certificates, and reports by local hospitals or treating physicians. If an event was indicated by the aforementioned sources, treating physicians were contacted for diagnosis verification, occurrence date, and diagnostic details. Only events with physician–verified diagnoses were considered as incident CVD cases. In EPIC-Potsdam, physician-verified cases were additionally ranked into ‘definite’, ‘probable’, and ‘possible’ events by two trained physicians based on the WHO MONICA criteria for MI and an adapted version for stroke (details: SN4). ### Statistical analyses We applied multiple imputation by chained equations (m = 10) to handle missing values in predictor candidates and parameters needed to derive other scores for comparison (SN5)27,28. Data of the EPIC-Potsdam cohort (follow-up time: median 11.35 years, IQR 1.38 years) was used for score derivation. We used the predictors of the GDRS in the first step and assessed their association with CVD using Cox proportional hazard regression in each imputed set separately2,22,29. Only parameters that were consistent in regards to effect size and direction with available meta-analyses or large-scale studies remained in the model. For the identification of CVD-specific predictor candidates, the literature was screened for established non-clinical and routinely available clinical CVD risk factors. To derive the non-clinical score, we considered candidates with regard to anthropometric measures, gender, CVD family history, self-reported prevalent diseases, medication, weight history, and dietary information as the main focus. The final selection of the predictors was based on the following criteria: performance improvement, assumed availability in physician-independent settings or routine care, consistency with previous evidence, and robustness of the association. Different predictor candidate combinations were added to the previously identified shared predictors from the GDRS to assess the independence and robustness of the associations. For the clinical extension, we used the score points of the non-clinical score as one predictor and subsequently added clinical candidates with regard to blood pressure measurements, blood pressure or lipid-lowering medication, blood lipid concentrations (total cholesterol, HDL cholesterol, and the respective ratio), and HbA1c. Predictor candidates meeting the previously defined criteria were included in the final scores. Linearity assumptions of the risk associations were examined by deriving Martingale residuals and performing supremum tests for functional form30. The proportional hazards assumption was assessed by visual inspection of the Schoenfeld residuals. Even though previous studies have commonly used Cox proportional hazards regression models for absolute risk predictions, including the PCE and FRS, non-CVD mortality is considered a competing risk event for the analysis of CVD endpoints. Despite a limited proportion of non-CVD mortality events in EPIC-Potsdam (3.3% during the first 10 years of follow-up), we additionally used Fine and Gray models accounting for competing risks, calculated absolute risks, assessed the model performance, and compared it to the performance of the Cox proportional hazards models31,32. In a final step, we additionally considered squared terms and multiplicative interaction terms of the selected predictors with gender and age and, if statistically significantly associated with the outcome, added them to the model and re-evaluated the performance. To assess the potential benefit of modelling gender-stratified equations, we re-estimated the models in men and women separately and compared their performance. β estimates of the final models were rounded, multiplied by 100 and the following equation including the subdistribution baseline survival S0 and mean values $$\overline{{X}_{i}}$$ of all participants was applied to calculate the absolute 10-year risks29,33: $${\widehat{p}=1-{S}_{0}(t)}^{\mathrm{exp}(({\sum }_{i=1}^{p}{\beta }_{i } {X}_{i}-{\sum }_{i=1}^{p}{\beta }_{i }\overline{{X}_{i}})/100)}$$ We evaluated the performance of the generated scores in EPIC-Potsdam and for external validation in EPIC-Heidelberg censored at 10 years of follow-up and compared it to the performance of established CVD risk scores. Namely the non-clinical CMD risk score, the for Germany recalibrated PCE, two FRS including blood lipids or BMI, the ESC SCORE, and two PROCAM Scores predicting MI or stroke (calculation of scores: ST6)3,10,17,33,34,35. To quantify the discrimination of the scores, we calculated C-indices by using a bootstrap approach dividing each imputed set into 10 random subsets and adjusting for competing risks36,37,38. Calibration was assessed with calibration plots and expected-to-observed ratios. The calibration of the CMD score, SCORE, and both PROCAM scores was not evaluated due to differences in the predicted time frame or in the endpoint definitions (CVD mortality, MI, stroke). Potential changes in risk group assignment between the derived non-clinical and clinical score were assessed using the NRI with previously implemented risk groups (< 5%, ≥ 5%–< 7.5%, ≥ 7.5%–< 10%, ≥ 10%)2. Sensitivity and specificity were calculated based on the aforementioned risk cut-offs. Sensitivity analyses were performed assessing the discrimination separately for men and women and for MI and stroke. For comparison with SCORE, we additionally calculated C-indices for fatal cases only. Statistical analyses were performed with SAS (version 9.4).
+0 # Pre-Calculus 0 636 8 cos2x - cosx = 0 i need steps.... Guest Mar 3, 2016 #8 +20151 +10 cos2x - cosx = 0 i need steps.... rearrange $$\begin{array}{rcll} \cos{(2x)} - \cos{(x)} &=& 0 \\ \cos{(2x)} &=& \cos{(x)} \\\\ \boxed{~ \cos{(\varphi)} = \cos{(-\varphi)}~}\\\\ \end{array}$$ We have 4 variations: {nl} $$\begin{array}{lrclrcl} (1) & \cos{(2x)} &=& \cos{(x)} \qquad \Rightarrow &\qquad 2x \pm 2k_1\pi &=& x \pm 2k_2\pi\\ (2) & \cos{(-2x)} &=& \cos{(x)} \qquad \Rightarrow &\qquad -2x \pm 2k_1\pi &=& x \pm 2k_2\pi\\ (3) & \cos{(2x)} &=& \cos{(-x)} \qquad \Rightarrow &\qquad 2x \pm 2k_1\pi &=& -x \pm 2k_2\pi\\ (4) & \cos{(-2x)} &=& \cos{(-x)} \qquad \Rightarrow &\qquad -2x \pm 2k_1\pi &=& -x \pm 2k_2\pi\\\\ (1) & 2x \pm 2k_1\pi &=& x \pm 2k_2\pi\qquad \Rightarrow &\qquad x &=& 0 \pm 2k\pi\\ (2) & -2x \pm 2k_1\pi &=& x \pm 2k_2\pi \qquad \Rightarrow &\qquad -3x &=& 0 \pm 2k\pi\\ (3) & 2x \pm 2k_1\pi &=& -x \pm 2k_2\pi\qquad \Rightarrow &\qquad 3x &=& 0 \pm 2k\pi\\ (4) & -2x \pm 2k_1\pi &=& -x \pm 2k_2\pi\qquad \Rightarrow &\qquad -x &=& 0 \pm 2k\pi\\\\ & \Rightarrow x &=& \pm 2k\pi \\ & \Rightarrow x &=& \pm \frac{2}{3}k\pi \\ \end{array}$$ We can put together  $$x = \pm \frac{2}{3}k\pi \qquad k \in N_0$$ heureka  Mar 3, 2016 edited by heureka  Mar 3, 2016 edited by heureka  Mar 3, 2016 #1 +131 0 x would 0. = cos(2*0) - cos(0) = 1 - 1 = 0 MATHBITCH  Mar 3, 2016 #2 +93866 +10 Hi Guest. cos2x - cosx = 0 $$cos2x - cosx = 0\\ cos^2x-sin^2x-cosx=0\\ cos^2x-(1-cos^2x)-cosx=0\\ cos^2x-1+cos^2x-cosx=0\\ 2cos^2x-cosx-1=0\\ let \;\;y=cosx\\ 2y^2-y-1=0\\ 2y^2-2y+y-1=0\\ 2y(y-1)+1(y-1)=0\\ (2y+1)(y-1)=0\\ 2y=-1\;\;\;\;or\;\;\;\;y=1\\ y=-\frac{1}{2}\;\;\;\;or\;\;\;\;y=1$$ $$cosx=-\frac{1}{2} \qquad or \qquad cosx=1\\ for\;\; 0\le x<360\; degrees\\ x=180\pm 60 \;degrees \qquad or \qquad x=0\; degrees\\ General \;solutions\\ x=180\pm 60 \;+360n\;degrees \qquad or \qquad x=0+360n\; degrees\\ x=180\pm 60 \;+360n\;degrees \qquad or \qquad x=360n\; degrees\\ x=180(2n+1)\;\pm\ 60\;degrees \qquad or \qquad x=360n\; degrees\qquad n\in Z\\ \mbox{Normally this would be expressedin radians}\\ x=(2n+1)\pi\pm\frac{\pi}{3} \qquad or \qquad x=2\pi n \qquad \qquad n\in Z$$ Now that I have finished it has occurred tyo ma that maybe you wanted  the solutions to cos^2x - cosx = 0 that is (cosx)^2 - cosx = 0 which is a very much easier question :// Melody  Mar 3, 2016 #3 +5252 0 90% of all the math there is beyond me, at least, I think it is. rarinstraw1195  Mar 3, 2016 #5 +93866 +10 I expect you are right rarinstraw but that is the beauty of this place. Almost all of us see mathematics here that is new or unknown to them.  I regularly see things that Alan, Heureka, CPhill, Bertie and Geno3141 put up on the forum that are either totally beyond me or that I can learn from. I love that. You are getting a glimpse of what is to come.  Little bits of it you can run with and learn from if you choose to.  Plus of course we are here if you do not understand your classwork :) Melody  Mar 3, 2016 #7 +93866 0 I left out our physics people off my Credit list.  Alan is a Physicist and his posts sometimes blows my mind but Nauseated and Dragonlance as well as one or more guests often answer physics questions.  Physics is fascintating.  If there was an infinite amount of time I would definitely put effort into understanding physics.  :)) I have probably left some other great high end answerers off my list.... sorry. Melody  Mar 3, 2016 #4 +91135 +5 cos2x - cosx = 0       using an identity, we have 2cos^2x - 1 - cosx  = 0   rearrange 2cos^2x - cosx - 1  =   0     factor (2cosx + 1) (cosx - 1)  = 0 Setting each factor to 0, we have 2cosx + 1  = 0     subtract 1 from each side 2cosx  = -1      divide by 2 on each side cos x  = -1/2       and this happens at x = 120° ± n360°   and at x = 240°  ± n360°  where n is a positive integer For the other factor, we have cos x - 1   = 0   add 1 to both sides cos x = 1       and this is true at x = 0° ± n360°    where n is a positive integer Here's a graph : https://www.desmos.com/calculator/bza7s6b83u CPhill  Mar 3, 2016 #6 +93866 +5 Yep, CPhill's answer id the same as mine :) Melody  Mar 3, 2016 #8 +20151 +10 cos2x - cosx = 0 i need steps.... rearrange $$\begin{array}{rcll} \cos{(2x)} - \cos{(x)} &=& 0 \\ \cos{(2x)} &=& \cos{(x)} \\\\ \boxed{~ \cos{(\varphi)} = \cos{(-\varphi)}~}\\\\ \end{array}$$ We have 4 variations: {nl} $$\begin{array}{lrclrcl} (1) & \cos{(2x)} &=& \cos{(x)} \qquad \Rightarrow &\qquad 2x \pm 2k_1\pi &=& x \pm 2k_2\pi\\ (2) & \cos{(-2x)} &=& \cos{(x)} \qquad \Rightarrow &\qquad -2x \pm 2k_1\pi &=& x \pm 2k_2\pi\\ (3) & \cos{(2x)} &=& \cos{(-x)} \qquad \Rightarrow &\qquad 2x \pm 2k_1\pi &=& -x \pm 2k_2\pi\\ (4) & \cos{(-2x)} &=& \cos{(-x)} \qquad \Rightarrow &\qquad -2x \pm 2k_1\pi &=& -x \pm 2k_2\pi\\\\ (1) & 2x \pm 2k_1\pi &=& x \pm 2k_2\pi\qquad \Rightarrow &\qquad x &=& 0 \pm 2k\pi\\ (2) & -2x \pm 2k_1\pi &=& x \pm 2k_2\pi \qquad \Rightarrow &\qquad -3x &=& 0 \pm 2k\pi\\ (3) & 2x \pm 2k_1\pi &=& -x \pm 2k_2\pi\qquad \Rightarrow &\qquad 3x &=& 0 \pm 2k\pi\\ (4) & -2x \pm 2k_1\pi &=& -x \pm 2k_2\pi\qquad \Rightarrow &\qquad -x &=& 0 \pm 2k\pi\\\\ & \Rightarrow x &=& \pm 2k\pi \\ & \Rightarrow x &=& \pm \frac{2}{3}k\pi \\ \end{array}$$ We can put together  $$x = \pm \frac{2}{3}k\pi \qquad k \in N_0$$ heureka  Mar 3, 2016 edited by heureka  Mar 3, 2016 edited by heureka  Mar 3, 2016
## CryptoDB ### Paper: Secure Adiabatic Logic: a Low-Energy DPA-Resistant Logic Style Authors: Mehrdad Khatir Amir Moradi URL: http://eprint.iacr.org/2008/123 Search ePrint Search Google The charge recovery logic families have been designed several years ago not in order to eliminate the side-channel leakage but to reduce the power consumption. However, in this article we present a new charge recovery logic style not only to gain high energy efficiency but also to achieve the resistance against side-channel attacks (SDA) especially against differential power analysis (DPA) attacks. Simulation results show a significant improvement in DPA-resistance level as well as in power consumption reduction in comparison with DPA-resistant logic styles proposed so far. ##### BibTeX @misc{eprint-2008-17800, title={Secure Adiabatic Logic: a Low-Energy DPA-Resistant Logic Style}, booktitle={IACR Eprint archive}, keywords={DPA, DPA-Resistance, Cell Level Countermeasure, SAL}, url={http://eprint.iacr.org/2008/123}, note={ [email protected] 13955 received 17 Mar 2008, last revised 17 Mar 2008},
# How to train LSTM score prediction with very little data? (Bounty to be added) I am trying to make a text score prediction network, and my dataset have 500 samples only. I know there is a public dataset called the ASAP Dataset. I have tested my model (word embedding layer --> LSTM Layer --> FC Layer) on this dataset and it performed as expected. The public dataset have 13000 data samples while my private one has only 500. When I trained the network on my private dataset, it performed very poorly, and started overfitting from the first epoch. I have tried reducing my model size to the minimum and still no improvement. I have also added dropout and l2 regularization ans still nothign works. Is there any suggestion that could help like a differnet model or something. I am thinking methods like cropping the text to generate more data or some other methods like a Siamese network approach will help. Will they help? For your refernce, this is my code. https://github.com/Clement-Hui/EssayGrading Thank you very much for your help. I would be glad to add a bounty if the answer really helps me. Thanks. • you can use transfer learning for a small dataset. Look in the fastai library they have used AWD-LSTM with ULMFIT for transfer learning. – aman5319 Nov 18 '19 at 13:19 • I have tried transfer learning and got bad results ( 0.3 kappa). do I have to lock specific layers? i locked the weights of the lstm layer only. – Clement Hui Nov 18 '19 at 14:27 • How did you do the transfer learning?? what approach did you took mention that. – aman5319 Nov 18 '19 at 14:42 • I used the ASAP dataset to pretrain the model and stops the model when it starts to overfit. Then I used the model to train with my private dataset, locking the weights of the LSTM layer, and only allow thw wmbedding layer and the output FC layer to train. – Clement Hui Nov 18 '19 at 15:08 • The Method which you described is the wrong way of doing transfer learning use fast.ai library for that. – aman5319 Nov 18 '19 at 17:29
+0 alysia 0 157 1 what is 20 x 3 1/2 pls my math is so horrible pls & thank u Oct 7, 2021 #1 +50 0 I'm assuming 3 1/2 is a mixed fraction. If you get confused with mixed fractions, you can always convert them to normal fractions like $$\frac{1}{2}$$ $$3\frac{1}{2}$$ converts to $$\frac{7}{2}$$. You can find this by multiply the denominator(bottom of the fraction) by the integer part then adding the numerator(top of the fraction). You get $$3 \cdot 2 + 1 = 7$$. Then divide that by the denominator or $$2$$ to get $$\frac{7}{2}$$$$\frac{7}{2} \cdot 20 = 70$$ because the $$2$$ in the denominator cancels out with the $$2$$ in the $$20$$. Remember that $$20 = 2 \cdot 10$$ which is why they cancel. $$70$$ is your answer. Oct 7, 2021
### A man covered a distance of 50 miles on his first trip, on a later... A man covered a distance of 50 miles on his first trip, on a later trip he traveled 300 miles while going 3 times as fast. His new time compared with the old distance was? • A. three times as much • B. the same • C. twice as much • D. half as much ##### Explanation Let the speed of the 1st trip be x miles/hr and the speed of the 2nd trip be 3x miles/hr Speed = distance/time ∴ Time taken to cover a distance of 50 miles on the 1st trip = $$\frac{50}{xhr}$$ time taken to cover a distance of 300 miles on the next trip = $$\frac{300}{3xhr}$$ = $$\frac{100}{xhr}$$ ∴the new time compared with the old time is twice as much There is an explanation video available below.
# Mettaton Attacks Replay of Father Timm Mem... Limits 1s, 512 MB · Interactive This is an interactive problem. Mettaton, the star of underworld TV show attacks you again with his pop quiz! Oh, you never played before? No problem! It's simple! There's only one rule. Answer correctly... or you lose 50% of your points in every wrong submission. Mettaton's quiz show consists of a set of multiple choice questions. There are a total of $T$ questions, each with a total of $N < 2^{60}$ options, numbered from 1 to $N$ and only one of them is correct, labeled $H$. Your task is to find out $H$ ($H < 2^{60}$). Ofcourse you aren't alone in this, the underground researcher Alphys is with you. Last time he got caught supplying participants with answers using hand signs, so he can't do that again. But worry not, he has devised a nice plan and installed an application in your phone. Each time you use your the phone, you give it two integers $L$ and $R$. Now Alphys tells you, the number of integers $x$ such that $L \leq x \leq R$ where $x \oplus H$ can be written as a power of 2. More formally, $x \oplus H = 2^k$ and $k$ is a non-negative integer ($\oplus$ is the bitwise xor operation). Since it is a live show, you don't have too much time to use the phone. You calculated that, you have just enough time to use the phone $Q$ times. But you don't have to use your phone to tell the answer to Mettaton. ## Input Read the value of $T$( $T \leq 500$), which is the number of questions. Then you can start interacting. If you want to use the phone, print ? L R . Then you can read an integer which is what Alphys told you. You must ensure that $0\leq L \leq R < 2^{63}$ and you haven't exceeded your asking limit $Q$ ($Q \leq 69$). In which case judge will give you -1 as the answer, and you must terminate your program immediately. Once you have figured out the answer to a question, tell Mettaton like this ! h where h is the answer you found. Then the judge outputs "OK" if your answer was correct or, "WA" otherwise. If you read "WA" you must terminate your program. If you got "OK" proceed to the interaction of your next question immediately (if there is any). Terminate where you were told, otherwise you might get TLE or other random verdicts instead of WA. Also remember to print newlines and flush the output after every ? L R or ! h operation. You can use: • fflush(stdout) or cout.flush() in C++ • System.out.flush() in Java • flush(output) in Pascal • stdout.flush() in Python • See documentation of other languages. ## Output You only interact with the judge. There is no other output you need to print. Interaction Example: Here is an example of how your program will interact with the judge. > indicates you print something, < indicates you take input from judge. < 2 Mettaton has 2 questions. > ? 1 5You use the phone. < 2 Alphys replies. > ? 3 3 You use the phone again. < 0 Alphys replies. > ! 3 You miraculously found the answer? < OK Mettaton agrees!!!
#### T-count Optimized Design of Quantum Integer Multiplication ##### Edgard Muñoz-Coreas, Himanshu Thapliyal Quantum circuits of many qubits are extremely difficult to realize; thus, the number of qubits is an important metric in a quantum circuit design. Further, scalable and reliable quantum circuits are based on Clifford + T gates. An efficient quantum circuit saves quantum hardware resources by reducing the number of T gates without substantially increasing the number of qubits. Recently, the design of a quantum multiplier is presented by Babu [1] which improves the existing works in terms of number of quantum gates, number of qubits, and delay. However, the recent design is not based on fault-tolerant Clifford + T gates. Also, it has large number of qubits and garbage outputs. Therefore, this work presents a T-count optimized quantum circuit for integer multiplication with only $4 \cdot n + 1$ qubits and no garbage outputs. The proposed quantum multiplier design saves the T-count by using a novel quantum conditional adder circuit. Also, where one operand to the controlled adder is zero, the conditional adder is replaced with a Toffoli gate array to further save the T gates. To have fair comparison with the recent design by Babu and get an actual estimate of the T-count, it is made garbageless by using Bennett's garbage removal scheme. The proposed design achieves an average T-count savings of $47.55\%$ compared to the recent work by Babu. Further, comparison is also performed with other recent works by Lin et. al. [2], and Jayashree et. al.[3]. Average T-count savings of $62.71\%$ and $26.30\%$ are achieved compared to the recent works by Lin et. al., and Jayashree et. al., respectively. arrow_drop_up
# Write a program in Python to verify kth index element is either alphabet or number in a given series Input − Assume, you have a Series, a    abc b    123 c    xyz d    ijk ## Solution To solve this, we will follow the steps given below − • Define a Series • Get the index from user • Set the if condition to check the value is digit or not. It is defined below, if(data[x].isdigit()): print("digits present") else: print("not present") ### Example Let us see the following implementation to get a better understanding. import pandas as pd dic = {'a':'abc','b':'123','c':'xyz','d':'ijk'} data = pd.Series(dic) x = input("enter the index : ") if(data[x].isdigit()): print("digits present") else: print("not present") ### Output enter the index : a not present enter the index : b digits present