## PUBS: Quasi-Cellularity Criteria

### Dean S. Hartley III

Client: University of Georgia Math Department (non-profit)
Dates: 1973 (0.5 years)
Employer: University of Georgia
Partner: N/A
Publication: "Quasi-Cellularity Criteria" (author), 1974
Topic: Science, Math and Medicine: Topology

Abstract. It is demonstrated that a set embedded in a manifold (with certain technical conditions) is strongly quasi-cellular if and only if it has the SUV∞ property and satisfies a strong quasi-cellularity criterion. Moreover, a locally compact, finite-dimensional metric space has the SUV∞ property with respect to metric ANRs if and only if it embeds as a strongly quasi-cellular subset of some (high-dimensional) manifold. In like manner, a set embedded in an n-manifold (with technical conditions) is weakly quasi-cellular if it satisfies a weak quasi-cellularity criterion.

This research is a condensation of the second half of the author's Ph.D. dissertation, written at the University of Georgia under the direction of Professor R. B. Sher. I would like to express my thanks to Professor Sher for his aid. His advice and criticism have been invaluable.

AMS 1970 Subject Classifications. Primary 57A60, 57C99; Secondary 57C30, 57C40.
MSC 2000 Subject Classifications. Primary 57N60, 57Q99; Secondary 57Q30, 57Q40, 57Q65.

Key words and phrases. Cellular, cellularity criterion, properly S^k-inessential, quasi-cell, quasi-cellular, quasi-cellularity criterion, quasi-trivial, strong quasi-cellularity criterion, strongly quasi-cellular, SUV∞, tree, UV∞, weak quasi-cellularity criterion, weakly quasi-cellular.

NOTATION

Int(U) symbolizes the topological interior of U.
(·)_{i=1}^∞ symbolizes an operation or group of indexed entities ranging from the index 1 to ∞.
J+ symbolizes the set of positive integers.
ℝ¹ symbolizes the real number line.
S^n symbolizes the Euclidean n-sphere.
M^n and X^x symbolize n-dimensional and x-dimensional objects, respectively.
Fi and Gi, where F and G are maps, symbolize indexed maps, not n-dimensional objects.
F0 and F1, where F is a homotopy F: X × I → Y, symbolize F(X, 0) and F(X, 1), respectively.
∂B symbolizes the boundary of B.
↠ symbolizes an onto map.
Cl(X) symbolizes the topological closure of X.
K̄ symbolizes the complex closure of K.
FY symbolizes the Freudenthal compactification of a space Y.
EY symbolizes the set of ends of Y.
↘ symbolizes a collapse.

1 INTRODUCTION.

In this work we show that our extensions of cellularity [5] were the correct ones in that we are able to prove McMillan type theorems [8, Theorem 1] for strong quasi-cellularity and for weak quasi-cellularity.
In fact, our extension of UV∞ to SUV∞ shows an even closer analogy to the compact case. McMillan remarks [9, p. 21] that, for finite-dimensional continua X, X ∈ UV∞ if and only if X has cellular embeddings into some (high-dimensional) Euclidean space. We show in Theorem 3.12 that, for locally compact, finite-dimensional metric spaces X, X ∈ SUV∞ if and only if X has a strongly quasi-cellular embedding into some (high-dimensional) manifold. B. J. Ball and R. B. Sher have developed a theory of proper shape [1], and Sher has used this [13, Theorem 3.1] to show that, for X a locally compact metrizable space, X ∈ SUV∞ if and only if X has the proper shape of a tree.

The work below comes from the second half of [6]; however, there is additional background material in the Appendix of [6] that is not included. This material includes proofs on infinite regular neighborhoods, proper maps, mapping cylinders, general position, and engulfing theorems. Some of the material on engulfing theorems is repeated in Section 5, below.

2 PRELIMINARY DEFINITIONS AND NOTATION.

We wish to recall some definitions and notation from [5] for the reader's convenience. We will continue to use cl(K) to mean the topological closure of a set K and K̄ to mean the complex closure, that is, the smallest complex containing K. We will continue to use Dugundji [4] and Hudson [7] for general background material, and most of the spaces we deal with will be assumed to be locally compact metric spaces.

2.1 DEFINITION. A map f: X → Y of topological spaces is proper if f⁻¹(C) is compact whenever C is compact.

2.2 DEFINITION. A tree is a connected, simply connected, locally finite 1-complex.

2.3 DEFINITION. Suppose N is an n-manifold. If there is a PL n-manifold N′ ⊂ E^n which is a closed regular neighborhood of some tree, and if there is a homeomorphism h from N′ onto N, then N is called an n-quasi-cell. If the dimension is obvious, or not relevant, N will be referred to simply as a quasi-cell.

2.4 DEFINITION. Suppose X is a compact topological space, Y ⊂ Z ⊂ W are topological spaces, and J+ is the space consisting of the positive integers with the discrete topology. A proper map g: X × J+ → Y is said to be properly X-inessential in Z if there exists a proper map G: (X × J+) × I → Z such that for each x in X and j in J+, G(x, j, 0) = g(x, j), and for each j in J+, G|(X × {j}) × {1} is a constant map. If, in addition, A is a subset of W and there is a map G as above such that G[(X × J+) × I] ∩ A = ∅, then we say that g is properly X-inessential in Z missing A.

2.5.1 DEFINITION. A set X contained in the interior of an n-manifold M is said to be quasi-trivial in M if X is contained in a quasi-cell in Int(M).

2.5 DEFINITION. Suppose M is a PL n-manifold, X ⊂ Int(M) and U is a neighborhood of X in M. A sequence of triples {H_{i−1}, Ti, F^i}_{i=1}^∞ is said to be a quasi-special sequence for X in U provided:

(i) H_{i+1} and T_{i+1} are closed PL subspaces of M lying in Int(Hi), i = 0, 1, 2, ...,
(ii) H_{i−1} is a PL n-manifold with ∂H_{i−1} ≠ ∅, and Ti is a tree, i = 1, 2, ...,
(iii) F^{i+1}: H_{i+1} × I → M is a proper PL homotopy with F^{i+1}(H_{i+1} × I) ⊂ Int(Hi), F^{i+1}_0 the inclusion on H_{i+1}, and F^{i+1}_1(H_{i+1}) = T_{i+1}, i = 0, 1, 2, ...,
(iv) Hi is a closed neighborhood of X, i = 1, 2, ..., and X = ∩_{i=1}^∞ Hi, and
(v) if Y is a closed PL subspace of H_{i+1} and dim Y ≤ n−3, then Y is quasi-trivial in Hi, i = 1, 2, ... .

If for each closed neighborhood U of X there is a quasi-special sequence for X in U, then we say that quasi-special sequences for X in M exist strongly.
If there is a quasi-special sequence for X in M and if for each quasi-cell N with X ⊂ Int(N) there is a quasi-special sequence for X in N, then we say that quasi-special sequences for X in M exist weakly.

We say that {H_{i−1}, Ti, F^i}_{i=1}^∞ has the null-homotopy property if for each i = 0, 1, 2, ... and each proper map g: S^k × J+ → H_{i+1} − X which is properly S^k-inessential in H_{i+1}, g is properly S^k-inessential in Hi missing X, k = 0 or 1.

2.6 DEFINITION. Suppose M is a topological space and X is a subset of M. If, for each closed neighborhood U of X, there is a closed neighborhood V of X contained in Int(U), a tree T, a map f: V → T, a proper map g: T → U, and a proper homotopy H: V × I → U such that H0 is the inclusion and H1 = g∘f, then we say that X has the strong UV∞ property in M; this being the case, we say that X has property SUV∞ in M.

The following theorem [5, Theorem 2.10] gives an equivalent definition for the case that M is a PL n-manifold, n ≥ 3.

2.7 THEOREM. Suppose M is a PL n-manifold, n ≥ 3, and X is a closed subset of M lying in Int(M). Then X has SUV∞ in M if and only if for each closed neighborhood U of X there is a PL submanifold V, with non-empty boundary, such that X ⊂ Int(V) ⊂ V ⊂ Int(U), a tree T embedded as a closed PL subset of M in Int(U), and a proper PL homotopy H: V × I → U with H0 the inclusion, H1(V) = T and H(V × I) ⊂ Int(U).

We will also use the following theorem [5, Theorem 3.7].

2.8 THEOREM. Let M be a PL n-manifold, n ≥ 3. Then a closed subset X of M lying in Int(M) has SUV∞ if and only if quasi-special sequences for X in M exist strongly.

2.9 DEFINITION. Suppose X is a set contained in the topological space Y. We say that X satisfies the strong quasi-cellularity criterion (in Y) if for each closed neighborhood U of X there is a closed neighborhood V of X lying in Int(U) such that each proper map g: S^k × J+ → V − X which is properly S^k-inessential in V is properly S^k-inessential in U missing X, for k = 0 or 1.

2.10 DEFINITION. If X is contained in a manifold M and there is a quasi-special sequence for X in M with the null-homotopy property, and for each quasi-cell Q with X in its interior there is such a quasi-special sequence in Q, then we say that X satisfies the weak quasi-cellularity criterion.

3 STRONG QUASI-CELLULARITY AND SUV∞.

3.1 LEMMA. Suppose X is a closed subset of a PL n-manifold M, X lies in Int(M) and dim X ≤ n−1. Further suppose that V is a closed neighborhood of X in M and g: J+ → V is a proper map. Then there exists a proper map G: J+ × I → V such that G0 = g and G(i, 1) ∉ X, for all i ∈ J+.

Proof. Clearly, we may suppose that for each i ∈ J+, g(i) ∈ Int(V); for otherwise, we would simply define G on those points which are not carried into Int(V) to be the constant homotopy. Let {εi}_{i=1}^∞ be a sequence of positive numbers converging to zero. For each i ∈ J+, let Ni ⊂ Int(V) be a PL n-ball whose interior contains g(i) and whose diameter is less than εi. Since dim X ≤ n−1, Int(Ni) ∩ (V − X) ≠ ∅, for all i ∈ J+. Since Int(Ni) is arc-connected, there exists a path ai: [0, 1] → Int(Ni) such that ai(0) = g(i) and ai(1) ∈ V − X. Then, since g is proper and {εi}_{i=1}^∞ → 0, the map G: J+ × I → V defined by G(i, t) = ai(t) is proper. Since G0 = g and G(i, 1) = ai(1) ∈ V − X, this completes the proof.

3.2 LEMMA. Suppose that U is a locally compact topological space, V ⊂ U, T is a tree embedded as a closed subset of U, and F: V × I → U is a proper map such that F0 is the inclusion and F1(V) = T. Then, if P is a compact, connected space and g: P × J+ → V is a proper map, g is properly P-inessential in U.

Proof. We shall define a proper map H: (P × J+) × I → U such that H0 = g and, for each j ∈ J+, H1|P × {j} is a constant map.
We first define G on (P × J+) × [0, 1/2] by Gt = F_{2t}∘g, for t ∈ [0, 1/2]. Since P is compact and connected, G_{1/2}(P × {j}) is a compact subtree of T, for each j ∈ J+, and as such is contractible. Thus we have that the map G_{1/2}|P × {j} is homotopic to a constant map in G_{1/2}(P × {j}). We shall call this homotopy r^j, with r^j_0 = G_{1/2}|P × {j}. Let R: (P × J+) × [1/2, 1] → T be the map defined by Rt|P × {j} = r^j_{2t−1}, for t ∈ [1/2, 1]. We note that for each j ∈ J+, the set {k | G_{1/2}(P × {j}) ∩ G_{1/2}(P × {k}) ≠ ∅} is finite. It follows from this that R is proper. Since R_{1/2} = G_{1/2}, we may define H: (P × J+) × I → U by Ht = Gt, t ∈ [0, 1/2], and Ht = Rt, t ∈ [1/2, 1]. Since H is proper, H0 = g, and H1|P × {j} is a constant map, for each j ∈ J+, we are through.

3.3 DEFINITION. Suppose M is a PL n-manifold, V is a given open set in M, and U is a given closed set in M containing M − V in its interior. Let P denote an arbitrary closed k-dimensional PL subspace of M, and Q a closed PL subspace of P with Q ⊂ V. We say locally finite k-complexes in M can be pulled into V in U if for each such P and Q there is a proper homotopy H: P × [0, 1] → M such that H0 is the inclusion on P, H1(P) ⊂ V, Ht is the inclusion on Q ∪ [(M − U) ∩ P] for each t ∈ [0, 1], H(((P − Q) ∩ U) × [0, 1]) ⊂ U, and there is a map d: P → (0, ∞) with dist(H1(p), M − V) > d(p), for each point p ∈ P.

3.4 LEMMA. Let M be a PL n-manifold and let X be a closed, connected subset of Int(M) with dim X ≤ n−1. Let U be a closed neighborhood of X in M such that a quasi-special sequence for X in U with the null-homotopy property exists. Then locally finite 2-complexes in M can be pulled into M − X in U.

Proof. We first note that [M − (M − X)] ⊂ Int(U), so that the conclusion makes sense according to the definition. Now let P be any closed 2-dimensional PL subspace of M and Q be any closed PL subspace of P in M − X. Let {H_{i−1}, Ti, F^i}_{i=1}^∞ be a quasi-special sequence for X in U with the null-homotopy property. Since X is connected, we may assume that each Hi is connected. We shall only be concerned with the first five manifolds, H0, H1, H2, H3, and H4. Let R be a triangulation of M with P, Q and Hi, i = 0, 1, 2, 3, 4, triangulated as subcomplexes of R. We wish to produce a proper homotopy G: P × [0, 1] → M such that G0 is the inclusion on P, G1(P) ⊂ M − X, Gt is the inclusion on Q ∪ [(M − U) ∩ P] for each t ∈ [0, 1], G(((P − Q) ∩ U) × [0, 1]) ⊂ U, and so that there is a map d: P → (0, ∞) with dist(G1(p), X) > d(p), for each point p ∈ P. We will define G (a) on (the vertices of P) × [0, 1], (b) on (the 1-simplexes of P) × [0, 1], and (c) on (the 2-simplexes of P) × [0, 1].

(a) Let V = {v1, v2, v3, ...} be the set of vertices of P. We define G: V × I → M by defining G(vi, t) = vi, for all t ∈ [0, 1], if vi is a vertex of (R − H4) ∪ Q. Let {ṽ1, ṽ2, ṽ3, ...} be the set of vertices of P not in (R − H4) ∪ Q. Define g: J+ → H3 by g(j) = ṽj. Lemma 3.1 gives us a proper map G: J+ × I → H4 such that G0 = g and G(i, 1) ∉ X, for all i ∈ J+. We define G(ṽi, t) = G(i, t), for all i ∈ J+.

(b) Let S = {s1, s2, s3, ...} be the set of 1-simplexes of P. We define G: S × I → M by defining G(x, t) = x, for all t ∈ [0, 1] and x ∈ si, if si ∈ (R − H4) ∪ Q. Note that if si ∈ (R − H4) ∪ Q, then so are its vertices, so that G, here defined, agrees with its definition on V × I. Let {s̃1, s̃2, s̃3, ...} be the 1-simplexes of P not in (R − H4) ∪ Q. Define G0|s̃i to be the inclusion on s̃i, for each i ∈ J+. Now for each i ∈ J+, ∂s̃i is a pair of vertices, and G(∂s̃i × {1}) ⊂ H4 − X. For each i ∈ J+, let hi be a homeomorphism from S^0 onto ∂s̃i. Define F: S^0 × J+ → H4 by F(z, i) = G(hi(z), 1). Then since G(s̃i × {0} ∪ ∂s̃i × [0, 1]) ⊂ H4 and G is proper, F is properly S^0-inessential in H4.
By the null-homotopy property, F is properly S^0-inessential in H3 missing X. It follows from this that G can be properly extended to (∪_{i=1}^∞ s̃i) × {1}, where the extension carries (∪_{i=1}^∞ s̃i) × {1} into H3 − X. In a natural way (cf. the construction of F above), G determines a proper map from S^1 × J+ into H3. Since S^1 is compact and connected, such a map is properly S^1-inessential in H2 by Lemma 3.2. It follows from this that we may properly extend G to ∪_{i=1}^∞ (s̃i × [0, 1]), where the extension carries ∪_{i=1}^∞ (s̃i × [0, 1]) into H2.

(c) Let T = {t1, t2, t3, ...} be the set of 2-simplexes of P. We define G: T × I → M by defining G(x, t) = x, for all t ∈ [0, 1] and x ∈ ti, if ti ∈ (R − H4) ∪ Q. Note that if ti ∈ (R − H4) ∪ Q, then so are the 1-simplexes and vertices which are its faces, so that G, here defined, agrees with its definition on S × I. Let {t̃1, t̃2, t̃3, ...} be the 2-simplexes of P not in (R − H4) ∪ Q. Define G0|t̃i to be the inclusion on t̃i, for each i ∈ J+. Now for each i ∈ J+, ∂t̃i is identifiable with S^1 and G(∂t̃i × {1}) ⊂ H3 − X. In a natural way (cf. the construction of F above), G determines a proper map from S^1 × J+ into H3 − X which is properly S^1-inessential in H2. By the null-homotopy property, this map is properly S^1-inessential in H1 missing X. It follows from this that G can be properly extended to (∪_{i=1}^∞ t̃i) × {1}, where the extension carries (∪_{i=1}^∞ t̃i) × {1} into H1 − X. We now have the proper map G defined on ∪_{i=1}^∞ ∂(t̃i × [0, 1]). In a natural way again, G determines a proper map from S^2 × J+ into H1. Since S^2 is compact and connected, such a map is properly S^2-inessential in H0 by Lemma 3.2. It follows from this that we may properly extend G to ∪_{i=1}^∞ (t̃i × [0, 1]), where the extension carries ∪_{i=1}^∞ (t̃i × [0, 1]) into H0.

We now have our desired homotopy G: P × [0, 1] → M such that G0 is the inclusion on P, G1(P) ⊂ M − X, Gt is the inclusion on Q ∪ [(M − U) ∩ P] for each t ∈ [0, 1], and G(((P − Q) ∩ U) × [0, 1]) ⊂ U. Since X is closed and G is continuous, we may define a map d: P → (0, ∞) with dist(G1(p), X) > d(p), for each point p ∈ P.

The properties we are concerned with are topological in nature. We do, however, use a metric defined on whichever manifold is our ambient space. Some of our proofs (e.g., the following lemma and many of the results of Section 4) will require certain properties of that metric which are not available for every metric which might be defined on our manifold. One property we sometimes require is that the closed ε-ball, ε > 0, about a point in the manifold be compact. This means that if a closed subset is non-compact, it has infinite diameter. We have this, for example, if the metric for the manifold is complete [4, Chapter XIV, Theorem 2.3]. Requiring that the metric for our manifold be complete is not a great restriction, since every locally compact metric space has a complete metric [4, Chapter XIV, Corollary 2.4]. We shall henceforth assume that our manifolds are equipped with such metrics.

3.5 DEFINITION. We say that a space Y is k-ULC at the closed subset X if for each ε > 0 there is a δ > 0 such that for each point x ∈ X and map g: S^k → N(x; δ), g is ε-null-homotopic in Y.

3.6 DEFINITION. Let (M, ρ) be a metric space with P and U subspaces. We say that P tends to U if given ε > 0, there is a compact subset C ⊂ P such that ρ(x, U) < ε, for each x ∈ P − C.

The following lemma is analogous to Lemma 3.1, with the requirement that X be (n−1)-dimensional replaced by the requirements that X tends to M − X and M is 0-ULC at X. It is easy to see that if X is (n−1)-dimensional, then X tends to M − X and M is 0-ULC at X.

3.7 LEMMA.
Suppose X is a closed subset of a PL n-manifold M, X lies in Int(M), X tends to M − X and M is 0-ULC at X. If V is a closed neighborhood of X in M and g: J+ → V is a proper map, then there exists a proper map G: J+ × I → V such that G0 = g and G(i, 1) ∉ X, for all i ∈ J+.

Proof. We write M as an expanding union of compact sets C1 ⊂ Int(C2) ⊂ C2 ⊂ Int(C3) ⊂ ... ⊂ ∪_{i=1}^∞ Ci = M, where C1 is chosen arbitrarily and, for i > 1, Ci is chosen inductively by the following method. Suppose C_{i−1} has been chosen. Using the 0-ULC condition of the hypothesis, let δi > 0 be such that both δi < 1/i and such that for each point x ∈ X, any map g: S^0 → N(x; δi) is (1/i)-null-homotopic. Now choose Ci so large that if x ∉ Ci, then dist(x, C_{i−1}) > 1 and, using the hypothesis that X tends to M − X, so large that if x ∉ Ci, then N(x; δi) ∩ (M − X) ≠ ∅.

Now consider the proper map g: J+ → V. If g(j) ∉ X, for some j ∈ J+, we let G(j, t) = g(j), for all t ∈ [0, 1]. Thus, for the rest of the argument, we assume that g(j) ∈ X. If j is such that g(j) ∈ C2, pick pj ∈ V − X so that pj is in the component of V containing g(j). Define G|{j} × [0, 1]: {j} × [0, 1] → V so that G0(j) = g(j) and G1(j) = pj. For i > 1, let j ∈ J+ be such that g(j) ∈ C_{i+1} − Ci. Since g(j) ∉ Ci, there is a point pj in (M − X) ∩ N(g(j); δi). By the 0-ULC condition, we may define G̃^j: {j} × [0, 1] → N(g(j); δi) so that G̃^j_0(j) = g(j) and G̃^j_1(j) = pj. Since g(j) ∉ Ci, then dist(g(j), C_{i−1}) > 1; and since δi < 1/i < 1, we have G̃^j({j} × [0, 1]) ∩ C_{i−1} = ∅. We now let qj be the right-hand endpoint of the component of (G̃^j)⁻¹(X) containing (j, 0). (We regard {j} × [0, 1] as running from left to right, with (j, 0) on the left.) Since G̃^j is continuous and V is a neighborhood of X, there is a neighborhood of qj in {j} × [0, 1] contained in (G̃^j)⁻¹(V). Let rj be a point in this neighborhood not in (G̃^j)⁻¹(X). Define G^j: {j} × [0, 1] → V by G^j(j, t) = G̃^j(j, t·rj). We now define the map G: J+ × I → V such that G|{j} × [0, 1] = G^j. It follows that G0 = g and G(j, 1) ∉ X, for all j ∈ J+.

To see that G is proper, let C be a compact subset of M. Then let Ci be such that C ⊂ Ci. Since g is proper, the subset K = g⁻¹(C_{i+1}) of J+ is finite. Since G has been constructed so that for j ∉ K, G({j} × [0, 1]) ∩ Ci = ∅, then for j ∉ K, G({j} × [0, 1]) ∩ C = ∅. Thus G⁻¹(C) is compact, G is proper, and the argument is complete.

The following lemma is the analog of Lemma 3.4, just as the previous lemma was the analog of Lemma 3.1. Also, just as was the case for the previous lemma and Lemma 3.1, the following lemma subsumes Lemma 3.4.

3.8 LEMMA. Let M be a PL n-manifold and let X be a closed, connected subset of Int(M) such that X tends to M − X and M is 0-ULC at X. Let U be a closed neighborhood of X in M such that a quasi-special sequence for X in U with the null-homotopy property exists. Then locally finite 2-complexes in M can be pulled into M − X in U.

Proof. The proof is the same as that of Lemma 3.4, except that where in the proof of Lemma 3.4 an appeal to Lemma 3.1 is made, here the appeal is made to Lemma 3.7.

We now state and prove our main theorem.

3.9 THEOREM. Let M be a PL n-manifold, n ≥ 6, and let X be a closed, connected subset of Int(M) such that M is 0-ULC at X and X tends to M − X. Then X is strongly quasi-cellular if and only if X has SUV∞ and satisfies the strong quasi-cellularity criterion.

Proof. The proof of necessity is given by [5, Theorem 4.13]. We will now show the sufficiency of the conditions. For each closed neighborhood U of X, we must find a quasi-cell N such that X ⊂ Int(N) ⊂ N ⊂ Int(U). Let H0 and H1 be the first two manifolds of a quasi-special sequence for X in U with the null-homotopy property. Let R be a triangulation of M with H0 and H1 as subcomplexes.
Let R2 be the 2-skeleton of R. We use H1 as the U of Lemma 3.8 to obtain a proper homotopy G: R2 × [0, 1] → M such that G0 is the inclusion on R2, G1(R2) ⊂ M − X, Gt is the inclusion on (M − H1) ∩ R2 for each t ∈ [0, 1], G((R2 ∩ H1) × [0, 1]) ⊂ H1, and so that there is a map d: R2 → (0, ∞) with dist(G1(p), X) > d(p), for each p ∈ R2. If we let {Aα} be {G({p} × [0, 1])} for all points p in 2-complexes in M and all homotopies G pulling these 2-complexes into M − X in H2, we see that we may apply the Infinite Radial Engulfing Theorem (5.2, below) to obtain an engulfing isotopy G′: M × [0, 1] → M such that G′0 is the identity, G′t|R2 ∩ H1 = 1|R2 ∩ H1, R2 ⊂ G′1(M − X), and G′t|cl(M − H1) = 1|cl(M − H1). This last result is achievable since G moves no point which is in cl(M − H1) and moves no point into cl(M − H1) from Int(H1), and since G′ is picked so that points are moved close to the movement of points by G. Now define K to equal R2 ∩ H1 ⊂ G′1(M − X) ∩ G′1(H1), which equals G′1(H1 − X). Let L be the complementary complex of K in H1. Since dim L ≤ n−3, there is a quasi-cell N* in Int(H0) with L ⊂ Int(N*), by the definition of quasi-special sequences. Since G′1(X) does not intersect K, there is an ambient isotopy G″: M × [0, 1] → M, fixed outside of H0, which pushes N* along the join structure from L to K far enough so that G′1(X) ⊂ G″1(Int(N*)). Define N = (G′1)⁻¹∘G″1(N*). Then X ⊂ Int(N) ⊂ N ⊂ Int(H0) ⊂ Int(U), and the proof is complete.

3.10 REMARK. Not every set having SUV∞ is quasi-cellular, as shown by [5, Example 4.19]. However, the following shows that if X is a locally compact, finite-dimensional metric space with SUV∞ (with respect to metric ANRs), then X embeds in some Euclidean space as a strongly quasi-cellular set.

3.11 THEOREM. If X is a closed subset of E^n with SUV∞, then X is strongly quasi-cellular in E^{n+3}, for n ≥ 3.

Proof. Since n+3 ≥ 6, since dim X < n+3 ensures that X tends to E^{n+3} − X and that E^{n+3} is 0-ULC at X, and since SUV∞ is a topological property, so that X has SUV∞ in E^{n+3}, Theorem 3.9 shows that we need only demonstrate that X satisfies the strong quasi-cellularity criterion in order to conclude that X is strongly quasi-cellular. Suppose that U is a closed neighborhood of X in E^{n+3}. Let V be any closed neighborhood of X in E^{n+3} lying in Int(U). Now let g: S^k × J+ → V − X be a map such that there is a proper homotopy G: (S^k × J+) × I → V with G0 = g and G1|S^k × {j} a constant map, for j ∈ J+ and k = 0 or 1. We may suppose, without loss of generality, that G is in general position with respect to E^n. Thus, since dim(G[(S^k × J+) × I]) ≤ 2, we have E^n ∩ G[(S^k × J+) × I] = ∅. Thus g is properly S^k-inessential in U missing X, k = 0 or 1, and the proof is complete.

3.12 COROLLARY. Suppose X is a locally compact, finite-dimensional metric space. Then X has SUV∞ with respect to metric ANRs if and only if X embeds as a strongly quasi-cellular subset of some PL n-manifold, n ≥ 6.

Proof. Since X is a locally compact, finite-dimensional metric space, it embeds in E^n, for some n ≥ 3. Theorem 3.11 thus completes the proof in one direction. [5, Theorem 4.13] completes the proof for the other direction.

We now state the following theorem of R. B. Sher [13, Theorem 3.1] to complete the analogy with the compact case mentioned in the introduction (Shp is the proper shape function).

3.13 THEOREM. Suppose X is a locally compact metrizable space. Then X ∈ SUV∞ if and only if there exists a tree T such that Shp X = Shp T.

4 STRONG QUASI-CELLULARITY AND WEAK QUASI-CELLULARITY.

In Section 3 we presented one McMillan type theorem connecting strong quasi-cellularity with SUV∞ and the strong quasi-cellularity criterion.
Our use of the Infinite Radial Engulfing Theorem required certain technical conditions. Rushing [11] has other infinite engulfing theorems which yield similar theorems with other technical conditions. We state these here. We also give McMillan type theorems (of one direction only) for weak quasi-cellularity.

4.1 DEFINITION. Let M be a PL manifold and U an open subset of M. Then we say that (M, U) is p-connected if and only if for each integer i, 0 ≤ i ≤ p, and each map f: (D^i, ∂D^i) → (M, U) there exists a homotopy F: (D^i, ∂D^i) × I → (M, U) such that for each t ∈ [0, 1], Ft is a map of the pair (D^i, ∂D^i) into the pair (M, U), F0 = f, and F1(D^i) ⊂ U.

4.2 DEFINITION. We call a manifold M uniformly locally p-connected, p-ULC, if given ε > 0, there is a δ > 0 for which every map f: S^p → M such that the diameter of f(S^p) is less than δ is ε-null-homotopic. If M is p-ULC for 0 ≤ p ≤ k, then we say M is ULC^k.

4.3 DEFINITION. Let M be a PL n-manifold, n ≥ 2. A set X ⊂ Int(M) is said to thin down in M if for each locally finite triangulation R of M, the 2-skeleton R2 of R tends to M − X.

4.4 LEMMA. Suppose M is a PL n-manifold, n ≥ 2. Then a set X ⊂ Int(M) tends to M − X if and only if X thins down in M.

Proof. Suppose X tends to M − X and R is any locally finite triangulation of M. We shall show that R2 tends to M − X. Let ε > 0 be given. Then there is a compact set C such that if x is a point of X − C, then dist(x, M − X) < ε. Now let p be a point of R2 − C. If p ∈ X, we have p ∈ X − C, so dist(p, M − X) < ε. If p ∉ X, then p ∈ M − X, so that dist(p, M − X) = 0.

Suppose X thins down in M. Let ε > 0 be given. Let R be a locally finite triangulation of M so that mesh R < ε/2. Let C be a compact set such that if p ∈ R2 − C, then dist(p, M − X) < ε/2. Let C′ be the smallest subcomplex of R containing C and let C″ be the closed star of C′ in R. It is clear that C″ is compact. Let x be a point of X − C″ and let Q be the set {q | q ∈ R2 and dist(x, q) < ε/2}. Now Q is not empty, since there is a simplex s in R with x in the interior (rel s) of s, and since the diameter of s is less than ε/2, then s̄ ∩ R2 ⊂ Q. Also, there is a point p in Q with p ∈ R2 − C′, for if x ∈ Int(s) and s̄ ∩ R2 ⊂ C′, then s must be in C″, but x ∉ C″. Now R2 − C ⊃ R2 − C′, so that if p ∈ R2 − C and dist(p, M − X) < ε/2, we have dist(x, M − X) < ε.

4.5 THEOREM. Let M be a PL n-manifold, n ≥ 5, and let X be a subset of Int(M) which tends to M − X. Suppose that for each open neighborhood U of X there is an open neighborhood V of X lying in U such that V is ULC^2 and V − X is ULC^1. Then X is strongly quasi-cellular if and only if X is closed, has SUV∞ and satisfies the strong quasi-cellularity criterion.

Proof. The proof of necessity is given by [5, Theorem 4.13]. Suppose then that X ⊂ Int(M) is closed, has property SUV∞ in M, and satisfies the strong quasi-cellularity criterion. Let U be any closed neighborhood of X. Let Hi, i = 1, 2, 3, 4, and 5, be connected PL submanifolds of M so that

(1) X ⊂ Int(H_{i+1}) ⊂ H_{i+1} ⊂ Int(Hi) ⊂ Hi ⊂ Int(U), i = 1, 2, 3, or 4,
(2) there exists a proper PL homotopy F^{i+1}: H_{i+1} × [0, 1] → Hi, with F^{i+1}_0 the inclusion, F^{i+1}(H_{i+1} × [0, 1]) ⊂ Int(Hi), and F^{i+1}_1(H_{i+1}) = T_{i+1} a closed tree, i = 1, 2, 3, or 4,
(3) ∂Hi ≠ ∅, i = 1, 2, 3, 4, or 5,
(4) if Y ⊂ H_{i+1} is a polyhedron and dim Y ≤ n−3, then Y is quasi-trivial in Hi, i = 1, 2, 3, or 4,
(5) each map g: S^k → H_{i+1} − X is null-homotopic in Int(Hi) − X, i = 1, 2, 3, or 4, and k = 0 or 1.

This last condition derives from the strong quasi-cellularity criterion [5, Remark 4.12 (f)]. Let R be a locally finite triangulation of M with Hi, i = 1, 2, 3, 4, or 5, as subcomplexes and let R2 be the 2-skeleton of R. By Lemma 4.4 we know that R2 tends to M − X.
Let V be an open subset of Int(H5) such that X ⊂ V, V is ULC^2, and V − X is ULC^1. Using the first of Rushing's infinite engulfing theorems (Theorem 5.4, below), let e′t: V × [0, 1] → V be an ambient isotopy which extends by the identity to all of M, for which e′1(M − X) contains all of R2 except some compact subset contained in e′1(V) = V. Let L be a compact subcomplex of R2 which contains R2 ∩ e′1(X) and such that R2 − L ⊂ e′1(M − X).

We now wish to apply Stallings' Engulfing Theorem with (L, Int(H2) ∩ (R2 − L), e′1(Int(H2) − X), Int(H2)) replacing (P, Q, U, M) (see Theorem 5.3, below, for the variation of the theorem used here). We must show that the pair p = (Int(H2), Int(H2) − X) ≈ (e′1(Int(H2)), e′1(Int(H2) − X)) = (Int(H2), e′1(Int(H2) − X)) is 2-connected.

Since H5 is arc-connected, if f: (D^0, ∂D^0) → (Int(H2), Int(H2) − X) is a map with f(D^0) ⊂ X, then there is an arc in H5 from f(D^0) to a point in H5 − X. Thus p is 0-connected.

Now consider any map f: (D^1, ∂D^1) → (Int(H2), Int(H2) − X). Suppose f(D^1) ⊂ H5. Since f(∂D^1) is a pair of points in H5 − X, there is a map g: D^1 → Int(H4) − X with g|∂D^1 = f|∂D^1. Since F^4_1[f(D^1) ∪ g(D^1)] is a continuum in T4, f is homotopic (rel ∂D^1) in Int(H3) to g. In the general case, cover f⁻¹(X) with a finite number of disjoint arcs s1, s2, s3, ..., sk so close to f⁻¹(X) that each f(si) ⊂ H5. By the special case, f|si is homotopic (rel ∂si) in Int(H3) to a path in Int(H4) − X. Putting these homotopies together, we see that f is homotopic (rel [D^1 − ∪_{i=1}^k Int(si)]) in Int(H2) to a path in Int(H2) − X. So p is 1-connected.

Now consider any map f: (D^2, ∂D^2) → (Int(H2), Int(H2) − X). Suppose f(∂D^2) ⊂ H4 − X and f(D^2) ⊂ H3. Then we have a map g: D^2 → Int(H3) − X with g|∂D^2 = f|∂D^2. Since F^3_1[f(D^2) ∪ g(D^2)] is a continuum in T3, f is homotopic (rel ∂D^2) in Int(H2) to g. In the general case, cover f⁻¹(X) with the interiors of a finite number of punctured polyhedral 2-cells t1, t2, t3, ..., tm which do not meet ∂D^2 and such that f(ti) ⊂ H5, i = 1, 2, ..., m. Let S be a triangulation of D^2 so that the ti are subcomplexes of S. Let A = ∪_{i=1}^m (1-skeleton of ti) and let B = D^2 − ∪_{i=1}^m Int(ti). By the 0-connectivity of H5, we have a homotopy (rel ∂(∪_{i=1}^m ti)) (since f⁻¹(X) ⊂ Int(∪_{i=1}^m ti)) of the 0-skeleton of A in H5 to H5 − X. We use the homotopy extension property for polyhedral pairs [14, Corollary 5, p. 118] to obtain a homotopy from A × [0, 1] into H5 with the restriction to the 0-skeleton agreeing with the original homotopy. We now use the special case of the proof of the 1-connectivity of p (since the 0-skeleton of A × {1} is carried into H5 − X) to obtain a homotopy φ: A × [0, 1] → Int(H3) such that φ0 = f|A, φ1(A) ⊂ Int(H4) − X and φt|A ∩ B = f|A ∩ B (A ∩ B = ∂(∪_{i=1}^m ti)). Using the homotopy extension property again, we extend φ to a homotopy Φ on (∪_{i=1}^m ti) × I with Φ0 = f|∪_{i=1}^m ti, Φ1(A) ⊂ Int(H4) − X and Φt|A ∩ B = f|A ∩ B, t ∈ [0, 1]. We now extend Φ to all of D^2 × I by defining Φt|B = f|B, for each t ∈ [0, 1]. Let r be a 2-simplex of ∪_{i=1}^m ti; then Φ|r is a map from (r, ∂r) to (H3, H4 − X). By applying the special case, we see that Φ is homotopic (rel A ∩ B) in Int(H2) to a map G: D^2 → Int(H2) − X. Thus f is homotopic (rel B, implying rel ∂D^2) to G in Int(H2), and p is 2-connected.

We have, from Stallings' Engulfing Theorem, an ambient isotopy e²: Int(H2) × [0, 1] → H2 and a compact set E ⊂ Int(H2) such that L ⊂ e²1(e′1(Int(H2) − X)) and e²t|(Int(H2) − E) ∪ (Int(H2) ∩ R2 − L) = 1|(Int(H2) − E) ∪ (Int(H2) ∩ R2 − L), so that we may extend e²t by the identity on M − Int(H2). Now define K to be the complex H2 ∩ R2 ⊂ e²1(e′1(Int(H2) − X)). Let J be the complementary complex of K in H2 and let N* be a quasi-cell in Int(H1) with J ⊂ Int(N*). As in the proof of Theorem 3.9, let e³: M × [0, 1] → M be an ambient isotopy, fixed outside of H1, which engulfs e²1(e′1(X)) with e³1(Int(N*)). Define N to be [(e′1)⁻¹ ∘ (e²1)⁻¹ ∘ e³1](N*).
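The proof ends with the definition of N. As a reading aid (my own gloss, not part of the source), the required inclusion X ⊂ Int(N) can be checked directly from the choices just made, since each isotopy level is a homeomorphism of M and therefore carries interiors to interiors:

```latex
% Verification sketch (added gloss, not in the original): the choice of e^3 gives
% e^2_1(e'_1(X)) \subset e^3_1(\mathrm{Int}\,N^*), and homeomorphisms of M preserve interiors.
\[
X \;\subset\; (e'_1)^{-1}(e^2_1)^{-1}\bigl(e^3_1(\mathrm{Int}\,N^*)\bigr)
  \;=\; \mathrm{Int}\bigl[\,(e'_1)^{-1}\circ(e^2_1)^{-1}\circ e^3_1\,(N^*)\,\bigr]
  \;=\; \mathrm{Int}\,N .
\]
```

With this, N is the quasi-cell required for weak and strong quasi-cellularity arguments that cite this proof.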
Prior to proving the third of the theorems about strong quasi-cellularity, we give the definition of a property used in the second of Rushing's infinite engulfing theorems. This property is used to reduce the ULC conditions.

4.6 DEFINITION. Let M be a PL n-manifold, U an open subset of M, and P a PL subspace of dimension k in M. We say that most of P can be pulled through M into U by a short homotopy H: (P − A) × I → M, where cl(A) is a compact subset of P, if (1) H(p, 0) = p, for all p ∈ P − A, (2) H(p, 1) is in U, for all p ∈ P − A, and (3) given ε > 0, there is a compact set B ⊂ P such that diam(H({p} × [0, 1])) < ε, for p ∈ P − B.

4.7 THEOREM. Let M be a PL n-manifold, n ≥ 5, and let X be a subset of Int(M) which tends to M − X. Suppose that for each open neighborhood U of X there is an open neighborhood V of X lying in U such that V is ULC^{6−n}, V − X is ULC^{5−n}, and for each locally finite triangulation R of M, most of the 2-skeleton R2 ∩ V can be pulled through V into V − X by a short homotopy. Then X is strongly quasi-cellular if and only if X is closed, has SUV∞ and satisfies the strong quasi-cellularity criterion.

Proof. Repeat the proof of Theorem 4.5, replacing the use of the first of Rushing's infinite engulfing theorems with the second (5.5, below).

4.8 REMARK. If n ≥ 7 in Theorem 4.7, then the ULC conditions disappear. The resulting theorem is very much like Theorem 3.9. The 0-ULC at X condition for M is replaced by the short homotopy condition. It is possible that the short homotopy can be obtained from SUV∞ and strong quasi-cellularity in somewhat the same manner as the homotopy pulling locally finite 2-complexes into M − X in U is obtained in Lemma 3.8. However, some other condition, such as 0-ULC at X, may be required.

We now state and prove three theorems giving sufficiency conditions for weak quasi-cellularity.

4.9 THEOREM. Let M be a PL n-manifold, n ≥ 6, and let X be a closed, connected subset of Int(M) such that M is 0-ULC at X and X tends to M − X. Then X is weakly quasi-cellular if X satisfies the weak quasi-cellularity criterion.

Proof. By the definition of the weak quasi-cellularity criterion (2.10), there exists a quasi-special sequence {H_{i−1}, Ti, F^i}_{i=1}^∞ for X in M having the null-homotopy property. Then the proof of sufficiency in Theorem 3.9 gives a quasi-cell N with X ⊂ Int(N) ⊂ N ⊂ Int(H0) ⊂ Int(M). We need only show that for each quasi-cell Q containing X in its interior there is a sequence of quasi-cells {Qi}_{i=1}^∞ lying in Q with X ⊂ Int(Q_{i+1}) ⊂ Q_{i+1} ⊂ Int(Qi) and X = ∩_{i=1}^∞ Qi. Since we already have the existence of the quasi-cell N, this will complete the proof.

Write M as the union of a countable number of compact PL n-manifolds, M1 ⊂ Int(M2) ⊂ M2 ⊂ Int(M3) ⊂ ... ⊂ ∪_{i=1}^∞ Mi = M. Let {εi}_{i=1}^∞ be a decreasing sequence of positive numbers converging to zero. There exists a quasi-special sequence for X in Q having the null-homotopy property, so we may choose Q1 ⊂ Int(Q) as N was chosen for M. Inductively, suppose Qj has been defined. Let {H_{i−1}, Ti, F^i}_{i=1}^∞ be a quasi-special sequence for X in Qj with the null-homotopy property. Let Hi have large enough subscript so that Hi ∩ Mj ⊂ N(X; εj), and as before, there exists a quasi-cell Q_{j+1} containing X in its interior with Q_{j+1} ⊂ Int(Hi). Let {Qi}_{i=1}^∞ be the sequence of quasi-cells so defined. It is clear that X ⊂ Int(Q_{i+1}) ⊂ Q_{i+1} ⊂ Int(Qi), for all i. Let p be a point in M − X, and let ε = dist(p, X). Let εi < ε and Mj be such that p ∈ Mj. Then p is not in Q_{i+j}. Thus X = ∩_{i=1}^∞ Qi, and the proof is complete.

4.10 THEOREM. Let M be a PL n-manifold, n ≥ 5, and let X be a closed, connected subset of Int(M) which tends to M − X.
Suppose that for each open neighborhood U of X there is an open neighborhood V of X lying in U such that V is ULC^2 and V − X is ULC^1. Then X is weakly quasi-cellular if X satisfies the weak quasi-cellularity criterion.

Proof. Let {H_{i−1}, Ti, F^i}_{i=1}^∞ be a quasi-special sequence for X in M having the null-homotopy property. The proof of sufficiency in Theorem 4.5 gives a quasi-cell N with X ⊂ Int(N) ⊂ N ⊂ Int(H0) ⊂ Int(M). The argument in the proof of Theorem 4.9 finishes the proof.

4.11 THEOREM. Let M be a PL n-manifold, n ≥ 5, and let X be a closed, connected subset of Int(M) which tends to M − X. Suppose that for each open neighborhood U of X there is an open neighborhood V of X lying in U such that V is ULC^{6−n}, V − X is ULC^{5−n}, and for each locally finite triangulation R of M, most of the 2-skeleton R2 ∩ V can be pulled through V into V − X by a short homotopy. Then X is weakly quasi-cellular if X satisfies the weak quasi-cellularity criterion.

Proof. Repeat the proof of Theorem 4.10, replacing the use of the first of Rushing's infinite engulfing theorems with the second (5.5, below).

4.12 REMARK. If n ≥ 7 in Theorem 4.11, we have a theorem without the ULC conditions, as in Remark 4.8.

It should be noted that some of our results are not as strong as they might be. For instance, the conditions that X tends to M − X, V is ULC^2, and V − X is ULC^1 in Theorems 3.9, 4.5, 4.7, 4.9, 4.10, and 4.11 are conditions on the metric of the ambient manifold M. Since quasi-cellularity is a topological concept, it seems natural to suspect that these are conditions which allow us to prove our results in this particular manner, not necessarily conditions required for these results. Obviously, the theorems can be strengthened by weakening the hypotheses to require only that there be an ambient isotopy of the ambient manifold so that the image of X under the ambient isotopy has the required conditions. This is a somewhat clumsy requirement, so it is stated here as a remark, rather than being included in each theorem.

For example, consider X in Figure 1, illustrated in ℝ³, and its analogs in ℝ^n. The bumps have constant height, and their indentations have constant depth (x3-direction) and constant width (x2-direction), but decreasing length (x1-direction). X does not tend to ℝ³ − X and there is no V that is 0-ULC. If we alternate the bumps with bumps that have decreasing widths, there is no V that is 1-ULC with V − X either 0-ULC or 1-ULC. Despite these problems, there is an ambient isotopy between either version of X and the set X′ shown in Figure 2. X′ does tend to ℝ³ − X′ and, for each closed neighborhood U of X′, there is a closed neighborhood V of X′ in U such that V is ULC^2 and V − X′ is ULC^0. Clearly both X and X′ are strongly quasi-cellular, yet only for analogs of X′ may we apply any of our results to show this.

Figure 1. Strongly quasi-cellular, but not provable by these theorems
Figure 2. Strongly quasi-cellular, with ambient isotopy to Figure 1

There are other questions that could lead to further research. For example, the number of ends of a quasi-cell is well defined and equals the number of ends of a tree of which it is a regular neighborhood. Is the number of ends of a quasi-cellular set well defined? If so, is there a relation between the number of ends of the defining quasi-cells and the number of ends of the quasi-cellular set? Figures 3 and 4 give examples of strongly quasi-cellular sets and quasi-cells that might appear in sequences used to define the sets. The symbol ei(Y) refers to the ith end of Y, for whatever space Y is.
Figure 3. Strongly quasi-cellular set and containing quasi-cell
Figure 4. A different strongly quasi-cellular set and containing quasi-cell

A question arises concerning compactification. If a quasi-cellular set is compactified by adding a point at each end, the same compactification of the quasi-cells defining the set gives rise to a "pinched" cell definition of the set. When can these cells be "unpinched" so that the set is cellular? On the other hand, suppose a set is cellular and certain points are removed. If the resulting set and the cells minus points are embedded as closed subsets of some manifold, when can quasi-cellularity be achieved? Notice, in Figures 5 and 6, that the cells minus points are neighborhoods of the ends, not necessarily quasi-cells.

Figure 5. A cellular set and containing cell
Figure 6. A cellular set minus a point and containing neighborhood

In this example, X is cellular in S² and N is a cell in S² containing X in its interior. Removing an endpoint of X yields X′ as a closed subset of ℝ² and N′ as a neighborhood of infinity containing X′ in its interior.

5 APPENDIX OF ENGULFING THEOREMS.

Our first engulfing theorem is the Infinite Radial Engulfing Theorem. Bing defines radial engulfing and proves a series of theorems [2]. He also mentions the possibility of infinite engulfing and points out techniques which might be useful [2, Modification 5, p. 7]. Our theorem is a generalization of his radial engulfing in codimension four from Section 3 of [2]. We first give a modification of Definition 3.3 which increases the generality of the Infinite Radial Engulfing Theorem in some situations. Our use here of the theorem does not require this generality, however. The proof of our theorem and many ancillary lemmas and definitions are found in [6].

5.1 DEFINITION. Suppose M is a PL n-manifold, U is an open subset of M, and {Aα} is a collection of sets in M. We say locally finite k-complexes in M^n can be pulled into U along {Aα} in M if, for each closed PL subspace P^k of M and closed set Q ⊂ P^k such that Q ⊂ U, there is a proper homotopy H: P × [0, 1] → M such that H0 is the inclusion, H1(P) ⊂ U, Ht is the inclusion on Q, for each p ∈ P, H({p} × [0, 1]) lies in an element of {Aα}, and there is a map d: P → (0, ∞) with dist(H1(p), M − U) > d(p), for each p ∈ P.

5.2 THEOREM. Suppose U is an open subset of a PL n-manifold M, P is a closed subspace of M, Q is a closed PL subspace of P lying in U, and R = cl(P − Q) is r-dimensional, r ≤ n−4. If {Aα} is a collection of subsets of M such that locally finite r-complexes in M can be pulled into U along {Aα}, then for each map ε: R → (0, ∞) there exists an engulfing isotopy G: M × [0, 1] → M such that G0 = 1, Gt|Q = 1|Q, R ⊂ G1(U), and there is a function z: M → (0, ∞), depending on ε, such that for each y ∈ M there exist r+1 or fewer elements of {Aα} such that G({y} × [0, 1]) lies in the z(y)-neighborhood of the union of these r+1 elements.

Our next three engulfing theorems are all to be found in Rushing [11]. They are stated for PL manifolds without boundary; however, if a PL manifold has boundary, its interior is a manifold without boundary, and the conclusions of these theorems allow the engulfing isotopies to be extended by the identity on the boundary of such a manifold. The first of these theorems is Rushing's version of Stallings' Engulfing Theorem.

5.3 THEOREM. Let M be a PL n-manifold without boundary, U an open subset of M, P^k a finite polyhedron in M of dimension k ≤ n−3 and Q^q ⊂ U a (possibly infinite) polyhedron of dimension q ≤ n−3 such that (cl(Q) − Q) ∩ P = ∅. Let (M, U) be k-connected.
Then there is a compact set E ⊂ M and an ambient isotopy e_t of M such that P ⊂ e_1(U) and e_t|(M − E) ∪ Q = 1|(M − E) ∪ Q.

In the statements of Rushing's Infinite Engulfing Theorems, M^n is a connected PL n-manifold without boundary, U is an open set in M, P^k is an infinite polyhedron of dimension k ≤ n−3 which is contained in M (P is not necessarily closed in M), and Q^q is a (possibly infinite) polyhedron of dimension q ≤ n−3 such that (cl(Q) − Q) ∩ P = ∅ and (cl(P) − P) ∩ Q = ∅. The symbol ∞̃ denotes a closed subset of M − (P ∪ Q) containing (cl(P) − P) ∪ (cl(Q) − Q). Our next theorem appears in [11] as the corollary to the Infinite Engulfing Theorem 1.

5.4 THEOREM. Suppose that M − ∞̃ is ULC^k, U − ∞̃ is ULC^{k−1} and P tends to U. Then most of P^k can be engulfed by U staying fixed on Q, in the following sense: given a compact set C ⊂ P and ε > 0, there exists an ambient isotopy e_t of M^n such that e_t|(M − N(P − C; ε)) ∪ Q = 1|(M − N(P − C; ε)) ∪ Q and such that e_1(U) contains all of P except some compact subset. Furthermore, for each δ > 0, there exists a compact subset K ⊂ M − ∞̃ such that e_t|M − K is a δ-isotopy.

The last theorem is Rushing's Infinite Engulfing Theorem 1 of [11].

5.5 THEOREM. Suppose that M − ∞̃ is ULC^{max(k,q)+k−n+2} and U − ∞̃ is ULC^{max(k,q)+k−n+2}. Also suppose that most of P^k can be pulled through M − ∞̃ into U − ∞̃ by a short homotopy. Then most of P^k can be engulfed by U in the same sense as in Theorem 5.4.

CONTINUATION: "More about Property SUV∞ and Strong Quasi-Cellularity."

REFERENCES

1. Ball, B. J. and R. B. Sher, "A Theory of Proper Shape for Locally Compact Metric Spaces," Fund. Math. 86 (1974), 164-192.
2. Bing, R. H., "Radial Engulfing," in Conference on the Topology of Manifolds, Prindle, Weber, and Schmidt (Boston), 1968, 1-18.
3. Borsuk, K., "Fundamental Retracts and Extensions of Fundamental Sequences," Fund. Math. 64 (1969), 55-85.
4. Dugundji, J., Topology, Allyn and Bacon, Inc. (Boston), 1966.
5. Hartley, D. S., III, "Fundamentals of Quasi-Cellularity," unpublished.
6. _____________, Quasi-Cellularity in Manifolds, Dissertation, University of Georgia, Athens, GA, 1973 (Ref # A 515428, Dec 27 1973, University Microfilms, 300 North Zeeb Rd, Ann Arbor, MI 48106).
7. Hudson, J. F. P., Piecewise Linear Topology, W. A. Benjamin, Inc. (New York), 1969.
8. McMillan, D. R., Jr., "A Criterion for Cellularity in a Manifold," Ann. of Math. (2) 79 (1964), 327-337.
9. ___________, "UV Properties and Related Topics," mimeographed notes.
10. Mardesic, S., "Retracts in Shape Theory," Glasnik Mat. Ser. III 6 (26) (1971), 153-163.
11. Rushing, T. B., "Infinite Engulfing," preprint. Possibly published as "A summation of results of infinite engulfing," Proceedings of the University of Oklahoma Topology Conference Dedicated to Robert Lee Moore (Norman, Okla., 1972), Univ. of Oklahoma, Norman, Oklahoma, 1972, 284-293.
12. Scott, A., "Infinite Regular Neighborhoods," J. London Math. Soc. 42 (1967), 245-253.
13. Sher, R. B., "Property SUV∞ and Proper Shape Theory," Trans. Amer. Math. Soc. 190 (1974), 345-356.
14. Spanier, E. H., Algebraic Topology, McGraw-Hill Book Company (San Francisco), 1966.
15. Stallings, J., "The Piecewise-Linear Structure of Euclidean Space," Proc. Cambridge Philos. Soc. 58 (1962), 481-488.
My Math Forum: 0^0 (Pre-Algebra and Basic Algebra)

November 11th, 2017, 07:52 AM, #1, Senior Member (Joined: Nov 2011, Posts: 173):

Why is 0^0 not defined?

November 11th, 2017, 09:50 AM, #2, Math Team (Joined: Dec 2013, From: Colombia, Math Focus: Mainly analysis and algebra):

Because $0^x=0$ for all $x > 0$ and $x^0=1$ for all $x \ne 0$, there is no way to pick an answer. In reality, if a function heads towards $0^0$, it can have any one of a range of values in the limit.
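As a numerical illustration of that last point, here is a small sketch of my own (not from the thread; the function names are mine): several curves whose base and exponent both head to 0 settle at different values.

```python
# Illustrative sketch (not from the thread): approach the indeterminate form 0^0
# along different curves f(x)^g(x) with f(x) -> 0 and g(x) -> 0 as x -> 0+.
import math

def samples(x):
    return {
        "x^x": x ** x,                              # tends to 1
        "0^x": 0.0 ** x,                            # equals 0 for every x > 0
        "x^(-1/ln x)": x ** (-1.0 / math.log(x)),   # equals 1/e exactly for 0 < x < 1
        "x^(-2/ln x)": x ** (-2.0 / math.log(x)),   # equals 1/e^2 exactly
    }

for x in (1e-2, 1e-4, 1e-8):
    print(x, samples(x))
```

Each row keeps both the base and the exponent heading toward 0, yet the four columns settle at 1, 0, 1/e, and 1/e², which is exactly why no single value can be assigned to 0^0 by continuity.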
Dependent Events

Antonio is responsible for washing the dishes. There are four bowls, three plates, and six cups in the sink. If Antonio washes a bowl first, what is the probability that he will grab another bowl next?

In this concept, you will learn how to calculate the probability of two dependent events occurring.

Guidance

The Probability Rule gives the probability that two independent events, A and B, will both occur and is written as the formula: P(A and B) = P(A) × P(B). When the events are dependent, the probability of the second event must be recalculated to account for the outcome of the first.

If you pull two socks from a bag containing 3 red socks and 3 blue socks, what is the probability of both socks being red?

The probability of the first sock being red is: 3/6 = 1/2

What about the second sock? Having removed the first sock from the bag, we have changed the number of red socks and the total number of socks in the bag. So now, instead of there being 3 red socks out of 6 total socks, there are only 2 red socks left out of 5 total socks: 2/5

The probability of two red socks being pulled is: 1/2 × 2/5 = 2/10 = 1/5

Let's consider another example. A stack of 8 cards has 4 Jacks and 4 Queens. What is the probability of picking 2 Jacks from the stack at random?

Use the Probability Rule to find the probability of the two dependent events.

First, determine the probability of the 1st Jack: 4/8 = 1/2

Next, determine the probability of the 2nd Jack. Once the 1st Jack is taken, the probability of the 2nd Jack is only 3 of 7, because there are only 3 Jacks left out of 7 total cards: 3/7

Then, substitute the values into the Probability Rule formula: 1/2 × 3/7 = 3/14

The answer is that the probability of picking two Jacks is 3/14.

Guided Practice

There are five girls and eight boys in a group. Mrs. Marsh is going to choose two students randomly to lead the line. What is the probability that she will choose two boys?

First, figure out the total number of students: 5 + 8 = 13

Next, write a ratio to show the probability of her picking the first boy and then the second boy:

probability of picking the first boy = 8/13
probability of picking the second boy = 7/12

Then, multiply the two values together: 8/13 × 7/12 = 56/156 = 14/39

The answer is that the probability of Mrs. Marsh choosing two boys is 14/39.

Examples

A bag has three red marbles, two yellow marbles and four blue marbles.

Example 1

What is the probability of pulling two red marbles out of the bag?

First, figure out the total number of marbles: 3 + 2 + 4 = 9

Next, write a ratio to show the probability of pulling the first red marble and the second red marble out of the bag:

the probability of pulling the first red marble: 3/9
the probability of pulling the second red marble: 2/8

Then, multiply the two values together: 3/9 × 2/8 = 6/72 = 1/12

The answer is that the probability of choosing two red marbles is 1/12.

Example 2

What is the probability of pulling out two blue marbles?

First, figure out the total number of marbles: 9

Next, write a ratio to show the probability of pulling the first blue marble and the second blue marble out of the bag:

the probability of pulling the first blue marble: 4/9
the probability of pulling the second blue marble: 3/8

Then, multiply the two values together: 4/9 × 3/8 = 12/72 = 1/6

The answer is that the probability of choosing two blue marbles is 1/6.

Example 3

What is the probability of pulling out two yellow marbles?

First, figure out the total number of marbles: 9

Next, write a ratio to show the probability of pulling the first yellow marble and the second yellow marble out of the bag:

the probability of pulling the first yellow marble: 2/9
the probability of pulling the second yellow marble: 1/8

Then, multiply the two values together: 2/9 × 1/8 = 2/72 = 1/36

The answer is that the probability of choosing two yellow marbles is 1/36.

Remember Antonio washing the dishes? What is the probability that he will wash a bowl first and then grab another bowl next when there are four bowls, three plates, and six cups in the sink?

First, figure out the total number of dishes in the sink: 4 + 3 + 6 = 13

Next, write a ratio to show the probability of picking the first bowl and then the second bowl:

probability of picking the first bowl = 4/13
probability of picking the second bowl = 3/12

Then, multiply the two values together: 4/13 × 3/12 = 12/156 = 1/13

The answer is that the probability of Antonio choosing two bowls is 1/13.

Explore More

Use this description to figure out the probability of each dependent event. A box has eight kittens in it: three calico, two white and three black.

1. What is the probability of choosing two white kittens?
2. What is the probability of choosing three black kittens?
3. What is the probability of choosing two black kittens?
4. What is the probability of choosing two calico kittens?
5. What is the probability of choosing three calico kittens?
6. What is the probability of choosing one white and then one black kitten?
7. What is the probability of choosing two calico and one black?
8. What is the probability of choosing one calico and one white kitten?
9. What is the probability of choosing a striped kitten?
10. What is the probability of choosing one white kitten and two black kittens?

Solve each problem.

11. A clothes dryer contains 5 black socks and 1 white sock. What is the probability of taking two socks, one after another, out of the dryer and having them both be black?
12. A clothes dryer contains 4 black socks and 2 white socks. What is the probability of taking two socks out of the dryer and having them both be black?
13. A clothes dryer contains 3 black socks and 3 white socks. What is the probability of taking two socks out of the dryer and having them both be black?
14. A clothes dryer contains 3 black socks and 3 white socks. What is the probability of taking two socks out of the dryer and having the first one be black and the second one be white?
15. Bob bought two theater box tickets. The computer randomly assigns the tickets to one of 5 seats: end seat A, middle seat B, middle seat C, middle seat D, or end seat E. What is the probability that the first ticket is A and the second ticket is seat B?

Vocabulary

Dependent Events: In probability situations, dependent events are events where one outcome impacts the probability of the other.
Independent Events: Two events are independent if the occurrence of one event does not impact the probability of the other event.
Probability Rule: The probability of two independent events A and B both occurring is P(A and B) = P(A) · P(B).
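The worked answers above can also be checked with a short script. This is my own illustration, not part of the lesson, and the function name is mine; it applies P(A and B) = P(A) · P(B after A) with exact fractions.

```python
# Small sketch (added illustration): probability that two successive draws,
# made without replacement, are both of the "favorable" kind.
from fractions import Fraction

def p_two_in_a_row(favorable, total):
    """P(first favorable) * P(second favorable after one has been removed)."""
    first = Fraction(favorable, total)
    second = Fraction(favorable - 1, total - 1)
    return first * second

print(p_two_in_a_row(3, 6))    # two red socks from 3 red in 6 socks:      1/5
print(p_two_in_a_row(4, 8))    # two Jacks from 4 Jacks in 8 cards:        3/14
print(p_two_in_a_row(8, 13))   # two boys from 8 boys in 13 students:      14/39
print(p_two_in_a_row(4, 13))   # two bowls from 4 bowls in 13 dishes:      1/13
```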
# Array of strings as text for nodes in Tikz

I have a rather large number of nodes n1, n2, ..., n41, all of them containing expressions like \textcolor{c}{blabla}, \large and so on. It seems that when storing all these texts in an array of the form described in "List of strings for tikzpicture", namely

    \documentclass{article}
    \usepackage{tikz}
    \begin{document}
    \newcommand\johnlist{{"ala","bla","cla"}}
    \begin{tikzpicture}
    \node[shape=circle,draw=black] at (0,0) {\pgfmathparse{\johnlist[1]}\pgfmathresult};
    \end{tikzpicture}
    \end{document}

this does not work if, for example, "bla" is replaced by \Large bla. So the question is how to create lists or arrays of TeX-compatible strings?

- Is \large\pgfmathparse acceptable? – Sigur Jan 29 '18 at 13:19
- Instead of \Large bla you should use the font=\Large key on the node. – percusse Jan 29 '18 at 14:51

My impression is that the handling of arrays in PGF is quite fragile.

    \documentclass{article}
    \usepackage{tikz}
    \usepackage{xparse}

    \ExplSyntaxOn
    \NewDocumentCommand{\setlist}{mm}
     {
      \clist_clear_new:c { l_jens_#1_array_clist }
      \clist_set:cn { l_jens_#1_array_clist } { #2 }
     }
    \NewExpandableDocumentCommand{\listitem}{mm}
     {
      \clist_item:cn { l_jens_#1_array_clist } { #2 }
     }
    \ExplSyntaxOff

    \begin{document}

    \setlist{john}{\textbf{ala},\Huge bla,\large cla}

    \begin{tikzpicture}
    \node[shape=circle,draw=black] at (0,0) {\listitem{john}{1}};
    \node[shape=circle,draw=black] at (1,0) {\listitem{john}{2}};
    \node[shape=circle,draw=black] at (2,0) {\listitem{john}{3}};
    \end{tikzpicture}

    \bigskip

    \begin{tikzpicture}
    \foreach \x in { 0,1,2 }
     {
      \node[shape=circle,draw=black] at (\x,0) {\listitem{john}{\x+1}};
     }
    \end{tikzpicture}

    \end{document}

Note that you can do arithmetic operations on the second argument to \listitem. The indexing starts from 1.

Something like this?

    \documentclass{article}
    \usepackage{tikz}
    \newcommand{\green}[1]{\textcolor{green}{#1}}
    \begin{document}
    \newcommand\johnlist{{"\noexpand\Huge ala","\noexpand\green{{bla}}","\noexpand\bfseries\noexpand\Large cla"}}
    \begin{tikzpicture}
    \foreach \i in {0,1,2}
     {
      \node[shape=circle,draw=black] at (\i,0) {\pgfmathparse{\johnlist[\i]}\pgfmathresult};
     }
    \end{tikzpicture}
    \end{document}

- This works, but it seems that 1) \noexpand\textbf{bla} gives no bold text and 2) \noexpand\textcolor{green}{bla} gives error messages. – Jens Schwaiger Jan 30 '18 at 17:07
- @JensSchwaiger I modified the answer. And sorry, your point 2) did not show. (This happened already before and is really annoying.) Anyway, sorry for my comment. – user121799 Jan 30 '18 at 17:24
- I see: lower-level commands will do it. – Jens Schwaiger Jan 31 '18 at 5:14
- @JensSchwaiger No, the difference is the brackets. \noexpand\textbf{{...}} also works. You can either use a command that does not require brackets, such as \bfseries (with syntax {\bfseries ...}), or commands like \textbf, but then you have to put two sets of braces, \textbf{{...}} instead of \textbf{...}. This is because the \noexpand makes TeX skip over the next command. – user121799 Jan 31 '18 at 5:22

You could hack the random list stuff, although specifying items requires braces rather than commas, and indexing starts at 1:

    \documentclass[tikz,border=5]{standalone}
    \def\pgfmathrandomlistitem#1#2{\pgfmathparse{int(#2)}%
      \csname pgfmath@randomlist@#1@\pgfmathresult\endcsname}
    \pgfmathdeclarerandomlist{list1}{%
      {\tiny\textcolor{red}{ala}}
      {\small\textcolor{green}{bla}}
      {\large\textcolor{blue}{cla}}
    }
    \begin{document}
    \begin{tikzpicture}
    \foreach \i in {0,...,2}
      \node [circle, draw] at (0, \i) {\pgfmathrandomlistitem{list1}{\i+1}};
    \end{tikzpicture}
    \end{document}
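A further minimal alternative, offered as my own sketch rather than one of the posted answers: if random access by a computed index is not required, TikZ's \foreach can carry the styled texts directly as value pairs, avoiding \pgfmathparse and its string handling altogether.

```latex
\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
% Pair each position with its (arbitrarily formatted) node text.
\foreach \i/\txt in {0/{\bfseries ala}, 1/{\Huge bla}, 2/{\large cla}}
  \node[shape=circle,draw=black] at (\i,0) {\txt};
\end{tikzpicture}
\end{document}
```

Each \txt is expanded inside its own node, so font switches such as \bfseries or \Huge stay local to that node.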
## Note on Concept and Types of Elasticity of Demand • Note • Things to remember ### CONCEPT OF ELASTICITY OF DEMAND The elasticity of demand is a measure of a degree of responsiveness of quantity of a product to the change in its determinants. If the demand is more elastic, then a small change in price will cause a large change in quantity consumed. If the demand is less elastic, then it will take large changes in price to make a change in quantity consumed. The concept of elasticity of demand shows how much or to what rate the quantity demanded of a commodity will change as a result of a change in the price. According to K.E. Boulding, "The elasticity of demand may be defined as the percentage change in the quantity demanded which would result from one percent change in its price". According to Prof. Meyers, "Elasticity of demand is a measure of the relative change in the amount purchased in response to any change in price or a given demand curve". According to Lipsey, "Elasticity of demand may be defined as the ratio of the percentage change in demand to the percentage change in price". According to Mrs. John Robbins, "The elasticity of demand at any price or at any output is the proportional change to the amount purchased response to a small change in price, divided by the proportional changes of price". In brief, the elasticity of demand is defined as the proportionate change in quantity demanded divided by the proportionate change in its determinants like price, income, etc. Symbolically, Elasticity of demand (Ed) = {percentage change in quantity demand / percentage change in determinants of demand ### TYPES OF ELASTICITY OF DEMAND Elasticity of demand has been divided into three parts: 1. Price Elasticity of Demand (Ep) 2. IncomeElasticity of Demand (Ey) 3. CrossElasticity of Demand (Exy) #### Price Elasticity of Demand (EP) When the price of goods changes, its demand also changes. Price elasticity of demand measures by how much quantity demand of goods changes with a given change in the price of it. So, the price elasticity of demand is the measure of the responsiveness of quantity demanded of a product to the change in its price, being other things constant. The term other things refers to the income of the consumer, price of related goods, etc. The price of elasticity of demand is symbolized by the letter 'Ep' and it is written as: Ep = percentage change in quantity demand/ percentage change in price = (ΔQ / ΔP)*(P / Q) Where, Ep = Price elasticity of demand ΔQ = Change in quantity demand ΔP = Change in price Q = Initial quantity demand P = Initial price DEGREE / TYPES OF PRICE ELASTICITY OF DEMAND Price elasticity of demand can be discussed under the following five types: i) Perfectly Elastic Demand (Ep =∞) If very small changes or negligible change in the price of a good lead to an infinite change in quantity demanded that good, then the demand is known as perfectly elastic demand. In this type of demand, the value of price elasticity of demand reaches infinite. The demand curve indicates the change in price is insignificant; however the change in quantity demanded is infinite. In the given figure, the price is measured in Y-axis and quantity demanded is measured along the X-axis. The point 'P' is the price where the consumer can buy any quantity of demand like Q1, Q2, Q3 and so on. Hence, DD is the perfectly elastic demand curve sloping upward. 
ii) Perfectly Inelastic Demand (Ep = 0)

If the quantity demanded is totally unresponsive to a change in the price of a good, the demand is known as perfectly inelastic demand. In this type of demand, whatever the change in price, the quantity demanded remains unchanged. This type of elasticity is found in the case of basic necessities such as salt, medicine, etc. The numerical value of elasticity is therefore 0. In the given figure, price is measured on the Y-axis and quantity demanded is measured along the X-axis. QD is the perfectly inelastic demand curve, which remains constant even as the price of the commodity increases from P1 to P2 to P3.

iii) Relatively Elastic Demand (Ep > 1)

If the percentage change in demand is greater than the percentage change in the price of a good, the demand is known as relatively elastic demand. A given percentage change in price leads to a more than proportionate change in quantity demanded, so the absolute value of price elasticity of demand is greater than unity. In the given figure, price is measured on the Y-axis and quantity demanded is measured along the X-axis. The curve is relatively flat, which shows that demand is more elastic: the small fall in price from P2 to P1 has a large effect on quantity demanded, from Q1 to Q2, i.e. the percentage change in demand is more than the percentage change in price.

iv) Relatively Inelastic Demand (Ep < 1)

If the percentage change in demand is less than the percentage change in the price of a good, the demand is known as relatively inelastic demand. A one percent change in price leads to a less than one percent change in quantity demanded, so the absolute value of price elasticity of demand is less than unity. In the given figure, price is measured on the Y-axis and quantity demanded is measured along the X-axis. There is a large difference in price from P1 to P2 but only a small difference in quantity demanded from Q1 to Q2; that is, the quantity demanded changes only slightly even when the price changes by a large amount. The demand curve DD1 in the figure is correspondingly steep.

v) Unitary Elastic Demand (Ep = 1)

If the percentage change in demand is equal to the percentage change in the price of a good, the demand is known as unitary elastic demand. In this case the percentage change in price equals the percentage change in quantity demanded, so the absolute value of the elasticity of demand is exactly 1. In the given figure, price is measured on the Y-axis and quantity demanded is measured along the X-axis. Starting from the initial price (P1) and quantity demanded (Q1), a change in price from P1 to P2 results in an equal change in quantity demanded from Q1 to Q2. DD1 is the unitary elastic demand curve, sloping smoothly downwards to the right.

(Karna, Khanal, and Chaulagain) (Khanal, Khatiwada, and Thapa) (Jha, Bhusal and Bista)

Bibliography

Jha, P.K., et al. Economics II. Kalimati, Kathmandu: Dreamland Publication, 2011.
Karna, Dr. Surendra Labh, Bhawani Prasad Khanal and Neelam Prasad Chaulagain. Economics. Kathmandu: Jupiter Publisher and Distributors Pvt. Ltd, 2070.
Khanal, Dr. Rajesh Keshar, et al. Economics II. Kathmandu: Januka Publication Pvt. Ltd., 2013.

Types and subtypes of elasticity of demand

Price Elasticity of Demand (Ep)
1. Perfectly Elastic Demand (Ep = ∞)
2. Perfectly Inelastic Demand (Ep = 0)
3. Relatively Elastic Demand (Ep > 1)
4. Relatively Inelastic Demand (Ep < 1)
5. Unitary Elastic Demand (Ep = 1)

Income Elasticity of Demand (Ey)
1. Zero Income Elasticity of Demand (Ey = 0)
2. Positive Income Elasticity (Ey > 0)
3. Negative Income Elasticity (Ey < 0)

Cross Elasticity of Demand (Exy)
1. Positive Cross Elasticity (Exy > 0)
2. Negative Cross Elasticity (Exy < 0)
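To make the price-elasticity formula Ep = (ΔQ/ΔP)*(P/Q) mentioned above concrete, here is a minimal numerical sketch; the function name and the figures are mine, not from the note.

```python
def price_elasticity(p0, q0, p1, q1):
    """Point price elasticity of demand: (change in Q / change in P) * (P0 / Q0)."""
    dq, dp = q1 - q0, p1 - p0
    return (dq / dp) * (p0 / q0)

# Hypothetical numbers: price rises from 10 to 12 and quantity demanded falls from 100 to 80.
ep = price_elasticity(p0=10, q0=100, p1=12, q1=80)
print(ep)                                              # -1.0, i.e. |Ep| = 1 -> unitary elastic
print("elastic" if abs(ep) > 1 else "inelastic or unitary")
```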
# Category Archives: Foundations of Mathematics

## Are Real Numbers Real?

Undoubtedly, real numbers are the most fundamental things to describe literally everything we know about our physical world, so questioning if real numbers are real may appear to be as silly as questioning if our world is real. But seriously, … Continue reading

## Can $1+1$ be $1$?

$1+1=2$ is often quoted by laypeople (in mathematics) as an epitome of the absolute truth. Those who know a bit of mathematics know that that is not the case. For example, there is a number system where $1+1=0$. It is … Continue reading

## $1+2+3+4+\cdots=-\frac{1}{12}$?

No, folks! I am not drunk, nor am I pot-headed. Yet, I am about to discuss the crazy identity $$1+2+3+4+\cdots=-\frac{1}{12}.$$ No, I am not joking either. This is actually pretty serious mathematics and is also pretty serious stuff even to … Continue reading
# Solve the given inequality for real $x$: $3x-7>5x-1$.

Hint: We solve for $x$ much as we would for an equation, but keep the inequality in mind: add or subtract the same variables and constants on both sides rather than simply transposing terms, and remember that multiplying or dividing by a negative number reverses the inequality sign.

Here, we have the given inequality of the form
$\Rightarrow 3x-7>5x-1...\text{ }\left( 1 \right)$
In the above inequality (1), we could simply transpose the RHS terms to the LHS, but to keep the steps transparent we instead add or subtract the same quantity on both sides so that terms cancel on one side, i.e.,
$\Rightarrow 3x-7>5x-1$
By adding $1$ on both sides, we get
\begin{align} & \Rightarrow 3x-7>5x-1 \\ & \Rightarrow 3x-7+1>5x-1+1 \\ & \Rightarrow 3x-6>5x+0 \\ & \Rightarrow 3x-6>5x \\ \end{align}
Now, for the variable term, we can subtract $5x$ on both sides, and we get
\begin{align} & \Rightarrow 3x-6>5x \\ & \Rightarrow 3x-6-5x>5x-5x \\ & \Rightarrow -2x-6>0 \\ \end{align}
Now, since the variable has a negative sign, we multiply both sides by $\left( -1 \right)$, i.e.,
\begin{align} & \Rightarrow -2x-6>0 \\ & \Rightarrow -\left( 2x+6 \right)>0 \\ & \Rightarrow -\left( 2x+6 \right)\left( -1 \right)<\left( 0 \right)\left( -1 \right) \\ \end{align}
Here, in the last step, the inequality sign is reversed by the multiplication by $\left( -1 \right)$ on both sides, thus
\begin{align} & \Rightarrow -\left( 2x+6 \right)\left( -1 \right)<\left( 0 \right)\left( -1 \right) \\ & \Rightarrow 2x+6<0 \\ \end{align}
Subtracting $6$ on both sides, we get
\begin{align} & \Rightarrow 2x+6<0 \\ & \Rightarrow 2x+6-6<0-6 \\ & \Rightarrow 2x+0<-6 \\ & \Rightarrow 2x<-6 \\ \end{align}
Now, dividing by $2$ on both sides, we get
\begin{align} & \Rightarrow 2x<-6 \\ & \Rightarrow \dfrac{2x}{2}<\dfrac{-6}{2} \\ & \Rightarrow x<-3,\text{ i}\text{.e}\text{.,} \\ & x\in \left( -\infty ,-3 \right) \\ \end{align}
Hence, the values of real $x$ satisfying the given inequality are $x\in \left( -\infty ,-3 \right)$.

Note: A common mistake in this kind of problem is to transpose terms across the inequality as if it were an ordinary equation; this is risky, especially in the case of multiplication and division by negative quantities, where the inequality sign must be reversed.
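As a quick cross-check of the hand calculation, here is a small sketch using SymPy rather than the manual steps above:

```python
import sympy as sp

# Solve 3x - 7 > 5x - 1 over the reals and confirm the interval found by hand.
x = sp.Symbol('x', real=True)
solution = sp.solveset(3*x - 7 > 5*x - 1, x, domain=sp.S.Reals)
print(solution)                        # Interval.open(-oo, -3), i.e. x < -3
print(-4 in solution, 0 in solution)   # True False
```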
# What was the rate of increase for these automobiles between the two time periods?

In the early 2000s, selected automobiles had an average cost of $15,000. The average cost of those same automobiles is now $18,000. What was the rate of increase for these automobiles between the two time periods?
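The question asks for a single percentage change; a minimal sketch using the figures quoted in the question:

```python
# Rate of increase = (new - old) / old, expressed as a percentage.
old_cost, new_cost = 15_000, 18_000
rate_of_increase = (new_cost - old_cost) / old_cost
print(f"{rate_of_increase:.0%}")   # 20%
```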
# Making a custom angle symbol

I want to create a mathematical symbol as below. This symbol should be exactly the same as the usual \angle command (with the amsfonts package) in terms of dimensions: it should not be wider or taller. It should have the same 'arc' as in \measuredangle (but not its dimensions, since this is taller). And of course, it should be filled with gray: \color{gray} (with the xcolor package). I want to be able to use it in an equation like $$\filledangle ABC$$, identical to how commands like \angle are used. How can I do this? I have a sense that I would perhaps need to define a macro, perhaps using tikz; but I have no clue where to start. I can draw the figure by itself quite easily. Consider the following MWE:

```latex
\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\draw[fill=gray] (0,0) -- +(45:2) arc (45:0:2) -- (0,0);
\draw (0,0)--(45:3) (0,0)--(0:3);
\end{tikzpicture}
\end{document}
```

This produces something similar to what I want, but I am not sure that this is the correct scale used in \angle, nor do I know how to get this into the form of a command in math mode or how to get it to scale correctly.

P.S. This symbol should scale properly and be of the correct relative size to the math beside it; e.g. if I type $$x^{\filledangle ABC}$$ for whatever reason, the symbol should turn smaller accordingly. Thanks in advance!

• Nice problem to play with, but could you share what you have tried so far as an MWE. This will help us to have a head-start rather than starting from scratch!! – Raaja_is_at_topanswers.xyz Aug 20 '18 at 8:45

I hope to have understood your question. By changing the coordinates you can adjust the angle as you wish.

```latex
\documentclass{article}
\usepackage{tikz,xcolor}
\usetikzlibrary{quotes,angles}
\newcommand{\comangle}[1]{%
  \begin{tikzpicture}
    \draw coordinate (a) at (0.3,0);
    \draw coordinate (b) at (0,0);
    \draw coordinate (c) at (.2,0.25);
    \draw (a) -- (b) -- (c)
      pic [draw=black,fill=gray!50,angle radius=.2cm] {angle=a--b--c};
  \end{tikzpicture}%
}
\begin{document}
\end{document}
```

VARIATION

```latex
\documentclass{article}
\usepackage{tikz,xcolor}
\usetikzlibrary{quotes,angles}
\newcommand{\comangle}{\kern.08em%
  \begin{tikzpicture}
    \draw coordinate (a) at (0.15,0);
    \draw coordinate (b) at (0,0);
    \draw coordinate (c) at (.14,0.25);
    \draw (a) -- (b) -- (c)
      pic [draw=black,fill=gray!50,angle radius=.11cm] {angle=a--b--c};
  \end{tikzpicture}%
  \kern.08em%
}
\begin{document}
$\angle A$ $\comangle A$
\end{document}
```

• There were two spurious spaces in the solution. I fixed them for you by ending two lines with a % character. Hope you don't mind. ;-) – Harald Hanche-Olsen Aug 20 '18 at 10:57

• Another suggested improvement: Use \begin{tikzpicture}[x=1em,y=1em] and multiply all coordinates by 3 (more or less). Then it will scale with font size. Possibly, one can argue that using 1ex instead of 1em is better. Or you can just use some other multiple of em or ex and leave the numbers inside alone. You choose. PS. If you want to use this in math mode, possibly some more work is needed, like wrapping the whole construct in \mathop{}. – Harald Hanche-Olsen Aug 20 '18 at 11:00

• It seems the most versatile approach. Bravo! – Steven B.
Segletes Aug 20 '18 at 11:47

• To make it similar dimensions to \angle, you could try something like

```latex
\newcommand{\comangle}{\kern.08em%
  \begin{tikzpicture}
    \draw coordinate (a) at (0.15,0);
    \draw coordinate (b) at (0,0);
    \draw coordinate (c) at (.14,0.25);
    \draw (a) -- (b) -- (c)
      pic [draw=black,fill=gray!50,angle radius=.11cm] {angle=a--b--c};
  \end{tikzpicture}%
  \kern.08em}
```

Then, compare $\angle A$ $\comangle A$ – Steven B. Segletes Aug 20 '18 at 11:55

• @Teyyf Since it is based on Sebastiano's code, I think it should be incorporated into his answer. He deserves the credit! I just added a tweak. – Steven B. Segletes Aug 20 '18 at 13:26
# A complete bipartite graph is a tree only if one of its partite sets has order 1

Let $G=(V,E)$ be a complete bipartite graph of order at least two with partite sets $X$ and $Y$. If $G$ is a tree, then either $|X|=1$ or $|Y|=1$.

## Proof

Assume $G$ is a tree and $|Y|>1$. We prove the statement by contradiction. Suppose $|X|>1$. Pick any pair of distinct vertices $x_1$ and $x_2$ in $X$, and follow any edge from $x_1$ to some vertex $y_1$ in $Y$. There must be an edge $\{y_1,x_2\}\in E(G)$ since $G$ is a complete bipartite graph. Furthermore, there must be another vertex $y_2$ in $Y$ since $|Y|>1$. Because $G$ is a complete bipartite graph, we have $\{x_2,y_2\}\in E(G)$ and $\{y_2,x_1\}\in E(G)$. Now consider the closed walk $x_1 y_1 x_2 y_2 x_1$, which is a cycle on four distinct vertices. But $G$ is a tree, so $G$ is acyclic, and this is a contradiction. Thus our supposition must be false and $|X|=1$. A similar argument shows that $|Y|=1$ if $|X|>1$.

Q.E.D.
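As an empirical sanity check of the statement (not part of the proof), one can test a few complete bipartite graphs with networkx, assuming that library is available:

```python
import networkx as nx

# K_{1,n} is a star and hence a tree; K_{m,n} with m, n >= 2 contains the 4-cycle used in the proof.
for m, n in [(1, 5), (2, 2), (3, 4)]:
    G = nx.complete_bipartite_graph(m, n)
    print(f"K({m},{n}) is a tree: {nx.is_tree(G)}")
# K(1,5) is a tree: True
# K(2,2) is a tree: False
# K(3,4) is a tree: False
```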
# How to create a link to my C: drive in ExpressionEngine 1.6.4

I'm trying to create a link on my webpage to a file on my C: drive, but for some reason ExpressionEngine switches my backslashes into forward slashes. For example, < a href="file:\\C\data\text.doc" > will be turned into < a href="file://C/data/text.doc" >, and therefore the link doesn't work. What to do?

• Why are you trying to link to your c: drive? You need to upload files to your website if you want your visitors to be able to access them. – Adrian Macneil Jan 23 '13 at 7:20
• Because all my visitors will be having access to the same C: drive (we are using the site at the same office). – user878 Jan 23 '13 at 7:53
• On the same computer? C: drive is local. Also, you need to explain where you are putting that link. Are you putting it directly in your template? – Adrian Macneil Jan 23 '13 at 8:46
• I only used C: drive as an example, I mean a common server that we all have access to at my work. The site will be used as an intranet so I will need many links like this. Links would be put mostly on weblog entries. – user878 Jan 23 '13 at 9:05
• Ok I see. So you are adding links in a wysiwyg text field? Are you using the built-in EE Rich Text Editor, or Wygwam, or one of the other plethora of plugins? – Adrian Macneil Jan 23 '13 at 10:49

Even for an intranet, you want the files to reference the internet domain, such as http://office.mydomain.org. You can upload files from the C: drive, but as Adrian mentioned, ExpressionEngine (and probably every content management system, CMS) will expect the files to be stored in the web area of your machine. Pointing to the generic C: drive is pretty much like saying these files are located on every employee's computer who attempts to use them. The C: drive is a local machine reference. You want to reference the server. This can be done by IP address as well, but is best done by a domain name, even if it's internal. If your website is hosted elsewhere from the intranet, just call your ISP and ask them to point a subdomain to the IP address of the machine you are using for the intranet. Anything like office., staff., or internal. in front of your current domain name should work. Then just place the files in a folder within the web server area. So if your structure was like:

public_html
-- /system/
-- /images/
-- /themes/
-- /files/

The reference to the files will then become http://office.mydomain.org/files/{file_name}, or you can use the relative reference, just /files/{file_name}. If you have lots of files, I also recommend grouping them under the files or downloads folder. Just create sub-folders. This will make uploading files using the file manager or something like 'Assets' by Pixel & Tonic easier, when they don't have to try to display 1000s of files at a time.
# ISEE Upper Level Quantitative : How to find the length of the side of an equilateral triangle

## Example Questions

### Example Question #1 : How To Find The Length Of The Side Of An Equilateral Triangle

Which of the following could be the three side lengths of an equilateral triangle?

Explanation:

By definition, an equilateral triangle has three sides of equal length. We can identify the equilateral triangle by converting the given side lengths to the same units and comparing them. We can eliminate the following choices by showing that at least two side lengths differ.

2 yards = 6 feet. Two sides have lengths 6 feet and 7 feet, so we can eliminate this choice.

4 feet = 48 inches. Two sides have lengths 48 inches and 50 inches, so we can eliminate this choice.

5 feet = 60 inches. Two sides have lengths 48 inches and 60 inches, so we can eliminate this choice.

yards =  feet. Two sides have lengths 4 feet and 5 feet, so we can eliminate this choice.

yards =  feet =  inches. All three sides have the same length, making this triangle equilateral. This choice is correct.
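The eliminations above rest on routine unit conversions (1 yard = 3 feet, 1 foot = 12 inches); a small sketch of that bookkeeping, with values chosen from the explanation:

```python
# Convert every side length to inches before comparing, as in the explanation above.
def to_inches(value, unit):
    return value * {"in": 1, "ft": 12, "yd": 36}[unit]

print(to_inches(2, "yd"))   # 72 -> 2 yards = 6 feet = 72 inches
print(to_inches(4, "ft"))   # 48
print(to_inches(5, "ft"))   # 60
```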
# What's the distributional derivative of a Banach space valued almost surely continuous stochastic process? Let • $(\Omega,\mathcal A,\operatorname P)$ be a probability space and $\lambda$ be the Lebesgue measure on $[0,\infty)$ • $(H,\left\|\;\cdot\;\right\|)$ be a Banach space over the field $\mathbb F\in\left\{\mathbb R,\mathbb C\right\}$ • $(X_t)_{t\ge 0}$ be a $H$-valued almost surely continuous stochastic process on $(\Omega,\mathcal A,\operatorname P)$ Let $\omega\in\Omega$ with $X(\omega)\in C^0([0,\infty);H)$. Then $$X_{[a,b]}(\omega)\text{ is compact in }(H,\left\|\;\cdot\;\right\|)\;\;\;\text{for all }0\le a\le b<\infty\;$$ Especially, $X(\omega)\in\mathcal L_{\text{loc}}^p(\lambda;H)$ and hence $$X(\omega)\varphi\in\mathcal L^p(\lambda;H)\;\;\;\text{for all }\varphi\in C_c^\infty([0,\infty);H)$$ for all $p\in [1,\infty)$. Thus, we can view $X(\omega)$ as being a distribution $$C_c^\infty([0,\infty);H)\to H\;,\;\;\;\varphi\mapsto\int X(\omega)\varphi\;{\rm d}\lambda\;.\tag 1$$ Is there anything I'm missing? Are some of my conclusions wrong? And what's the distributional derivative of $X$? [I've read the Wikipedia article about the distributional derivative, but I don't know how we need to translate the definition to the setting described here]. EDIT: The question arose as I saw people talking about the distributional derivative of a (cylindrical) Brownian motion on a Hilbert space without giving a definition of what they mean. As @charlestoncrabb pointed out, my attempt in $(1)$ seems to be wrong, since the product $X(\omega)\varphi$ is not defined. Maybe we need to let $H$ be a Hilbert space with inner product $\langle\;\cdot\;,\;\cdot\;\rangle$ and replace $X(\omega)\varphi$ by $\langle X(\omega),\varphi\rangle$. By the same argumentation as before, the integral would exist. But honestly, I'm only guessing. Until now, I failed to generalize the usual notion to this scenario, but since people are using it, there must be some notion of distributional derivative of $X$. The main issue I see here is that since $X,\varphi$ are $H-$valued, how do we define the product $X \varphi$? It seems you need additional structure to even get past this point (i.e., $H$ needs to be a Banach algebra). Assuming an algebra structure, how are you defining the space $\mathcal{L}^p(\lambda;H)$? In particular, what is the norm of this space? Finally, for a distributional derivative to make sense (by which I mean, align in some way with the usual usage), we need integration by parts to hold: i.e., "$Y$ is the distributional derivative of $X$ if": $$\int \nabla\varphi(\omega)X(\omega)d\lambda(\omega)=-\int\varphi(\omega)Y(\omega)d\lambda(\omega)$$ for all $\varphi\in C^\infty_c(\lambda;H)$ (noting here $\nabla$ refers to the Fréchet derivative). Again, for this to make sense, all of the prior issues need to make sense as well. • The question arose as I saw people talking about the distributional derivative of a (cylindrical) Brownian motion on a Hilbert space without giving a definition of what they mean. You're right, my attempt seems to be wrong. Maybe we need to let $H$ be a Hilbert space with inner product $\langle\;\cdot\;,\;\cdot\;\rangle$ and replace $X(\omega)\varphi$ by $\langle X(\omega),\varphi\rangle$. By the same argumentation as before, the integral would exist. – 0xbadf00d Jan 25 '16 at 13:01 • But honestly, I'm only guessing. Until now, I failed to generalize the usual notion to this scenario, but since people are using it, there must be some notion of distributional derivative of $X$. 
– 0xbadf00d Jan 25 '16 at 13:01
# A man on tour travels the first 160 km at 64 km/hr and the next 160 km at 80 km/hr. What is the average speed for the first 320 km of the tour?

A) 35.55 km/hr  B) 36 km/hr  C) 71.11 km/hr  D) 71 km/hr

Time for the first 160 km $=\frac{160}{64}=2.5$ hours, and time for the next 160 km $=\frac{160}{80}=2$ hours, so the total time taken $=2.5+2=4.5\,\,hours$. Now, the required average speed $=\frac{320}{4.5}=71.11\,\,km/hr$.
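The same computation as a short sketch (note that the average speed over equal distances is the harmonic mean of the two speeds, not the arithmetic mean):

```python
# Average speed = total distance / total time.
d1, v1 = 160, 64   # km, km/hr
d2, v2 = 160, 80
t1, t2 = d1 / v1, d2 / v2             # 2.5 h and 2.0 h
average_speed = (d1 + d2) / (t1 + t2)
print(round(average_speed, 2))        # 71.11 km/hr, option C
```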
Solving Trigonometric Equations (Sec and Sin) (Difficult, TRIG-YL07P4)

Solve the following equation on the interval $[0, 2\pi)$: $$\sec{x}\sin{x}-\sec{x}=0$$

A. This equation is not solvable.
B. $\cfrac{\pi}{2}$
C. $\cfrac{3\pi}{2}$
D. $\cfrac{\pi}{2}, \cfrac{3\pi}{2}$
E. $0, \pi, \cfrac{\pi}{2}, \cfrac{3\pi}{2}$
# Sea Ice ## Essential for northern survival This article is the fourth part in a continuing series on chemistry and Inuit life and culture. Inuit culture is passed down from generation to generation orally, and the use of storytelling is an important function. For this particular article, we consider that such a digression here not only illustrates this, but also provides a linguistically-rich way of showing how essential sea ice is to the very survival of the peoples. ## Sikuk (Sea Ice): A gateway to freedom FIGURE 1 Chaim Andersen going to imittak (fetch water), drinking surface fresh water out of a tin bowl on the sea ice. Credit: Mary Andersen The Inuit are geographically located in the circumpolar Arctic where temperatures reach extreme cold. Temperatures do not start to climb back up until late spring while the summer months are shorter and cooler than what are experienced in southern Canada. Essentially, then, Inuit live half of their lives in subzero temperatures. So, along with the sometimes elegant and sometimes aggressive snowfalls, sea ice is part of Inuit life for a substantial portion of the year (Fig. 1). In terms of utilization, Inuit use the sea ice mainly for transportation. For a people who live in isolated Arctic communities, access to the outside is only by very expensive seats on small Twin Otter aircraft or, in the brief summer, by the weekly coastal boat. The ability to travel from their permanent homes to their home on the land is crucial: to go hunting and fishing; to carry out their culture/traditions; and to rid themselves of cabin fever. These are major privileges, among many others, provided by the sea ice. In this way, Inuit can not only provide for themselves, their families and their community, but also heal their mind, their body and their soul. Thus, the sea ice is a crucial infrastructure in Inuit culture, and it is a gateway to freedom across their lands and (frozen) seas. ## A small personal tangent FIGURE 2 Maria Merkuratsuk and Chaim Andersen. Credit: Maria Merkuratsuk “There are two times throughout the year where we are stranded in our communities, in the spring when the ice is starting to melt and between fall and winter when the ice isn’t formed enough. I have a friend, Maria Merkuratsuk (Fig. 2); she is also an Inuit from Nain, Nunatsiavut (our hometown) and my elder. She has taught me many things about our culture/language and continues to do so through stories of her life growing up. One of my favourites is the story of the “first freeze up”. Every year when the ice is finally thick enough to travel on, she and her family (a big one at that) and friends get ready early in the morning to go fishing for iKaluk (Arctic char) in a place we call Anaktalik. If you’ve ever seen a bunch of Inuit ice-fishing for iKaluit (plural of iKaluk) you might think there was a spiritual ritual happening of some sort. It is the most exciting event and one of my most favourite things to do! Everyone is either screaming out “Woo, I got a fish” or “Bugger, I dropped it” although I cannot capture Maria’s beautiful and vivid accounts in Inuttitut (our own language1). The ice allows us to engage in activities that bring us together, to break free of our town-boundaries. Even just the smell of the air while travelling makes me happy. 
I always say, and I’ve always felt in my heart, I am a person of the land and sea, both when it is frozen and when it is not, and I am sure all other Inuit can relate.” ## Oxygen solubility in sea water FIGURE 3 Worldwide concentration of dissolved oxygen in oceans5 It may seem odd to discuss the topic of oxygen solubility in an article on Inuit life and sea ice, but it is a crucial component. There are very few food resources on land in the far north, while the sea is rich in fish and mammals. Access to the sea is essential to survival, hence nearly all of the Canadian Inuit communities are on the coast. The prolific sea life is largely dependent upon the high oxygen concentration necessary for the gills of the fish. In fact some species, such as Arctic char (see part two of our series2), thrive only in high dissolved- oxygen levels.3 Oxygen solubility is inversely proportional to temperature;4 thus the near-zero temperature of Arctic waters provides this marine bounty (see Fig. 3). ## Why does ice float? Outside of the brief warm months, to access the marine food resources, the surface sea ice is essential. Thus the fact that ice floats on liquid water is key to survival in northern Canada. Yet this behaviour is unique to water – or nearly so. For ‘normal’ compounds, the solid form is denser than the liquid form. We explain this observation in terms of the Kinetic-Molecular (K-M) Model of Matter: that is, in the solid phase, molecules are locked in fixed locations in the crystal lattice. The molecules vibrate about these fixed locations, with the vibrational energy increasing with increasing temperature. Melting occurs when the vibrational energy exceeds the intermolecular forces between neighbouring molecules. As the molecules become free to move, spaces open up, reducing the bulk density in the liquid phase. FIGURE 4 Space-filling representations of the structure of liquid water and solid water (ice)7 Why is water so different? We must compare the molecular structure of liquid water and of ice (Fig. 4). In liquid water, the molecules are free to move over each other. However, in ice, the water molecules are held in their crystal location by hydrogen bonds at fixed angles to the neighbouring water molecules. How can it expand to form such an open structure? The answer lies with the high strength of the hydrogen bonds (21 kJ·mol-1).6 This rigid open structure means that there are molecule-size channels through the ice, reducing the bulk density to about 0.9 that of liquid water. ## Sea water and northern ice formation Of course, sea water is not pure H2O: it contains several dissolved ions, as shown in Fig. 5. The presence of these ions lowers the temperature of the solid-liquid transition. We can explain this behaviour in terms of the electrostatic interaction between each ion and the neighbouring polar water molecules (Fig. 6). These interactions inhibit the formation of the hydrogen-bonding network amongst the water molecules themselves, lowering the liquid-solid transition temperature to about -2 °C. FIGURE 5 The ion composition of sea water — quantities in relation to 1 kg or 1 litre of sea water8 FIGURE 6 Clustering of polar water molecules around a sodium ion9 In the freezing process for salt water, the molecular vibrations of the water molecules decrease until they are less than the strength of the intermolecular hydrogen bonds. As the water molecules lock into fixed positions, the ions are ‘pushed out’. 
Thus frozen sea ice is essentially fresh water while the water beneath the ice becomes richer in ions (more saline) and denser, thus sinking to the bottom of the Arctic Ocean and driving the deep water currents. In fact, in spring, freshwater pools form on the surface, as shown by Chaim drinking fresh water in Fig. 1. ## The worrying future for the Inuit To southern Canadians, the dramatic reduction in months of ice coverage of Arctic waters is usually discussed in terms of shipping access to the Arctic. It is rarely (if ever) mentioned that the loss of Arctic ice will be catastrophic for the Inuit. If Inuit are unable to travel widely over the sea ice, traditional way of life and the marine harvesting, which provides the basis of their healthy diet, will be difficult, if not impossible, to continue. The resources themselves (particularly Arctic char) are also likely to diminish as warming water will reduce the oxygen concentration of the seas. PHOTO Chaim’s three-year-old daughter, Avery Andersen, ice fishing. ## Footnote Chaim Andersen has also been involved in the development of the Labrador Inuit Settlement Marine Management Plan – Imappivut. ## References 1. Inuttitut is the distinct Labrador Inuit variant of the Inuktituk language. See: Wikipedia. Inutttitut. https://en.wikipedia.org/wiki/ Inuttitut. 2. C.C. Andersen and G. Rayner-Canham, “Soy Sauce — An essential Inuit condiment,” Chem 13 News, October 2018. 3. Wikipedia. Arctic char. https://en.wikipedia.org/wiki/Arctic_char. 4. The Engineering Toolbox: Oxygen — Solubility in Fresh Water and Sea Water, https://www.engineeringtoolbox.com/oxygen-solubility-water-d_841.html. 5. Wikimedia Commons. Sea-surface oxygen [mol O^2 m^-3]. commons. https://commons.wikimedia.org/wiki/File:WOA09_sea-surf_O2_AYool.png. 6. Wikipedia. Hydrogen bonds. https://en.wikipedia.org/wiki/ Hydrogen_bond. 7. Wikimedia Commons. Liquid water and ice. https://commons. wikimedia.org/wiki/File:Liquid-water-and-ice.png. 8. Wikipedia. Magnesium chloride. https://en.wikipedia.org/wiki/ Magnesium_chloride#/%20media/File:Sea_salt-e-dp_hg.svg. 9. Wikipedia. Solvation shell. https://en.wikipedia.org/wiki/ Solvation_shell. ## For Future Reading, A Selection • Aporta, C.; Taylor, D.R.F.; Laidler, G.J. Geographies of Inuit sea ice use: introduction. Canadian Geographer 2011, 55 (1), 1-5. • Bravo, M.T. Voices from the Sea Ice: the reception of climate change narratives. Journal of Historical Geography 2009, 35, 256-278. • Durkalec, A. et al. Climate change influences on environment as a determinant of Indigenous health: Relationships to place, sea ice, and health in an Inuit community. Social Science & Medicine 2015, 136-137, 17-26. • Ford, J.D. Dangerous climate change and the importance of adaptation for the Arctic’s Inuit population. Environmental Research Letters 2009, 4, 1-9. • Laidler, G.J. et. al. Travelling and hunting in a changing Arctic: Assessing Inuit vulnerability to sea ice change in Igloolik, Nunavut. Climatic Change 2009, 94, 363-397. • Riew, R. Inuit use of the sea ice. Arctic and Alpine Research 1991, 23, 3-10. Publisher's note: This article is a reprint from the February 2019 issue of Chem 13 News.
# 2007 AIME II Problems/Problem 10

## Problem

Let $S$ be a set with six elements. Let $P$ be the set of all subsets of $S.$ Subsets $A$ and $B$ of $S$, not necessarily distinct, are chosen independently and at random from $P$. The probability that $B$ is contained in at least one of $A$ or $S-A$ is $\frac{m}{n^{r}},$ where $m$, $n$, and $r$ are positive integers, $n$ is prime, and $m$ and $n$ are relatively prime. Find $m+n+r.$ (The set $S-A$ is the set of all elements of $S$ which are not in $A.$)

## Solution 1

Use casework:

• $B$ has 6 elements:
  • Probability: $\frac{1}{2^6} = \frac{1}{64}$
  • $A$ must have either 0 or 6 elements, probability: $\frac{2}{2^6} = \frac{2}{64}$.
• $B$ has 5 elements:
  • Probability: ${6\choose5}/64 = \frac{6}{64}$
  • $A$ must have either 0, 6, or 1, 5 elements. The total probability is $\frac{2}{64} + \frac{2}{64} = \frac{4}{64}$.
• $B$ has 4 elements:
  • Probability: ${6\choose4}/64 = \frac{15}{64}$
  • $A$ must have either 0, 6; 1, 5; or 2, 4 elements. If there are 1 or 5 elements, the set which contains 5 elements must have four encompassing $B$ and a fifth element out of the remaining $2$ numbers. The total probability is $\frac{2}{64}\left({2\choose0} + {2\choose1} + {2\choose2}\right) = \frac{2}{64} + \frac{4}{64} + \frac{2}{64} = \frac{8}{64}$.

We could just continue our casework. In general, the probability of picking $B$ with $n$ elements is $\frac{{6\choose n}}{64}$. Since the sum of the elements in the $k$th row of Pascal's Triangle is $2^k$, the probability of obtaining $A$ or $S-A$ which encompasses $B$ is $\frac{2^{7-n}}{64}$. In addition, we must count for when $B$ is the empty set (probability: $\frac{1}{64}$), of which all sets of $A$ will work (probability: $1$).

Thus, the solution we are looking for is $\left(\sum_{i=1}^6 \frac{{6\choose i}}{64} \cdot \frac{2^{7-i}}{64}\right) + \frac{1}{64} \cdot \frac{64}{64}$ $=\frac{(1)(64)+(6)(64)+(15)(32)+(20)(16)+(15)(8)+(6)(4)+(1)(2)}{(64)(64)}$ $=\frac{1394}{2^{12}}$ $=\frac{697}{2^{11}}$.

The answer is $697 + 2 + 11 = 710$.

## Solution 2

We need $B$ to be a subset of $A$ or of $S-A$. We can divide each element of $S$ into 4 categories:

• it is in $A$ and $B$
• it is in $A$ but not in $B$
• it is not in $A$ but is in $B$
• or it is not in $A$ and not in $B$

These can be denoted as $+A+B$, $+A-B$, $-A+B$, and $-A-B$.

We note that if all of the elements are in $+A+B$, $+A-B$, or $-A-B$, then $B$ is a subset of $A$, which happens with probability $\dfrac{3^6}{4^6}$. Similarly, if all of the elements are in $+A-B$, $-A+B$, or $-A-B$, then $B$ is a subset of $S-A$, which also happens with probability $\dfrac{3^6}{4^6}$. But we need to make sure we don't over-count the cases counted in both: these are the cases where every element is in $+A-B$ or $-A-B$, which happen with probability $\dfrac{2^6}{4^6}$. So our probability is $\dfrac{2\cdot 3^6-2^6}{4^6}= \dfrac{3^6-2^5}{2^{11}}=\dfrac{697}{2^{11}}$.

So the final answer is $697 + 2 + 11 = 710$.
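Both solutions can be verified by brute force over all $2^6 \cdot 2^6 = 4096$ ordered pairs $(A, B)$; a quick sketch, not part of either solution:

```python
from fractions import Fraction
from itertools import combinations

S = frozenset(range(6))
subsets = [frozenset(c) for r in range(7) for c in combinations(S, r)]

# Count pairs (A, B) with B contained in A or in S - A.
good = sum(1 for A in subsets for B in subsets if B <= A or B <= S - A)
print(good, Fraction(good, len(subsets) ** 2))   # 1394 697/2048, i.e. 697/2^11
```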
Quantum teleportation (Bennett et al., 1993) has already been covered on this blog; it was the first tutorial I wrote on this website. Now we are going to revisit the concept from a slightly different perspective. Recently we have been talking a lot about Majorana zero modes: creating them using topological phase transitions and performing quantum computation by braiding them, which defines the Majorana qubits. The question to ask now is: would it be possible to perform quantum teleportation on such states defined on Kitaev chains? In other words, could we teleport a state from one Kitaev chain onto another using nothing but braids of their edges? It turns out the answer is yes!

It was June 5th, 2019, a very fun time, as Jonathan Dowling was visiting us in Shanghai to speak at the ITU Workshop on Quantum Information Technology (QIT) for Networks, a conference we all attended. During a break I was chatting with my PhD advisor Tim Byrnes; we discussed teleporting states between Kitaev chains and got quite excited about the idea. It took us quite a few months to develop the necessary theory to make it work, which eventually resulted in the PRL publication (Huang et al., 2021), additionally highlighted on the NYU Shanghai website.

The key to performing quantum teleportation is the ability to perform entangling operations. Among the entangling gates described in (Narozniak et al., 2021) and also in this tutorial, we found that the inner braid equivalent to the $\sqrt{X_1 X_2}$ operation produces the effect of quantum teleportation, with a slightly different classical correction.

We start by producing the entangled state. Working in the logical space, we begin by creating entanglement between the second and third logical qubits using the inner braid corresponding to the $\sqrt{X_2 X_3}$ operation

\begin{aligned} \left \vert E \right>_{23} & = \sqrt{X_2 X_3} \left \vert 0 0_L \right>_{23} \\ & = \frac{1}{\sqrt{2}} ( \left \vert 0 0_L \right>_{23} + i \left \vert 1 1_L \right>_{23}) . \end{aligned}

The first logical qubit contains the state to be teleported $\left \vert \psi \right>$, which can be written as

\begin{aligned} \left \vert \psi \right>_1 = \alpha \left \vert 0_L \right>_1 + \beta \left \vert 1_L \right>_1 . \end{aligned}

Now, after applying the $\sqrt{X_1 X_2}$ gate we have

\begin{aligned} & \sqrt{X_1 X_2} \left \vert \psi \right>_1 \left \vert E \right>_{23} \\ = & \frac{1}{2}(\alpha \left \vert 000_L \right> + i \alpha \left \vert 011_L \right> + \beta \left \vert 100_L \right> + i \beta \left \vert 111_L \right> \\ & + i\alpha \left \vert 110_L \right> - \alpha \left \vert 101_L \right> + i \beta \left \vert 010_L \right> - \beta \left \vert 001_L \right> ) \\ = &\frac{1}{2}(\left \vert 00_L \right> (\alpha \left \vert 0_L \right> - \beta \left \vert 1_L \right>) + i \left \vert 01_L \right> (\beta \left \vert 0_L \right> + \alpha \left \vert 1_L \right>) \\ & + \left \vert 10_L \right>(\beta \left \vert 0_L \right> - \alpha \left \vert 1_L \right>) + i \left \vert 11_L \right> (\alpha \left \vert 0_L \right> + \beta \left \vert 1_L \right>)) \end{aligned}

from which we can deduce which corrections have to be applied depending on the measurements of the first two logical qubits.
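To check this algebra numerically, here is a minimal sketch at the logical level only, working with plain qubits and the convention $\sqrt{X \otimes X} = (I + i X \otimes X)/\sqrt{2}$ used above; it is not the Kitaev-chain simulation from the paper, and the helper names are mine:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
e0 = np.array([1, 0], dtype=complex)

def kron(*ops):
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def sqrt_xx(n, i, j):
    """(I + i X_i X_j)/sqrt(2) acting on n qubits (0-indexed sites i and j)."""
    ops = [X if k in (i, j) else I2 for k in range(n)]
    return (kron(*[I2] * n) + 1j * kron(*ops)) / np.sqrt(2)

# Random normalized state alpha|0> + beta|1> to teleport from qubit 1 to qubit 3.
rng = np.random.default_rng(0)
alpha, beta = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
alpha, beta = alpha / norm, beta / norm
psi = alpha * np.array([1, 0]) + beta * np.array([0, 1])

state = kron(psi.reshape(-1, 1), e0.reshape(-1, 1), e0.reshape(-1, 1)).ravel()
state = sqrt_xx(3, 1, 2) @ state   # entangle qubits 2 and 3: (|00> + i|11>)/sqrt(2)
state = sqrt_xx(3, 0, 1) @ state   # the teleporting braid

# Classical corrections deduced from the grouped expression above (outcome 11 needs none).
corrections = {(0, 0): Z, (0, 1): X, (1, 0): Z @ X, (1, 1): I2}
for m1 in (0, 1):
    for m2 in (0, 1):
        proj = kron(np.outer(I2[m1], I2[m1]), np.outer(I2[m2], I2[m2]), I2)
        collapsed = proj @ state
        p = np.vdot(collapsed, collapsed).real
        out = collapsed.reshape(4, 2)[2 * m1 + m2] / np.sqrt(p)   # qubit-3 state
        out = corrections[(m1, m2)] @ out
        fidelity = abs(np.vdot(psi, out)) ** 2
        print(f"outcome {m1}{m2}: probability {p:.3f}, fidelity after correction {fidelity:.6f}")
```

Each outcome occurs with probability 1/4 and, after the listed correction, reproduces the input state up to a global phase.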
In the form of a logical circuit, these $X$ and $Z$ corrections conditioned on the measurement outcomes are shown in the circuit diagram. The rightmost logical qubit is prepared in the logical $\left \vert +_L \right>$ state to avoid it getting entangled with the rest of the system in case the $X$-correction is applied. The teleportation itself is performed entirely using braiding operations; however, the classical correction, as it requires the logical $X$-operation, needs an extra qubit.

I did this tutorial slightly differently than we described in the paper. Here we do not simply assume the existence of such an ancilla Kitaev chain, we also prepare it. If we include the state preparation, an extra ancilla topological qubit is required along with one more inner braid. The diagram contains classical correction operators which, of course, do not always need to be applied.

To define the measurement more formally we will prepare appropriate projection operators. But first, a topological qubit in the logical $Z$-basis can be written as

\begin{aligned} \left \vert m_L \right> &= (1-m) \left \vert 0_L \right> + m \left \vert 1_L \right> \end{aligned}

where $m \in \{0, 1\}$. For the final state after the teleportation $\left \vert \psi_f \right>$, if the first topological qubit has been measured to be $m^{(1)}$ and the second $m^{(2)}$, we can define the projection operator

\begin{aligned} \Pi_{m^{(1)}, m^{(2)}} &= \left \vert m^{(1)}_L \right> \left< m^{(1)}_L \right \vert \otimes \left \vert m^{(2)}_L \right> \left< m^{(2)}_L \right \vert \otimes I \otimes I \otimes I \end{aligned}

which gives us the post-teleportation and post-measurement state

\begin{aligned} \left \vert \psi_{m^{(1)}, m^{(2)}} \right> &= \Pi_{m^{(1)}, m^{(2)}} \left \vert \psi_f \right> . \end{aligned}

We can trace out all logical qubits apart from the teleported logical qubit (the third topological qubit) and get the fidelity for each of the outcomes as

\begin{aligned} f_{m^{(1)}, m^{(2)}} &= \left < \psi \right \vert \text{Tr}_{1, 2, 3, 4, 7, 8, 9, 10}(\left \vert \psi_{m^{(1)}, m^{(2)}} \right> \left < \psi_{m^{(1)}, m^{(2)}} \right \vert) \left \vert \psi \right> . \end{aligned}

This was for the case of $5$ topological qubits, each of length $L = 2$. The partial trace arguments would need to be adjusted for a different system size. I have numerically simulated the above approach; feel free to review the Python source code. The bar plot comparing the fidelities for the cases with and without classical correction is shown in the accompanying figure. The exact distribution depends on the random state. There is always one outcome with perfect fidelity, as there is one outcome that does not require any classical correction.

Today we have simulated topological teleportation by applying sequences of unitary braids on topological states. The full source code of this simulation is published under the MIT licence on GitHub. If you find errors please tweet me and let me know.

1. Bennett, C. H., Brassard, G., Crépeau, C., Jozsa, R., Peres, A., & Wootters, W. K. (1993). Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels. Phys. Rev. Lett., 70(13), 1895–1899. https://doi.org/10.1103/PhysRevLett.70.1895
2. Huang, H.-L., Narozniak, M., Liang, F., Zhao, Y., Castellano, A. D., Gong, M., Wu, Y., Wang, S., Lin, J., Xu, Y., Deng, H., Rong, H., Dowling, J. P., Peng, C.-Z., Byrnes, T., Zhu, X., & Pan, J.-W. (2021). Emulating Quantum Teleportation of a Majorana Zero Mode Qubit. Phys. Rev. Lett., 126(9), 090502. https://doi.org/10.1103/PhysRevLett.126.090502
3. Narozniak, M., Dartiailh, M.
C., Dowling, J. P., Shabani, J., & Byrnes, T. (2021). Quantum gates for Majoranas zero modes in topological superconductors in one-dimensional geometry. Phys. Rev. B, 103(20), 205429. https://doi.org/10.1103/PhysRevB.103.205429
# How can I calculate the specific heat of aluminum?

May 17, 2014

Design and conduct an experiment in which you can calculate the specific heat of aluminum by creating a thermal equilibrium system in which two different substances with different initial temperatures reach a final temperature that is the same for both.

First examine the design of this experiment. Heat a 10 gram piece of aluminum in a beaker of boiling water for at least 10 minutes so that the metal's initial temperature is 100 degrees Celsius. Using tongs, transfer the metal to a beaker with 100 grams of water at a temperature of 20 degrees Celsius. Measure the final temperature of the water. The final temperature is 21.6 degrees Celsius. The water's temperature should rise slightly and the metal's temperature should drop dramatically. Why? First, the water has a higher specific heat, so its temperature changes only slightly compared to that of the metal, which has a specific heat less than one. Secondly, the mass of the water is 10 times greater.

Now consider the calculation.

$q = m c \left({t}_{f} - {t}_{i}\right)$

q represents heat energy in joules
c represents the specific heat with units joules/(grams x Celsius)
${t}_{f}$ is the final temperature and ${t}_{i}$ is the initial temperature.

The q for the metal is negative because it loses heat (exothermic). The q for the water is positive because it absorbs heat (endothermic). Since the system reaches thermal equilibrium, we set these equal:

- q metal = q water

-(10 g)(x)(21.6 C - 100.0 C) = (100 g)(4.184 J/gC)(21.6 C - 20.0 C)

x ≈ 0.85 J/gC, which is close to the accepted specific heat of aluminum, about 0.90 J/gC.

I hope that this example clarifies the question.
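The same heat balance as a short sketch (the variable names are mine; the numbers are those from the example above):

```python
# -q_metal = q_water, solved for the metal's specific heat x.
m_metal, m_water = 10.0, 100.0                   # grams
t_metal_i, t_water_i, t_f = 100.0, 20.0, 21.6    # degrees Celsius
c_water = 4.184                                  # J/(g*C)

q_water = m_water * c_water * (t_f - t_water_i)
c_metal = q_water / (-m_metal * (t_f - t_metal_i))
print(round(c_metal, 2))   # ~0.85 J/(g*C); the accepted value for aluminum is about 0.90
```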
## example 9.16 [ENDORSED]

$\Delta G^{\circ}= \Delta H^{\circ} - T \Delta S^{\circ}$
$\Delta G^{\circ}= -RT\ln K$
$\Delta G^{\circ}= \sum \Delta G_{f}^{\circ}(products) - \sum \Delta G_{f}^{\circ}(reactants)$

Liam Maxwell 2E
Posts: 53
Joined: Fri Sep 29, 2017 7:07 am

### example 9.16

The question asks you to estimate the temperature at which it is thermodynamically possible for a reaction to occur. In the explanation it says that when the temperature is increased there is a point where $T = \Delta H^{\circ}/\Delta S^{\circ}$. However, for this to be true, following the equation $\Delta G^{\circ} = \Delta H^{\circ} - T\Delta S^{\circ}$, wouldn't that mean $\Delta G^{\circ}$ is 0, and therefore the reaction is at equilibrium, and therefore the reaction won't have a net occurrence?

Chem_Mod
Posts: 18400
Joined: Thu Aug 04, 2011 1:53 pm
Has upvoted: 435 times

### Re: example 9.16 [ENDORSED]

Solving for the point at which $\Delta G=0$ is indeed the condition for equilibrium. To be precise, you are solving for this point so that you can describe the inequality. It is the > or < that we are interested in. However, to know this, we must first determine the equilibrium temperature as you wrote.
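As a numerical illustration of the point (my own assumed numbers, roughly those for CaCO3 decomposition, not the textbook's example 9.16): with both $\Delta H^{\circ}$ and $\Delta S^{\circ}$ positive, $\Delta G^{\circ}$ changes sign exactly at $T = \Delta H^{\circ}/\Delta S^{\circ}$, and the inequality tells you on which side of that temperature the reaction becomes spontaneous.

```python
dH = 178_000.0   # J/mol      (assumed value)
dS = 161.0       # J/(mol*K)  (assumed value)

T_eq = dH / dS   # temperature at which dG = 0, i.e. equilibrium (~1106 K here)
for T in (298.0, T_eq, 1500.0):
    dG = dH - T * dS
    label = "spontaneous" if dG < 0 else "non-spontaneous" if dG > 0 else "equilibrium"
    print(f"T = {T:7.1f} K: dG = {dG/1000:8.1f} kJ/mol ({label})")
```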
# Kemeny–Young method

The Kemeny–Young method is an electoral system that uses preferential ballots and pairwise comparison counts to identify the most popular choices in an election. It is a Condorcet method because if there is a Condorcet winner, it will always be ranked as the most popular choice.

This method assigns a score for each possible sequence, where each sequence considers which choice might be most popular, which choice might be second-most popular, which choice might be third-most popular, and so on down to which choice might be least-popular. The sequence that has the highest score is the winning sequence, and the first choice in the winning sequence is the most popular choice. (As explained below, ties can occur at any ranking level.)

The Kemeny–Young method is also known as the Kemeny rule, VoteFair popularity ranking, the maximum likelihood method, and the median relation.

## Description

The Kemeny–Young method uses preferential ballots on which voters rank choices according to their order of preference. A voter is allowed to rank more than one choice at the same preference level. Unranked choices are usually interpreted as least-preferred.

Another way to view the ordering is that it is the one which minimizes the sum of the Kendall tau distances (bubble sort distance) to the voters' lists.

Kemeny–Young calculations are usually done in two steps. The first step is to create a matrix or table that counts pairwise voter preferences. The second step is to test all possible rankings, calculate a score for each such ranking, and compare the scores. Each ranking score equals the sum of the pairwise counts that apply to that ranking. The ranking that has the largest score is identified as the overall ranking. (If more than one ranking has the same largest score, all these possible rankings are tied, and typically the overall ranking involves one or more ties.)

In order to demonstrate how an individual preference order is converted into a tally table, it is worth considering the following example. Suppose that a single voter has a choice among four candidates (i.e. Elliot, Meredith, Roland, and Selden) and has the following preference order:

| Preference order | Choice |
| --- | --- |
| First | Elliot |
| Second | Roland |
| Third | Meredith or Selden (equal preference) |

These preferences can be expressed in a tally table. A tally table, which arranges all the pairwise counts in three columns, is useful for counting (tallying) ballot preferences and calculating ranking scores. The center column tracks when a voter indicates more than one choice at the same preference level.
The above preference order can be expressed as the following tally table:[citation needed] All possible pairs of choice names Number of votes with indicated preference Prefer X over Y Equal preference Prefer Y over X X = Selden Y = Meredith 0 +1 vote 0 X = Selden Y = Elliot 0 0 +1 vote X = Selden Y = Roland 0 0 +1 vote X = Meredith Y = Elliot 0 0 +1 vote X = Meredith Y = Roland 0 0 +1 vote X = Elliot Y = Roland +1 vote 0 0 Now suppose that multiple voters had voted on those four candidates. After all ballots have been counted, the same type of tally table can be used to summarize all the preferences of all the voters. Here is an example for a case that has 100 voters: All possible pairs of choice names Number of votes with indicated preference Prefer X over Y Equal preference Prefer Y over X X = Selden Y = Meredith 50 10 40 X = Selden Y = Elliot 40 0 60 X = Selden Y = Roland 40 0 60 X = Meredith Y = Elliot 40 0 60 X = Meredith Y = Roland 30 0 70 X = Elliot Y = Roland 30 0 70 The sum of the counts in each row must equal the total number of votes. After the tally table has been completed, each possible ranking of choices is examined in turn, and its ranking score is calculated by adding the appropriate number from each row of the tally table. For example, the possible ranking: 1. Elliot 2. Roland 3. Meredith 4. Selden satisfies the preferences Elliot > Roland, Elliot > Meredith, Elliot > Selden, Roland > Meredith, Roland > Selden, and Meredith > Selden. The respective scores, taken from the table, are • Elliot > Roland: 30 • Elliot > Meredith: 60 • Elliot > Selden: 60 • Roland > Meredith: 70 • Roland > Selden: 60 • Meredith > Selden: 40 giving a total ranking score of 30 + 60 + 60 + 70 + 60 + 40 = 320. ### Calculating the overall ranking After the scores for every possible ranking have been calculated, the ranking that has the largest score can be identified, and becomes the overall ranking. In this case, the overall ranking is: 1. Roland 2. Elliot 3. Selden 4. Meredith with a ranking score of 370. If there are cycles or ties, more than one possible ranking can have the same largest score. Cycles are resolved by producing a single overall ranking where some of the choices are tied.[clarification needed] ### Summary matrix After the overall ranking has been calculated, the pairwise comparison counts can be arranged in a summary matrix, as shown below, in which the choices appear in the winning order from most popular (top and left) to least popular (bottom and right). This matrix layout does not include the equal-preference pairwise counts that appear in the tally table:[1] ... over Roland ... over Elliot ... over Selden ... over Meredith Prefer Roland ... - 70 60 70 Prefer Elliot ... 30 - 60 60 Prefer Selden ... 40 40 - 50 Prefer Meredith ... 30 40 40 - In this summary matrix, the largest ranking score equals the sum of the counts in the upper-right, triangular half of the matrix (shown here in bold, with a green background). No other possible ranking can have a summary matrix that yields a higher sum of numbers in the upper-right, triangular half. (If it did, that would be the overall ranking.) In this summary matrix, the sum of the numbers in the lower-left, triangular half of the matrix (shown here with a red background) are a minimum. 
The academic papers by John Kemeny and Peyton Young[2][3] refer to finding this minimum sum, which is called the Kemeny score, and which is based on how many voters oppose (rather than support) each pairwise order: Method First-place winner Kemeny–Young Roland Condorcet Roland Instant runoff voting Elliot or Selden (depending on how the second-round tie is handled) Plurality Selden ## Example Imagine that Tennessee is having an election on the location of its capital. The population of Tennessee is concentrated around its four major cities, which are spread throughout the state. For this example, suppose that the entire electorate lives in these four cities and that everyone wants to live as near to the capital as possible. The candidates for the capital are: • Memphis, the state's largest city, with 42% of the voters, but located far from the other cities • Nashville, with 26% of the voters, near the center of the state • Knoxville, with 17% of the voters • Chattanooga, with 15% of the voters The preferences of the voters would be divided like this: 42% of voters (close to Memphis) 26% of voters (close to Nashville) 15% of voters (close to Chattanooga) 17% of voters (close to Knoxville) 1. Memphis 2. Nashville 3. Chattanooga 4. Knoxville 1. Nashville 2. Chattanooga 3. Knoxville 4. Memphis 1. Chattanooga 2. Knoxville 3. Nashville 4. Memphis 1. Knoxville 2. Chattanooga 3. Nashville 4. Memphis This matrix summarizes the corresponding pairwise comparison counts: ... overMemphis ... overNashville ... overChattanooga ... overKnoxville PreferMemphis ... - 42% 42% 42% PreferNashville ... 58% - 68% 68% PreferChattanooga ... 58% 32% - 83% PreferKnoxville ... 58% 32% 17% - The Kemeny–Young method arranges the pairwise comparison counts in the following tally table: All possible pairs of choice names Number of votes with indicated preference Prefer X over Y Equal preference Prefer Y over X X = Memphis Y = Nashville 42% 0 58% X = Memphis Y = Chattanooga 42% 0 58% X = Memphis Y = Knoxville 42% 0 58% X = Nashville Y = Chattanooga 68% 0 32% X = Nashville Y = Knoxville 68% 0 32% X = Chattanooga Y = Knoxville 83% 0 17% The ranking score for the possible ranking of Memphis first, Nashville second, Chattanooga third, and Knoxville fourth equals (the unit-less number) 345, which is the sum of the following annotated numbers. 
42% (of the voters) prefer Memphis over Nashville 42% prefer Memphis over Chattanooga 42% prefer Memphis over Knoxville 68% prefer Nashville over Chattanooga 68% prefer Nashville over Knoxville 83% prefer Chattanooga over Knoxville This table lists all the ranking scores: First choice Second choice Third choice Fourth choice Ranking score Memphis Nashville Chattanooga Knoxville 345 Memphis Nashville Knoxville Chattanooga 279 Memphis Chattanooga Nashville Knoxville 309 Memphis Chattanooga Knoxville Nashville 273 Memphis Knoxville Nashville Chattanooga 243 Memphis Knoxville Chattanooga Nashville 207 Nashville Memphis Chattanooga Knoxville 361 Nashville Memphis Knoxville Chattanooga 295 Nashville Chattanooga Memphis Knoxville 377 Nashville Chattanooga Knoxville Memphis 393 Nashville Knoxville Memphis Chattanooga 311 Nashville Knoxville Chattanooga Memphis 327 Chattanooga Memphis Nashville Knoxville 325 Chattanooga Memphis Knoxville Nashville 289 Chattanooga Nashville Memphis Knoxville 341 Chattanooga Nashville Knoxville Memphis 357 Chattanooga Knoxville Memphis Nashville 305 Chattanooga Knoxville Nashville Memphis 321 Knoxville Memphis Nashville Chattanooga 259 Knoxville Memphis Chattanooga Nashville 223 Knoxville Nashville Memphis Chattanooga 275 Knoxville Nashville Chattanooga Memphis 291 Knoxville Chattanooga Memphis Nashville 239 Knoxville Chattanooga Nashville Memphis 255 The largest ranking score is 393, and this score is associated with the following possible ranking, so this ranking is also the overall ranking: Preference order Choice First Nashville Second Chattanooga Third Knoxville Fourth Memphis If a single winner is needed, the first choice, Nashville, is chosen. (In this example Nashville is the Condorcet winner.) The summary matrix below arranges the pairwise counts in order from most popular (top and left) to least popular (bottom and right): ... over Nashville ... ... over Chattanooga ... ... over Knoxville ... ... over Memphis ... Prefer Nashville ... - 68% 68% 58% Prefer Chattanooga ... 32% - 83% 58% Prefer Knoxville ... 32% 17% - 58% Prefer Memphis ... 42% 42% 42% - In this arrangement the largest ranking score (393) equals the sum of the counts in bold, which are in the upper-right, triangular half of the matrix (with a green background). ## Characteristics In all cases that do not result in an exact tie, the Kemeny–Young method identifies a most-popular choice, second-most popular choice, and so on. A tie can occur at any preference level. Except in some cases where circular ambiguities are involved, the Kemeny–Young method only produces a tie at a preference level when the number of voters with one preference exactly matches the number of voters with the opposite preference. ### Satisfied criteria for all Condorcet methods All Condorcet methods, including the Kemeny–Young method, satisfy these criteria: Non-imposition There are voter preferences that can yield every possible overall order-of-preference result, including ties at any combination of preference levels. Condorcet criterion If there is a choice that wins all pairwise contests, then this choice wins. Majority criterion If a majority of voters strictly prefer choice X to every other choice, then choice X is identified as the most popular. Non-dictatorship A single voter cannot control the outcome in all cases. The Kemeny–Young method also satisfies these criteria: Unrestricted domain Identifies the overall order of preference for all the choices. 
  The method does this for all possible sets of voter preferences and always produces the same result for the same set of voter preferences.
• Pareto efficiency: Any pairwise preference expressed by every voter results in the preferred choice being ranked higher than the less-preferred choice.
• Monotonicity: If voters increase a choice's preference level, the ranking result either does not change or the promoted choice increases in overall popularity.
• Smith criterion: The most popular choice is a member of the Smith set, which is the smallest nonempty set of choices such that every member of the set is pairwise preferred to every choice not in the Smith set.
• Independence of Smith-dominated alternatives: If choice X is not in the Smith set, adding or withdrawing choice X does not change a result in which choice Y is identified as most popular.
• Reinforcement: If all the ballots are divided into separate races and the overall rankings for the separate races are the same, then the same ranking occurs when all the ballots are combined.[4]
• Reversal symmetry: If the preferences on every ballot are inverted, then the previously most popular choice must not remain the most popular choice.

### Failed criteria for all Condorcet methods

In common with all Condorcet methods, the Kemeny–Young method fails these criteria (that is, the guarantees described below are not provided):

• Independence of irrelevant alternatives: Adding or withdrawing choice X does not change a result in which choice Y is identified as most popular.
• Invulnerability to burying: A voter cannot displace a choice from most popular by giving the choice an insincerely low ranking.
• Invulnerability to compromising: A voter cannot cause a choice to become the most popular by giving the choice an insincerely high ranking.
• Participation: Adding ballots that rank choice X over choice Y never causes choice Y, instead of choice X, to become most popular.
• Later-no-harm: Ranking an additional choice (that was otherwise unranked) cannot displace a choice from being identified as the most popular.
• Consistency: If all the ballots are divided into separate races and choice X is identified as the most popular in every such race, then choice X is the most popular when all the ballots are combined.

The Kemeny–Young method also fails these criteria (again, the described guarantees are not provided):

• Independence of clones: Offering a larger number of similar choices, instead of offering only a single such choice, does not change the probability that one of these choices is identified as most popular.
• Invulnerability to push-over: A voter cannot cause choice X to become the most popular by giving choice Y an insincerely high ranking.
• Schwartz: The choice identified as most popular is a member of the Schwartz set.
• Polynomial runtime:[5] An algorithm is known to determine the winner using this method in a runtime that is polynomial in the number of choices.

## Calculation methods and computational complexity

An algorithm for computing a Kemeny–Young ranking in time polynomial in the number of candidates is not known, and is unlikely to exist, since the problem is NP-hard[5] even if there are just four voters.[6][7] It has been reported[8] that calculation methods based on integer programming sometimes allowed the computation of full rankings for votes on as many as 40 candidates in seconds.
However, certain 40-candidate 5-voter Kemeny elections generated at random were not solvable on a 3 GHz Pentium computer within a useful time bound in 2006.[8] Note that the complexity of the computation scales linearly with the number of voters, so the time needed to process a given set of votes is dominated by the number of candidates[9] rather than the number of votes; this limits the practical importance of the constraint to elections in which voters are able to effectively consider significantly more than the roughly seven items of working memory.

The Kemeny–Young method can be formulated as an instance of a more abstract problem, that of finding weighted feedback arc sets in tournament graphs.[10] As such, many methods for the computation of feedback arc sets can be applied to this problem, including a variant of the Held–Karp algorithm that can compute the Kemeny–Young ranking of $n$ candidates in time $O(n 2^n)$, significantly faster for many candidates than the factorial time of testing all rankings.[11][12] There exists a polynomial-time approximation scheme for computing a Kemeny–Young ranking,[13] and there also exists a parameterized subexponential-time algorithm with running time $O^*(2^{O(\sqrt{\mathrm{OPT}})})$ for computing such a ranking.[10]

## History

The Kemeny–Young method was developed by John Kemeny in 1959.[2] In 1978 Peyton Young and Arthur Levenglick showed[3] that this method was the unique neutral method satisfying reinforcement and a version of the Condorcet criterion. In other papers,[14][15][16][17] Young adopted an epistemic approach to preference aggregation: he supposed that there was an objectively 'correct', but unknown, preference order over the alternatives, and that voters receive noisy signals of this true preference order (cf. Condorcet's jury theorem). Using a simple probabilistic model for these noisy signals, Young showed that the Kemeny–Young method was the maximum likelihood estimator of the true preference order. Young further argues that Condorcet himself was aware of the Kemeny–Young rule and its maximum-likelihood interpretation, but was unable to express his ideas clearly. In the papers by John Kemeny and Peyton Young, the Kemeny scores use counts of how many voters oppose, rather than support, each pairwise preference,[2][3] but the smallest such score identifies the same overall ranking.
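Purely as an illustration (the sketch below is not taken from the cited papers, and the data structures and function names are invented for the example), the scoring procedure from the Tennessee example can be spelled out in a few lines. It computes both the support-based ranking scores used in the example and Kemeny's opposition-based scores, and confirms that the two formulations select the same overall ranking. Because it enumerates every ordering, it is only usable for a handful of choices, which is exactly the limitation the complexity results above describe.

```python
# Brute-force Kemeny-Young scoring for the Tennessee example above.
from itertools import permutations

candidates = ["Memphis", "Nashville", "Chattanooga", "Knoxville"]

# Ballots as (weight, ranking) pairs, weights in percent of the electorate.
ballots = [
    (42, ["Memphis", "Nashville", "Chattanooga", "Knoxville"]),
    (26, ["Nashville", "Chattanooga", "Knoxville", "Memphis"]),
    (15, ["Chattanooga", "Knoxville", "Nashville", "Memphis"]),
    (17, ["Knoxville", "Chattanooga", "Nashville", "Memphis"]),
]

# support[(x, y)] = share of voters who rank x above y.
support = {(x, y): 0 for x in candidates for y in candidates if x != y}
for weight, ranking in ballots:
    for i, x in enumerate(ranking):
        for y in ranking[i + 1:]:
            support[(x, y)] += weight

def ranking_score(order):
    """Sum of pairwise support counts that agree with the proposed order."""
    return sum(support[(x, y)] for i, x in enumerate(order) for y in order[i + 1:])

def kemeny_score(order):
    """Sum of opposition counts (Kemeny's original formulation)."""
    return sum(support[(y, x)] for i, x in enumerate(order) for y in order[i + 1:])

best_by_support = max(permutations(candidates), key=ranking_score)
best_by_opposition = min(permutations(candidates), key=kemeny_score)

print(best_by_support, ranking_score(best_by_support))  # Nashville first, score 393
print(best_by_opposition == best_by_support)            # True: same overall ranking
```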
Since 1991 the method has been promoted under the name "VoteFair popularity ranking" by Richard Fobes.[18]

## Comparison table

The following table compares the Kemeny–Young method with other preferential single-winner election methods:

| System | Monotonic | Condorcet winner | Majority | Condorcet loser | Majority loser | Mutual majority | Smith | ISDA | LIIA | Independence of clones | Reversal symmetry | Participation, consistency | Later-no-harm | Later-no-help | Polynomial time | Resolvability |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Schulze | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | No | No | No | Yes | Yes |
| Ranked pairs | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes |
| Split Cycle | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | No | No | No | Yes | No |
| Tideman's Alternative | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | No | No | No | No | Yes | Yes |
| Kemeny–Young | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | No | No | No | No | Yes |
| Copeland | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | No | Yes | No | No | No | Yes | No |
| Nanson | No | Yes | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | No | No | No | Yes | Yes |
| Black | Yes | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | No | No | No | Yes | Yes |
| Instant-runoff voting | No | No | Yes | Yes | Yes | Yes | No | No | No | Yes | No | No | Yes | Yes | Yes | Yes |
| Smith/IRV | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | No | No | No | No | Yes | Yes |
| Borda | Yes | No | No | Yes | Yes | No | No | No | No | No | Yes | Yes | No | Yes | Yes | Yes |
| Geller-IRV | No | No | Yes | Yes | Yes | Yes | No | No | No | No | No | No | No | No | Yes | Yes |
| Baldwin | No | Yes | Yes | Yes | Yes | Yes | Yes | No | No | No | No | No | No | No | Yes | Yes |
| Bucklin | Yes | No | Yes | No | Yes | Yes | No | No | No | No | No | No | No | Yes | Yes | Yes |
| Plurality | Yes | No | Yes | No | No | No | No | No | No | No | No | Yes | Yes | Yes | Yes | Yes |
| Contingent voting | No | No | Yes | Yes | Yes | No | No | No | No | No | No | No | Yes | Yes | Yes | Yes |
| Coombs[19] | No | No | Yes | Yes | Yes | Yes | No | No | No | No | No | No | No | No | Yes | Yes |
| MiniMax | Yes | Yes | Yes | No | No | No | No | No | No | No | No | No | No | No | Yes | Yes |
| Anti-plurality[19] | Yes | No | No | No | Yes | No | No | No | No | No | No | Yes | No | No | Yes | Yes |
| Sri Lankan contingent voting | No | No | Yes | No | No | No | No | No | No | No | No | No | Yes | Yes | Yes | Yes |
| Supplementary voting | No | No | Yes | No | No | No | No | No | No | No | No | No | Yes | Yes | Yes | Yes |
| Dodgson[19] | No | Yes | Yes | No | No | No | No | No | No | No | No | No | No | No | No | Yes |

## Notes

1. ^ The numbers in this example are adapted from Sample election used in Wikipedia Archived 2017-03-30 at the Wayback Machine.
2. ^ a b c John Kemeny, "Mathematics without numbers", Daedalus 88 (1959), pp. 577-591.
3. ^ a b c H. P. Young and A. Levenglick, "A Consistent Extension of Condorcet's Election Principle", SIAM Journal on Applied Mathematics 35, no. 2 (1978), pp. 285-300.
4. ^ Giuseppe Munda, "Social multi-criteria evaluation for a sustainable economy", p. 124.
5. ^ a b J. Bartholdi III, C. A. Tovey, and M. A. Trick, "Voting schemes for which it can be difficult to tell who won the election", Social Choice and Welfare, Vol. 6, No. 2 (1989), pp. 157-165.
6. ^ C. Dwork, R. Kumar, M. Naor, D. Sivakumar, "Rank Aggregation Methods for the Web", WWW10, 2001.
7. ^ Biedl, Therese; Brandenburg, Franz J.; Deng, Xiaotie (2005-09-12). Healy, Patrick; Nikolov, Nikola S. (eds.). Crossings and Permutations. Lecture Notes in Computer Science. Springer Berlin Heidelberg. pp. 1-12. doi:10.1007/11618058_1. ISBN 9783540314257.
8. ^ a b Vincent Conitzer, Andrew Davenport, and Jayant Kalagnanam, "Improved bounds for computing Kemeny rankings" (2006).
9. ^
10. ^ a b Karpinski, M. and Schudy, W., "Faster Algorithms for Feedback Arc Set Tournament, Kemeny Rank Aggregation and Betweenness Tournament", in: Cheong, O., Chwa, K.-Y., and Park, K. (Eds.): ISAAC 2010, Part I, LNCS 6506, pp. 3-14.
11. ^ Lawler, E. (1964), "A comment on minimum feedback arc sets", IEEE Transactions on Circuit Theory, 11 (2): 296-297, doi:10.1109/tct.1964.1082291.
12. ^ Bodlaender, Hans L.; Fomin, Fedor V.; Koster, Arie M. C. A.; Kratsch, Dieter; Thilikos, Dimitrios M. (2012), "A note on exact algorithms for vertex ordering problems on graphs", Theory of Computing Systems, 50 (3): 420-432, doi:10.1007/s00224-011-9312-0, hdl:1956/4556, MR 2885638.
13. ^ "How to Rank with Few Errors". http://cs.brown.edu/~claire/stoc07.pdf
14. ^ H. P. Young, "Condorcet's Theory of Voting", American Political Science Review 82, no. 2 (1988), pp. 1231-1244.
15. ^ H. P. Young, "Optimal ranking and choice from pairwise comparisons", in Information Pooling and Group Decision Making, edited by B. Grofman and G. Owen (1986), JAI Press, pp. 113-122.
16. ^ H. P. Young, "Optimal Voting Rules", Journal of Economic Perspectives 9, no. 1 (1995), pp. 51-64.
17. ^ H. P. Young, "Group choice and individual judgements", Chapter 9 of Perspectives on Public Choice: A Handbook, edited by Dennis Mueller (1997), Cambridge UP, pp. 181-200.
18. ^ Richard Fobes, "The Creative Problem Solver's Toolbox" (ISBN 0-9632-2210-4), 1993, pp. 223-225.
19. ^ a b c Anti-plurality, Coombs and Dodgson are assumed to receive truncated preferences by apportioning possible rankings of unlisted alternatives equally; for example, ballot A > B = C is counted as $\tfrac{1}{2}$ A > B > C and $\tfrac{1}{2}$ A > C > B. If these methods are assumed not to receive truncated preferences, then later-no-harm and later-no-help are not applicable.

The basis of this page is the corresponding Wikipedia article; text is available under the CC BY-SA 3.0 Unported License.
Research Papers: Gas Turbines: Structures and Dynamics

# Numerical Investigations on the Leakage and Rotordynamic Characteristics of Pocket Damper Seals—Part II: Effects of Partition Wall Type, Partition Wall Number, and Cavity Depth

Author and Article Information

Zhigang Li, Institute of Turbomachinery, School of Energy & Power Engineering, Xi'an Jiaotong University, Xi'an 710049, China

Jun Li, Institute of Turbomachinery, School of Energy & Power Engineering, Xi'an Jiaotong University, Xi'an 710049, China; Collaborative Innovation Center of Beijing 100191, China. e-mail: [email protected]

Zhenping Feng, Institute of Turbomachinery, School of Energy & Power Engineering, Xi'an Jiaotong University, Xi'an 710049, China

1 Corresponding author.

Contributed by the Structures and Dynamics Committee of ASME for publication in the JOURNAL OF ENGINEERING FOR GAS TURBINES AND POWER. Manuscript received July 11, 2014; final manuscript received July 17, 2014; published online September 30, 2014. Editor: David Wisler.

J. Eng. Gas Turbines Power 137(3), 032504 (Sep 30, 2014) (13 pages) Paper No: GTP-14-1374; doi: 10.1115/1.4028374. History: Received July 11, 2014; Revised July 17, 2014

## Abstract

The effects of partition wall type, partition wall number, and cavity depth on the leakage and rotordynamic characteristics of the pocket damper seal (PDS) were numerically investigated using a 3D transient computational fluid dynamics (CFD) method based on the multifrequency elliptical whirling orbit model. The accuracy and availability of this transient CFD method and of the multifrequency elliptical whirling orbit model were demonstrated against the experimental data of the eight-bladed fully partitioned pocket damper seal (FPDS). The leakage flow rates and frequency-dependent rotordynamic coefficients of the PDS were computed for two types of partition wall (namely, the conventional PDS and the fully partitioned PDS), four partition wall numbers including the labyrinth seal (no partition wall), and six cavity depths including the plain smooth seal (zero cavity depth), at operational conditions with or without inlet preswirl and a rotational speed of 15,000 rpm. The numerical results show that the FPDS has similar leakage performance and superior stability characteristics compared to the conventional PDS. The FPDS possesses a slightly larger leakage flow rate (about 2.6-4.0% larger) than the labyrinth seal. Eight is a preferable value for the partition wall number, giving the best leakage performance of the FPDS at the least manufacturing cost. The FPDS possesses significantly larger stiffness and damping than the labyrinth seal. Increasing the partition wall number results in a significant increase in the direct stiffness but has a limited desirable effect on the effective damping. The FPDS possesses the lowest leakage flow rate when the cavity depth is about 2.0 mm. Compared to the plain smooth seal, the FPDS possesses larger positive direct stiffness and significantly less direct damping and effective damping. Increasing the cavity depth results in a significant decrease in the stabilizing direct damping and in the magnitude of the destabilizing cross-coupling stiffness. $H =$ 3.175 mm is a preferable value of the cavity depth, for which the effective damping of the FPDS is largest, especially in the frequency range of concern (80-120 Hz), where most multistage high-pressure centrifugal compressors have stability problems.

## Figures

Fig. 1
Fig. 2 Eight-bladed, eight-pockets, conventional PDS versus fully partitioned PDS
Fig. 3 Labyrinth seal versus eight-bladed fully partitioned PDS with different partition wall numbers
Fig. 4 Plain smooth seal versus eight-bladed fully partitioned PDS with different cavity depths
Fig. 5 Computational models of the fully partitioned PDS
Fig. 6 Rotordynamic coefficients versus vibration frequency at different partition wall types (NPS = zero preswirl, PS = 60 m/s preswirl)
Fig. 7 Cavity dynamic pressure and seal response force of the conventional PDS versus the fully partitioned PDS (x excitation, zero preswirl)
Fig. 8 Seal leakage flow rate versus partition wall number
Fig. 9 Rotordynamic coefficients versus vibration frequency at different partition wall numbers (with zero preswirl)
Fig. 10 Rotordynamic coefficients versus vibration frequency at different partition wall numbers (with 60 m/s preswirl)
Fig. 11 Static pressure contours on the cross section through the middle of cavity 3 and the phasor diagram of the response force for the labyrinth seal and the fully partitioned PDS at different partition wall numbers (x excitation, 60 m/s preswirl, T = 0.1 s)
Fig. 12 Seal leakage flow rate versus cavity depth
Fig. 13 Rotordynamic coefficients versus vibration frequency at different cavity depths (with zero preswirl)
Fig. 14 Static pressure contours on the cross section through the middle of cavity 3 and phasor diagram of the response force for the plain smooth seal and fully partitioned PDS at different cavity depths (zero preswirl, x excitation, T = 0.1 s)
# Why we can’t divide by 0

Quite rarely do I ever do anything about mathematics on Tweaked for your Pleasure, but today I got e-mailed a question, and although I rarely do requests, I figured I might as well answer this one. Basically, the e-mail was as follows:

Hi Matt just wondered as you are good at maths why i am told 0 divided by anything is not possible. Mind explaing ? Thanks

Suppose we try to give division by zero a value, say:

• $\frac{1}{0} = 5$

There is a rule within arithmetic that says $a \times \frac{b}{a} = b$, and following this rule and what we defined a minute ago, consider what happens if we apply it with $a = 0$ and $b = 1$.
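Assuming the intended argument is the standard one, the contradiction can be completed as follows:

$$0 \times \frac{1}{0} = 1 \quad \text{(by the rule } a \times \tfrac{b}{a} = b\text{)},$$

$$0 \times \frac{1}{0} = 0 \times 5 = 0 \quad \text{(since anything times zero is zero)},$$

which would force $1 = 0$, a contradiction. The same argument rules out any other value we might try to assign to $\frac{1}{0}$, which is why division by zero is left undefined.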
## highschoolmom2010: Find the value of each variable. If your answer is not an integer, express it in simplest radical form.

1. highschoolmom2010: [attachment]
2. Psymon: Do you know your trig functions? Like what sin, cos, tan are in regards to a triangle?
3. highschoolmom2010: nope not really
4. Psymon: Ah. Alright, hang on then :P Keep in mind as I write these out, they are all in reference to the angle you are using.
5. highschoolmom2010: that helps a lot :DD
6. Psymon: $\sin = \frac{\text{opposite side}}{\text{hypotenuse}}$, $\cos = \frac{\text{adjacent side}}{\text{hypotenuse}}$, $\tan = \frac{\text{opposite side}}{\text{adjacent side}}$. People are usually taught Soh Cah Toa as a way to remember which sides the trig functions refer to. Now again, these are in reference to your angle, so I'll draw that real quick.
7. Psymon: [drawing]
8. Psymon: [drawing]
9. Psymon: Does that kind of make sense so far?
10. highschoolmom2010: so far yes
11. Psymon: Okay, cool. So this is your triangle that we have then: [drawing]
12. Psymon: In order to solve this, we need to choose an angle (not the right angle) and then an appropriate trig function: sin, cos, or tan. The one we choose must include the side we know, 10 in this case, and then the value we want to find. Does that make sense?
13. highschoolmom2010: im not entirely sure how to use them though
14. Psymon: Right, we're getting to that :P I just wanted to see if you were following me so far.
15. highschoolmom2010: oh well yes im following ya
16. Psymon: Okay, cool. So next part: Let's say to find x I choose the 60 degree angle. Now in reference to the 60 degree angle, x is on the adjacent side of it. The value we know is the hypotenuse. Now remember, in reference to the angle we use, we want to choose either sin, cos, or tan. The one we choose now needs to include the adjacent side and the hypotenuse. [drawing]
17. Psymon: So which one of the 3, sin, cos, or tan, has adjacent and hypotenuse?
18. highschoolmom2010: @Psymon cos?
19. Zale101: cos = adjacent/hypotenuse, so i think @Psymon meant that
20. highschoolmom2010: ive never used them so idk, im used to using 30-60-90
21. Zale101: that's correct, because your triangle has 60, and we can predict the other angle is 90. And, 90+60+30=180
22. jdoe0001: in the 30-60-90 rule the hypotenuse is TWICE as long as the SHORTEST side; the "other side" is the SHORTEST side times $\sqrt{3}$
23. highschoolmom2010: ok so how do i do this problem
24. jdoe0001: it is really pretty much handed out on a silver platter, with cake and ice cream really
25. mathstudent55: In a 30-60-90 triangle, the three sides of the right triangle are in the ratio $1 : \sqrt{3} : 2$. That means that the shorter leg is 1/2 the length of the hypotenuse. The long leg is $\sqrt{3}$ times the length of the short leg.
26. jdoe0001: if "the hypotenuse is TWICE as long as the SHORTEST side", what do you think is the length of the shortest side?
27. mathstudent55: Here the hypotenuse is 10. The short leg is x. From the statement "the short leg is half the length of the hypotenuse", what can you conclude about x?
28. highschoolmom2010: [drawing]
29. highschoolmom2010: short leg is 5 :DD
30. jdoe0001: so there, shortest leg is 5, and the "other leg" is THAT MUCH $\times \sqrt{3}$
31. mathstudent55: Great. That is correct, the short leg, x = 5. The long leg, y, is $\sqrt{3}$ times longer than the short leg: $y = \sqrt{3} \times 5$. What is y?
32. highschoolmom2010: so $5\sqrt{3}$
33. highschoolmom2010: @mathstudent55 @jdoe0001
34. jdoe0001: yes
35. highschoolmom2010: was that all i need to do???
36. jdoe0001: yeap
37. mathstudent55: correct, $x = 5$ and $y = 5\sqrt{3}$. That is it
38. highschoolmom2010: hooray, thanks @mathstudent55 & @jdoe0001 & @Psymon @Zale101
39. highschoolmom2010: wish i could give everyone a medal
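For anyone who wants to double-check the thread's answer numerically, here is a short sketch using the trig definitions Psymon quoted; the variable names and the printed values are just for illustration.

```python
# Numerical check of x = 5 and y = 5*sqrt(3) for a 30-60-90 triangle
# with hypotenuse 10, taking the 60-degree angle as the reference angle.
import math

hypotenuse = 10
angle_60 = math.radians(60)

x = hypotenuse * math.cos(angle_60)  # side adjacent to the 60-degree angle
y = hypotenuse * math.sin(angle_60)  # side opposite the 60-degree angle

print(x)                      # ~5.0 (up to floating-point rounding)
print(y, 5 * math.sqrt(3))    # both approximately 8.660254037844386
```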
# breakable in tcolorbox not breaking \documentclass[12pt, a4paper, UTF8, scheme = plain, twoside]{ctexart} \usepackage{amsmath} \usepackage{enumerate} \renewcommand{\labelenumi} {(\roman{enumi})} \renewcommand{\labelenumii}{(\alph{enumii})} \usepackage[top=0.5cm,left=0.5cm,right=0.5cm,bottom=2.28cm]{geometry} \usepackage{hyperref} \usepackage{graphicx} \pdfsuppressptexinfo=-1 \usepackage{lipsum} %%%%%%%%%%%%%%%%%%%%%%%%%%% %% - Fancyhdr - %% %%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage{fancyhdr} \usepackage{totpages} %%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage{tcolorbox} \tcbuselibrary{skins,xparse,breakable} \tcbset{% colback = white, tikz = {opacity=0.1,transparency group}, colframe = black, title filled = false, colbacktitle = white, }%% \NewTColorBox[ ]{question}{ O{}mo }{ fonttitle = \bfseries, coltitle = black, title = #2, #1 }% %%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%% %%% DOC Begins %%% %%%%%%%%%%%%%%%%%%%%% \begin{document} \thispagestyle{empty} \begin{question} % \lipsum[4] Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah \tcblower $\lambda = 2\times10^5 \qquad l_0 = 2$ % \lipsum[4] Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah 
Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah
\lipsum[4-10]
\end{question}
%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%

Why is breakable not working in this case?

• It is not enough to load the library, you must also use it. Check the documentation. (Apr 5 at 9:55)
• You load the breakable library, but the box (question) is not declared to be breakable. You have to include breakable in the question options. (Apr 5 at 9:55)

As said in the comments, you should not just load breakable; you must tell the box that it is allowed to be breakable. I commented out the tikz = {opacity=0.1,transparency group} line to make your example visible, and I also added the {Title} argument to your {question} environment, since you declared \NewTColorBox with two arguments, the first optional but the second mandatory ({question}{ O{}mo } and title = #2,). A MWE follows.

\documentclass[12pt, a4paper, UTF8, scheme = plain, twoside]{ctexart}
\usepackage{amsmath}
\usepackage{enumerate}
\renewcommand{\labelenumi} {(\roman{enumi})}
\renewcommand{\labelenumii}{(\alph{enumii})}
\usepackage[top=0.5cm,left=0.5cm,right=0.5cm,bottom=2.28cm]{geometry}
\usepackage{hyperref}
\usepackage{graphicx}
\pdfsuppressptexinfo=-1
\usepackage{lipsum}
\usepackage{tcolorbox}
\tcbuselibrary{skins,xparse,breakable}
\tcbset{%
  colback = white,
  %tikz = {opacity=0.1,transparency group},
  colframe = black,
  title filled = false,
  colbacktitle = white,
  breakable
}
\NewTColorBox[ ]{question}{ O{}mo }{
  fonttitle = \bfseries,
  coltitle = black,
  title = #2,
  #1
}
\begin{document}
\begin{question}{Title}
\lipsum[4]
\tcblower
$\lambda = 2\times10^5 \qquad l_0 = 2$
\lipsum[4-10]
\end{question}
\end{document}
# American Institute of Mathematical Sciences

August 2018, 23(6): 2245-2263. doi: 10.3934/dcdsb.2018195

## Boundedness and persistence of populations in advective Lotka-Volterra competition system

Department of Mathematics, Southwestern University of Finance and Economics, 555 Liutai Ave, Wenjiang, Chengdu, Sichuan 611130, China

* Corresponding author. QW is partially supported by NSF-China (Grant No. 11501460) and the Fundamental Research Funds for the Central Universities (Grant No. JBK1801062).

Received September 2016; Revised April 2018; Published June 2018

We are concerned with a two-component reaction-advection-diffusion Lotka-Volterra competition system with constant diffusion rates, subject to homogeneous Neumann boundary conditions. We first prove the global existence and uniform boundedness of positive classical solutions to this system. This result complements some of the global existence results in [Y. Lou, M. Winkler and Y. Tao, SIAM J. Math. Anal., 46 (2014), 1228-1262], where one diffusion rate is taken to be a linear function of the population density. Our second result proves that the total population of each species admits a positive lower bound under some conditions on the system parameters (e.g., when the intraspecific competition rates are large). This persistence result indicates that the two competing species coexist in the habitat over long times.

Citation: Qi Wang, Yang Song, Lingjie Shao. Boundedness and persistence of populations in advective Lotka-Volterra competition system. Discrete & Continuous Dynamical Systems - B, 2018, 23 (6): 2245-2263. doi: 10.3934/dcdsb.2018195
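The paper itself is analytical, but as a rough illustration of the kind of system described in the abstract, here is a small explicit finite-difference sketch of a two-component reaction-advection-diffusion Lotka-Volterra competition model in one space dimension with homogeneous Neumann (zero-flux) boundary conditions. The specific equations, parameter values, and discretization are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Illustrative 1D advective Lotka-Volterra competition system (not the paper's model):
#   u_t = d1*u_xx - q*u_x + u*(1 - u - a12*v)
#   v_t = d2*v_xx          + v*(1 - v - a21*u)
# with homogeneous Neumann (zero-flux) boundary conditions on [0, L].
L, nx, dt, steps = 10.0, 201, 1e-3, 20000
dx = L / (nx - 1)
d1, d2, q, a12, a21 = 0.5, 0.5, 0.2, 0.6, 0.7

x = np.linspace(0.0, L, nx)
u = 0.5 + 0.1 * np.cos(np.pi * x / L)   # arbitrary positive initial densities
v = 0.5 - 0.1 * np.cos(np.pi * x / L)

def laplacian(w):
    out = np.empty_like(w)
    out[1:-1] = (w[2:] - 2 * w[1:-1] + w[:-2]) / dx**2
    out[0] = 2 * (w[1] - w[0]) / dx**2        # Neumann end: mirrored ghost point
    out[-1] = 2 * (w[-2] - w[-1]) / dx**2
    return out

def gradient(w):
    out = np.empty_like(w)
    out[1:-1] = (w[2:] - w[:-2]) / (2 * dx)
    out[0] = 0.0                               # zero flux at the boundaries
    out[-1] = 0.0
    return out

for _ in range(steps):
    u_new = u + dt * (d1 * laplacian(u) - q * gradient(u) + u * (1 - u - a12 * v))
    v_new = v + dt * (d2 * laplacian(v)                   + v * (1 - v - a21 * u))
    u, v = u_new, v_new

# For these illustrative parameters both average densities settle well above zero,
# the kind of persistence/coexistence behavior the paper establishes rigorously.
print(u.mean(), v.mean())
```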
# 9: Atomic Structure and The Periodic Law

Quantum mechanics can account for the periodic structure of the elements, by any measure a major conceptual accomplishment for any theory. Although accurate computations become increasingly more challenging as the number of electrons increases, the general patterns of atomic behavior can be predicted with remarkable accuracy.

## Slater Determinants

According to the orbital approximation, which was introduced in the last Chapter, an N-electron atom contains N occupied spinorbitals, which can be designated $\phi_a, \phi_b, \ldots, \phi_n$. In accordance with the Pauli exclusion principle, no two of these spinorbitals can be identical. Also, every electron should be equally associated with every spinorbital. A very neat mathematical representation for these properties is a generalization of the two-electron wavefunction (8.13) or (8.15) called a Slater determinant

$\Psi(1, 2, \ldots, N) = \frac{1}{\sqrt{N!}} \begin{vmatrix} \phi_a(1) & \phi_b(1) & \ldots & \phi_n(1) \\ \phi_a(2) & \phi_b(2) & \ldots & \phi_n(2) \\ & & \vdots & \\ \phi_a(N) & \phi_b(N) & \ldots & \phi_n(N) \\ \end{vmatrix} \label{1}$

Since interchanging any two rows (or columns) of a determinant multiplies it by $-1$, the antisymmetry property (8.15) is fulfilled for every pair of electrons. The Hamiltonian for an atom with N electrons around a nucleus of charge Z can be written

$\hat{H} = \sum_{i=1}^N \left\{-\frac{1}{2}\nabla^2_i - \frac{Z}{r_i} \right\} + \sum_{i<j}^N \frac{1}{r_{ij}}\label{2}$

The sum over electron repulsions is written so that each pair $\{i, j\}$ is counted just once. The energy of the state represented by a Slater determinant (Equation $\ref{1}$) can be obtained after a lengthy derivation. We give just the final result

$\tilde{E} = \sum_{a} I_a+\frac{1}{2}\sum_{a,b} \left( J_{ab}-K_{ab} \right) \label{3}$

where the sums run over all occupied spinorbitals. The one-electron, Coulomb, and exchange integrals have the same form as those defined for the helium atom in Eqs. (8.22)-(8.24). The only difference is that an exchange integral equals zero unless the spins of orbitals a and b are both $\alpha$ or both $\beta$. The factor 1/2 corrects for the double counting of pairs of spinorbitals in the second sum. The contributions with a = b can be omitted since $J_{aa} = K_{aa}$. This effectively removes the Coulomb interaction of an orbital with itself, which is spurious.

The Hartree-Fock or self-consistent field (SCF) method is a procedure for optimizing the orbital functions in the Slater determinant (Equation $\ref{1}$) so as to minimize the energy (Equation $\ref{3}$). SCF computations have been carried out for all the atoms of the periodic table, with predictions of total energies and ionization energies generally accurate in the $1-2\%$ range.

## Aufbau Principles and Periodic Structure

Aufbau means "building-up." Aufbau principles determine the order in which atomic orbitals are filled as the atomic number is increased. For the hydrogen atom, the order of increasing orbital energy is given by 1s < 2s = 2p < 3s = 3p = 3d, etc. The dependence of energy on n alone leads to extensive degeneracy, which is however removed for orbitals in many-electron atoms. Thus 2s lies below 2p, as already observed in helium. Similarly, 3s, 3p and 3d increase in energy in that order, and so on. The 4s orbital is lowered sufficiently that it becomes comparable in energy to 3d.
The general ordering of atomic orbitals is summarized in the following scheme:

$1s < 2s < 2p < 3s < 3p < 4s \sim 3d < 4p < 5s \sim 4d < 5p < 6s \sim 5d \sim 4f < 6p < 7s \sim 6d \sim 5f \label{4}$

and illustrated in Figure 1. This provides enough orbitals to fill the ground states of all the atoms in the periodic table. For orbitals designated as comparable in energy, e.g., $4s \sim 3d$, the actual order depends on which other orbitals are occupied. The sequence of orbitals pictured above increases in the order $n + \frac{1}{2}l$, except that $l = 4$ (rather than 3) is used for an f-orbital.

The tabulation below shows the ground-state electron configuration and term symbol for selected elements in the first part of the periodic table. From the term symbol, one can read off the total orbital angular momentum L and the total spin angular momentum S. The code for the total orbital angular momentum mirrors the one-electron notation, but using upper-case letters, as follows:

| L | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| Symbol | S | P | D | F | G |

The total spin S is designated, somewhat indirectly, by the spin multiplicity 2S + 1, written as a superscript before the S, P, D... symbol. For example, $^1S$ (singlet S), $^1P$ (singlet P), ... mean S = 0; $^2S$ (doublet S), $^2P$ (doublet P), ... mean S = 1/2; $^3S$ (triplet S), $^3P$ (triplet P), ... mean S = 1, and so on. Please do not confuse the spin quantum number S with the orbital designation S.

| Atom | Z | Electron Configuration | Term Symbol |
|---|---|---|---|
| H | 1 | $1s$ | $^2S_{1/2}$ |
| He | 2 | $1s^2$ | $^1S_0$ |
| Li | 3 | [He]$2s$ | $^2S_{1/2}$ |
| Be | 4 | [He]$2s^2$ | $^1S_0$ |
| B | 5 | [He]$2s^2 2p$ | $^2P_{1/2}$ |
| C | 6 | [He]$2s^2 2p^2$ | $^3P_0$ |
| N | 7 | [He]$2s^2 2p^3$ | $^4S_{3/2}$ |
| O | 8 | [He]$2s^2 2p^4$ | $^3P_2$ |
| F | 9 | [He]$2s^2 2p^5$ | $^2P_{3/2}$ |
| Ne | 10 | [He]$2s^2 2p^6$ | $^1S_0$ |
| Na | 11 | [Ne]$3s$ | $^2S_{1/2}$ |
| Cl | 17 | [Ne]$3s^2 3p^5$ | $^2P_{3/2}$ |
| Ar | 18 | [Ne]$3s^2 3p^6$ | $^1S_0$ |
| K | 19 | [Ar]$4s$ | $^2S_{1/2}$ |
| Ca | 20 | [Ar]$4s^2$ | $^1S_0$ |
| Sc | 21 | [Ar]$4s^2 3d$ | $^2D_{3/2}$ |
| Ti | 22 | [Ar]$4s^2 3d^2$ | $^3F_2$ |
| V | 23 | [Ar]$4s^2 3d^3$ | $^4F_{3/2}$ |
| Cr | 24 | [Ar]$4s\,3d^5$ | $^7S_3$ |
| Mn | 25 | [Ar]$4s^2 3d^5$ | $^6S_{5/2}$ |
| Fe | 26 | [Ar]$4s^2 3d^6$ | $^5D_4$ |
| Co | 27 | [Ar]$4s^2 3d^7$ | $^4F_{9/2}$ |
| Ni | 28 | [Ar]$4s^2 3d^8$ | $^3F_4$ |
| Cu | 29 | [Ar]$4s\,3d^{10}$ | $^2S_{1/2}$ |
| Zn | 30 | [Ar]$4s^2 3d^{10}$ | $^1S_0$ |
| Ga | 31 | [Ar]$4s^2 3d^{10} 4p$ | $^2P_{1/2}$ |
| Br | 35 | [Ar]$4s^2 3d^{10} 4p^5$ | $^2P_{3/2}$ |
| Kr | 36 | [Ar]$3d^{10} 4s^2 4p^6$ | $^1S_0$ |

The vector sum of the orbital and spin angular momentum is designated

$\bf{J} = \bf{L} + \bf{S} \label{5}$

The possible values of the total angular momentum quantum number J run in integer steps between |L - S| and L + S. The J value is appended as a subscript on the term symbol, e.g., $^1S_0$, $^2P_{1/2}$, $^2P_{3/2}$. The energy differences between J states are a result of spin-orbit interaction, a magnetic interaction between the circulating charges associated with the orbital and spin angular momenta. For atoms of low atomic number, the spin-orbit coupling is a relatively small correction to the energy, but it can become increasingly significant for heavier atoms.

We will next consider in some detail the Aufbau of ground electronic states, starting at the beginning of the periodic table. Hydrogen has one electron in an s-orbital, so its total orbital angular momentum is also designated S. The single electron has s = 1/2, thus S = 1/2. The spin multiplicity 2S + 1 equals 2, thus the term symbol is written $^2S$. In helium, a second electron can occupy the 1s shell, provided it has the opposite spin. The total spin angular momentum is therefore zero, as is the total orbital angular momentum. The term symbol is $^1S$, as it will be for all other atoms with complete electron shells. In determining the total spin and orbital angular momenta, we need consider only electrons outside of closed shells. Therefore lithium and beryllium are a reprise of hydrogen and helium.
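Before continuing the Aufbau walk-through, the ordering rule quoted above (sort subshells by $n + \frac{1}{2}l$, with $l = 4$ used for f-orbitals) can be checked with a short script. The subshell list and names below are just the ones appearing in the scheme; the code itself is an illustration, not part of the text.

```python
# Check of the n + l/2 ordering rule stated above (with l = 4 for f-orbitals).
l_values = {"s": 0, "p": 1, "d": 2, "f": 3}

def ordering_key(subshell):
    n, letter = int(subshell[:-1]), subshell[-1]
    l = 4 if letter == "f" else l_values[letter]
    return n + l / 2

subshells = ["1s", "2s", "2p", "3s", "3p", "3d", "4s", "4p", "4d", "4f",
             "5s", "5p", "5d", "5f", "6s", "6p", "6d", "7s"]

# Subshells that share the same key correspond to the groups written with "~"
# in the scheme (4); within such a group the rule does not fix the order.
for sub in sorted(subshells, key=ordering_key):
    print(sub, ordering_key(sub))
```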
The angular momentum of boron comes from the single 2p electron, with l = 1 and s = 1/2, giving a $^2P$ state. To build the carbon atom, we add a second 2p electron. Since there are three degenerate 2p orbitals, the second electron can go either into the already-occupied 2p orbital or into one of the unoccupied 2p orbitals. Clearly, two electrons in different 2p orbitals will have less repulsive energy than two electrons crowded into the same 2p orbital. In terms of the Coulomb integrals, we would expect, for example,

$J(2p_x, 2p_y) < J(2p_x, 2p_x) \label{6}$

For the nitrogen atom, with three 2p electrons, we expect, by the same line of reasoning, that the third electron will go into the remaining unoccupied 2p orbital. The half-filled $2p^3$ subshell has an interesting property. If the three occupied orbitals are $2p_x$, $2p_y$ and $2p_z$, then their total electron density is given by

$\rho_{2p} = \psi^{2}_{2p_{x}} + \psi^{2}_{2p_{y}} + \psi^{2}_{2p_{z}} = \left(x^2 + y^2 + z^2\right) \times \text{function of } r = \text{function of } r \label{7}$

noting that $x^2 + y^2 + z^2 = r^2$. But spherical symmetry implies zero angular momentum, like an s-orbital. In fact, any half-filled subshell, such as $p^3$, $d^5$, or $f^7$, will contribute zero angular momentum. The same is, of course, true for filled subshells, such as $p^6$, $d^{10}$, or $f^{14}$. These are all S terms. Another way to understand this vector cancellation of angular momentum is to consider the alternative representation of the degenerate 2p-orbitals: $2p_{-1}$, $2p_0$ and $2p_1$. Obviously, the z-components of angular momentum now add to zero, and since only this one component is observable, the total angular momentum must also be zero.

Returning to our unfinished consideration of carbon, the $2p^2$ subshell can be regarded, in concept, as a half-filled $2p^3$ subshell plus an electron "hole." The advantage of this picture is that the total orbital angular momentum must be equal to that of the hole, namely l = 1. Thus the term symbol for the carbon ground state is P. It remains to determine the total spins of these subshells. Recall that exchange integrals $K_{ab}$ are nonzero only if the orbitals a and b have the same spin. Since exchange integrals enter the energy formula (3) with negative signs, the more nonvanishing K integrals there are, the lower the energy. This is achieved by having the maximum possible number of electrons with unpaired spins. We conclude that S = 1 for carbon and S = 3/2 for nitrogen, so that the complete term symbols are $^3P$ and $^4S$, respectively.

The allocation of electrons among degenerate orbitals can be formalized by Hund's rule: for an atom in its ground state, the term with the highest multiplicity has the lowest energy.

Resuming the Aufbau of the periodic table, oxygen, with four 2p electrons, must have one of the 2p-orbitals doubly occupied. But the remaining two electrons will choose unoccupied orbitals with parallel spins. Thus oxygen has, like carbon, a $^3P$ ground state. Fluorine can be regarded as a complete shell with an electron hole, thus a $^2P$ ground state. Neon completes the 2s and 2p subshells, thus its term symbol is $^1S$. The chemical stability and high ionization energy of all the noble-gas atoms can be attributed to their electronic structure of complete shells. The third row of the periodic table is filled in complete analogy with the second row. The similarity of the outermost electron shells accounts for the periodicity of chemical properties.
Thus, the alkali metals Na and K belong in the same family as Li, the halogens Cl and Br are chemically similar to F, and so forth. The transition elements, atomic numbers 21 to 30, present further challenges to our understanding of electronic structure. A complicating factor is that the energies of the 4s and 3d orbitals are very close, so that interactions among occupied orbitals often determine the electronic state. Ground-state electron configurations can be deduced from spectroscopic and chemical evidence, and confirmed by accurate self-consistent field computations. The 4s orbital is the first to be filled in K and Ca. Then come 3d electrons in Sc, Ti and V. A discontinuity occurs at Cr. The ground-state configuration is found to be $4s\,3d^5$, instead of the extrapolated $4s^2 3d^4$. This can be attributed to the enhanced stability of a half-filled $3d^5$ subshell. All six electrons in the valence shells have parallel spins, maximizing the number of stabilizing exchange integrals and giving the observed $^6S$ term. An analogous discontinuity occurs for copper, in which the 4s subshell is again raided to complete the $3d^{10}$ subshell. The order in which orbitals are filled is not necessarily consistent with the order in which electrons are removed. Thus, in all the positive ions of transition metals, the two 4s-electrons are removed first. The inadequacy of any simple generalization about orbital energies is demonstrated by comparing the three ground-state electron configurations: Ni $4s^2 3d^8$, Pd $5s^0 4d^{10}$ and Pt $6s\,5d^9$.

The periodic structure of the elements is evident for many physical and chemical properties, including chemical valence, atomic radius, electronegativity, melting point, density, and hardness. The classic prototype for periodic behavior is the variation of the first ionization energy with atomic number, which is plotted in Figure 2.
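A crude way to see how far the simple filling order goes is to generate configurations mechanically. The sketch below (illustrative only, not from the text) fills subshells in the order discussed above; it reproduces most of the table of ground states, but, as expected, not the Cr and Cu exceptions, which the discussion above attributes to the extra stability of half-filled and filled d subshells.

```python
# Rough Aufbau sketch: fill subshells in the standard order and print the result.
FILL_ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p",
              "5s", "4d", "5p", "6s", "4f", "5d", "6p", "7s"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

def configuration(z):
    parts, remaining = [], z
    for sub in FILL_ORDER:
        if remaining == 0:
            break
        n_e = min(remaining, CAPACITY[sub[-1]])
        parts.append(f"{sub}{n_e if n_e > 1 else ''}")
        remaining -= n_e
    return " ".join(parts)

for z, symbol in [(26, "Fe"), (24, "Cr"), (29, "Cu")]:
    print(symbol, configuration(z))
# Fe -> 1s2 2s2 2p6 3s2 3p6 4s2 3d6  (matches the table above)
# Cr, Cu -> the simple rule gives 4s2 3d4 and 4s2 3d9, whereas the observed
# ground states are 4s 3d5 and 4s 3d10, the exceptions discussed above.
```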
# Simplification Questions for SBI Clerk Set-2 PDF

## Simplification Questions for SBI Clerk Set-2 PDF

Download the SBI Clerk Simplification Questions & Answers PDF for the SBI Clerk Prelims and Mains exams: very important SBI Clerk maths questions with solutions.

Question 1: If 2x + 5y = 109 and 2x + 5 = y + 12, then y - x = ?
a) 7  b) 6  c) 8  d) 9  e) None of these

Question 2: $\frac{\sqrt{7744} \times 66}{203+149}=?$
a) 15  b) 18.5  c) 20  d) 16.5  e) None of these

Question 3: What is the value of x in the following equation? $\frac{(x)^{0.7}}{36}=\frac{9}{(x)^{1.3}}$
a) 17  b) 19  c) 16  d) 14  e) None of these

Question 4: Which values of x satisfy the inequality $2x^{2} + x - 3 < 0$?
a) -3/2 < x < 1  b) -1 < x < 2/3  c) x > 1  d) x < -2/5  e) None of these

Question 5: $7^{2} + 3^{4} - 4^{3} = ? - 11^{2}$
a) 55  b) 196  c) 172  d) 187  e) None of these

Question 6: 1/5 of 2/7 of 8/3 of 4095 = ?
a) 642  b) 598  c) 648  d) 475  e) None of these

Question 7: $\sqrt{3969} \div 1.4 = ? \times 2.5$
a) 18  b) 112.5  c) 16  d) 24  e) None of these

Question 8: If 3y + 2x = 47 and 11x = 7y, then what is the value of y - x?
a) 4  b) 6  c) 7  d) 5  e) None of these

Question 9: If 2x + 3y + z = 55, x + z - y = 4 and y - x + z = 12, then what is the value of y?
a) 7  b) 8  c) 12  d) 9  e) None of these

Instructions: In the following question a pair of equations is given. You have to find out the values of x and y and give your answer.

Question 10: I. $2x^2 - 7x + 6 = 0$  II. $4y^2 = 9$
a) if $x < y$  b) if $x\leq y$  c) if $x = y$  d) if $x > y$  e) if $x\geq y$

Solution 1: $2x + 5y = 109$ (Eqn 1) and $2x + 5 = y + 12$, i.e. $2x - y = 7$ (Eqn 2). Subtracting Eqn (2) from Eqn (1) gives $6y = 102$, so $y = 17$ and $x = 12$. Therefore $y - x = 17 - 12 = 5$, which is not among the listed values, so the answer is option (e).

Solution 2: $\frac{\sqrt{7744} \times 66}{203+149} = \frac{88 \times 66}{352} = \frac{66}{4} = 16.5$. Option (d) is correct.

Solution 3: $\frac{(x)^{0.7}}{36}=\frac{9}{(x)^{1.3}}$ gives $(x)^{0.7 + 1.3} = 9 \times 36$, so $x^2 = 324$ and $x = \sqrt{324} = 18$, which is not among the listed values, so the answer is option (e).

Solution 4: $2x^{2} + x - 3 < 0$ gives $2x^2 - 2x + 3x - 3 < 0$, i.e. $2x(x - 1) + 3(x - 1) < 0$, i.e. $(2x + 3)(x - 1) < 0$, so $\frac{-3}{2} < x < 1$. Option (a) is correct.

Solution 5: Let the unknown quantity be x. Then $7^{2} + 3^{4} - 4^{3} = x - 11^{2}$, so $49 + 81 - 64 = x - 121$ and $x = 187$. Hence, option (d) is correct.

Solution 6: 1/5 of 2/7 of 8/3 of 4095 $= \frac{1}{5}\times\frac{2}{7}\times\frac{8}{3}\times 4095 = 624$, which is not among the listed values. Hence, option (e) is correct.

Solution 7: Let the unknown quantity be x. Then $\sqrt{3969}\div 1.4 = x\times 2.5$, so $63\div 1.4 = x\times 2.5$, giving $x = 45\div 2.5 = 18$. Hence, option (a) is correct.

Solution 8: This is a system of two equations in two unknowns: 3y + 2x = 47 and 11x = 7y. Multiplying the first equation by 7 gives 21y + 14x = 329, and multiplying the second equation by 3 gives 33x = 21y. Substituting, 21y + 14x = 329 becomes 33x + 14x = 329, hence 47x = 329. So x = 7 and y = 11. Therefore y - x = 4 and the correct answer is option (a).

Solution 9: We have a group of three equations in three unknowns: 2x + 3y + z = 55, x + z - y = 4 and y - x + z = 12. Adding the second and third equations gives 2z = 16, or z = 8. Adding the first equation and twice the third equation gives (2x + 3y + z) + 2(y - x + z) = 55 + 2(12) = 79, hence 5y + 3z = 79. As z = 8, it follows that 5y = 55, or y = 11. As this is not given in any of the options, the correct answer is option (e).

Solution 10: Equation I, $2x^2 - 7x + 6 = 0$, implies $2x^2 - 4x - 3x + 6 = 0$, so $(2x - 3)(x - 2) = 0$, i.e. x = 3/2 or x = 2. Equation II, $4y^2 = 9$, implies y = $\pm \frac{3}{2}$. So $x \geq y$, and option (e) is correct.

We hope these Simplification Questions for the SBI Clerk exam are helpful for your preparation.
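As a quick cross-check of the two simultaneous-equation items above (Questions 1 and 8), here is a small illustrative script; the helper function and the use of exact fractions are assumptions of the example, not part of the original question set.

```python
# Cross-check of Questions 1 and 8 by solving the linear systems directly.
from fractions import Fraction

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return x, y

# Question 1: 2x + 5y = 109 and 2x + 5 = y + 12  (i.e. 2x - y = 7)
x, y = solve_2x2(2, 5, 109, 2, -1, 7)
print(x, y, y - x)   # 12 17 5  -> option (e), "None of these"

# Question 8: 3y + 2x = 47 and 11x = 7y  (i.e. 11x - 7y = 0)
x, y = solve_2x2(2, 3, 47, 11, -7, 0)
print(x, y, y - x)   # 7 11 4  -> option (a)
```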
Universal Properties of Anisotropic Dipolar Bosons in Two Dimensions

Submission summary

Contributors: Ferran Mazzanti, Luis Peña Ardila
Arxiv Link: https://arxiv.org/abs/2112.11094v3 (pdf)
Date accepted: 2022-06-03
Date submitted: 2022-05-16 09:52
Submitted by: Mazzanti, Ferran
Submitted to: SciPost Physics
Academic field: Physics
Specialties: Quantum Physics
Approaches: Theoretical, Computational

Abstract

The energy of ultra-dilute quantum many-body systems is known to exhibit a universal dependence on the gas parameter $x=n a_0^d$, with $n$ the density, $d$ the dimensionality of the space ($d=1,2,3$) and $a_0$ the $s$-wave scattering length. The universal regime typically extends up to $x\approx 0.001$, while at larger values specific details of the interaction start to be relevant and different model potentials lead to different results. Dipolar systems are peculiar in this regard since the anisotropy of the interaction makes $a_0$ depend on the polarization angle $\alpha$, so different combinations of $n$ and $\alpha$ can lead to the same value of the gas parameter $x$. In this work we analyze the scaling properties of dipolar bosons in two dimensions as a function of the density and polarization dependent scattering length up to very large values of the gas parameter $x$. Using Quantum Monte Carlo (QMC) methods we study the energy and the main structural and coherence properties of the ground state of a gas of dipolar bosons by varying the density and scattering length for fixed gas parameter. We find that the dipolar interaction shows relevant scaling laws up to unusually large values of $x$ that hold almost to the boundaries in the phase diagram where a transition to a stripe phase takes place.

Current status: Publication decision taken: accept
Editorial decision: For Journal SciPost Physics: Publish (status: Editorial decision fixed and (if required) accepted by authors)

Dear Editor,

Please find attached the last version of our manuscript 'Universal Properties of Anisotropic Dipolar Bosons in Two Dimensions' by J. Sanchez-Baena, L. A. Peña Ardila, G. E. Astrakharchik and F. Mazzanti. We hope this new version is now suitable for publication.

Yours sincerely,
J. Sanchez-Baena, L. A. Peña Ardila, G. E. Astrakharchik and F. Mazzanti

List of changes

Added a few sentences to the last paragraph of page 1, where we extend a bit the discussion on the Quantum Anomaly, providing new references including the one requested by the last referee.
# Modelling hyper priors for sigma with partially exponentiated components

Hi everyone, I'm struggling to implement priors on sigma when exponentiating some components of it for the following model

$y_i \sim \mathcal{N}(p_i,\sigma_i^2)$

with a mean equation

$logit(p_i) = .....$

and the variance equation, where $n_i$ represents the sample size and $x_1, x_2, x_3$ are observed data:

$\sigma_i^2 = p_i(1-p_i)/n_i + \exp(\tau_{group[i]} + \beta_{1,group[i]}\, x_{1i} + \beta_2\, x_{2i} + \beta_{3}\, x_{3,group[i]})$

Previously, there was no exp() function in $\sigma$, so the distribution of the mean and standard deviation was simply set to $\mathcal{N}(0,0.05)$. But exponentiating was necessary to ensure that $\sigma$ is positive. Now I want to set priors on $\tau, \beta_1, \beta_2, \beta_3$ such that they allow a 5% change, but I don't know how to specify the priors. The model looks as follows:

data {
  ...
}
parameters {
  ...
  // Hyper-parameters
  real mu_beta1;
  real<lower=0> sig_beta1;
  real mu_beta2;
  real<lower=0> sig_beta2;
  real mu_beta3;
  real<lower=0> sig_beta3;
  real<lower=0> sig_tau;
  // Scaled parameters
  vector[group] beta1_sc;
  real beta2_sc;
  real beta3_sc;
  vector<lower=0>[SY] tau_sq_sc;
}
transformed parameters {
  ...
  // Parameters
  vector[group] beta1;
  real beta2;
  real beta3;
  vector<lower=0>[SY] tau_sq;
  beta1 = mu_beta1 + sig_beta1 * beta1_sc;
  beta2 = mu_beta2 + sig_beta2 * beta2_sc;
  beta3 = mu_beta3 + sig_beta3 * beta3_sc;
  tau_sq = sig_tau * tau_sq_sc;
}
model {
  vector[N] logit_p;
  vector[N] p;
  ...
  // Hyper priors
  mu_beta1 ~ normal(0, 0.05);
  sig_beta1 ~ normal(0, 0.05) T[0,];
  mu_beta2 ~ normal(0, 0.05);
  sig_beta2 ~ normal(0, 0.05) T[0,];
  mu_beta3 ~ normal(0, 0.05);
  sig_beta3 ~ normal(0, 0.05) T[0,];
  sig_tau ~ normal(0, 0.2) T[0,];
  // Priors
  beta1_sc ~ normal(0, 1);
  beta2_sc ~ normal(0, 1);
  beta3_sc ~ normal(0, 1);
  for (i in 1:SY) {
    tau_sq_sc[i] ~ normal(0, 1) T[0,];
  }
  for (i in 1:N) {
    logit_p[i] = ... ;
    p[i] = inv_logit(logit_p[i]);
    y[i] ~ normal(p[i], sqrt(p[i] * (1 - p[i]) / n[i] + exp(tau_sq[group[i]] + beta1[group[i]] * x1[i] + beta2 * x2[i] + beta3 * x3[group[i]])));
  }
}

It would be great if you have suggestions on how to model this, and thanks a lot in advance!

Hey there! Sorry it took us some time to respond. So, I guess you are modelling proportions? Maybe you can take a look at the Beta distribution (proportion parameterization), where you could model $\kappa$ with a log link (using exp). Also, I don't really understand what the indexing of x3 is doing. And when $\beta_2$ is not varying, you don't need to implement it using a NCP; in fact, you would not learn $\sigma_{\beta_2}$ anyway (because it is not varying). Do you have a working version of your model already? In any case, the usual recommendation is to start with something simple (that works) and then increase complexity to see when/where stuff breaks. Let me know if the Beta proportion suggestion makes sense to you, or if you have more questions.

Cheers,
Max

Thank you very much. Yes, I do model proportions. For now I stuck with the normal distribution but changed the variance equation to

$\log(\sigma_i^2) = \log(p_i (1-p_i)/n_i) + \tau_{group[i]} + \beta_{1,group[i]}\, x_{1i} + \beta_{2}\, x_{2i} + \beta_{3}\, x_{3,group[i]}$

This ensures that the total variance is positive, since $\sigma_i^2 = \exp(\log(\sigma_i^2))$. The model is working now. You are right, an NCP for $\beta_2$ is unnecessary. $x_3$ is measured at the group level, which is why I used the indexing. I'll have a look into the Beta proportion.

Thanks again,
Sina
## Student Harmonic Analysis and PDE Seminar (HADES): The Use of Bellman Functions in Hölder-Brascamp-Lieb Inequalities

Seminar | April 18 | 3:40-5 p.m. | 740 Evans Hall

Kevin O'Neill, UC Berkeley Department of Mathematics

Hölder-Brascamp-Lieb inequalities represent a wide range of inequalities in analysis, including Young's convolution inequality, the Loomis-Whitney inequality, and more. In a 2015 paper, Ivanisvili and Volberg analyze the "rank one" case of such inequalities, in which $L^p$ norms are replaced by Bellman functions. This talk will contain an introduction to all of the above, as well as some of the speaker's own research in the area, including a necessary and sufficient condition for certain new inequalities.

[email protected]
# Snippet to Insert HTML at Caret

Here is a JS snippet to insert HTML at the caret. It is a cross-browser solution, so you don't need to use execCommand to call insertHTML or pasteHTML.
## Stream: Machine Learning for Theorem Proving ### Topic: GPT-f paper #### Stanislas Polu (Sep 09 2020 at 08:51): Hi! We (OpenAI) just released a paper describing GPT-f, a Transformer-based automated theorem prover. It covers a lot of work we've been doing with Metamath and that could be applied to Lean. Full abstract: We explore the application of transformer-based language models to automated theorem proving. This work is motivated by the possibility that a major limitation of automated theorem provers compared to humans – the generation of original mathematical terms – might be addressable via generation from language models. We present an automated prover and proof assistant, GPT-f, for the Metamath formalization language, and analyze its performance. GPT-f found new short proofs that were accepted into the main Metamath library, which is to our knowledge, the first time a deep learning based system has contributed proofs that were adopted by a formal mathematics community. #### Stanislas Polu (Sep 09 2020 at 08:52): Any feedback/suggestions or questions are obviously welcome. I also hope to meet some of you at AITP next week to discuss it! #### Johan Commelin (Sep 09 2020 at 08:55): Nice! I'll try to read it soon #### Johan Commelin (Sep 09 2020 at 08:55): Abstract looks promising! #### Jason Rute (Sep 09 2020 at 15:12): I also look forward to reading it. (I’m on honeymoon this week, so don’t expect much of a response soon.) #### Jason Rute (Sep 09 2020 at 15:13): However, one quick question: Are you working on applying it to Lean, or is that an exercise for the reader? #### Patrick Massot (Sep 09 2020 at 15:31): Oh great, I remember you had to postpone your wedding because of Covid, but it looks like it happened in the end, congratulations! #### Johan Commelin (Sep 09 2020 at 16:47): @Stanislas Polu I haven't read through everything in detail, because I don't know enough about ML. But I'm very impressed by the fact that GPT-f found several shorter proofs than those that were in set.mm at the time. #### Stanislas Polu (Sep 09 2020 at 18:18): @Jason Rute we're still at an exploratory stage, but short answer is yes, definitely! #### Stanislas Polu (Sep 09 2020 at 18:21): @Johan Commelin Thank you! (I have to admit we were ourselves super excited by this result as well :)) #### Jesse Michael Han (Sep 09 2020 at 20:02): during proof search, is the model conditioned on the proof tree / previously expanded goals? #### Stanislas Polu (Sep 10 2020 at 06:54): No it's not due to the size of the expressions involved being already quite large. That being said, we experimented with conditioning on the top goal and were not able to demonstrate a huge lift. But this was very exploratory so I wouldn't bet my money on it. #### Wojciech Nawrocki (Sep 11 2020 at 00:26): This is impressive! And it makes me quite excited for what the future holds. Two small corrections if you care about that sort of thing: Pg. 8, the ring footnote points to the class rather than the tactic Table 3: Matmath -> Metamath? #### Stanislas Polu (Sep 11 2020 at 09:43): Thanks a lot @Wojciech Nawrocki for the kind words as well as the feedback. Duly noted and factored in the next version of the paper :+1: Thanks! #### Jason Rute (Sep 13 2020 at 21:43): @Stanislas Polu What is considered a valid proof step? GPT-f will return both a theorem and substitutions, which then must unify with the goal. If the substitutions don't unify, then I'm sure it is marked invalid. 
However, what if the theorem isn't in the list of previously proved theorems? What does GPT-f do? 1. Try to prove that theorem, 2. consider this an invalid proof step, or 3. restrict the output to only known theorems? (I assume it is the first option, but I want to check.) #### Stanislas Polu (Sep 14 2020 at 07:52): @Jason Rute Good questions! • If the unification fails, the kernel rejects the proof step and it is not even considered in the proof tree search (not added to the tree or queue, nor valued by the value function). • If the theorem statement generated is not in the theorem database, currently and in the experiments reported in the paper, the kernel rejects it as well. That being said we're experimenting with letting the model prove such conjectures if they are considered interesting by the value function. In that case we simply add the theorem itself as a subgoal (with a special tag to make sure we re-check the distinct variables once a proof is found (DVs are a metamath technicality that is ok to abstract in your thinking and revisit later if you don't know how they work)) and the subgoal is valued and added to the queue accordingly. #### Jason Rute (Sep 14 2020 at 13:25): @Stanislas Polu Another follow up question: When validating or testing the model, do you have any kind of dependency restriction on theorems? For example, in most of these papers, to prove 0 = 0 (if it is in the test set), one must use theorems which came before 0 = 0. I believe the Holophrasm and MetaGen papers do this. The MetaGen paper calls these "background theorems". (This is not perfect however for tactic based provers, since 0 = 0 might be provable with a tactic which came along after the proof of 0=0. I have more thoughts on better ways to do this, but that is for another time.) Does your paper do this? I ask, because without a restriction like this, you could have a much higher percentage of theorems solved. #### Stanislas Polu (Sep 14 2020 at 13:26): Yep this is mentioned in the paper (I believe?). We enforce theorem ordering as we evaluate on set.mm :+1: #### Jason Rute (Sep 14 2020 at 13:37): Sorry. I had trouble finding it. I see it now, on the middle of page 4. Thanks! #### Stanislas Polu (Sep 14 2020 at 13:55): (FWIW the lift you generally get from waiving that constraint is 2-3% success rate) #### Jason Rute (Sep 15 2020 at 03:21): I should mention that there are a few threads talking about GPT-f on the Internet besides this one: #### Jason Rute (Sep 15 2020 at 03:21): As usual, I’m going to try to summarize the paper for an audience who knows more about theorem provers than neural networks. #### Jason Rute (Sep 15 2020 at 03:22): In many ways GPT-f is similar to other theorem provers which have come before, HOList/DeepMath, CoqGym/ASTTactic, TacticToe, etc. What all of these have in common is that they treat theorem proving as a tree search. What has been known for a long time is that one can avoid combinatorial explosion in tree (and graph) search by adopting smart heuristics. What AlphaGo and its successors has taught us is that these heuristics can be entirely learned either from examples or from bootstrapping and reinforcement learning. GPT-f is no different in this regard. (I’m not going to say much more about the specific tree search algorithm used by GPT-f, since I don’t think their approach is especially optimized more than any other similar paper.) #### Jason Rute (Sep 15 2020 at 03:22): Currently GPT-f is a system for MetaMath, but that is just a design choice. 
The MetaMath system is embarrassingly simple, but at the same time user-friendly enough to have developed a large library. That makes it a good candidate for a first experiment. Also, as we will see, GPT-f has no problem handling the most difficult part about MetaMath automation. #### Jason Rute (Sep 15 2020 at 03:22): From a certain point of view, one can think of MetaMath as a tactic-based prover with only one tactic. If one reads a MetaMath proof backward, at every point there is one or more goals, say ( 3 + 2 ) = 5. One can then apply a previously proved theorem, for example transitivity: { A = B, B = C } |- A = C and specify how to substitute the free variables. After substitution, the conclusion must equal the goal. Therefore, in this example the substitutions for A and C must be A = ( 3 + 2 ) and C = 5, while B could be anything. The trick, of course, is to substitute something useful for B. If you choose B = 4 + 1 , then after applying this theorem (backwards), one gets a new goal for each hypothesis: ( 3 + 2 ) = ( 4 + 1 ) and ( 4 + 1 ) = 5. The latter happens to be a theorem (the definition of 5 in MetaMath), which would close that particular goal. #### Jason Rute (Sep 15 2020 at 03:22): In most of the work applying deep learning to (tactic-based) theorem proving, there are four main tasks: • tactic selection: Given a goal, find the best tactic. The nice thing about tactic selection is that there is a fixed list of tactics. Choosing the best thing from a fixed list is easy for deep learning systems. For MetaMath, it is trivial, since there is only one tactic. • premise/lemma selection: Given a goal (and a tactic), find the best theorem to apply (assuming the tactic takes one or more theorems as parameters, e.g. a rewrite tactic). There are multiple ways to do this. Many systems assign each theorem a vector, a key, and assign the goal another vector, a query, and try to find the theorem(s) whose keys most closely match the query. Other systems, try to find the goal most similar in the training data and use whatever tactic and premises that goal used. GPT-f takes a unique approach here as we will see. • parameter selection: Besides any theorems, there are often other parameters that need to be selected for a tactic as well. HOList handles this by avoiding tactics with such parameters (or filling in the parameters with constant values). CoqGym has a fixed, limited grammar from which those parameters can be taken. Other provers allow using local variables and subterms. Still others find a similar example in the training data and use those parameters. For MetaMath, the choice of parameters is especially important and not at all trivial. Previous Metamath AIs (Holophrasm and MetaGen) both use recurrent neural networks to guess at the substitution. This is where GPT-f will shine, since transformers are especially good at “making stuff up”. • value estimation: Finally, to make the tree search more efficient, usually a score is applied to each goal to say how “provable” it is. #### Jason Rute (Sep 15 2020 at 03:23): One comment on the above four steps is that the first three are called a policy and can be done together or separately. Also, in, say, Lean, it is not uncommon to see tactics like apply (lem p) which don’t fit cleanly into the above paradigm. The “premise” is technically lem p but morally the premise is lem and one modifies lem by instantiating the universal quantifier with the term p. 
GPT-f (if applied to Lean) would show a lot of promise for handling situations like this as well. #### Jason Rute (Sep 15 2020 at 03:23): GPT-f is based on a transformer architecture. (See my notes on transformers here.) Without getting into the details, it basically is a (really good!) text generator. You give it a prompt and it completes the prompt. In this case, the system was trained to take prompts like this: GOAL [[]] |- (3 + 2) = 5 PROOFSTEP and then complete that prompt with something like the following (except without the new lines which I added for readability): GOAL [[]] |- ( 3 + 2 ) = 5 PROOFSTEP [[ |- A = B |- C = B ]] |- A = C {{ A : ( 3 + 2 ) }} {{ B : ( 4 + 1 ) }} {{ C : 5 }} <|endoftext|> #### Jason Rute (Sep 15 2020 at 03:23): The way the text is generated allows for generating multiple stochastic completions to the prompt. Each completion is scored and checked by MetaMath to see if it is a valid proof step. If so, it is plugged into the tree search. (The value function is generated similarly, but trained via reinforcement learning similar to HOList/DeepMath. See the GPT-f paper for details.) #### Jason Rute (Sep 15 2020 at 03:23): What separates this paper from other similar works is a few small but important details. Whereas other models might design a custom machine learning architecture for the exact logic at hand, transformers take the viewpoint: “As long as its text, I can handle it.” #### Jason Rute (Sep 15 2020 at 03:24): There have been a number of papers recently showing that transformers can mimic logical reasoning. (The most famous is the paper showing that transformers can solve integrals.) I hope it is very clear to any remaining skeptics that if we are willing to pay for the computer power (more on that in a second…), then we basically have a general purpose tool which can begin to mimic advanced human reasoning on novel tasks. I’m not saying it can come up with a proof of a Millennium Problem, but it can solve straightforward proofs in whatever ITP you want. There is nothing special here about tactic-mode proofs vs term mode vs Isar-style vs first-order logic. In the end, they can all be implemented by a tree search guided by a (transformer) neural network. The only thing stopping us is (1) engineering and (2) computer power. #### Jason Rute (Sep 15 2020 at 03:24): Getting back to the “it’s just text” theme, probably the most surprising thing about this paper is the way the model was pre-trained. GPT-style transformer models are most famously known for generating fake realistic text, be it a screen play or question answering. It is well known that to achieve the best results one must pre-train the model on a large corpus of text, usually public domain books and websites. GPT-f is no different. The model improved by 8 percentage points when pre-trained on such information. It even did a few points better when trained on more mathematical text, including arXiv, GitHub, and Math StackExchange. (I have so many questions about what the model is getting out of these datasets. It is just about recognizing math terminology and parentheses matching, or is it somehow memorizing common proofs?) #### Jason Rute (Sep 15 2020 at 03:24): Another thing which is fascinating about GPT-style transformers is that they don’t use a fixed vocabulary. This can really be a problem for other models. What if a user uses a new definition, or just picks a unique variable name? GPT uses something called byte pair encoding which scans the training data for common groupings of symbols. 
Those then become the tokens. A common word may be its own token, but an uncommon word may be made of multiple tokens, possibly just a token for each letter. This allows any new words at test time. #### Jason Rute (Sep 15 2020 at 03:25): Now, as for MetaMath, the transformer architecture provides a number of interesting possibilities. First, notice that when the transformer returned the theorem to apply, it didn’t call it by name or look it up from a list of theorems. It called it by the statement of the theorem. To be clear, this is a design choice, but it is a really interesting one. There are two related ways to look at this: (1) the transformer has memorized the theorems in the dataset. (2) the transformer says “hey, what I really need here is a theorem of the form …”. Of course, it is probably a little of both cases. However, the second case leads to the possibility of conjecturing. If the “theorem” isn’t in the dataset, then one could set out to prove it anyway. (As @Stanislas Polu said above, they don’t do this yet.) #### Jason Rute (Sep 15 2020 at 03:25): The second interesting thing is that unlike a lot of similar systems, the transformer has little or no problem filling in the variable substitutions. It can just “guess” a substitution. Of course it may not be useful or even type check, but MetaMath can check that the proof step is valid and the tree search will test it for usefulness. @Christian Szegedy has also said he thinks that GPT-style text generation is also a promising best path forward when a theorem prover needs to come up with new ideas. #### Jason Rute (Sep 15 2020 at 03:26): Before I get into the negatives, I want to really commend @Stanislas Polu and team on making this not only into a paper, but into a usable tool that the MetaMath community can use. I think this back-and-forth interaction with the community is what is going to drive AI for ITPs forward. #### Jason Rute (Sep 15 2020 at 03:26): Ok, now for the negatives. Basically, this is a great experiment and I’m glad OpenAI is fitting the bill for this system, but to be clear this is a quite expensive project. It shows what is possible, but it probably isn’t scalable to the average MetaMath user level right now. I doubt any of us could build a comparable system without the backing of a large research lab like Google, Facebook, OpenAI, or DeepMind. #### Jason Rute (Sep 15 2020 at 03:26): It’s well known that transformers are computationally expensive. They require O(n^2) computations to compute a sequence of length n (including the prompt). They also have more parameters that many other neural network types. #### Jason Rute (Sep 15 2020 at 03:27): To run the larger model once over the training data required 20,000 GPU-hours on a V100. Contrast this with the first HOList/DeepHOL paper. While the DeepHOL model took a lot of iterations to train via reinforcement learning (eight V100 GPUs running for an unspecified amount of time), the trained version is something I could run on my Macbook Air. When doing the “reinforcement learning”, the GPT-f model is only iterated twice since it is so expensive to run, compared to the thousands of iterations used by HOList/DeepHOL. #### Jason Rute (Sep 15 2020 at 03:27): To put it in dollars, a V100 GPU-hour costs on the order of $1 per GPU-hour, so this is$20,000 to run an already trained model once across the training data. I’m very curious what their MetaMath proof assistant web-tool is costing OpenAI. 
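To make the byte pair encoding idea mentioned a few messages above concrete, here is a minimal sketch of how BPE merge rules are learned from a corpus; the function name and details are illustrative assumptions, not the tokenizer GPT-f actually uses.

```typescript
// Toy byte-pair-encoding trainer: repeatedly merge the most frequent
// adjacent symbol pair in the corpus. Illustrative only -- real GPT
// tokenizers work on bytes and ship large pre-trained merge tables.
function learnBpeMerges(corpus: string[], numMerges: number): [string, string][] {
  // Represent each word as a list of single-character symbols.
  let words: string[][] = corpus.map((w) => w.split(""));
  const merges: [string, string][] = [];

  for (let step = 0; step < numMerges; step++) {
    // Count frequencies of adjacent symbol pairs across all words.
    const pairCounts = new Map<string, number>();
    for (const word of words) {
      for (let i = 0; i + 1 < word.length; i++) {
        const key = word[i] + "\u0000" + word[i + 1];
        pairCounts.set(key, (pairCounts.get(key) ?? 0) + 1);
      }
    }
    if (pairCounts.size === 0) break;

    // Pick the most frequent pair and record it as a merge rule.
    let best = "";
    let bestCount = -1;
    for (const [key, count] of pairCounts) {
      if (count > bestCount) {
        best = key;
        bestCount = count;
      }
    }
    const [a, b] = best.split("\u0000");
    merges.push([a, b]);

    // Apply the merge everywhere: adjacent "a","b" becomes one symbol "ab".
    words = words.map((word) => {
      const out: string[] = [];
      let i = 0;
      while (i < word.length) {
        if (i + 1 < word.length && word[i] === a && word[i + 1] === b) {
          out.push(a + b);
          i += 2;
        } else {
          out.push(word[i]);
          i += 1;
        }
      }
      return out;
    });
  }
  return merges;
}
```

Frequent substrings of the training text (for Metamath, things like |- or wff) end up as single tokens, while a rare or novel identifier at test time simply falls back to smaller pieces, which is why no fixed vocabulary is needed.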
#### Jason Rute (Sep 15 2020 at 03:27): Nonetheless, there is a lot of room for optimization. I think I’ve seen five or so papers in the last few months suggesting how to make transformers behave closer to O(n). #### Jason Rute (Sep 15 2020 at 03:28): Also, I’m still of the opinion that since formulas have so much built-in structure, that using some of that structure as an inductive bias is still valuable. It’s been shown (more than once) that transformers trained with tree-based position encodings do much better at symbolic reasoning tasks. However, I also realize that such encodings would limit the pre-training options. I recall N2Formal (@Christian Szegedy) suggesting ideas for pre-training which may be helpful here. Also, it might be useful to try to gather a large dataset of formula-like data from the web parsed into a tree or graph structure. #### Jason Rute (Sep 15 2020 at 03:28): I can also think of a number of other possible optimizations. While having a transformer which guesses everything is a nice experiment, it might still be more efficient to fill in the constrained substitutions using a Prolog like system instead. It also might still be faster to use the theorem database more directly for lookup of theorems. For example, Google recently showed the possibility of using transformers to lookup data from a database. #### Jason Rute (Sep 15 2020 at 03:28): Finally, the holy grail is code generation. Why have an expensive black box when you can have an AI that generates code (custom tactics in this case)? This code would be reusable, fast, and interpretable. Of course, transformers are being used for code generation too. :) #### Jason Rute (Sep 15 2020 at 03:29): One last thought. It is so difficult to compare results since we don’t have standard datasets, but they report a success rate of 56% for their best model, which is much better than the previous SOTA of 21%. I’d love it if they try this out on the HOList dataset so that they can directly compare with Google’s state of the art. Even then however, I feel that the best judge is to put this in the hands of ITP users and to ask them what it does well on and doesn’t do well on. Again, I’m really glad for their engagement with the MetaMath community. #### Jason Rute (Sep 15 2020 at 03:29): Overall, I am really grateful for this paper. It is well-written (if you know a bit about transformers at least), and I think it shows that we have a lot of the tools at least to start building high powered experimental tools. Hopefully, we can then turn to making these systems useful to the average ITP user. I’m really looking forward to what the future brings. #### Jason Rute (Sep 15 2020 at 03:39): @Stanislas Polu can, of course, correct all my misconceptions. :) #### Christian Szegedy (Sep 15 2020 at 04:32): • For comparison, on the HOl-Light corpus, we can reach 70% proof success rate (67% without any imitation on exsisting proofs). I'm not sure how it compares, but my guess would be that HOL-Light is a bit harder the metamath. • In the beginning of the year, we have also tried the approach of using transformer in autoregressive manner to generate the whole tactic invocation together with all the substitutions, theorem labels, etc. and while the results seemed somewhat comparable, it was so much more expensive to run computationally, that it just did not seem to make a lot of sense to us. That's why we started to look for tasks where generative transformers would shine: conjecturing, filling in assumptions, etc. 
I don't say that GPT-f does not make sense, we have a very similar system for more than half a year, but it was simply not justifiable from a practicabality point of view at this point in time, especially that HOLst was also criticized for being slow, while we use a few minutes per proof attempt on CPU. #### Stanislas Polu (Sep 15 2020 at 07:23): @Jason Rute thank you for the thoughtful comments. This is a great summary and glad to see our work put in perspective this way. Commenting quickly on the 20k GPU.hours. They are required for the data generation/augmentation process (running proof searches on the train set) which is in turn used to train the value function you refer to in your post (same was true for alphago/zero, the data generation, aka exploration, is where you pay the price). So, the number is definitely accurate but just wanted to point out that the training of the model itself is less expensive (more like 500-1k GPU.hours). As for the user experience of the ITP, I'll be demonstrating it today at AITP'20. I'll gladly make another video for folks here if interested. As you'll see the model is fast enough for it to be a somewhat pleasing experience (once you've climbed the Metamath learning curve that is :p) (Also, I'm pretty confident OpenAI will be happy to foot the bill, for the foreseeable future, for usage of these systems once we manage to port them to Lean, as we do today with Metamath. The main challenge/problem I believe and as you point out is to make a useful system and share it effectively with the community :+1:) #### Stanislas Polu (Sep 15 2020 at 07:27): To attend the talk same as what Daniel said here. Ping me if interested, it's at ~15h CET (see AITP Program) #### Johan Commelin (Sep 15 2020 at 07:28): But do I understand correctly that for the time being we will depend on external computing power to be able to run GPT-f? You can't extract a trained artifact that I can run on a 16 GB RAM + modern desktop CPU/GPU, or can you? (And expect it to spit back results withing seconds instead of days.) #### Stanislas Polu (Sep 15 2020 at 07:31): We can extract a trained artifact that could run correctly on one modern GPU for inference. It's just that this trained artifact, today, is served through the OpenAI API. Ok, cool #### Johan Commelin (Sep 15 2020 at 07:32): I was expecting that simply executing the thing would already require > 20GB RAM Not yet :p #### Johan Commelin (Sep 15 2020 at 07:33): I mean, just fitting the parameters into memory is already a mild feat (-; #### Jason Rute (Sep 16 2020 at 13:03): Christian Szegedy said: • For comparison, on the HOl-Light corpus, we can reach 70% proof success rate (67% without any imitation on exsisting proofs). The HOList website currently lists 60% as the best success rate. Was there anything big you did to get it to 70%/67%, or is it just combining the two techniques from your last two HOList papers (GNNs and better reinforcement learning)? Is there another HOList paper coming? #### Joe Palermo (S2'17) (Oct 04 2020 at 22:15): Hi @Stanislas Polu - I’m wondering how you converted proof step substitutions generated by your model back into MetaMath (i.e. a sequence of labels). Of course in actual MetaMath proofs one needs to construct the terms that get substituted in. Constructing a complex term could require many steps in a MetaMath proof. Which specific steps (labels) are required is represented only implicitly in the substitutions. 
It’s not obvious to me how to write a verifier for proof steps written in your substitution format. Can you offer any pointers? Many thanks! #### Mario Carneiro (Oct 04 2020 at 23:47): @Joe Palermo (S2'17) I can't speak for Stanislas Polu 's implementation, but it is fairly common for metamath proof assistants to work directly with intermediate statements and infer the substitutions by unification. First order unification is not a particularly hard problem, it is decidable with a fast algorithm (in contrast to higher order unification, which lean has to deal with sometimes and is undecidable in general). #### Stanislas Polu (Oct 05 2020 at 07:09): Hi @Joe Palermo (S2'17) the language model generates the substitutions, then we operate at the term level in our kernel. To ensure correctness, we indeed have to prove that expressions are properly typed, as we work on proof we do by checking that the term we operate on comply to the Metamath grammar (which is encoded by the wff, class, setvar axioms). Only when we want to check the proof with another kernel, we dump the proof in native Metamath format, using our parse trees to generate the full proofs. Hope that answers your question? #### Gabriel Ebner (Oct 05 2020 at 07:10): If I read the paper correctly, the model doesn't generate the lemma names. Do you just try all lemmas in set.mm and see which one fits the statements produced by the model? #### Stanislas Polu (Oct 05 2020 at 07:13): @Gabriel Ebner indeed we generate the theorem terms and check that they exist in set.mm. We've observed that it helps the machine learning a lot (which kind of makes sense as it makes the distribution of theorems and the distribution of term substitutions more alike and therefore easier to fit together) #### Joe Palermo (S2'17) (Oct 05 2020 at 13:53): @Stanislas Polu Yes, thank you! #### Joe Palermo (S2'17) (Oct 07 2020 at 21:39): @Stanislas Polu Would I be correct in thinking that the axioms required to define this grammar are all the axioms of the form: <label> $a wff <expression>$. <label> $a class <expression>$. <label> $a setvar <expression>$. (seems that this particular pattern doesn't occur in the set.mm database) #### Jason Rute (Oct 07 2020 at 22:12): Since we are asking questions, as for @Gabriel Ebner’s question, can you look up the lemma directly from the output of GPT-f? In other words, can you plug the output into a hash map and get the lemma (maybe after standardizing variable names)? Or do you need to do the O(n) operation where you try to unify every earlier occurring lemma against the output of GPT-f. #### Mario Carneiro (Oct 07 2020 at 22:19): @Joe Palermo (S2'17) Yes, metamath terms are given by a CFG where each $a with a typecode other than |- contributes one production (and the nonterminals are wff, class, setvar). The$f variable declaration commands also contribute productions, so the setvar nonterminal is not empty, it only contains variables (as terminals). #### Joe Palermo (S2'17) (Oct 07 2020 at 23:42): @Mario Carneiro Thank you! #### Joe Palermo (S2'17) (Oct 07 2020 at 23:43): @Jason Rute I'm wondering the same thing... #### Mario Carneiro (Oct 08 2020 at 01:14): @Jason Rute This problem seems pretty similar to the problem of simp: Given a term and a collection of lemmas, find one that unifies. 
You can do it pretty efficiently with a discrimination tree, and in fact this process is fast enough that the mmj2 metamath proof assistant has a feature where every open goal is automatically unified against everything in the library any time you do anything, as if simp was constantly running in the background. It applies every lemma that makes the goal "smaller" in some sense, except for some blacklisted lemmas, and it's quite convenient and effective. #### Stanislas Polu (Oct 08 2020 at 06:34): @Jason Rute we're just looking up in a hash table constructed from set.mm (enforcing ordering here) :+1: #### Stanislas Polu (Oct 08 2020 at 06:35): The theorem statement generated by GPT-f is pre-unification so it's a simple lookup. GPT-f also generates substitutions that are then checked against the grammar, applied, and the fact that they unify is verified. #### Stanislas Polu (Oct 08 2020 at 06:37): @Joe Palermo (S2'17) As explained by @Mario Carneiro, yes :+1: #### Joe Palermo (S2'17) (Oct 14 2020 at 16:06): Hi @Stanislas Polu. I’m trying to replicate something similar to the MetaMath environment you developed for GPT-f. @Mario Carneiro mentioned to me that he described to you a “KLR(0)” parsing algorithm for MetaMath expressions. Was this the one you ended up implementing? In the paper you refer to a “modified LR(0) parser”. #### Stanislas Polu (Oct 14 2020 at 16:42): Yes we implemented an LR(0) parser with basic backtracking as the amount of backtracking necessary to parse the Metamath grammar is well behaved and limited in practice. It's somewhat different than what is implemented in mmj2, in case you looked into it, where the "backtracking" is done at parser construction time. #### Joe Palermo (S2'17) (Oct 16 2020 at 17:01): @Mario Carneiro would you mind sharing some of the documentation on that KLR(0) parser here? #### Mario Carneiro (Oct 16 2020 at 17:18): Sure, data dump coming right up. The following example walks through the parsing of { <. x , y >. } vs { <. x , y >. | ph } in the set.mm grammar, which yields a shift reduce conflict when parsed with LR(0). (This description assumes you know a bit about how LR(0) parse table generation works; see the wikipedia page for a primer.) #### Mario Carneiro (Oct 16 2020 at 17:18): This is the new part of the code that distinguishes the KLR parser from LR(0). A "conflict" is a place where an LR(0) parser would fail outright. #### Mario Carneiro (Oct 16 2020 at 17:18): During parse table generation each state is associated with a bunch of partially read productions that agree on a common prefix, and in this case you are stuck at the state: -> { o <. setvar , setvar >. | wff } -> { o class } This is known as a shift-reduce conflict, and usually shifting is the right answer, so that's built in as a heuristic, which is why lark takes the first option over the second. But neither choice is "correct", because this changes the grammar - you are now rejecting a string that should be valid to the grammar ({ <. x , y >. } in this case) - so essentially you are finding a "closest LALR(1) approximation" to the grammar when you use lark with this heuristic. To resolve this, the question is what to do from that state if you read a <.. We haven't actually hit the conflict yet. In the first production it's clear that we should step to -> { <. o setvar , setvar >. } | ph }, but the second production requires us to look at the class nonterminals that start with <.. 
(In fact, in the first state we also have all productions for the class nonterminal, like -> o 1 and -> o ( class u. class ) and many others. These are represented in a special way in LRParser.java to save space.) Let's step through the states that the example takes. The shift <. step takes us to: -> { <. o setvar , setvar >. | wff } -> <. o class , class >. all setvar -> o rules all class -> o rules and shift x takes us to: -> x o Since we are now at the end of a production, we can reduce with setvar -> x at this point, and there are no competing productions so this is safe. This reduce x edge pops the stack and acts like a shift setvar edge from the previous step, leading to: -> { <. setvar o , setvar >. | wff } -> setvar o The -> setvar o comes from the class -> setvar production. Now we are stuck, because we can both reduce with this production (which gives us a cv node) and shift a comma to continue with the first production. This is a shift-reduce conflict, and lark at this point will throw away the reduce option and shift here, leading to -> { <. setvar , o setvar >. | wff } all setvar -> o rules which is not correct, as we have lost the ability to parse { <. x , y >. }. #### Mario Carneiro (Oct 16 2020 at 17:18): What I do instead to resolve this is "pre-compositing" the rules. We first got in trouble at -> { <. setvar o , setvar >. | wff } -> setvar o which is a "bad state" because of the shift-reduce conflict. We want to remove the reduce node, and we do so by backing up to see how we got here. We obtained this state by shift setvar applied to -> { <. o setvar , setvar >. | wff } -> <. o class , class >. all setvar -> o rules all class -> o rules and we want to repair this state so that we don't hit the train wreck one step from here. So we delete the offending rule -> o setvar and add the composition of class -> setvar with class -> <. class , class >. as a new "composite rule" which looks like a production class -> <. setvar , class >., so that the "before" state instead looks like -> { <. o setvar , setvar >. | wff } -> <. o class , class >. -> <. o setvar , class >. all setvar -> o rules all class -> o rules except -> o setvar and we shift setvar from here instead, getting to -> { <. setvar o , setvar >. | wff } -> <. setvar o , class >. and we have safely cleared the conflict. (The modified "before" state is not a real state, it is only used to calculate this new state. This state is replacing the original shift-reduce bad state as the result of shift setvar applied to -> { <. o setvar , setvar >. | wff } -> <. o class , class >. all setvar -> o rules all class -> o rules .) #### Mario Carneiro (Oct 16 2020 at 17:19): To finish the example off, let's make it to the end. The next step is shift , which takes us to -> { <. setvar , o setvar >. | wff } -> <. setvar , o class >. all setvar -> o rules all class -> o rules (and shift y takes us to the simple -> y o state, so we plan to reduce there and come back here with shift setvar), and shift setvar from here takes us to -> { <. setvar , setvar o >. | wff } -> setvar o which is another shift reduce conflict. Again, we analyze the conflict to find out what to composite. We want to apply the class -> setvar *production here, which was considered one step ago because closure over -> <. setvar , o class >. required us to add the -> o setvar production to the state. So we composite class -> <. setvar , class >. and class -> setvar to get a new class -> <. setvar , setvar >. 
production, create a temporary modified version of the previous state -> { <. setvar , o setvar >. | wff } -> <. setvar , o class >. -> <. setvar , o setvar >. all setvar -> o rules all class -> o rules except -> o setvar and use it to calculate the new result of shift setvar, which is -> { <. setvar , setvar o >. | wff } -> <. setvar , setvar o >. and we have cleared another shift reduce conflict. There is still one more to go, though, since the next step is shift >. which takes us to -> { <. setvar , setvar >. o | wff } -> <. setvar , setvar >. o which is again a shift reduce conflict. Now we must backtrack a lot, because we have to go back 5 steps (the length of the current reduce candidate) to find out which production required us to put -> <. setvar , setvar >. into the mix. In fact, five steps ago this production was not even -> <. setvar , setvar >. at all but rather -> <. class , class >. but this makes no difference, we need two productions to do the compositing. This is the first state I posted, which looks like -> { o <. setvar , setvar >. | wff } -> { o class } all class -> o rules where among the class rules is -> o <. class , class >.. The reason the class rules are there is because of the -> { o class } rule, so we composite class -> { class } with class -> <. setvar , setvar >. to get the temporary state -> { o <. setvar , setvar >. | wff } -> { o class } -> { o <. setvar , setvar >. } all class -> o rules except -> o <. class , class >. and now shift 5 steps forward along <. setvar , setvar >. to get the repaired state -> { <. setvar , setvar >. o | wff } -> { <. setvar , setvar >. o } Now we have finally cleared the last hurdle, as we can clearly now either shift | or shift } depending on what we see next to parse both { <. x , y >. | ph } and { <. x , y >. }. For the purpose of the example let's say we shift } so we get to state -> { <. setvar , setvar >. } o and a reduce is unambiguous. #### Mario Carneiro (Oct 16 2020 at 17:19): But what are we reducing anyway? I've been talking about compositing rules, and what I haven't been showing here is that each production is associated to a partial expression tree. You can imagine them as lambda expressions. The original rules from the grammar will have a result like \e1 e2. (cpr e1 e2) for the production class -> <. class , class >., which is to say that we take the two class expressions in the brackets and put them into the two arguments of a cpr node. The arguments aren't always in parse order, for example I think wal takes its arguments in the order wal ph x (because the \$f variable declaration of vx comes after wph), so the production wff -> A. setvar wff has result \e1 e2. (wal e2 e1). Now compositing rules has the effect of a composition of these two expressions. In the first part we composited class -> <. class , class >. with class -> setvar, with associated results \e1 e2. (cpr e1 e2) and \e1. (cv e1), so we insert the cv expression in for e1 of the cpr expression to get a new result \e1 e2. (cpr (cv e1) e2) for the new production class -> <. setvar , class >.. Similarly, the production class -> <. setvar , setvar >. is formed by inserting cv in for e2 in this new production, resulting in \e1 e2. (cpr (cv e1) (cv e2)). And finally, we composited this expression with the class -> { class } production with result \e1. (csn e1), and this composition yields, for the composite rule class -> { <. setvar , setvar >. }, the result \e1 e2. (csn (cpr (cv e1) (cv e2))). This is what we reduce with. So for the example { <. 
x , y >. }, we first reduce using setvar -> x := vx, then setvar -> y := vy, then class -> { <. setvar , setvar >. } := \e1 e2. (csn (cpr (cv e1) (cv e2))) to get the final parse tree (csn (cpr (cv vx) (cv vy))). #### Joe Palermo (S2'17) (Oct 19 2020 at 12:55): @Mario Carneiro Thanks very much! Might take me some time to wrap my head around this since I don't know much about parsers. I'll get back to you if I have questions. #### Mario Carneiro (Oct 19 2020 at 12:55): It is a repost, so the context might not be exactly right for the venue. Feel free to ask if something is out of context. #### Brando Miranda (May 24 2021 at 12:16): I'm curious @Stanislas Polu how different/similar is GPT-f vs the GPT used for coding (https://analyticsindiamag.com/open-ai-gpt-3-code-generator-app-building/)? #### Stanislas Polu (May 24 2021 at 13:33): @Brando Miranda the architecture is the same (smaller model similar in size to GPT-2) but the training objective is quite different (see section 4.2) :+1: #### Brando Miranda (May 24 2021 at 15:01): Stanislas Polu said: Brando Miranda the architecture is the same (smaller model similar in size to GPT-2) but the training objective is quite different (see section 4.2) :+1: Thanks Stanislas! I appreciate the message. I will check that out. Out of curiosity, is there a reason you preferred a transformer model, e.g. GPT, rather than an enumerator with a neural recognition model (e.g. as in DreamCoder, DeepCoder, and related work in that direction)? #### Stanislas Polu (May 24 2021 at 16:27): Wellll... DreamCoder and the GPT-f objective are not too dissimilar. You could view proof search as the wake phase, the model's conjecturing capabilities (generating theorem statements during proof search that are not part of the formal library) as the abstraction phase, and attempting to prove them as the dream phase. #### Jason Rute (May 26 2021 at 01:45): Brando Miranda said: I'm curious Stanislas Polu how different/similar is GPT-f vs the GPT used for coding (https://analyticsindiamag.com/open-ai-gpt-3-code-generator-app-building/)? You probably know this, but my understanding of GPT-3 applications is that one doesn't retrain the model. Instead they come up with a prompt to extract information out of the model. For example, to find the capital of England, you would give GPT-3 a prompt like France => Paris, China => Beijing, England => and the model will complete the string with "London". It's crazy, but with good prompt engineering, you can take this really far. In this way, GPT-3 can code, since there was a lot of code (using standard programming languages) in the training data, and all one has to do is find a good way to engineer prompts. GPT-f (as Stan said) uses a smaller model than GPT-3 (more like GPT-2) and therefore can be fine-tuned as is standard with GPT-2 (and BERT) applications. GPT-f was pretrained on web text and web math, and we fine-tune it with tasks like GOAL P Q : Prop ⊢ ((P → Q) → P) → P PROOFSTEP apply or.elim (em P), following the pattern GOAL <TacticState> PROOFSTEP <Tactic>. When using it to predict tactics, you give it the prompt GOAL <TacticState> PROOFSTEP and it fills in the tactic. (The main insight in our paper is that we get a big boost by also co-training it with a number of other tasks. These tasks are not used for prediction, but nonetheless they really help our model. The details are in the paper.)
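As a concrete illustration of the GOAL ... PROOFSTEP format quoted earlier in this thread, here is a small sketch of how a client might build such a prompt and pull the theorem statement and substitutions back out of a completion. The function names, the regular expression, and the result shape are assumptions for illustration, not the actual GPT-f implementation.

```typescript
// Sketch of formatting a proof-step query and parsing the completion,
// based on the GOAL/PROOFSTEP examples quoted above. Names, regexes and
// the shape of the parsed result are illustrative assumptions only.
interface ProofStep {
  theoremStatement: string;              // e.g. "[[ |- A = B |- C = B ]] |- A = C"
  substitutions: Record<string, string>; // e.g. { A: "( 3 + 2 )", B: "( 4 + 1 )", C: "5" }
}

function buildPrompt(goal: string): string {
  // The model was fine-tuned on "GOAL <TacticState> PROOFSTEP <Tactic>",
  // so at query time we stop right after PROOFSTEP and let it complete.
  return `GOAL ${goal} PROOFSTEP`;
}

function parseCompletion(completion: string): ProofStep {
  // Substitutions appear as "{{ Var : term }}" blocks after the statement.
  const substitutions: Record<string, string> = {};
  const substRe = /\{\{\s*(\S+)\s*:\s*([^}]+?)\s*\}\}/g;
  let match: RegExpExecArray | null;
  while ((match = substRe.exec(completion)) !== null) {
    substitutions[match[1]] = match[2];
  }

  // Everything before the first "{{" (and before the end-of-text marker)
  // is the statement of the theorem being applied.
  const stmtEnd = completion.indexOf("{{");
  const statement = (stmtEnd >= 0 ? completion.slice(0, stmtEnd) : completion)
    .replace("<|endoftext|>", "")
    .trim();

  return { theoremStatement: statement, substitutions };
}
```

A parsed step like this would then go to the Metamath kernel, which checks that the statement exists in set.mm and that the substitutions actually unify with the goal, exactly as described above.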
# Game Actor and System Architecture

## 4 posts in this topic

Hey guys, I've been looking through some books and online sources on the topic of game engine architectures and how actors factor in. A big one was this thread right here (http://www.gamedev.net/topic/617256-component-entity-model-without-rtti/page-2). The way I understood it is like this:

Actor Component: Defines some relatively independent data that represents some isolated attribute of a larger being. For example, an actor for a gun might have a component for the gun's model, a component for the amount of ammo, and a component for the damage properties of the gun.

Actor: A list of actor components.

System: Runs the game logic and has a list of actors on which the system operates. As an example, the physics system has a list of actors that have a physics object, which it uses to check for collisions, and it notifies the actors and their components when a collision happens.

This is where things get kind of shady. A system is supposed to carry out game logic, but it doesn't make sense for all the game logic to be done in a system. Using the physics system example, it makes sense for the system to find collisions, but when a collision happens it doesn't always mean calculating the reflection of both objects. Sometimes I might be colliding with ammo, so I should be picking it up instead. Stuff like that doesn't make sense to be done in the system but rather in the actor/their components. This works nicely, but then it makes defining the components a bit more iffy. If the ammo actor is supposed to have some way of reacting to a collision, how does the physics system know which component it should be looking for? There might only be one type of component that is a physics collision model, which could describe the collision model for the ammo, but that same component could be used for a rigid body on another actor which should react to a collision by physics laws.

So the way I understand it, here is how it roughly looks right now:

```cpp
class IActorComponent;  // forward declaration so Actor can hold pointers

class Actor {
    std::vector <IActorComponent*> m_ActorComponents;
};

class IActorComponent {
public:
    // will be overridden and will have some new properties
    virtual bool VInit () = 0;
    virtual bool VDestroy () = 0;
    virtual ~IActorComponent () {}
};

class ISystem {
public:
    virtual void VInit () = 0;
    virtual void VUpdate (unsigned int deltaMs) = 0;
    virtual void VDestroy () = 0;
    virtual ~ISystem () {}
};
```

And here is an implementation:

```cpp
class CollisionModelComponent : public IActorComponent {
    std::vector <Vertices> m_VertexArray;
};

class PhysicsSystem : public ISystem {
    std::list <Actor*> m_Actors;

public:
    void VUpdate (unsigned int deltaMs) override {
        for (Actor* actor : m_Actors) {
            bool collided = /* collision test for this actor */ false;
            if (collided) {
                // What do we look for here? How do we know to run the
                // ammo collision response or the rigid body response?
            }
        }
    }
};
```

You could make a collision response actor component which tells the physics system how to respond to a collision, but then you have an issue where the ammo collision response has to have access to the ammo component. In my code, the actors are created from XML files and each actor is created the same way through a factory class. In it, I loop through all the nodes of an XML file and apply the properties to the given component at hand. All components override the virtual VInit function, which takes no parameters. If I wanted to create a dependency between the ammo component and the collision response component, I would need to somehow pass the ammo instance to the collision response through the init, but not all components need a dependency, so it doesn't make sense to have it by default pass a pointer to some actor component through VInit.
There could also be cases where we have multiple dependencies, which complicates the process. Is there another way to do this, or some way to restructure or apply constraints in order to make this architecture work? It's a really clean architecture if one were able to make everything separable. Any help?

##### Share on other sites

I haven't read (yet) the thread you've cited. The following is just the approach I'm following in my own implementation of CES (which uses sub-systems, and allows for both data components and behavior components, the latter as some kind of plug-in for sub-systems).

Actor entities can be equipped with components that belong to perception. Besides visual and aural perception (I'm playing with thoughts about olfactory perception), there is also touch perception. All of them are used for AI purposes, but the latter one is also used to apply physical damage. Conceptually, the interaction of a stimulus (something that can be perceived) and a receptor (something that detects stimuli and assesses them against thresholds) results in an actual perception.

So this concept distinguishes collision for touch-perception purposes from general rigid body collision. It introduces a specific group of colliders (the geometry of the touch stimuli, usually line-strips) and collidees (the geometry of touch receptors, usually some bounding volumes), between which a unilateral collision detection has to be done. The immediate result of collision detection is just to hand over the stimulus to the receptor for further processing. The linked-in damage sub-system isn't sufficiently parametrized with the boolean "there is a collision". Instead, it uses attributes of the stimulus together with "the stats" to compute the resulting damage (which may perhaps be zero).

I'm running a sub-system called SpatialServices. This sub-system allows for unilateral collision detection (and proximity detection for, e.g., the other senses) as described above. The physics sub-system, perhaps a black box, isn't used for this purpose.

Hope that helps a bit

Edited by haegarr

##### Share on other sites

What I have done for collision-type work is, first, define what objects each entity can collide with (collision meaning they cannot occupy the same space... think top-down 2D game). So, an entity can collide with a bullet, and it can collide with a wall. It CAN'T collide with a teleporter, but it can use it if it's touching it.

Also, I would create a collision component and give it to each entity that is colliding with another. So, the collision component has a link to the other entity that is being collided with. If it's a bullet hitting a player, then the damage system would remove the bullet entity (and thus the collision entities), and damage the player entity. If it's a wall hitting a player, there isn't a system defined for it, so the physics system just handles it and does the proper collision response. Once the player stops touching the wall, the collision component is removed between the two.

That might not be the best way to do it, but it should give you an idea of other ways to do this.

##### Share on other sites

Hmm okay, I'm starting to see a path through this. The perception idea is super cool, but I probably don't need to implement something as complicated as that for my game in terms of touch, sound, smell, etc. with the stimuli. It is something I will want to look at in the future just because it sounds so cool and plays well with some other ideas I have bouncing around in my head.
Okay so I think I figured out a nice way of organizing everything: To start off, actors are as always just a id and a collection of components except now they are a collection of component id's instead of pointers. Components of each type are instead stored in each of their own component managers which might be just a list, a vector or a map. This would allow for future endeavors to compress the components since we know each component manager only holds variables of one component type. As an example, if my component manager holds orientation components for all actors, it could apply some sort of spacial optimization to all the components or reuse similar coordinates if such compression where ever needed. It also has the capability of being more data oriented although since my project isn't going to be huge, I'll just leave them as maps. Each component manager has to only implement simple methods like add/destroy component and get component manager type. Components are same as always except now initialization will be a little different. So before, I just had a Init method for the actor to call Init on all components but now I added a post init which could be used to resolve all component dependencies. A orientation component won't have a dependency so its PostInit method will be empty. But something like a physics collision component could add in post init a dependency to the orientation by going through the owner actor pointer and searching for a component of type orientation. I could also have it add a collision response component/s which could be used to resolve collisions in the physics system when they happen. The benefit of post init is that we know all components are created, we are now just linking them together sort of like your program compiles everything, and then links. In PostInit, we could also attach our components to all the systems they have to be attached to. So a physics collision model could check to see if a orientation and collision response component exist on its current actor. If they do, it can link them up and attach itself to the physics system. Otherwise, it could signal an error or maybe run a default although I would rather signal an error. As for BeerNutts method of solving the collisions, I think with the system I described above you could implement it since I sort of wrapped my head around settling dependencies between components. I do have two options however. I could make multiple collision responses for collision with different types of actors (although this creates a issue since a actor doesn't really have a type, its just a bundle of components). Or could make one large collision response component that handles multiple collision types. Both are a bit weird since a actor doesn't have a type. Would you somehow grab it from the collision model component which could potentially hold a type or add a type to each actor by default? There is another thing that bothers me that might be a bit more physics related but still something I think needs to be understood by me. Lets say two teleporters collide (yes this shouldn't ever happen but there could be other similar situations). Both objects have equal priority in terms of resolving the collision so which one would take priority and teleport the other to the teleporting location? 
Since its very likely two colliding actors will try to somehow resolve a collision, if both collision responses want to modify the other actor, it has to be somehow decided which one gets priority over the other and applies the modification first. I was also of thinking of just having a collision response actor that implements rigid body responses to collision, instead of having it in the physics system. That way, the physics system only ever worries about finding collisions and calling the appropriate collision response calls based on whatever priority. By doing so, the actor components technically implement all the systems while the system manages them to have everything work as a group. Thanks for all the help so far!! EDIT: Hmm, looking at it now, I think its better to skip having the component managers in the first place since for now, they don't really make a difference other than take extra implementation time. Maybe in the future I should add them in but everything else should apply as long as id's are changed to pointers. Edited by D.V.D 0 ##### Share on other sites This is where things get kind of shady. A system is supposed to carry out game logic but it doesn't make sense for all the game logic to be done in a system. Using the physics system example, it makes sense for the system to find collisions but when a collision happens, it doesn't always mean calculate the reflection of both objects. Sometimes, I might be colliding with ammo so I should be picking it up instead. Stuff like that doesn't make sense to be done in the system but rather in the actor/their components. This works nice but then it makes defining the components a bit more iffy. If the ammo actor is supposed to have some way of reacting to a collision, how does the physics system know which component it should be looking for? There might only be one type of component that is a physics collision model which could describe the collision model for the ammo, but that same component could be used for a rigid body on another actor which should react by physics laws to a collision. Look at the situation from the opposing side and it might make a bit more sense.  First, your physics system simply steps your simulation and determines where collisions have happened.  It emits those collisions in some fashion for the related actors and it's done.  You then have other systems that listen or inspect the actors for these collisions and respond accordingly. Taking your example, the bullets which are to be collected upon collision are actors maintained in a pickup system because they have a pickup component.  Upon a collision situation with these actors, it looks at the collision component to see whom it collided with.  If the collision was with a player, it simply adds itself to the player's inventory in question and then signals the actor (bullet) to be removed.  In the case of shooting a bullet at another player, this bullet is managed by a damage system because it has a damage component.  Upon collision, the damage system determines the collision actor list, calculates the damage and emits the damage caused to the related actors.  The damage system then signals the actor (bullet) to be removed. Now you can add coins and other pickup items that go into the player's inventory by simply giving them the same pickup component.  The pickup component can have flags that determine what it does upon pickup (add to inventory, increment ammo counter, increment powerup level, etc).  
But in the end, the reaction between the actors is identical: collision implies pickup. Similarly, you can add additional things that do damage, such as melee attacks, castable spells, falling rocks, etc. The damage system is then the place that inspects the collision list of actors for a given actor that can do damage, determines the damage done, and emits those events to the related actors.

Like with anything in programming, break things down into their smallest yet reasonable interactions and you'll see how quickly things can come together, particularly when you are working with an actor/entity system.
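The routing described above can be sketched language-agnostically; the snippet below uses TypeScript for brevity, and every name in it (World, pickupSystem, damageSystem, and so on) is an illustrative assumption rather than part of any engine discussed in this thread.

```typescript
// Sketch of "physics only reports collisions; pickup and damage systems
// decide what they mean". All types and names here are hypothetical.
type EntityId = number;

interface CollisionEvent { a: EntityId; b: EntityId; }

interface World {
  players:  Map<EntityId, { health: number; ammo: number }>;
  pickups:  Map<EntityId, { ammo: number }>;
  damagers: Map<EntityId, { damage: number }>;
  toRemove: Set<EntityId>;
}

// Pickup system: a player touching a pickup collects it.
function pickupSystem(world: World, events: CollisionEvent[]): void {
  for (const { a, b } of events) {
    for (const [playerId, itemId] of [[a, b], [b, a]] as const) {
      const player = world.players.get(playerId);
      const pickup = world.pickups.get(itemId);
      if (player && pickup) {
        player.ammo += pickup.ammo;   // add to inventory / ammo counter
        world.toRemove.add(itemId);   // the pickup entity goes away
      }
    }
  }
}

// Damage system: a damaging entity (e.g. a bullet) touching a player hurts it.
function damageSystem(world: World, events: CollisionEvent[]): void {
  for (const { a, b } of events) {
    for (const [targetId, sourceId] of [[a, b], [b, a]] as const) {
      const target = world.players.get(targetId);
      const source = world.damagers.get(sourceId);
      if (target && source) {
        target.health -= source.damage;
        world.toRemove.add(sourceId); // the bullet is consumed on hit
      }
    }
  }
}
```

Adding a new reaction (teleporters, powerups, traps) then means adding another small system that inspects the same collision events, instead of growing a type switch inside the physics system.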
# Is The Number 4 Irrational?

## Why is 4 not an irrational number?

Is the square root of 4 rational or irrational? A number that can be expressed as a ratio of two integers, i.e., p/q with q ≠ 0, is called a rational number. Thus, √4 is a rational number.

## Is √4 a rational or irrational number?

It is rational. The square root of 4 is 2, which is an integer. A square root whose decimal expansion is endless and non-repeating is irrational; the square root of 4 is rational.

## Is negative 4 a whole number?

The whole numbers are the numbers 0, 1, 2, 3, 4, and so on (the natural numbers and zero). Negative numbers are not considered "whole numbers." All natural numbers are whole numbers, but not all whole numbers are natural numbers, since zero is a whole number but not a natural number.

## Is 4 a whole number or a natural number?

Natural numbers are all numbers 1, 2, 3, 4, … They are the numbers you usually count with, and they continue on into infinity. Whole numbers are all natural numbers including 0, e.g. 0, 1, 2, 3, 4, … Integers include all whole numbers and their negative counterparts, e.g. …, −3, −2, −1, 0, 1, 2, 3, …

## What are irrational numbers between 3 and 4?

Two irrational numbers between 3 and 4 are √11 and √13.

## Is 4 a rational number?

Every whole number is a rational number, because any whole number can be written as a fraction. For example, 4 can be written as 4/1, 65 can be written as 65/1, and 3,867 can be written as 3,867/1.

## Is 1 an irrational number?

The number 1 can be classified as a natural number, a whole number, a perfect square, a perfect cube, and an integer. This is possible because 1 is a rational number, not an irrational one.

## Are irrational numbers rational?

No. In mathematics, the irrational numbers (from the prefix in-, assimilated to ir-, a negative prefix) are all the real numbers which are not rational numbers. That is, irrational numbers cannot be expressed as the ratio of two integers.

## What is a real number?

Real numbers are numbers that include both rational and irrational numbers. Rational numbers such as integers (−2, 0, 1) and fractions (1/2, 2.5), and irrational numbers such as √3 and π (approximately 22/7), are all real numbers.

## Is √4 a real number?

Yes. Not all square roots are whole numbers; many square roots are irrational numbers, meaning there is no rational number equivalent. For example, 2 is the square root of 4 because $2 \times 2 = 4$, so √4 is not only real but rational.

## Is 1/4 a terminating decimal?

A terminating decimal, true to its name, is a decimal that has an end. For example, 1/4 can be expressed as a terminating decimal: it is 0.25. In contrast, 1/3 cannot be expressed as a terminating decimal, because it is a recurring decimal, one that goes on forever.

## Is the real number √625 irrational?

No. The square root of 625 is 25, which is an integer, so √625 is rational.

## Can a negative number be rational?

A number is considered a rational number if it can be written as one integer divided by another (nonzero) integer. Rational numbers can be positive, negative, or zero. When we write a negative rational number, we put the negative sign either out in front of the fraction or with the numerator.

## Are irrational numbers real numbers?

Yes. An irrational number is any real number that cannot be expressed as the quotient of two integers. Each irrational number can be expressed as an infinite decimal expansion with no regularly repeating digit or group of digits. Together with the rational numbers, they form the real numbers.

## How many irrational numbers are there between 1 and 6?
Between any two numbers, however large or small the difference between them may be, there are infinitely many rational as well as irrational numbers. As such, between 1 and 6 too we have infinitely many irrational numbers.

## What is an irrational number between 4 and 5?

The numbers between the squares of 4 and 5, i.e., between 16 and 25, are 17, 18, …, 23, 24. The square root of any of these numbers is an irrational number. For example, √19 is an irrational number that lies between 4 and 5.

## What are two irrational numbers between 2 and 3?

Let us find some irrational numbers between 2 and 3. Numbers such as √5, √6, √7, and √8 work, as these are not perfect squares and cannot be simplified further.

## Is 81 rational or irrational?

81 is rational: it can be written as 81/1. Its square root is 9, which is also rational (and note that −9 is a square root of 81 as well).

## What are some examples of rational and irrational numbers?

Examples of rational numbers are ½, ¾, 7/4, 1/100, etc. Examples of irrational numbers are √2, √3, π, etc.

## Is 0.7 rational or irrational?

The decimal 0.7 is a rational number. It is read as seven tenths and is equivalent to the fraction 7/10.

## What is an irrational number? Give examples.

An irrational number is any number that cannot be written as a fraction of whole numbers. The number π and the square roots of non-perfect squares are examples of irrational numbers. Any whole number that is not a perfect square has an irrational square root.

## What is an irrational number between 4 and 6?

An irrational number between 4 and 6 is, for example, √17 ≈ 4.12 or √23 ≈ 4.80. (Numbers like 4.1 and 4.2 are rational, since they are terminating decimals.)

## Which irrational numbers lie between 2 and 7?

√5, √6, √7, √8, √10, √11, √12, √13, √14, √15, and so on up to √48 — that is, √n for every non-perfect-square n from 5 to 48 (so excluding √9, √16, √25, and √36) — are all irrational numbers between 2 and 7.

## Is 7 irrational?

No. 7 is not an irrational number; it can be written as 7/1.

## Is negative 10 a rational number?

Yes. −10 is a rational number, an integer, and a real number.

## Is √3 an irrational number?

Yes, the square root of 3 is an irrational number.

## Is 2 rational or irrational?

2 is a rational number because it satisfies the condition for a rational number: it can be written in p/q form as 2/1, where the denominator 1 ≠ 0.

## What kind of numbers are irrational numbers?

Irrational numbers are numbers that cannot be expressed as the ratio of two whole numbers. This is opposed to rational numbers, like 2, 7, one-fifth, and −13/9, which can be, and are, expressed as the ratio of two whole numbers.

## Is the number 0 irrational?

No, 0 is a rational number, because it can be written as 0 divided by any nonzero integer; for example, 0/1 = 0. (Division by 0 itself is undefined, which is why the denominator of a rational number must be nonzero.)

## Which is not an irrational number?

Integers are rational numbers, not irrational. All integers, whether positive, negative, or zero, can be written in the form p/q. For example, 2, 3, and 5 are rational numbers because we can represent them as 2/1, 3/1, and 5/1.

## Are fractions irrational numbers?

The definition of rational numbers tells us that all fractions of integers are rational. We will now look at the counting numbers, whole numbers, integers, and decimals to make sure they are rational.
Since any integer can be written as the ratio of two integers, all integers are rational numbers.

What is the square root of 4? The square root of any number can also be figured out using a calculator. When a number is multiplied by itself, the result is said to be the square of that number; here, the value of root 4 is exactly equal to 2. A few square roots from the range 1 to 25:

| Number | Square root |
|--------|-------------|
| 3      | 1.732       |
| 4      | 2           |
| 5      | 2.236       |
| 6      | 2.449       |

What is the square root of 4, in words? A number is a square root of 4 if, when multiplied by itself, the result is 4. There are two numbers that work, since 2 × 2 = 4 and also (−2) × (−2) = 4, so the numbers 2 and −2 are both square roots of 4.
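As a compact summary of the definition used throughout this FAQ (a ratio p/q of integers with q ≠ 0), the rationality check for 4 and its square root can be written out explicitly, together with the standard irrational counterexample:

```latex
4 = \tfrac{4}{1}, \qquad \sqrt{4} = 2 = \tfrac{2}{1}
\quad\Longrightarrow\quad \text{both are rational;}
\qquad
\sqrt{2} = 1.41421356\ldots \ \text{(non-terminating, non-repeating)}
\quad\Longrightarrow\quad \sqrt{2}\ \text{is irrational.}
```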
## File Exchange Pick of the Week

Our best user submissions

# The HDR Toolbox

Posted by Guest Picker. Sebastian's pick this week is The HDR Toolbox by Francesco Banterle. Sebastian is a Senior Customer Success Engineer at MathWorks' Munich office.

Hello everyone,

My name is Sebastian and I work directly with our academic customers in research and teaching. As such, I have the pleasure of getting to know many fascinating projects and meeting interesting people. One of these projects is the HDR Toolbox, which was recently made available on File Exchange; the sources of the project, however, have been growing since 2008.

With High Dynamic Range (HDR) imaging you can create spectacular scenes by combining multiple pictures of the same scene. The pictures are usually created using different exposure times, so they contain varying information. This information can be remapped to ensure that all areas of the output image are properly visible. My first contact with this technology happened during my time as a researcher at the Institute of Image Processing and Computer Vision at RWTH Aachen University. My colleague Johannes had a research project in the area of Multispectral High Dynamic Range Imaging, and I was naturally intrigued to look at the toolbox.

Installation went smoothly for me. The toolbox comes with several demo files, so I changed into the 'Demos' directory and inspected the first example 'demo_build_hdr.m'. It uses the images in the 'stack' directory. You can view them with this code:

```matlab
figure
for i = 0:5
    subplot(2,3,i+1)
    % display the i-th exposure of the stack; adjust the file name pattern
    % to match the images actually shipped in the 'stack' directory
    imshow(imread(fullfile('stack', sprintf('stack_%d.jpg', i))))
end
```

The result of the script 'demo_build_hdr.m' clearly illustrates the combined information in the output image. A lot more details are visible than in any of the individual images.

While creating the source images with different exposure settings, you might introduce another challenge, which is illustrated in a different dataset. I used a different dataset ('stack alignment') and created this image with the script 'demo_build_hdr.m'. Now, the HDR part seems to have worked, but this looks all blurry. If you run:

```matlab
figure
subplot(3,1,1)
imshow(img1)
subplot(3,1,2)
imshow(img2)
subplot(3,1,3)
imshow(img3)
```

you can take a look at the images. So far, they do not clearly indicate the problem. If, however, you combine parts of them, you can see it clearly. (This is not directly similar to what would happen if you used the HDR algorithm, but it has the same challenges.)

```matlab
figure
img_cmp = [img1(1:360,:,:); img2(361:720,:,:); img3(721:1080,:,:)];
imshow(img_cmp)
```

The images are not properly aligned. This will cause problems in the HDR algorithm. 'demo_build_hdr_sift_alignment' gives a demonstration of how to use alignment with SIFT (Scale Invariant Feature Transform) to overcome this challenge. This requires the installation of vl_feat (http://www.vlfeat.org/). So, running this, the output image is cleanly aligned and the HDR algorithm worked.

There are several other demos to give you additional insight. If you want to know more about HDR, you should check out the HDR Toolbox authors' book "Advanced High Dynamic Range Imaging: Theory and Practice", which features the toolbox and a lot more information. Or you can jump right in and try it. Why don't you get your DSLR from the shelf and take a series of photos? I would love to hear from you how things are going. In any case, have a lot of fun.

Sebastian
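For readers who want to see the exposure-combination idea in bare MATLAB before installing anything, here is a toy merge. It is not part of the HDR Toolbox: the file names and exposure times are placeholders, and it assumes a linear camera response and an already aligned stack, which real pipelines do not.

```matlab
% Toy HDR merge (illustrative only; not from the HDR Toolbox).
% Assumes a linear camera response, known exposure times and an aligned stack.
files = {'exp_1.jpg', 'exp_2.jpg', 'exp_3.jpg'};   % placeholder file names
t     = [1/30, 1/125, 1/500];                      % placeholder exposure times [s]

acc = 0; wsum = 0;
for i = 1:numel(files)
    I = im2double(imread(files{i}));               % pixel values in [0,1]
    w = 1 - abs(2*I - 1);                          % hat weight: trust mid-tones most
    acc  = acc  + w .* (I / t(i));                 % exposure-normalised radiance estimate
    wsum = wsum + w;
end
hdr = acc ./ max(wsum, eps);                       % weighted average over the stack
imshow(hdr / max(hdr(:)))                          % crude linear scaling for display
```

A proper HDR pipeline would additionally estimate the camera response curve and apply a real tone-mapping operator instead of the crude scaling in the last line.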
# Might I ever get Question Limit Banned?

I have a hefty number of questions, some very popular and others growing. Do I ever need to fear a question ban on this particular site?

• Maybe, if you posted a string of 30 poor questions averaging -5 in downvotes. I think it has something to do with the quality of one's questions in the most recent n weeks or months, but that's a guess, and I have no clue as to what $n$ might be. I say this because a person can have had $99.5$ percent of their flags deemed "helpful", but over a 7 day period, if declines of one's flags overwhelm those deemed helpful, one may be banned from flagging for a period of time. – amWhy Mar 18 at 17:54
• @amWhy Thanks, just asking because I know there are question bans and I am almost at 100 questions, if not at 100 already. So I am just wondering about that. – EnlightenedFunky Mar 18 at 17:59
• Bottom line, the key here is consistency. Over the course of a week, one poor post with two downvotes that others delete, plus 7 upvoted questions, is not a problem. It's when one reveals a pattern of posting low quality posts that one is in trouble. But indeed, higher rep users have more of a "pattern" established than do newer users. – amWhy Mar 18 at 17:59
• No need to worry, EnlightenedFunky. – amWhy Mar 18 at 18:09
• You can somewhat self-evaluate your quality-ban statistic: data.stackexchange.com/math/query/885476/… if you know your deleted questions. Based on what is visible you're way above the Q-ban threshold, so unless you now start posting utterly bad questions that all get downvoted into oblivion I doubt you can manage to get into a Q-ban. I also believe reputation is factored in (as in, above a certain rep you can't get Q-banned), but that is only hearsay and I don't know the exact cut-off. – rene Mar 20 at 13:24
# broken window

hi there ~ it's a hint-of-rain sleepy wednesday but i must say my feet are a little sore. why, you ask? because my car was broken into saturday or sunday night (i was happily homebound most of the weekend and didn't discover what had happened until monday morning, running late to work. sigh.) and i've had to walk and take the bus basically everywhere. so, back to wearing jeans and not skirts (not a hassle i want to deal with right now) and back to hoarding quarters. but something pretty important and nice: i forgot how much i really like riding the bus! so that's a definite yay.

about the break-in: whoever did it is an idiot. a few years ago i had some major problems with the fancy-shmancy computer that ran the alarm system in my car (warning: vw cabrios are bad bad news), resulting in my being unable to turn the car on for hours at a time sometimes, or the doors locking and becoming impossible to open even with the key. so the helpful slimeball-y dealer disabled my trunk, saying it was somehow affecting the whole alarm thing, and then, thoughtfully, took out my horn without telling me. but he did tell me that to fix the alarm and trunk would cost at least $700. now he thought, since he had sneakily disabled my trunk without really telling me, and furthermore advised me not to lock my doors even manually so that the faultiness of the car alarm wouldn't prevent me from even driving my car, that for sure i would be back, $700 in hand. i mean, after all, i was in l.a. with a pretty new car: how in the world could i manage to leave the car unlocked? and furthermore, since it was so new, only dealers could get the required parts to fix it and his dealership was the biggest and closest to me. ah, the sneaky jerk. anyhow, to make a long story short (too late): i have driven around l.a. for over three years now without once locking the driver's side door or having seen the inside of my trunk. lady luck has been with me, indeed.

so, back to the stupidity of whoever broke in.. one, i had nothing zip nada of value. so what was taken? a hamper full of dirty laundry. arggggh. two, the driver's side door was, as explained, open as ever. so what did the robber do? break the passenger's side door window to get into the car! arrrrrgh. there was no need for that at all: increased chance of getting caught for said robber, and more shock and sadness for me. sux. and knowing someone had my dirty laundry, brrrr, a little scary. such a weird break-in, huh?

.. but for a happy-ish ending that i also feel a little weird about: i found my clothes this morning! walking to the metro station, what do i see on the sidewalk a block or so from my apartment? a heap of clothes, wet from rain and covered with ants and leaves. so i pulled out my favs: a pair of brand new jeans, an old t-shirt of my mom's that i love, a striped blue shirt of my brother's that's just years old and some other stuff. i was (as always) late to work, but i now feel bad about leaving all the rest of the clothes there. at the very least, i should throw it away. it's sad to think of my clothes lying there, unwanted and maybe stared at with disgust by the passersby, evidence of the dirtiness of l.a. also, question: is it really really gross to wear clothing that has been lying out on the street for three days? is there a limit, i.e. undergarments? sigh. it's hard to achieve balance when unplanned disturbing things happen.
of course, if one is already at balance in their lives, then i think it would be easier to keep that balance, but i’m nowhere near that. i’ve been pretty socially isolated recently, notwithstanding school starting (which has been good so far), and that has given me lots of time to think. but also lots of time to vege: reading tawdry novels, bad television, and aimless web surfing. oh well. i do feel less emotionally frazzy and more centered on what i want and what is good for me, so that’s good. unfortunately, thinking isn’t the problem; implementation is always the real challenge.
# Calculate The Percentage By Mass Of Iron In The Haematite Ore

Calculate the mass percentage composition of iron in magnetite, an ore which has the molecular formula Fe3O4.

### How do you calculate the maximum mass of iron? …

Apr 14, 2013 (b) A haematite ore contains 80% by mass of iron(III) oxide. Calculate the maximum mass of iron that can be extracted from each tonne of this ore. Show each step of …

### Calculate the mass percent composition of iron for the ...

Oct 06, 2010 Calculate the percent Fe2O3 in the sample. The mass I get for Fe2O3 is too big, but I don't know how. Chemistry. Iron ores have different amounts of iron per kilogram of ore. Calculate the mass percent composition of iron for each iron ore: Fe2O3 (hematite), Fe3O4 (magnetite), FeCO3 (siderite).

### [Solved] Calculate the mass percent composition of iron ...

Answer to: Calculate the mass percent composition of iron for each iron ore: Fe2O3 (hematite), Fe3O4 (magnetite), FeCO3 (siderite).

### Solved: 1. Calculate The Mass Percent Composition Of Iron F ...

1. Calculate the mass percent composition of iron for the third one of these iron ores. Iron is mined from the earth as iron ore. Common ores include Fe2O3 (hematite), Fe3O4 (magnetite), and …

### FeCO3: Calculate the mass percent composition of iron for ...

Sep 26, 2010 Calculate the density of iron. Chemistry. A 46.9 g sample of iron ore is treated as follows. The iron in the sample is all converted by a series of chemical reactions to Fe2O3. The mass of Fe2O3 is measured to be 11.6 grams. What was the percent iron in the sample of ore?

### How do I calculate the mass percent of iron for the iron ...

Jul 11, 2010 Assume a 100 g sample of each ore. Convert that amount to moles by dividing by the ore's molecular weight, and take the ratio of iron in the ore to the ore itself. Then multiply by the molecular weight of iron to find the mass of iron in that 100 g sample. The mass percent is massFe / massSample × 100%.

### How do I calculate the mass percent of iron of the iron ...

Feb 11, 2012 Find the atomic mass of an iron atom (55.845). Then find the formula mass of FeCO3 by adding the atomic mass of iron to that of carbon (12.0107) and three times that of oxygen (3 × 16.00 = 48). Then divide the atomic mass of iron (55.845) by that total (115.8557) and multiply the answer by 100 to get a percentage: 55.845/115.8557 = 0.48202203258.

### In the preparation of iron from haematite (Fe2O3) by the ...

In the preparation of iron from haematite (Fe2O3) by the reaction with carbon as given below: Fe2O3 + C → Fe + CO2. How much 80% pure iron could be produced from 1 …

### A sample of iron ore haematite contains 80% of pure Fe2O3 ...

A sample of iron ore haematite contains 80% of pure Fe2O3. Calculate the mass of iron in 40 tonnes of the iron ore.

### Average iron content in ore as mass percent Yeah Chemistry

1 g of a sample of iron ore was dissolved in H2SO4 in a titration vessel yielding Fe2+. This solution was titrated using 0.1 M KMnO4. The analysis was repeated five times with the following titration volumes: 5.0, 4.8, 5.1, 5.2 and 5.3 mL. Calculate the average iron content in the ore as a mass percentage. Aw(Fe) = 55.84 g/mol. Any help much ...

### Percentage composition of ores - Metals - National 5 ...

Tenorite (Cu2O) is an ore of copper.
Given that copper has a mass of 63.5 and oxygen a mass of 16, calculate the percentage by mass of copper in tenorite.

### hematite ore mass quantities

Hematite-based low-grade iron ore containing 34.18 mass% iron, 31.10 mass% silica and 7.65 mass% alumina. Wet high-intensity magnetic separation (WHIMS) and reverse flotation (RF) were investigated. In the WHIMS process, 93.08% of the iron was recovered with a grade of 53.22 mass%. How do I calculate the mass percent of iron for the iron ... Jul 11, 2010: Iron is mined from the ...

### Iron from the earth is in the form of iron... Clutch Prep

Iron from the earth is in the form of iron ore. Common ores include Fe2O3 (hematite), Fe3O4 (magnetite), and FeCO3 (siderite). Calculate the mass percent composition of iron in each of these iron ores. Which one has the highest iron content?

### OneClass: Iron is mined from the earth as iron ore. Common ...

Jun 24, 2020 Iron is mined from the earth as iron ore. Common ores include Fe2O3 (hematite), Fe3O4 (magnetite), and FeCO3 (siderite). Calculate the mass percent composition of iron for the first one of these iron ores. Calculate the mass percent composition of iron for the second one of these iron ores. Calculate the mass percent composition of iron for the third one of these iron ores. Which ore …

### Iron ores have different amounts of iron per kilogram of ...

Answer to: Iron ores have different amounts of iron per kilogram of ore. Calculate the mass percent composition of iron for each iron ore: Fe2O3...

### 0.804 g sample of iron ore was dissolved in acid. Iron was ...

A 0.804 g sample of iron ore was dissolved in acid. Iron was reduced to the +2 state and it required 47.2 mL of 0.112 N KMnO4 solution for titration. Calculate the percentage of iron as Fe3O4 in the ore.

### Iron ores have different amounts of iron per kilogram of ...

The mass of Fe2O3 is measured to be 19.5 g. What was the mass of iron in the sample of ore? General Chem. FeCO3: calculate the mass percent composition of iron for the third one of these iron ores. Chemistry. A 3.75 g sample of iron ore is transformed to a solution of iron(II) sulfate, FeSO4, and this solution is titrated with 0.150 M K2Cr2O7.

### Types of Iron Ore: Hematite vs. Magnetite INN

In hematite the percentage of iron by mass is 111.69/159.69 ≈ 69.9%; similarly, in magnetite the percentage of iron by mass is approximately 72.3%. Magnetite has a higher percentage of iron per ...

### Hematite (iron ore) volume to weight conversion

About Hematite (iron ore): 1 cubic meter of hematite (iron ore) weighs 5,150 kilograms [kg]; 1 cubic foot of hematite (iron ore) weighs 321.504 pounds [lbs]. Hematite (iron ore) weighs 5.15 grams per cubic centimeter or 5,150 kilograms per cubic meter, i.e. the density of hematite (iron ore) is equal to 5,150 kg/m³. In Imperial or US customary measurement systems, the density is equal to 321.504 pound ...

### Hematite: A primary ore of iron and a pigment mineral

Uses of Hematite (Iron Ore): Hematite is the world's most important ore of iron. Although magnetite contains a higher percentage of iron and is easier to process, hematite is the leading ore because it is more abundant and present in deposits in many parts of the world. Hematite is mined in some of the largest mines in the world.

### Oxidation – Reduction Titration: Determination of Iron ...

Introduction: In this experiment, oxidation/reduction (or redox) will be used in the titration analysis of an iron compound.
We will use potassium permanganate, KMnO4, as the titrant in the analysis of an unknown sample containing iron to determine the percent iron by mass in the sample.

### Redox Titration Percent Iron (II) Example Graduateway

Your sample mass is part of your data, so you need to find the grams of iron from your titration data. You know the molarity and volume of the KMnO4, and you can determine the gram equivalent mass of iron and the atomic mass of iron. Thus you can calculate the grams of iron present in each of your samples and then the percent iron in the unknown.

### Redox titration (calculating percent iron in sample ...

Apr 18, 2010 Homework Statement: A mass of iron ore weighing 0.2792 g was dissolved in dilute acid and all the iron was converted to Fe2+(aq). The iron(II) solution required 23.30 mL of 0.0194 M KMnO4 for titration. Calculate the percentage of iron in the ore. Homework Equations: Wrote …

### Redox titration (calculating percent iron in sample ...

Apr 18, 2010 0.0194 M KMnO4 × [0.02330 L] × [5 mol Fe / 2 mol MnO4] × [55.85 g / 1 mol Fe] = 0.0631 g Fe. 0.0631 g / 0.2792 g × 100 = 22.6%. Seems like an easy enough question, done several like it before, but something's not right; the correct answer should be 45.3%.

### Extraction of Iron Metallurgy Blast Furnace and Reactions

This kind of iron is called cast iron and has a slightly lower carbon content (2–3%). This is even harder than pig iron. Wrought iron / malleable iron: wrought iron is the purest form of iron commercially available and is prepared from cast iron by heating cast iron in a furnace lined with haematite (Fe2O3).

### Hematite (iron ore) price conversions cost calculator

About Hematite (iron ore): Hematite (iron ore) weighs 5.15 grams per cubic centimeter or 5,150 kilograms per cubic meter, i.e. the density of hematite (iron ore) is equal to 5,150 kg/m³. In Imperial or US customary measurement systems, the density is equal to 321.504 pound per cubic foot [lb/ft³], or 2.98 ounce per cubic inch [oz/inch³].
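Pulling the snippets above together, here is a worked version of the two calculations that recur most often, using rounded atomic masses Fe = 55.85 and O = 16.00. The titration check also shows why the attempt quoted above gives 22.6% instead of the expected 45.3%: permanganate oxidises five Fe2+ per MnO4−, not five per two.

```latex
% Mass percent of iron in haematite, Fe2O3:
\%\,\mathrm{Fe} \;=\; \frac{2(55.85)}{2(55.85) + 3(16.00)}
              \;=\; \frac{111.70}{159.70} \;\approx\; 69.9\%

% Maximum iron from one tonne of ore containing 80% Fe2O3 by mass:
1000\ \mathrm{kg} \times 0.80 \times 0.699 \;\approx\; 559\ \mathrm{kg\ of\ Fe}

% Titration (MnO_4^- + 5\,Fe^{2+} + 8\,H^+ \to Mn^{2+} + 5\,Fe^{3+} + 4\,H_2O):
n_{\mathrm{Fe}} = 5 \times 0.0194\ \mathrm{mol\,L^{-1}} \times 0.02330\ \mathrm{L}
               = 2.26\times10^{-3}\ \mathrm{mol}
\quad\Rightarrow\quad
\%\,\mathrm{Fe} = \frac{2.26\times10^{-3} \times 55.85}{0.2792} \approx 45.2\%
```

The 45.2% obtained here agrees with the 45.3% quoted in the thread to within rounding of the intermediate values.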
# Abstract

The design and evaluation of parachute-payload systems is an important field of application in which numerical analysis tools can make very important contributions. This work describes new numerical developments carried out at CIMNE in this field, which involve a coupled fluid-structural solver intended for the unsteady simulation of ram-air type parachutes and a set of complementary tools aimed at studying trajectory and control-system effects. For an efficient solution of the aerodynamic problem, an unsteady panel method has been chosen, exploiting the fact that large areas of separated flow are not expected under nominal flight conditions of ram-air parachutes. In addition, an explicit dynamic solver based on a finite element technique is chosen for the structure. This approach yields a robust solution even when highly non-linear effects due to large displacements and asymmetric material behaviours are present. The numerical results show considerable accuracy and robustness. An added benefit of the proposed aerodynamic and structural techniques is that they can be easily vectorized and are thus suitable for use on parallel architectures. The main features of the developed computational tools are described in this work and several numerical examples are provided to illustrate the good performance and potential of the proposed techniques. Further improvements of the methodology currently being carried out and future lines of investigation are also presented.

# 1. INTRODUCTION

The numerical simulation of parachutes is a challenging problem, mainly due to the fact that the geometry is complex in design and behaviour and, in addition, varies continuously in time. From the aerodynamic point of view, several factors related to intricate unsteady flow processes must be taken into account. Among these are massive flow separation, complex aerodynamic interactions between the structural components and the presence of large unsteady wakes. The structural analysis requirements are also important. It is necessary to simulate the behaviour of light structures which, in general, are not statically determinate for an arbitrary set of loads (i.e. they behave like a mechanism). Thus, drastic changes in geometry are needed in order to reach an equilibrium state; the structural response is highly non-linear and this may cause severe numerical convergence problems. In addition, the complexity of the structural model is increased by the need to simulate ribbon reinforcements integrated into the fabric and cords for supporting and control purposes. Due to the lack of bending stiffness of the structural components, the materials are able to resist tensile stresses but buckle (wrinkle) under compressive loads. This asymmetric behaviour should also be accounted for. Finally, the nature of the applied forces and the structural response of the parachute add extra difficulties to the analysis. The magnitude and direction of the forces, which are mostly due to pressure loads, are not known in advance but are a function of the deformed parachute shape; therefore, they must be computed as part of the solution. Furthermore, the pressure field presents high sensitivity to changes in geometry, requiring iterative solution procedures. The numerical simulation of parachutes must deal with all the issues listed above while pursuing reliability and reasonable computational costs. The challenges to be faced are significant, and this explains why the current design of parachute systems relies mostly on empirical methods.
As an example, 15 worldwide parachute manufacturers were recently surveyed about the use of computational tools in the design and evaluation of parachute systems [1]. All of the ten responses received denied the use of simulation software, with the exception of computer-aided design (CAD) tools. This is a clear indication that computational mechanics is hardly applied to parachute design. The numerical simulation tools described in this work are intended to address this shortcoming.

## 1.1 Previous developments on parachute simulation at CIMNE

The most challenging situations involved in the numerical simulation of parachutes have been addressed at CIMNE, and several analysis tools have been developed to tackle those problems. At present, CIMNE has gained experience in the field of parachute simulation, mainly through a succession of research projects1 such as PARACIMSA [2], developed in cooperation with CIMSA Ingeniería en Sistemas (hereafter CIMSA), a leading parachute designer and manufacturer [3]. As a result, a coupled fluid-structural solver for parachute simulation was developed. The computational code, based on a stationary low-order panel method coupled with a membrane finite element (FE) implicit technique (including cord and ribbon models), allowed for the prediction of stationary aerodynamic loads acting on the structure of ram-air parachutes (also known as parafoils) as well as the approximate treatment of manoeuvres and reefing. Moreover, a simple model for analyzing parachute inflation processes was also implemented. PARACIMSA has been satisfactorily applied to a variety of parachute models and flight conditions; however, the lack of robustness reported by the users and the need to broaden the range of applications called for upgrading. Several operational problems were found in PARACIMSA, such as its failure to achieve in many cases the equilibrium position of the deformed structure, serious convergence problems due to numerical misbehaviours at trailing edges (presumably due to body-wake intersections) and an elevated computational cost. In addition, the stationary approach followed in PARACIMSA posed restrictions on the range of problems that could be studied, since the real behaviour of a parachute is highly unsteady in most of the flight stages. Consequently, it was highly necessary to adopt a different approach in order to improve and extend the capabilities of the simulation code as well as to increase its robustness and computational efficiency. In view of the magnitude of the changes needed, the development of a new simulation code was considered to be the most adequate and feasible choice. This task was undertaken relying on the experience gained in the field during all the previous developments.

## 1.2 Design guidelines for CIMNE's new generation of parachute simulation tools

The new developments were aimed at obtaining a versatile tool for the design, evaluation and analysis of general parachute systems. The guiding principles in the design of this tool were mainly three:

• The analysis capabilities of PARACIMSA were to be maintained in the new developments.
• The new solver should be able to deal with a wider range of problems and should be reliable enough to solve real flight flow situations accurately.
• Code robustness and computational efficiency should be a priority.
Following these guidelines, a full unsteady approach was adopted for solving the coupled aerodynamic-structural problem, placing special emphasis on code modularity, expandability, robustness and computational efficiency. Additionally, in order to provide a complete set of implementations for parachute design and analysis, a series of complementary tools aimed at studying trajectory dynamics and the effects of guidance and control systems was developed; these tools are planned to be integrated into the main analysis code. In the following, the main features of the parachute simulation tools currently being developed at CIMNE are described and several numerical applications are presented with the aim of illustrating the performance and potential of the proposed techniques. The rest of the document is organized into 7 main sections. The core features of the coupled fluid-structural solver are given in Section 2. The user-friendly code interface developed is presented in Section 3 and numerical applications are shown in Section 4. Preliminary additional tools intended to perform guidance, navigation and control analyses are briefly described in Section 5. The conclusions of this work are presented in Section 6 and, finally, current work being conducted in the field and guidelines for future developments are outlined in Section 7.

# 2. A COUPLED SOLUTION FOR PARACHUTE SYSTEMS

In view of the important challenges involved in modeling parachute systems, the choice of the structural and aerodynamic solvers as well as the coupling methodology were thoroughly examined from two different points of view: firstly, considering the capabilities of the techniques to deal with the prototypical situations encountered during the flight of parachutes; secondly, evaluating their robustness and the chances of achieving low computational costs through efficient numerical implementations. As regards structural modeling, it was decided to use an FE dynamic structural solver. An unsteady analysis is not affected by problems caused by the lack of a definite static equilibrium configuration. Moreover, since for dynamic problems the structure is constantly in equilibrium with the inertial forces, the solution is unique. Even when only the long-term static response is sought, the dynamic approach offers clear advantages. Furthermore, the extension to transient dynamic problems becomes trivial. In spite of the fact that the structural solution approach is general and can be applied to any kind of parachute system, the computational cost of a general flow solution was not affordable from a practical point of view (at least during the first stages of the work) and a decision had to be taken regarding the scope of the aerodynamic solution. Consequently, following previous developments, the focus was initially placed on ram-air type parachutes, for which a potential flow approach is valid since no extensive separation regions are present during nominal operation. The main advantage of the potential model is that boundary methods can be employed; hence, the computational cost is significantly reduced with respect to other techniques in which the fluid domain surrounding the parachute must be modeled (e.g. Finite Differences, Finite Volumes and Finite Elements). Even in cases in which extensive flow separation occurs, alternative potential approaches such as vortex methods could be used.
In other cases, for problems going beyond the scope of potential methods, the modular approach adopted for the code allows changing the flow solver with minimal modifications. ## 2.1 The structural model The fundamental equations governing the dynamics of a solid can be obtained by relating the gradient of the stress field to the applied loads. This internal equilibrium statement yields ${\displaystyle {\begin{array}{c}\sum _{j}{\frac {\partial {\sigma }_{ij}}{\partial x_{j}}}{\mbox{ }}+{\mbox{ }}b_{i}{\mbox{ }}={\mbox{ }}0{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}\forall {\mbox{ }}{\boldsymbol {x}}\in \Omega {\mbox{ }}i=1,2,3\\{\mbox{ }}{\mbox{ }}\sum _{j}{\sigma }_{ij}\cdot n_{j}{\mbox{ }}={\mbox{ }}{\overline {t}}_{i}({\boldsymbol {x}}){\mbox{ }}{\mbox{ }}\forall {\mbox{ }}{\boldsymbol {x}}\in {\Gamma }_{N}\\{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}u_{i}{\mbox{ }}={\mbox{ }}{\overline {u}}_{i}({\boldsymbol {x}}){\mbox{ }}\forall {\mbox{ }}{\boldsymbol {x}}\in {\Gamma }_{D}\end{array}}}$ (1) where ${\textstyle {\overline {u}}_{i}}$ and ${\textstyle {\overline {t}}_{i}}$ stand for prescribed surface displacements and tractions. In the case of a dynamic problem the body forces (bi) must include the inertial loads given by ${\displaystyle {b_{i}\vert }_{inertial}=-\rho {\frac {d^{2}u_{i}}{dt^{2}}}}$ (2) being ρ the density of the solid. Note that a total derivative (i.e. tracking the material particles) is involved in Eq. (2). The weak formulation of the problem is obtained by considering an arbitrary set of test functions δui representing a virtual displacement field. Thus, adopting implicit summation to keep the notation compact, it is possible to write ${\displaystyle {\begin{array}{c}{\mbox{ }}\left({\frac {\partial {\sigma }_{ij}}{\partial x_{j}}}+b_{i}\right)\delta u_{i}=0{\mbox{ }}\forall {\mbox{ }}{\boldsymbol {x}}\in \Omega {\mbox{ }}i=1,2,3\\\left({\sigma }_{ij}\cdot n_{j}-{\overline {t}}_{i}\right)\delta u_{i}=0{\mbox{ }}\forall {\mbox{ }}{\boldsymbol {x}}\in {\Gamma }_{N}\\{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}\left(u_{i}-{\overline {u}}_{i}\right)\delta u_{i}=0{\mbox{ }}\forall {\mbox{ }}{\boldsymbol {x}}\in {\Gamma }_{D}\end{array}}}$ (3) Then, taking the weighted average and after some manipulation, the relevant domains yields ${\displaystyle \sum _{i,j}\int _{\Omega }{\sigma }_{ij}\delta {\epsilon }_{ij}{\mbox{ }}d\Omega =}$${\displaystyle \sum _{i}\int _{\Omega }b_{i}\delta u_{i}{\mbox{ }}d\Omega +}$${\displaystyle \sum _{i}\int _{{\Gamma }_{N}}{\overline {t}}_{i}\delta u_{i}{\mbox{ }}d\Gamma }$ (4) This equation, which is the basis for solving the structural problem, states that when the system is in equilibrium the change in strain energy caused by an arbitrary virtual displacement field equals the work done by the external forces (virtual work principle). In the following, the solution procedure is outlined. Further details are given by the authors in [4]. ### 2.1.1 Finite element discretization In order to obtain a discretized form of the governing equations (4), an approximate FE solution is build by interpolating the nodal values of the displacements. 
In a similar manner, a virtual displacement field can be obtained, thus ${\displaystyle {\tilde {u}}_{i}({\boldsymbol {x}})=N^{k}({\boldsymbol {x}}){\mbox{ }}{\tilde {u}}_{i}^{k}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}\delta u_{i}({\boldsymbol {x}})=}$${\displaystyle N^{l}({\boldsymbol {x}}){\mbox{ }}\delta u_{i}^{l}}$ (5) being ${\textstyle {\tilde {u}}}$ the approximate solution and ${\textstyle N^{k}}$ the interpolation (shape) function corresponding to the ${\textstyle k^{th}}$ node of an element (from now on supra-indexes will indicate nodal values). Then, as the virtual strain field is a linear function of the virtual displacement field, it is also a linear function of ${\textstyle \delta u_{i}^{l}}$. Therefore, it is possible to write ${\displaystyle \delta {\epsilon }_{ij}=A_{ij,m}^{l}\delta u_{m}^{l}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}A_{ij,m}^{l}=}$${\displaystyle {\frac {\partial {\epsilon }_{ij}}{\partial u_{m}^{l}}}=L(N^{l})}$ (6) Introducing the interpolated solution into Eq. (4) (including the inertial term) and taking into account that the virtual nodal displacements are arbitrary the next discretized form is achieved (1) Additional information can be found at www.cimne.com / research projects / aerospace. ${\displaystyle \int _{\Omega }\rho N^{b}N^{k}d\Omega {\frac {d^{2}u_{a}^{k}}{dt^{2}}}=}$${\displaystyle \int _{\Omega }b_{a}N^{b}d\Omega +\int _{{\Gamma }_{N}}{\overline {t}}_{a}N^{b}d\Gamma -}$${\displaystyle \int _{\Omega }{\sigma }_{kj}A_{kj,a}^{b}d\Omega }$ (7) where ${\textstyle a=1,2,3}$, ${\displaystyle b=1,...,n_{nod}}$ and summation is assumed over the j and k indices. These equations can also be written in matrix form as ${\displaystyle {\boldsymbol {M{\ddot {u}}}}={\boldsymbol {b}}+{\boldsymbol {t}}-}$${\displaystyle {\boldsymbol {I}}}$ (8) being M the mass matrix of the system, b and t the external nodal generalized forces and I the internal force vector. The system of ordinary differential equations given by Eq. (8) along with suitable initial conditions ${\displaystyle {\begin{array}{c}{{\boldsymbol {u}}\vert }_{t=o}={\boldsymbol {u}}_{\boldsymbol {0}}\\{{\boldsymbol {\dot {u}}}\vert }_{t=o}={\boldsymbol {\dot {u}}}_{\boldsymbol {0}}\end{array}}}$ (9) can be advanced in time to yield the displacements field at every instant time. To speed up the computations without significant loss of accuracy, the mass matrix M is usually replaced by its lumped (diagonal) counterpart given by ${\displaystyle M_{ij}^{d}={\delta }_{ij}\sum _{j}M_{ij}}$ (10) where ${\textstyle \delta _{ij}}$ denotes the Kronecker’s delta function. In order to form the matrix and load vectors appearing in Eq. (8) an element-by-element approach is adopted. As the shape function of a node k is nonzero only inside elements containing said node, the integrals need only evaluated in the appropriate elements, e.g. ${\displaystyle M_{ij}=\int _{\Omega }\rho N^{i}N^{j}d\Omega =\sum _{el}\int _{{\Omega }_{el}}\rho N^{i}N^{j}d\Omega {\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}/{\mbox{ }}{\mbox{ }}{\mbox{ }}i,j\subset el}$ (11) ### 2.1.2 Time integration The system of equations (8) is advanced explicitly in time by means of a second order central differencing scheme, selected due to its high efficiency and acceptable accuracy. 
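Before the scheme is written out in full below, the following self-contained toy example (a 1D chain of masses and linear springs; every number in it is invented for illustration) shows what the explicit update looks like in practice when the lumped mass matrix of Eq. (10) is used together with a small amount of mass-proportional damping:

```matlab
% Toy illustration of the explicit central-difference update applied to a
% 1D chain of masses and linear springs fixed at the left end. All values
% are invented; only the update formulas mirror the scheme in the text.
n  = 10;  k = 1e4;  mass = 0.1;  alpha = 50;   % nodes, stiffness, nodal mass, damping
m  = mass*ones(n,1);                           % lumped (diagonal) mass vector
u  = zeros(n,1);  v = zeros(n,1);              % displacements, midpoint velocities
f  = zeros(n,1);  f(end) = 1.0;                % constant load at the free end [N]

dt = 0.5*sqrt(mass/k);                         % half the stability estimate for this chain
for step = 1:2000
    e  = diff([0; u]);                         % spring elongations (node 0 is fixed)
    fs = k*e;                                  % internal spring forces
    I  = [fs(1:end-1) - fs(2:end); fs(end)];   % assembled internal nodal forces
    a  = (f - I - alpha*m.*v) ./ m;            % accelerations from the damped balance
    v  = v + dt*a;                             % midpoint velocity update
    u  = u + dt*v;                             % displacement update
end
disp(u(end))                                   % converges to the static tip value 10*1.0/1e4 = 1e-3
```

The time step is deliberately taken at half the stability estimate; enlarging it beyond the limit derived below makes the loop diverge, which is exactly the conditional-stability behaviour discussed next.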
Thus, given a series of points in time and their corresponding time increments ${\displaystyle {\begin{array}{c}t^{\left(0\right)}{\mbox{ }}{\mbox{ }}{\mbox{ }},{\mbox{ }}{\mbox{ }}{\mbox{ }}...{\mbox{ }}{\mbox{ }}{\mbox{ }},{\mbox{ }}{\mbox{ }}{\mbox{ }}t^{\left(i-1\right)}{\mbox{ }}{\mbox{ }}{\mbox{ }},{\mbox{ }}{\mbox{ }}{\mbox{ }}t^{\left(i\right)}{\mbox{ }}{\mbox{ }}{\mbox{ }},{\mbox{ }}{\mbox{ }}{\mbox{ }}t^{\left(i+1\right)}{\mbox{ }}{\mbox{ }}{\mbox{ }},{\mbox{ }}{\mbox{ }}{\mbox{ }}...\\...{\mbox{ }}{\mbox{ }}{\mbox{ }},{\mbox{ }}{\mbox{ }}{\mbox{ }}\Delta t^{\left(i\right)}=t^{\left(i\right)}-t^{\left(i-1\right)}{\mbox{ }}{\mbox{ }}{\mbox{ }},{\mbox{ }}{\mbox{ }}{\mbox{ }}\Delta t^{\left(i+1\right)}=t^{\left(i+1\right)}-t^{\left(i\right)}{\mbox{ }}{\mbox{ }}{\mbox{ }},{\mbox{ }}{\mbox{ }}{\mbox{ }}...\end{array}}}$ (12) the change in midpoint velocity can be defined as ${\displaystyle {\frac {d{\boldsymbol {u}}}{dt}}^{\left(i+{\frac {1}{2}}\right)}-}$${\displaystyle {\frac {d{\boldsymbol {u}}}{dt}}^{\left(i-{\frac {1}{2}}\right)}=}$${\displaystyle {\frac {\Delta t^{\left(i+1\right)}+\Delta t^{\left(i\right)}}{2}}\cdot {\frac {d^{2}{\boldsymbol {u}}^{\left(i\right)}}{dt^{2}}}}$ (13) where the accelerations and velocities are evaluated at different instants times. This scheme provides second order accuracy in time by virtue of the centered approximation for the time derivative. Once the intermediate velocities have been computed, the displacements can be updated by ${\displaystyle {\boldsymbol {u}}^{\left(i+1\right)}={\boldsymbol {u}}^{\left(i\right)}+}$${\displaystyle \Delta t^{\left(i+1\right)}\cdot {\frac {d{\boldsymbol {u}}^{\left(i+{\frac {1}{2}}\right)}}{dt}}=}$${\displaystyle {\boldsymbol {u}}^{\left(i\right)}+\Delta t^{\left(i+1\right)}\cdot \left[{\frac {d{\boldsymbol {u}}}{dt}}^{\left(i-{\frac {1}{2}}\right)}+\right.}$${\displaystyle \left.{\frac {\Delta t^{\left(i+1\right)}+\Delta t^{\left(i\right)}}{2}}\cdot {\frac {d^{2}{\boldsymbol {u}}^{\left(i\right)}}{dt^{2}}}\right]}$ (14) The method outlined has an extremely low computational cost per time step; however, it shows a very important limitation. The explicit scheme is only conditionally stable meaning the time step cannot be made arbitrarily large to prevent divergence of the solution. The maximum allowable time step is given by ${\displaystyle \Delta t\leq {\frac {2}{{\omega }_{max}}}}$ (15) with ωmax being the angular frequency of the highest eigenmode of the system. An alternative estimate of the maximum time step is given by the minimum transit time of the dilatational waves across the elements of the mesh, i.e. ${\displaystyle \Delta t\leq min\left({\frac {L_{e}}{c_{d}}}\right)}$ (16) where Le is a characteristic element dimension and cd is the dilatational wave speed. This can be obtained for an isotropic linear elastic solid by ${\displaystyle {\begin{array}{c}c_{d}={\sqrt {\frac {\lambda +2\mu }{\rho }}}\\\lambda =K-{\frac {2}{3}}G{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}\mu =G={\frac {E}{2(1+\nu )}}\end{array}}}$ (17) where λ and μ are the Lamé constants which can be calculated from the shear modulus (G) and bulk modulus (K) of the material as stated above. ## 2.1.3 Numerical damping In order to achieve a smooth solution of the problem, some numerical damping is required to be introduced into the equations. 
Thus, two forms of user-adjustable damping are proposed in this work to allow a greater flexibility controlling the solution process: Rayleigh damping and bulk viscosity. In the first case the damping matrix is built from the mass and stiffness matrices ${\displaystyle {\boldsymbol {C}}=\alpha {\boldsymbol {M}}+\beta {\boldsymbol {K}}}$ (18) Hence, the equation system (8) supplemented with this damping term becomes ${\displaystyle {\boldsymbol {M{\ddot {u}}}}={\boldsymbol {b}}+{\boldsymbol {t}}-}$${\displaystyle {\boldsymbol {I}}-{\boldsymbol {C{\dot {u}}}}}$ (19) The ${\displaystyle \alpha }$ term creates a damping force proportional to the absolute velocity of the nodes. This is roughly equivalent to having the nodes of the structure move trough a viscous fluid. The damping ratio introduced by the mass proportional damping term on a mode of frequency ${\displaystyle \omega }$ is ${\displaystyle \xi ={\frac {\alpha }{2\omega }}}$ (20) From Eq. (20) it is apparent that the α term affects mainly the low frequency components of the solution. Thus, it can be useful to accelerate convergence to a static solution when only the long-term response is sought. On the other hand, the β term introduces forces that are proportional to the material strain rate. An extra stress ${\displaystyle {\boldsymbol {\sigma }}_{d}}$ is added to the constitutive law ${\displaystyle {\boldsymbol {\sigma }}_{d}=\beta {\boldsymbol {D}}^{el}{\boldsymbol {:{\dot {\epsilon }}}}}$ (21) with ${\displaystyle {\boldsymbol {D}}^{el}}$ being the tangent stiffness tensor of the material. The fraction of critical damping for a given mode is ${\displaystyle \xi ={\frac {\beta {\mbox{ }}\omega }{2}}}$ (22) In this case only the high order modes are affected appreciably. An additional form of damping is included to prevent high frequency “ringing”. This is caused by excitation of element dilatational modes which are always associated with the highest eigenvalues of the system. An additional hydrostatic stress is included in the constitutive routines which is proportional to the volumetric strain rate. This volumetric viscous stress is given by ${\displaystyle {\sigma }_{h}=b{\mbox{ }}\rho {\mbox{ }}c_{d}{\mbox{ }}L_{e}{\mbox{ }}{\dot {\epsilon }}_{vol}}$ (23) where ${\displaystyle b}$ is the desired damping ratio for the dilatational mode. ## 2.1.4 Element formulation Linear two-node cables and three node membranes are employed. As an introduction to the details of implementation the cable element formulation is described first. While extremely simple, it contains many of the relevant features needed to formulate the surface element. As only small tensile strains are expected, a small-strain formulation has been adopted to calculate the elemental stresses. This assumption allows for efficient coding while maintaining acceptable accuracy. ### a. Two-node linear cable element Let us consider a linear cable element stretching between nodes i and j, having cross sectional area A and subject to a distributed loading per unit length ${\textstyle {\boldsymbol {f}}_{d}}$ as shown in Figure 1. Figure 1. Linear cable element subject to internal and external loads.> As large displacements are expected, the position of the nodes can be written either on the undeformed (reference) configuration or in the deformed (current) configuration. From now on upper-case letters will denote the original coordinates while lower-case will be reserved for the current configuration. 
For example, the original length of the cable element is given by ${\displaystyle L_{0}=\Vert {\textbf {X}}_{j}-{\bf {X}}_{i}\Vert }$ (24) while the actual length at any given time is ${\displaystyle L(t)=\Vert {\bf {x}}_{j}-{\textbf {x}}_{i}\Vert }$ (25) The unit vector along the element is ${\displaystyle {\boldsymbol {e}}_{\boldsymbol {l}}={\frac {{\textbf {x}}_{j}-{\textbf {x}}_{i}}{\Vert {\boldsymbol {x}}_{j}-{\textbf {x}}_{i}\Vert }}}$ (26) From the change in length of the element the axial strain and stress can be obtained. Assuming linear elastic behaviour ${\displaystyle \epsilon ={\frac {L-L_{0}}{L_{0}}}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}\sigma =}$${\displaystyle max(0{\mbox{ }},{\mbox{ }}E\epsilon )}$ (27) The cables buckle instantly under compressive loads, thus, there is a lower bound on the allowable stresses. Therefore, a minimum stress value of zero is enforced in Eq. (27) and the internal forces at the nodes become ${\displaystyle {\textbf {I}}_{i}=-\sigma A{\textbf {e}}_{l}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\textbf {I}}_{j}=+\sigma A{\textbf {e}}_{l}}$ (28) The nodal generalized external force due to the distributed loading is calculated as indicated in Eq. (7). If the load ${\textstyle {\textbf {f}}_{d}}$ is constant across the element it reduces to ${\displaystyle {\textbf {b}}_{i}=\int _{0}^{L}N_{i}{\textbf {f}}_{d}dL={\frac {L}{2}}{\textbf {f}}_{d}}$ (29) When numerical damping is included the stress term in Eq. (27) is augmented with the viscous contributions ${\displaystyle {\frac {d\epsilon }{dt}}={\frac {({\dot {\bf {x}}}_{j}-{\dot {\bf {x}}}_{i})\cdot {\textbf {e}}_{1}}{L_{0}}}\,;\quad \sigma =E\left(max(0{\mbox{ }},{\mbox{ }}\epsilon )+\beta {\dot {\epsilon }}\right)+b{\mbox{ }}\rho {\mbox{ }}c_{d}{\mbox{ }}L_{0}{\mbox{ }}{\dot {\epsilon }}}$ (30) The mass matrix can be obtained using Eq. (11) ${\displaystyle {\textbf {M}}=\rho AL\left[{\begin{array}{cc}{\frac {1}{3}}&{\frac {1}{6}}\\{\frac {1}{6}}&{\frac {1}{3}}\end{array}}\right]{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\textbf {M}}^{d}={\frac {\rho AL}{2}}\left[{\begin{array}{cc}1&0\\0&1\end{array}}\right]}$ (31) ### b. Three-node linear membrane element A triangular element composed by three corner nodes ${\displaystyle {\bf {x}}^{1}}$, ${\displaystyle {\bf {x}}^{2}}$ and ${\displaystyle {\bf {x}}^{3}}$ is defined according to Figure 2. Given that large displacements are expected, the strain state of the element is easier to assess using a local corrotational frame than in the global reference system. Figure 2. Linear cable element subject to internal and external loads. 
The three unit vectors along the local axes are obtained from ${\displaystyle {\boldsymbol {e}}_{1}={\frac {{\boldsymbol {x}}^{2}{-}{\boldsymbol {x}}^{1}}{\Vert {\boldsymbol {x}}^{2}{-}{\boldsymbol {x}}^{1}\Vert }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\boldsymbol {n}}=}$${\displaystyle {\frac {{\boldsymbol {e}}_{1}\times ({\boldsymbol {x}}^{3}{-}{\boldsymbol {x}}^{1})}{\Vert {\boldsymbol {e}}_{1}\times ({\boldsymbol {x}}^{3}{-}{\boldsymbol {x}}^{1})\Vert }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\boldsymbol {e}}_{2}=}$${\displaystyle {\boldsymbol {n}}\times {\boldsymbol {e}}_{1}}$ (32) Thus, any point of the triangle can now be identified by its two local coordinates (ξ,η) by ${\displaystyle (\xi ,\eta )=\left(({\boldsymbol {x}}-{\boldsymbol {x}}^{1})\cdot {\boldsymbol {e}}_{1},({\boldsymbol {x}}-{\boldsymbol {x}}^{1})\cdot {\boldsymbol {e}}_{2}\right)}$ (33) As a linear triangle always remains flat, the problem is greatly simplified by analysing the stress state on the ξ-η plane. Figure 3. Nodal coordinates in the triangle local reference frame.> The components of the strain tensor can now be determined easily using the gradients of the element shape functions ${\displaystyle \left[{\begin{array}{c}{\epsilon }_{\xi }\\{\epsilon }_{\eta }\\{\gamma }_{\xi \eta }\end{array}}\right]=\left[{\begin{array}{cccccc}{\frac {\partial N^{1}}{\partial \xi }}&0&{\frac {\partial N^{2}}{\partial \xi }}&0&0&0\\0&{\frac {\partial N^{1}}{\partial \eta }}&0&{\frac {\partial N^{2}}{\partial \eta }}&0&{\frac {\partial N^{3}}{\partial \eta }}\\{\frac {\partial N^{1}}{\partial \eta }}&{\frac {\partial N^{1}}{\partial \xi }}&{\frac {\partial N^{2}}{\partial \eta }}&{\frac {\partial N^{2}}{\partial \xi }}&{\frac {\partial N^{3}}{\partial \eta }}&0\end{array}}\right]\left[{\begin{array}{c}u_{\xi }^{1}\\u_{\eta }^{1}\\u_{\xi }^{2}\\u_{\eta }^{2}\\u_{\xi }^{3}\\u_{\eta }^{3}\end{array}}\right]}$ (34) Note that many of the displacements are zero by virtue of the definition of the coordinate system; however. Eq. (34) is still useful because it can be used with any virtual displacement field. The corresponding stresses are calculated assuming a plane stress state (an acceptable hypothesis for thin surface elements) and linear elastic isotropic behaviour. Hence, ${\displaystyle \left[{\begin{array}{c}{\sigma }_{\xi }\\{\sigma }_{\eta }\\{\tau }_{\xi \eta }\end{array}}\right]={\frac {E}{1-{\nu }^{2}}}\left[{\begin{array}{c}{\epsilon }_{\xi }+\nu {\epsilon }_{\eta }\\{\epsilon }_{\eta }+\nu {\epsilon }_{\xi }\\{\frac {1-\nu }{2}}{\gamma }_{\xi \eta }\end{array}}\right]}$ (35) As the membrane buckles under compressive loads, the stresses given by Eq. (35) must be corrected to account for this fact. To this end we shall refer to Eq. (35) as the trial stress state ${\displaystyle {\boldsymbol {\tau }}^{t}}$. Then, three possible membrane states, depicted in Figure 4, are considered: • Taut: the minimum principal trial stress is positive. No corrections are needed. • Wrinkled: membrane is not taut, but the maximum principal strain is positive. Trial state is replaced with a uniaxial stress state. • Slack: the maximum principal strain is negative. The corrected stresses are zero. Figure 4. Trial membrane states: taut (A), wrinkled (B) and slack (C). 
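As a minimal illustration of this three-way test, the sketch below classifies one element from its local in-plane strains. The function name and argument list are invented for the example; the wrinkling correction applied to the second case is the one given next in Eq. (36).

```matlab
% Sketch of the taut / wrinkled / slack test for one membrane element.
% Inputs: in-plane strains in the local frame and the material constants.
function state = membraneState(eps_xi, eps_eta, gam, E, nu)
    % trial plane-stress state, Eq. (35)
    s_xi  = E/(1-nu^2) * (eps_xi  + nu*eps_eta);
    s_eta = E/(1-nu^2) * (eps_eta + nu*eps_xi);
    tau   = E/(1-nu^2) * (1-nu)/2 * gam;

    % minimum principal trial stress and maximum principal strain
    sMin = (s_xi + s_eta)/2 - sqrt(((s_xi - s_eta)/2)^2 + tau^2);
    eMax = (eps_xi + eps_eta)/2 + sqrt(((eps_xi - eps_eta)/2)^2 + (gam/2)^2);

    if sMin > 0
        state = 'taut';       % keep the trial stresses
    elseif eMax > 0
        state = 'wrinkled';   % replace by the uniaxial state of Eq. (36)
    else
        state = 'slack';      % zero the stresses
    end
end
```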
When the membrane is wrinkled the stress state must be corrected by (see Figure 5) ${\displaystyle {\begin{array}{c}{\sigma }_{I}=E{\epsilon }_{I}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\sigma }_{II}=0{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\tau }_{max}={\sigma }_{m}={\frac {{\sigma }_{I}}{2}}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}\theta ={tan}^{-1}\left({\frac {{\gamma }_{\xi \eta }}{2{\epsilon }_{\xi }}}\right)\\{\sigma }_{\xi }={\sigma }_{m}\left(1+{\frac {{\epsilon }_{\xi }-{\epsilon }_{\eta }}{{\gamma }_{max}}}\right){\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\sigma }_{\eta }={\sigma }_{m}\left(1-{\frac {{\epsilon }_{\xi }-{\epsilon }_{\eta }}{{\gamma }_{max}}}\right){\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\tau }_{\xi \eta }={\sigma }_{m}{\frac {{\gamma }_{\xi \eta }}{{\gamma }_{max}}}\end{array}}}$ (36) Figure 5. Stress correction for wrinkled membrane. The elastic stresses are next augmented with viscous terms to introduce a suitable level of numerical damping. Using the nodal velocities the components strain rate tensor can be computed. The damping stresses are then given by ${\displaystyle {\left[{\begin{array}{c}{\sigma }_{\xi }\\{\sigma }_{\eta }\\{\tau }_{\xi \eta }\end{array}}\right]}_{damp}={\frac {\beta E}{1-{\nu }^{2}}}\left[{\begin{array}{c}{\dot {\epsilon }}_{\xi }+\nu {\dot {\epsilon }}_{\eta }\\{\dot {\epsilon }}_{\eta }+\nu {\dot {\epsilon }}_{\xi }\\{\frac {1-\nu }{2}}{\dot {\gamma }}_{\xi \eta }\end{array}}\right]+b\rho c_{d}L\left({\dot {\epsilon }}_{\xi }+\right.}$${\displaystyle \left.{\dot {\epsilon }}_{\eta }\right)\left[{\begin{array}{c}1\\1\\0\end{array}}\right]}$ (37) The total stress (elastic plus viscous) is then used to calculate the nodal forces using the change in energy due a virtual displacement of the nodes. As the triangular linear elements create a constant strain (and stress) field a single point Gauss quadrature is adequate to capture the effect of the virtual displacement ${\displaystyle \int {\boldsymbol {\sigma }}:\delta {\boldsymbol {\epsilon }}{\mbox{ }}d\Omega =}$${\displaystyle tA_{0}\left({\sigma }_{\xi }\delta {\epsilon }_{\xi }+\right.}$${\displaystyle \left.{\sigma }_{\eta }\delta {\epsilon }_{\eta }+\right.}$${\displaystyle \left.{\tau }_{\xi \eta }\delta {\gamma }_{\xi \eta }\right)}$ (38) with t being the element thickness and A0 its reference (undeformed) area. When the element faces are subject to a pressure loading, the corresponding nodal generalized forces are obtained from Eq. (7). For the particular case of a uniform pressure ${\displaystyle p}$ acting on the upside element face (the side towards the normal vector ${\displaystyle {\bf {n}}}$ points) the nodal forces are ${\displaystyle \left[{\begin{array}{c}I_{n}^{1}\\I_{n}^{2}\\I_{n}^{3}\end{array}}\right]=-{\frac {pA_{p}}{3}}\left[{\begin{array}{c}1\\1\\1\end{array}}\right]}$ (39) where ${\displaystyle A_{p}}$ stands for current projected area of the element. Finally, once all the components of the internal forces have been determined on the local reference frame, the global force vector can be assembled. 
The transformation to the global inertial reference system is performed through ${\displaystyle {\textbf {I}}_{glob}^{i}=I_{\xi }^{i}{\textbf {e}}_{1}+I_{\eta }^{i}{\textbf {e}}_{\boldsymbol {2}}+I_{n}^{i}{\textbf {n}}}$ (40) Finally, assuming uniform density, the mass matrices for the element are given by ${\displaystyle {\textbf {M}}={\frac {\rho tA_{0}}{3}}\left[{\begin{array}{ccc}{\frac {1}{2}}&{\frac {1}{4}}&{\frac {1}{4}}\\{\frac {1}{4}}&{\frac {1}{2}}&{\frac {1}{4}}\\{\frac {1}{4}}&{\frac {1}{4}}&{\frac {1}{2}}\end{array}}\right]{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\textbf {M}}^{d}=}$${\displaystyle {\frac {\rho tA_{0}}{3}}\left[{\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}}\right]}$ (41) ## 2.2 The aerodynamic model In order to solve the aerodynamics of ram-air type parachutes a potential flow model is adopted assuming the fluid to be incompressible, inviscid and irrotational. The behaviour of this ideal fluid is described by the following Laplace’s equation ${\displaystyle {\nabla }^{2}\Phi {\mbox{ }}={\mbox{ }}0}$ (42) where ${\displaystyle \Phi }$ is a scalar function named velocity potential, defined in such a way that the flow velocity field results ${\displaystyle {\bf {V}}=\nabla \Phi }$. The solution of Eq. (42) must be subject to proper boundary conditions; typically, a far-field condition and a kinematic condition on the body (Neumann condition) are required. The far-field condition requires that the flow disturbances disappear far away from the body. The Neumann condition specifies the normal component of the velocities across the body boundaries. Considering the parachute to be attached to a body fixed coordinate system (${\displaystyle x,y,z}$) which moves in a steady inertial frame (${\displaystyle X,Y,Z}$) according to a specified flight path, the Neumann condition can be expressed as ${\displaystyle \left(\nabla \Phi {\mbox{ }}-{\mbox{ }}v\right){\mbox{ }}\cdot {\mbox{ }}{\overset {\mbox{ˆ}}{\bf {n}}}{\mbox{ }}=}$${\displaystyle {\mbox{ }}V_{N}}$ (43) where ${\displaystyle \nabla \Phi }$ is the total velocity of a fluid particle, ${\displaystyle {\bf {x}}}$ is the kinematic local velocity of the boundary, ${\displaystyle {\overset {\mbox{ˆ}}{n}}\left(x,y,z\right)}$ is the outward normal vector to the boundary and ${\displaystyle V_{N}}$ is a specified normal velocity relative to the boundary. The kinematic velocity in the inertial axes results ${\displaystyle {\textbf {v}}={\textbf {V}}_{0}+{\boldsymbol {\omega }}\times {\textbf {r}}+{\textbf {v}}_{rel}}$, being ${\displaystyle {\bf {V}}_{0}}$ the translational velocity of the body, ${\displaystyle {\boldsymbol {\omega }}}$ the rates of rotation about the fixed axes, ${\textstyle r=(x,y,z)}$ the position vector with respect to the body’s origin and ${\displaystyle {\bf {v}}_{rel}=\left({\dot {x}},{\dot {y}},{\dot {z}}\right)}$ a relative velocity accounting for deformations of the body. In spite of the fact that Eq. (42) along with far-field and Neumann conditions leads to a mathematically well-posed problem, the solution of lifting problems is not unique unless the circulation around the body is fixed. This is achieved by applying an additional condition known as Kutta condition. The latter determines the proper amount of circulation to be fixed and leads to ideal flow solutions in agreement with the real attached flow behaviour. Next, the panel technique described by the authors in [5] is employed for solving the potential flow problem. 
The main characteristics of the methodology will be outlined next but further details can be found in the aforementioned work and the references cited therein. ### 2.2.1 A general solution procedure In order to solve the potential problem a low-order panel technique is applied. The problem setup consists of a parachute system immerse in an ideal fluid flow ${\displaystyle \Omega }$ enclosed by far-field boundary ${\displaystyle S_{\infty }}$. The parachute is defined by boundary ${\displaystyle S_{B}}$ and ${\displaystyle S_{W}}$ represents the upper (U) and lower (L) sides of a thin wake which extending downstream from the body (see Figure 6). Figure 6. Aerodynamic problem setup. The boundaries ${\displaystyle S=S_{B}+S_{W}}$ divide the problem domain into external and internal regions having total velocity potentials ${\displaystyle \Phi }$ and ${\displaystyle \Phi _{i}}$, both satisfying Eq. (42). By applying Green’s theorem [6], a general solution for the velocity potential at any point ${\displaystyle p}$ can be obtained ${\displaystyle {\Phi }_{p}{\mbox{ }}={\mbox{ }}{\frac {1}{4\pi }}{\underset {S_{B}}{\int \!\int }}\mu {\mbox{ }}\nabla \left({\frac {1}{r}}\right){\mbox{ }}\cdot {\overset {\mbox{ˆ}}{\bf {n}}}{\mbox{ }}dS{\mbox{ }}-{\mbox{ }}{\frac {1}{4\pi }}{\underset {S_{B}}{\int \!\int }}\left({\frac {\sigma }{r}}\right){\mbox{ }}dS{\mbox{ }}+{\mbox{ }}{\frac {1}{4\pi }}{\underset {S_{W}}{\int \!\int }}{\mu }_{W}{\mbox{ }}\nabla \left({\frac {1}{r}}\right){\mbox{ }}\cdot {\overset {\mbox{ˆ}}{\bf {n}}}{\mbox{ }}dS{\mbox{ }}+{\mbox{ }}{\phi }_{\infty }(p)}$ (44) where ${\displaystyle r}$ is the distance between the point ${\displaystyle p}$ and a surface element ${\displaystyle dS}$ having normal vector ${\displaystyle {\overset {\mbox{ˆ}}{\bf {n}}}}$ pointing outside ${\displaystyle \Phi }$, ${\textstyle {\phi }_{\infty }}$ is a constant freestream potential due to ${\displaystyle S_{\infty }}$ and no jump in the normal component of the velocity across the wake is considered (thin wake assumption). The terms ${\textstyle -\mu {\mbox{ }}={\mbox{ }}\Phi -{\Phi }_{i}}$ and ${\displaystyle {\mbox{ }}-\sigma =\nabla \left(\Phi -{\Phi }_{i}\right)\cdot {\overset {\mbox{ˆ}}{\bf {n}}}}$ are the strength (per unit area) of doublet and source surface distribution. These represent, respectively, jumps in velocity potential and the normal component of the velocity across the boundaries. If the point ${\displaystyle p}$ lies on the integration surface (e.g. when evaluating the influence of a panel on itself), ${\textstyle r{\mbox{ }}\rightarrow {\mbox{ }}0}$ and Eq. (44) becomes singular. In such a case the point must be excluded from the integration and this procedure leads to ${\displaystyle {\Phi }_{\mbox{p}}=-\left(\Phi -{\Phi }_{i}\right)/2=\mu /2}$ when ${\displaystyle p}$ is inside the body. Note that the perturbation potential tends to zero when ${\textstyle r{\mbox{ }}\rightarrow {\mbox{ }}\infty }$. Then, the far-field condition is automatically satisfied. In order to solve Eq. (44), the internal Dirichlet condition ${\displaystyle {\Phi }_{i}{\mbox{ }}={\mbox{ }}const.{\mbox{ }}={\mbox{ }}{\phi }_{\infty }}$ is applied. Thus, considering the velocity potential at either region to consist of a freestream potential ${\textstyle {\phi }_{\infty }}$ plus a perturbation potential due to the body and its wake ${\textstyle \phi {\mbox{ }}={\mbox{ }}\Phi {\mbox{ }}-{\mbox{ }}{\phi }_{\infty }}$ , for a point ${\displaystyle p}$ inside the body Eq. 
(44) results ${\displaystyle 0{\mbox{ }}={\mbox{ }}{\frac {1}{4\pi }}{\underset {S_{B}}{\int \!\int }}\mu {\mbox{ }}\nabla \left({\frac {1}{r}}\right){\mbox{ }}\cdot {\mbox{ }}{\overset {\mbox{ˆ}}{n}}{\mbox{ }}dS{\mbox{ }}-{\mbox{ }}{\frac {1}{4\pi }}{\underset {S_{B}}{\int \!\int }}\left({\frac {\sigma }{r}}\right){\mbox{ }}dS{\mbox{ }}+{\mbox{ }}{\frac {1}{4\pi }}{\underset {S_{W}}{\int \!\int }}{\mu }_{W}\nabla \left({\frac {1}{r}}\right){\mbox{ }}\cdot {\mbox{ }}{\overset {\mbox{ˆ}}{\bf {n}}}{\mbox{ }}dS}$ (45) where the doublet strength turns into the perturbation velocity potential ${\textstyle -\mu {\mbox{ }}={\mbox{ }}\phi {\mbox{ }}={\mbox{ }}\Phi -}$${\displaystyle {\phi }_{\infty }}$ and the source strength results ${\displaystyle -\sigma =\nabla \left(\Phi -{\phi }_{\infty }\right)\cdot {\overset {\mbox{ˆ}}{n}}}$ . The solution of Eq. (45) reduces to find a suitable distribution of doublets and sources along the body. With this purpose, an arbitrary choice is made for ${\displaystyle \sigma }$ considering the Neumann condition (43). This leads to ${\displaystyle \sigma {\mbox{ }}={\mbox{ }}-V_{N}{\mbox{ }}-{\mbox{ }}\left(V_{\mbox{0}}{\mbox{ }}+{\mbox{ }}\omega {\mbox{ }}\times {\mbox{ }}r{\mbox{ }}+{\mbox{ }}v_{rel}\right){\mbox{ }}\cdot {\mbox{ }}{\overset {\mbox{ˆ}}{n}}}$ (46) and Eq. (45) can now be solved for the unknown body doublet distribution assuming the wake doublicity is determined (this is achieved by means of the Kutta condition). Certain component parts of the parachutes which are very thin (e.g. stabilizer panels) cannot be regarded as enclosing an internal volume and the application of Eq. (45) could lead to numerical misbehaviours. An equation for modeling thin boundaries can be obtained at a given point ${\displaystyle p}$ by replacing the perturbation velocity, obtained by differentiating Eq. (44), into the Neumann condition. This yields ${\displaystyle {\frac {1}{4\pi }}{\underset {S_{B}}{\int \!\int }}\mu {\mbox{ }}{\overset {\mbox{ˆ}}{\bf {n}}}_{p}{\mbox{ }}\cdot {\mbox{ }}\nabla \left({\overset {\mbox{ˆ}}{\bf {n}}}{\mbox{ }}\cdot {\mbox{ }}\nabla \left({\frac {1}{r}}\right)\right){\mbox{ }}dS{\mbox{ }}+{\mbox{ }}{\frac {1}{4\pi }}{\underset {S_{W}}{\int \!\int }}{\mu }_{W}{\mbox{ }}{\overset {\mbox{ˆ}}{\bf {n}}}_{p}{\mbox{ }}\cdot {\mbox{ }}\nabla \left({\overset {\mbox{ˆ}}{\bf {n}}}{\mbox{ }}\cdot {\mbox{ }}\nabla \left({\frac {1}{r}}\right)\right){\mbox{ }}dS{\mbox{ }}-{\mbox{ }}{\overset {\mbox{ˆ}}{\bf {n}}}_{p}{\mbox{ }}\cdot {\mbox{ }}{\bf {v}}{\mbox{ }}={\mbox{ }}V_{N_{P}}}$ (47) which can be solved for the doublet distribution on thin aerodynamic surfaces. Given the fact that no jump in the normal component of the velocity is assumed across thin boundaries, the source contributions disappear from Eq. (47). However, if aerodynamic configurations having mixed thin/thick surfaces are considered, the contribution to the normal component of the velocity due to the source distribution on thick boundaries must be accounted for. A discrete form of the governing equations (45) and (47) is achieved by breaking down the surface integrals into integrals over quadrilateral or triangular flat panels distributed along the body and the wake. Then, the discrete equations are satisfied at each panel ${\textstyle {\mbox{ }}J=1,N_{B}}$ on the body by considering all panels contributions ${\textstyle K=1,N_{B}+N_{W}}$ . Accordingly, the discrete version of Eq. 
(45) is set at each control point ${\displaystyle J=1,N_{B}^{thick}}$ on thick boundaries by ${\displaystyle \sum _{K=1}^{N_{B}}{\mu }_{K}C_{JK}{\mbox{ }}+{\mbox{ }}\sum _{L=1}^{N_{W}}{\mu }_{L}C_{JL}{\mbox{ }}={\mbox{ }}\sum _{K=1}^{N_{B}}{\sigma }_{K}B_{JK}}$ (48) where CJK and BJK denote the perturbation potential (per unit strength) due to a constant doublet and source distribution on panel K acting on a control point J. These influence coefficients are given by ${\displaystyle C_{JK}{\mbox{ }}={\mbox{ }}{\underset {S_{K}}{\int \!\int }}{\nabla }_{K}\left({\frac {1}{r_{JK}}}\right){\mbox{ }}{\overset {\mbox{ˆ}}{\bf {n}}}_{K}{\mbox{ }}dS_{K}}$ (49) ${\displaystyle B_{JK}{\mbox{ }}={\mbox{ }}{\underset {S_{K}}{\int \!\int }}{\frac {1}{r_{JK}}}dS_{K}}$ (50) and can be computed in a close manner in terms of the geometry of the panel and the coordinates of the point where the influence is sought. Similarly, Eq. (47) is discretized for each control point ${\displaystyle j=1,N_{B}^{thin}}$ on thin surfaces as ${\displaystyle \sum _{K=1}^{N_{B}}{\mu }_{K}E_{JK}{\mbox{ }}+{\mbox{ }}\sum _{L=1}^{N_{W}}{\mu }_{L}E_{JL}{\mbox{ }}={\mbox{ }}\sum _{K=1}^{N_{B}}{\sigma }_{K}D_{JK}{\mbox{ }}+{\mbox{ }}{\overset {\mbox{ˆ}}{\bf {n}}}_{J}{\mbox{ }}\cdot {\textbf {v}}{\mbox{ }}+{\mbox{ }}V_{N_{J}}}$ (51) The normal components of the perturbation velocity at control point J due to a constant doublet and source distribution (per unit strength) on panel K are ${\displaystyle E_{JK}{\mbox{ }}={\mbox{ }}{\overset {\mbox{ˆ}}{\bf {n}}}_{J}{\mbox{ }}\cdot {\mbox{ }}{\bf {V}}_{{\mu }_{JK}}}$ (52) ${\displaystyle D_{JK}{\mbox{ }}={\mbox{ }}{\overset {\mbox{ˆ}}{\bf {n}}}_{J}{\mbox{ }}\cdot {\mbox{ }}{\bf {V}}_{{\sigma }_{JK}}}$ (53) with ${\displaystyle {\bf {V}}_{{\mu }_{JK}}{\mbox{ }}={\mbox{ }}{\frac {1}{4\pi }}{\underset {S_{K}}{\int \!\int }}{\nabla }_{J}\left({\overset {\mbox{ˆ}}{\bf {n}}}_{K}{\mbox{ }}\cdot {\mbox{ }}{\nabla }_{K}\left({\frac {1}{r_{JK}}}\right)\right){\mbox{ }}dS_{K}}$ (54) ${\displaystyle {\bf {V}}_{{\sigma }_{JK}}{\mbox{ }}={\mbox{ }}{\frac {1}{4\pi }}{\underset {S_{K}}{\int \!\int }}{\nabla }_{J}\left({\frac {1}{r_{JK}}}\right)dS_{K}}$ (55) Note that the source contribution given by ${\displaystyle D_{JK}}$ is included in the discrete Eq. (51) to account for the perturbation velocity induced on thin panels by source distributions placed on thick panels (for configurations presenting mixed thin/thick boundaries). ### 2.2.2 Wake modeling The time-steeping procedure presented in [6, 7] is adopted for modeling the wake. In this manner, non-linearities in the problem solution are avoided and the wake develops according to the motion of the body during the simulation. In order to determine the doublicity of the first row of panels shed into the wake, the Kutta condition is applied. Consequently, zero total vorticity at each spanwise station along the trailing edge is enforced by setting ${\displaystyle {\mu }_{TE_{U}}{\mbox{ }}-{\mbox{ }}{\mu }_{TE_{L}}-{\mu }_{W}{\mbox{ }}=}$${\displaystyle {\mbox{ }}0}$ (56) where ${\displaystyle {\mu }_{TE_{U}}}$ and ${\displaystyle {\mu }_{TE_{L}}}$ are the doublet strength at the upper and lower surfaces of the trailing edge and ${\displaystyle {\mu }_{W}}$ is the wake doublet strength next to the trailing edge. Eq. (56) allows the strengths of the shed panels to be written in terms of the body doublets and the linear system resulting from Eqs. (48) and (51) can be solved for the body doublets. 
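To illustrate how the discrete equations are combined into a solvable system, the sketch below (not the authors' code) assembles the doublet system of Eq. (48) for a body made only of thick panels, folds the wake columns into the trailing-edge columns through the Kutta condition, Eq. (56), and solves for the body doublet strengths. The influence matrices and the source strengths of Eq. (46) are assumed to be already computed, all names are hypothetical, and thin-panel rows based on Eq. (51) would be appended in the same way.

```r
# Illustrative sketch: solve Eq. (48) with the Kutta condition (56) folded in.
solve_doublets <- function(C_body, C_wake, B_body, sigma, te_upper, te_lower) {
  # C_body : N_B x N_B doublet influence matrix (panel K on control point J)
  # C_wake : N_B x N_W doublet influence of the wake panels next to the TE
  # B_body : N_B x N_B source influence matrix
  # sigma  : N_B source strengths from Eq. (46)
  # te_upper, te_lower : indices of the upper/lower trailing-edge body panels,
  #                      one pair per wake column
  A   <- C_body
  N_W <- ncol(C_wake)
  for (L in seq_len(N_W)) {
    # Kutta condition, Eq. (56): mu_W = mu_TE_upper - mu_TE_lower, so each wake
    # column is absorbed into the two corresponding trailing-edge body columns
    A[, te_upper[L]] <- A[, te_upper[L]] + C_wake[, L]
    A[, te_lower[L]] <- A[, te_lower[L]] - C_wake[, L]
  }
  rhs <- B_body %*% sigma        # right-hand side of Eq. (48)
  mu  <- solve(A, rhs)           # body doublet strengths
  drop(mu)
}
```

In an unsteady run, a system of this kind is re-assembled (or its right-hand side updated) every time step as new wake panels are shed.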
Once the panels are shed its doublicity must remain constant to satisfy vorticity conservation. Consequently, these terms can be moved to the right hand side of the equations. The wake shape is determined from the fact that no force can act on it, thus, the wake panels must be parallel to the local streamlines of the flow. In order to align a wake panel with the local flow (wake rollup), the induced velocities on the panel’s corner points are computed in the inertial frame and the coordinates of these points are displaced by ${\textstyle (\Delta x,\Delta y,\Delta z){\mbox{ }}={\mbox{ }}{\left(u,v,w\right)}_{ind}\Delta t}$ . The induced velocity at each point is obtained by adding all the doublet and source panel contributions given by Eqs. (54) and (55) (note that the inertial frame is defined to be at rest). ### 2.2.3 Computation of the aerodynamic loads The aerodynamic loads acting on the body are computed by means of the unsteady Bernoulli’s equation. Thus, the coefficient of pressure (Cp) can be calculated at any point as ${\displaystyle Cp{\mbox{ }}={\mbox{ }}{\frac {p{\mbox{ }}-{\mbox{ }}p_{\infty }}{{\frac {1}{2}}{\rho }_{\infty }V_{\infty }^{2}}}{\mbox{ }}=}$${\displaystyle {\mbox{ }}1{\mbox{ }}-{\mbox{ }}{\left({\frac {V}{V_{\infty }}}\right)}^{2}{\mbox{ }}-}$${\displaystyle {\mbox{ }}{\frac {2}{V_{\infty }^{2}}}{\frac {\partial \phi }{\partial t}}}$ (57) being ${\displaystyle {V}}$ the magnitude of the total velocity at the control point (kinematic + perturbation), ${\displaystyle {V}_{\infty }}$ the magnitude of a reference freestream velocity and the unsteady term ${\textstyle \partial \phi /\partial t=-\partial \mu /\partial t=}$${\displaystyle -({\mu }_{t}-{\mu }_{t-\Delta t})/\Delta t}$ . For a surface panel ${\displaystyle {K}}$ belonging to thick boundaries, the tangential components of the perturbation velocity can be evaluated by taking the gradient of ${\displaystyle \mu }$ in panel coordinates. Hence, ${\displaystyle q_{l}{\mbox{ }}={\mbox{ }}{\frac {\partial \mu }{\partial {\overset {\mbox{ˆ}}{l}}}}{\mbox{ }},{\mbox{ }}q_{m}{\mbox{ }}=}$${\displaystyle {\mbox{ }}{\frac {\partial \mu }{\partial {\overset {\mbox{ˆ}}{\bf {m}}}}}}$ (58) and the normal component of the velocity is given by ${\displaystyle q_{n}{\mbox{ }}={\mbox{ }}\sigma }$ (59) being ${\displaystyle \sigma }$ the panel’s source strength (46). Then, the total velocity on panel ${\displaystyle {K}}$ is obtained by adding the perturbation velocity to the instantaneous local kinematic velocity, i.e. ${\displaystyle {\bf {V}}{\mbox{ }}={\mbox{ }}q_{l}{_{\ast }}{\overset {\mbox{ˆ}}{\bf {l}}}{\mbox{ }}+{\mbox{ }}q_{m}{_{\ast }}{\overset {\mbox{ˆ}}{\bf {m}}}{\mbox{ }}+{\mbox{ }}{\bf {q}}_{n}{_{\ast }}{\overset {\mbox{ˆ}}{\bf {n}}}{\mbox{ }}+{\mbox{ }}\left({\bf {V}}_{0}+{\boldsymbol {\omega }}\times {\textbf {r}}+{\bf {v}}_{rel}\right)}$ (60) Despite the fact that the evaluation of Eqs. (58) can be easily performed on structured discretizations by using finite difference approximations, a more general approach is needed for arbitrary body discretizations. In this work the derivatives are evaluated at each panel using the value of the doublet strength at the panel’s corner points ${\displaystyle {i}}$. 
These are obtained by ${\displaystyle {\mu }^{i}{\mbox{ }}={\mbox{ }}{\frac {\sum _{j=1,ns_{i}}A_{J}{\mu }_{J}}{\sum _{j=1,ns_{i}}A_{J}}}}$ (61) where ${\displaystyle {A}_{J}}$ and ${\displaystyle {\mu }_{J}}$ are the surface area and doublet strength of a panel ${\displaystyle {J}}$' respectively and the summation is performed over the ${\displaystyle {ns}_{i}}$ panels surrounding a corner point ${\displaystyle {i}}$. Once the doublet strengths at the panel’s corner points are determined, the derivatives (58) are evaluated (in panel coordinates) by using a standard finite element approximation. A rather different procedure is needed when computing the aerodynamic loads acting on thin panels. In this case, the gradient of the doublet strength provides the jump in the tangential velocity across the panel, i.e. ${\displaystyle \Delta {\bf {V}}_{}^{t}{\mbox{ }}={\mbox{ }}{\bf {V}}_{t}^{U}{\mbox{ }}-{\mbox{ }}{\bf {V}}_{t}^{L}{\mbox{ }}={\mbox{ }}\nabla \mu }$ (62) and the tangential components of the total velocity are obtained by ${\displaystyle {\begin{array}{c}{\bf {V}}_{t}^{U}={\bf {V}}_{a}{\mbox{ }}+{\frac {1}{2}}\Delta {\bf {V}}_{}^{t}\\{\bf {V}}_{t}^{L}={\bf {V}}_{a}{\mbox{ }}-{\frac {1}{2}}\Delta {\bf {V}}_{}^{t}\end{array}}}$ (63) being ${\displaystyle {\bf {V}}_{a}}$ an averaged tangential velocity at the panel’s control point (kinematic + perturbation) which omits the panel contribution on itself. Therefore, the net pressure acting on the panel can be computed by replacing Eqs. (63) with (62) in Eq. (57). This results ${\displaystyle \Delta Cp{\mbox{ }}={\mbox{ }}Cp^{U}{\mbox{ }}-{\mbox{ }}Cp^{L}{\mbox{ }}=}$${\displaystyle {\mbox{ }}-{\frac {2}{V_{\infty }^{2}}}\left({\bf {V}}_{a}\cdot \Delta {\bf {V}}_{}^{t}{\mbox{ }}+{\mbox{ }}{\frac {\partial \mu }{\partial t}}\right)}$ (64) ### Suspension lines and internal flow treatment In addition to the aerodynamic loads on the parachute canopy and stabilizer panels, wind loads are applied to the suspension and control lines. These loads are computed in a simplified manner by considering the cords as long cylinders exposed to the wind. In this way, experimental drag coefficients can be applied and the aerodynamic force is computed taking into account the magnitude of the local dynamic pressure and the direction of the total velocity vector acting on the cable elements. Given the fact that the parachute internal flow is not resolved in this work, a constant stagnation pressure is applied inside the canopy to pressurize the cells. As regards parachute air intake, note that it should be panelized to close the canopy under study (as required by the aerodynamic model employed) but no aerodynamic loads are computed on these panels. Furthermore, as the presence of air intake panels may cause disruption of the structural behaviour of the canopy, the former are assumed to be made of a material having a relative small thickness, Young module and density, in such a way that the panels capability to withstand loads is negligible in relation to the rest of canopy panels. ## 2.3 Coupling the aerodynamic and structural modules A 2-way coupling between the aerodynamic (A) and structural (S) solvers is adopted in the parachute simulation code and the A/S models share the same mesh. Due to the fact that the aerodynamic mesh can be composed by quadrilateral or triangular elements, when a quadrilateral element is passed to the structural solver, it is internally transformed into a pair of triangles in order to carry out the analysis. 
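Looking back at the load-recovery step of Section 2.2.3, the following minimal R sketch illustrates the area-weighted nodal averaging of Eq. (61) and the thin-panel pressure jump of Eq. (64); the function and variable names are illustrative and do not come from the code described in this work.

```r
# Illustrative sketch of Eqs. (61) and (64); names are hypothetical.
corner_doublet <- function(mu_panels, area_panels) {
  # mu_panels, area_panels: values for the ns_i panels surrounding a corner node
  sum(area_panels * mu_panels) / sum(area_panels)     # Eq. (61)
}

delta_cp_thin <- function(V_a, dV_t, dmu_dt, V_inf) {
  # V_a    : averaged tangential velocity at the panel control point (3-vector)
  # dV_t   : tangential velocity jump across the panel, grad(mu), Eq. (62)
  # dmu_dt : time derivative of the panel doublet strength
  # V_inf  : reference freestream speed
  -(2 / V_inf^2) * (sum(V_a * dV_t) + dmu_dt)         # Eq. (64)
}
```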
As the stability limit of the explicit structural solver is small, several structural iterations are performed for each aerodynamic time step, and convergence to the steady-state regime can be accelerated by using an under-relaxation technique when transferring the aerodynamic loads to the structure. For long-term response analyses (e.g. trajectory analysis) the number of structural steps can be reduced by approximating the behaviour of the membrane as quasi-static (i.e. considering only discrete states of equilibrium along the flight path).

# 3. PROGRAM USER INTERFACE

The use of the parachute simulation code is facilitated by a graphical user interface2 (GUI) developed on the basis of the GiD system [8], in-house software providing fully integrated tools for geometry and input data definition, code execution and post-processing of the analysis results. The GiD pre-process module allows all the input data for the simulation to be generated. Complex CAD geometries can be created and manipulated in a simple and efficient manner, and models can also be imported in IGES, DXF and many other common formats. The application of boundary conditions and the definition of the simulation parameters are easily carried out through a series of customized menus, which also allow the conditions prescribed on the model to be verified. A view of the user interface in pre-process mode is displayed in Figure 7. In addition, GiD includes all the necessary tools for generating structured and unstructured meshes on complex geometries. After creating the mesh, all the code input files are automatically generated by specific customized templates. The simulation can then be started and the evolution of the process followed within the GiD environment. As soon as results are available, they can be visualized by switching to the GiD post-process module. This tool offers a wide range of possibilities, not only for visualization but also for extracting results data, including text and binary files, user-defined plots, images and video animations showing the evolution of the simulation in time.

Figure 7. GiD pre-process module showing a model and a window menu for simulation parameter setup.

# 4. NUMERICAL APPLICATIONS

Three test cases are presented next with the aim of showing the performance of the present methodology. The first example concerns a stationary analysis of a ram-air parachute in which the flight equilibrium angle is sought. The second one involves a non-stationary manoeuvre simulation and, finally, the third test case presents a preliminary analysis of the deployment and inflation of a conventional circular parachute. All the examples presented give an idea of the potential the proposed techniques have for analyzing parachute problems.

## 4.1 Stationary analysis of a large ram-air parachute

The steady aerodynamic characteristics of a large ram-air parachute are investigated in this example and the results are compared with the experimental measurements given in [9]. The model is a high glide-performance parachute aimed at delivering very heavy payloads, designed and manufactured by CIMSA in the framework of the FASTWing Project [10]. The canopy discretization consists of an unstructured distribution of 11760 triangular elements (8824 for the aerodynamic surfaces exposed to the wind and 2936 for the internal ribs), while 11912 cable elements model the suspension and control lines and the reinforcements integrated into the canopy.
The freestream velocity is set to 23 m/s and the simulation is initialized with a partially inflated parachute configuration. The movement of the suspension lines' confluence points is restricted to follow the experimental setup. To obtain faster convergence to the equilibrium position of the parachute, some degree of under-relaxation is employed when transferring the aerodynamic loads to the structure. Moreover, a simplified wake model with limited roll-up is adopted to reduce non-linear effects in the aerodynamic problem solution. After some time steps the parachute achieves its equilibrium position according to the rigging angle established by the model geometry. The initial and equilibrium parachute configurations are shown in Figure 8.

Figure 8. Different views of the initial (top) and computed (bottom) equilibrium configurations.

The time history of the force and moment coefficients is displayed in Figure 9, where the moment coefficients are computed about a point located between the suspension lines' confluence points. Note that the transient behaviour lacks physical meaning, as under-relaxation has been employed to accelerate convergence to the steady-state solution.

Figure 9. Computed time history of force and moment coefficients.

Considering the equilibrium lift and drag coefficients displayed in Figure 9, a numerical angle of descent ${\textstyle \Gamma =-\arctan (C_{D}/C_{L})\approx -10^{\circ }}$ can be estimated. Experimental measurements report stationary lift and drag coefficients ${\textstyle C_{L}=0.577}$ and ${\textstyle C_{D}=0.179}$, which yield a descent angle ${\textstyle \Gamma \approx -17^{\circ }}$. Taking into account the characteristics of the aerodynamic model employed in this work, the numerical results are reasonably consistent with the experimental ones. The differences can be explained by two main factors. First, the potential flow model may overestimate lift, as the entire flow around the canopy is considered to be attached; second, viscous contributions to the drag coefficient are not accounted for in this simulation. It should be noticed that important discrepancies in the equilibrium conditions can arise from drag variations, particularly for low lift values. In this sense, semi-empirical models for canopy drag estimation could be employed to improve the accuracy of the numerical model. Moreover, coupled boundary-layer/potential approaches could be suitable in certain analyses, though these two possibilities should be investigated further. Figure 10 shows the Cp distribution computed over the parachute for the equilibrium condition.

Figure 10. Parachute pressure coefficient, Γ ≈ -10°.

## 4.2 Parachute manoeuvre analysis

The parachute studied in the previous test case is induced to initiate a skid-steering right turn by applying a 2.2 m downward deflection to the right brake lines. The freestream velocity is set to 23 m/s and the simulation starts with a partially inflated parachute configuration. Once the equilibrium flight conditions are achieved, the turn manoeuvre is induced by pulling the right control lines. The simulation is stopped some time after the turn is established because the parachute support mechanism employed is not intended for a full turn simulation. The time evolution of the aerodynamic force and moment coefficients during the manoeuvre is presented in Figure 11, and some snapshots of the configuration taken at different time instants are shown in Figure 12.
In spite of the fact that no experimental data is available for this test case, the numerical results match those observed in real behaviour of ram-air parachutes performing similar manoeuvres. As regards manoeuvre simulation, it should be noticed that at present different supporting systems intended to obtain parachute static and dynamic characteristics (as the parachute was mounted on a wind tunnel) are being evaluated. In addition, the structural analysis module has been recently enhanced to allow simulating the payload attached to the parachute, being the complete parachute-payload system subject to gravitational forces. This avoids the need to restrain the movements of the parachute’s confluence points and allows simulating specific flight conditions in detail and completing trajectory analyses with higher reliability. Figure 11. Time evolution of force and moment coefficients during the right-turn manoeuvre. Figure 12. Parachute computed deformed configurations at different instant times during the manoeuvre. The yaw angle Ψ increases from left to right. ## 4.3 Inflation process of a conventional parachute This is a simple inflation test aimed at exploring the capabilities of the computational code to simulate parachute deployment and inflation. The model proposed for the aerodynamic loads is quite simple. The parachute is initially deployed by applying a force at the canopy apex in the direction of the incident wind. The inflation stage, which begins after line and canopy stretching, occurs due to a variable pressure force applied on the canopy accounting for relative wind direction and velocity. In this way, the maximum pressure force corresponds to the fluid stagnation condition and that value is decreased according to the orientation of the elements in relation to the incident wind. The parachute is discretized by an unstructured distribution of 3390 triangular elements and 2040 cable elements modeling the suspension lines and the fabric reinforcements. The canopy has a surface area ${\textstyle S_{0}{\mbox{ }}={\mbox{ }}222.25{\mbox{ }}m^{2}}$ and the relation between the latter and the projected area is ${\textstyle S_{p}/S_{0}\approx 0.44}$ . The airstream velocity is set to ${\textstyle V_{\infty }{\mbox{ }}={\mbox{ }}28.5{\mbox{ }}m/s}$ . Some snapshots of the parachute at different instant times during the inflation process are presented in Figure 13. Figure 13. Parachute views at different instant times during the inflation phase. The reaction force computed at the confluence point during the inflation process is shown in Figure 14. There, a satisfactory agreement with experimental results reported in [11] for similar configurations is observed. Figure 14. Evolution of the parachute opening force. Future developments in parachute deployment and inflation simulation are to be focused on evaluating the feasibility of more-accurate semi-empirical models such as filling distance, kinetics and momentum methods. These inflation theories can be implemented with a low computational cost and have been applied to a wide range of problems with satisfactory results; see [12] for a review. # 5. COMPLEMENTARY TOOLS: TRAJECTORY AND CONTROL ANALYSIS The development of a set of tools aimed at an integral study of the flight performance of a parachute-payload system, its trajectory dynamics and guidance control systems effects is undertaken with the objective of assisting the design, analysis and evaluation of guided parafoil systems3. 
In the 6-DoF model proposed, the dynamic model of the parachute is characterized by aerodynamic, mass and inertial properties which can be obtained from experimental and numerical sources. Provided these characteristics, the model allows predicting the behaviour of the parachute system subject to different flight and environmental conditions with a very low computational cost. The aerodynamic data characterizing the parachute is to be obtained with the computational code described in Section 2. A suitable evaluation strategy intended to this aim is currently under development taking into consideration the experimental setups presented in [13, 14]. The autonomous guidance, navigation and control system (GNC) is implemented by means of a proportional-integral-derivative (PID) algorithm, similar to Unmanned Aerial Vehicles (UAV) autopilot systems. Future implementations will include Monte Carlo simulations, parameter identification for flight-test data reduction and an advanced GUI. Next, a brief description of the dynamic 6 DoF model and a preliminary implementation of control strategies are presented. By the end of this section, two numerical results illustrate the performance of the proposed methodologies. (2) The user interface has been developed at Terrassa CIMNE classroom (CIMNE-ETSEIAT). (3) This work is being developed at the Instituto Universitario Aeronáutico CIMNE classroom (CIMNE-IUA). ## 5.1 The parachute dynamic model The parachute dynamic model adopted in this work is based on that developed by Slegers and Costello [15]. Its main characteristics can be summarized as follows • 6 DoF aerodynamic model based on derivatives • Apparent mass model based on Barrows [16] • Independent left/right brake input for lateral control • Variable incidence angle for longitudinal control (glide slope) • Automatic control for precision placement • Standard atmosphere and wind input capability Figure 15 sketches the parachute-payload configuration analyzed. With the exception of movable brakes, the canopy is considered to have a fixed shape after being completely inflated. The combined system of the parachute canopy and the payload are modeled with 6 DoF, including three inertial position components of the total system mass center as well as the three Euler orientation angles. Orientation of the canopy with respect to the payload is defined by the incidence angle Γ and is considered a control variable. Rotation of the canopy about point C allows the tilting of the canopy lift and drag vectors, causing changes in the equilibrium glide slope. 
The kinematic equations for the parachute-payload system are given by ${\displaystyle \left\{{\begin{array}{c}{\dot {x}}\\{\dot {y}}\\{\dot {z}}\end{array}}\right\}={\left[{\bf {T}}_{IB}\right]}^{T}\left\{{\begin{array}{c}u\\v\\w\end{array}}\right\}{\mbox{ }},{\mbox{ }}\left\{{\begin{array}{c}{\dot {\phi }}\\{\dot {\theta }}\\{\dot {\psi }}\end{array}}\right\}=\left[{\begin{array}{ccc}1&s_{\phi }t_{\theta }&c_{\phi }t_{\theta }\\0&c_{\phi }&-s_{\phi }\\0&s_{\phi }/c_{\theta }&c_{\phi }/c_{\theta }\end{array}}\right]\left\{{\begin{array}{c}p\\q\\r\end{array}}\right\}}$ (65) where ${\textstyle sin(\alpha )=s_{\alpha }}$, ${\textstyle cos(\alpha )=c_{\alpha }}$, ${\textstyle tan(\alpha )=t_{\alpha }}$, ${\displaystyle T_{IB}}$ is the transformation matrix from the inertial reference frame to the body reference frame, ${\displaystyle \phi ,\theta ,\psi }$ are the Euler roll, pitch and yaw angles and ${\displaystyle p,q,r}$ are the angular velocities in a body reference frame. ${\displaystyle {\bf {T}}_{IB}=\left[{\begin{array}{ccc}c_{\theta }c_{\psi }&c_{\theta }s_{\psi }&-s_{\theta }\\s_{\phi }s_{\theta }c_{\psi }-c_{\phi }s_{\psi }&s_{\phi }s_{\theta }s_{\psi }+c_{\phi }c_{\psi }&s_{\phi }c_{\theta }\\c_{\phi }s_{\theta }c_{\psi }+s_{\phi }s_{\psi }&c_{\phi }s_{\theta }s_{\psi }-c_{\phi }c_{\psi }&c_{\phi }c_{\theta }\end{array}}\right]}$ (66) The dynamic equations are obtained by summing forces and moments about the system CG (in the body reference frame) and equating to the time derivative of linear and angular momentum respectively, i.e. ${\displaystyle {\begin{array}{c}\left\{{\begin{array}{c}{\dot {u}}\\{\dot {v}}\\{\dot {w}}\end{array}}\right\}={\frac {1}{m}}\left({\bf {F}}_{W}+{\bf {F}}_{A}+{\bf {F}}_{S}-{\bf {F}}_{AM}\right)-{\bf {S}}_{\omega }^{B}\left\{{\begin{array}{c}u\\v\\w\end{array}}\right\}\\\left\{{\begin{array}{c}{\dot {p}}\\{\dot {q}}\\{\dot {r}}\end{array}}\right\}={\left[{\bf {I}}_{T}\right]}^{-1}\left({\bf {M}}_{A}+{\bf {M}}_{AM}+{\bf {S}}_{CG.P}^{B}{\bf {F}}_{A}+{\bf {S}}_{CG.S}^{B}{\bf {F}}_{S}+{\bf {S}}_{CG.M}^{B}{\bf {F}}_{AM}-S_{\omega }^{B}\left[{\bf {I}}_{T}\right]\left\{{\begin{array}{c}u\\v\\w\end{array}}\right\}\right)\end{array}}}$ (67) where the indices ${\displaystyle W,A,S}$ and ${\displaystyle AM}$ denote weight, aerodynamic, payload and apparent mass respectively. ${\displaystyle S_{r}^{A}}$are cross-product matrices of vector ${\displaystyle r}$ expressed in the ${\displaystyle A}$ reference frame, ${\displaystyle I_{t}}$ is the total system inertia matrix and ${\displaystyle F}$ and ${\displaystyle M}$ stand for aerodynamic forces and moments. These terms are calculated by adding contributions of the different aerodynamic derivatives of the system [15]. The apparent mass forces and moments are computed according to the model developed by Barrows [16]. ## 5.2 Lateral Track Control Strategy A heading tracking control strategy is implemented following the PID navigation system presented in [17] with the aim of guiding the parachute from a waypoint ${\displaystyle {\bf {W}}_{p1}}$ (parachute deployment) to a specified target waypoint ${\displaystyle {\bf {W}}_{p2}}$ through a yaw-rate command (see Figure 16). Figure 16. Lateral control strategy The control strategy proceeds as follows. 
For a given parachute track position ${\textstyle \left(X_{track},Y_{track}\right)}$ , measured from ${\displaystyle {\bf {W}}_{p2}}$, the vehicle ground velocity vector ${\textstyle V_{ground}}$ is pointed in the direction of a line intercepting the track at point ${\displaystyle C}$. The interception point ${\displaystyle C}$ is determined by considering that the distance on the track line from this point to ${\displaystyle {\bf {W}}_{p2}}$ is, at any instant time, equal to ${\textstyle (1-k){\mbox{ }}X_{track}}$ , being ${\displaystyle k}$ a design parameter. From the geometry of the similar triangles ${\displaystyle OAB}$ and ${\displaystyle OCD}$, the parachute position and velocity is established according to ${\displaystyle {\frac {{\dot {X}}_{track}}{k\cdot X_{track}}}={\frac {{\dot {Y}}_{track}}{Y_{track}}}}$ (68) Accordingly, an error E is defined by ${\displaystyle E=k\cdot X_{track}{\dot {Y}}_{track}-Y_{track}{\dot {X}}_{track}=0}$ (69) and this equation is satisfied using a proportional feedback control law. In other words, if Eq. (69) is not satisfied (there is a heading error), the error commands the yaw-rate control input by giving it an input of desired yaw-rate as, ${\displaystyle r_{CMD}=K_{R}E=K_{R}\left(k\cdot X_{track}{\dot {Y}}_{track}-\right.}$${\displaystyle \left.Y_{track}{\dot {X}}_{track}\right)}$ (70) where the proportional gain ${\displaystyle K_{R}}$ is determined iteratively in order to achieve good tracking without overshoots and the yaw-rate command is limited to ${\textstyle R_{max}{\mbox{ }}={\mbox{ }}\pm {\mbox{ }}0.2{\mbox{ }}{\mbox{rad/s}}}$ with the aim of avoiding numerical misbehaviours. Then, the desired yaw-rate is compared to the actual yaw-rate and left or right brake are applied according to ${\displaystyle \delta _{brake}=K_{brake}\cdot (r_{CMD}-{\dot {\psi }})}$ (71) being ${\displaystyle K_{brake}}$ a gain parameter accounting for the kinematics of the parachute brake. ## 5.3 Application examples Two simulation cases involving a small ram-air parachute, subject to different flight conditions, are intended to show the performance of the dynamic model and the lateral control strategy. The geometrical, mass, inertial and aerodynamic characteristics4 of the test parachute-payload system are presented in Table 1. Table 1. Characteristics of the parachute-payload system Test case 1: glide slope (GS) control systems are studied in this example along with standard brake inputs for different wind settings. The parachute-payload system is deployed from a 400 m altitude with heading north, fixed incidence angle ${\displaystyle \Gamma =-12^{\circ }}$ and no wind. The computed trajectory is presented in Figure 17, where left and right brake inputs are applied. Figure 17. Computed trajectory (400 m altitude, ${\displaystyle \Gamma =-12^{\circ }}$ and no wind). Next, the same simulation is performed, but this time applying a 3 m/s wind coming from East. The computed results are displayed in Figure 18. Figure 18. Computed trajectory (400 m altitude, ${\displaystyle \Gamma =-12^{\circ }}$ and 3 m/s wind from East). Test case 2: in this example the parachute-payload system is guided when following a flight path between two specified waypoints ${\displaystyle {\bf {W}}_{p1}}$ and ${\displaystyle {\bf {W}}_{p2}}$. The starting position and heading of the vehicle is shown with the parachute top-view in Figure 19. There, several flight paths computed for different values of parameter ${\displaystyle k}$ and heading angles are shown. 
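To make the guidance law of Eqs. (69)–(71) concrete, the following minimal R sketch implements the yaw-rate command with its ±0.2 rad/s limit and the brake deflection law. The gains and the track states used in the example call are illustrative assumptions, not values used in the simulations reported here.

```r
# Minimal sketch of the lateral-track control law, Eqs. (69)-(71); gains and
# states are illustrative only.
yaw_rate_command <- function(X_track, Y_track, Xdot_track, Ydot_track,
                             k = 0.8, K_R = 0.01, r_max = 0.2) {
  E     <- k * X_track * Ydot_track - Y_track * Xdot_track  # heading error, Eq. (69)
  r_cmd <- K_R * E                                          # Eq. (70)
  max(min(r_cmd, r_max), -r_max)                            # limit to +/- 0.2 rad/s
}

brake_deflection <- function(r_cmd, psi_dot, K_brake = 1.0) {
  K_brake * (r_cmd - psi_dot)                               # Eq. (71)
}

# example call with made-up track states
r_cmd <- yaw_rate_command(X_track = 500, Y_track = 40,
                          Xdot_track = -12, Ydot_track = -1.5)
delta <- brake_deflection(r_cmd, psi_dot = 0.05)
```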
This test case reveals the potential this set of tools has for studying GNC strategies.

Figure 19. Left: influence of parameter ${\displaystyle k}$ on the guided trajectory for a South release heading (180°). Right: automatic control response to different release heading angles (${\displaystyle k=0.8}$).

(4) Data provided by Prof. Costello from Georgia Tech in a personal communication to the authors.

# 6. CONCLUSIONS

The role parachutes play in many civil, humanitarian and military applications calls for new and improved computational tools aimed at tackling the current lack of software applications in the field. New developments carried out at CIMNE to this end have been presented in this work. These developments involve a coupled fluid-structural solver intended for the unsteady simulation of ram-air type parachutes and a set of complementary tools aimed at studying trajectory and control system effects.

The coupled solution approach, based on an unsteady low-order panel method for the aerodynamics in conjunction with an explicit dynamic FE solver for the structure, has succeeded in solving ram-air parachute problems. This finding is supported by the numerical examples presented and by many other simulations performed to date. In addition, though at a preliminary stage, the complementary tools intended for trajectory and control effects analyses have shown their potential for the design, analysis and evaluation of guided parachute systems.

The robustness and efficiency of the new coupled fluid-structural code have been greatly improved with respect to the previous developments, while the scope of its potential applications has been widely extended. As has been highlighted throughout this work, the challenges involved in the numerical simulation of parachutes are not minor; however, the numerical results obtained to date encourage us to go further in the development of numerical tools that are currently needed and that constitute important contributions to the analysis and evaluation of parachute systems.

# 7. FUTURE LINES OF INVESTIGATION

The future lines of research intended for the development of new and enhanced implementations in the field of numerical simulation of parachutes are described next. From the point of view of the structural code, the modeling of anisotropic materials, taking into account the directional nature of the fabric, is planned for the near future. In addition, sliding cables and a contact model are required for a proper simulation of parachute deployment and inflation. In particular, the slider-assisted reefing process during parachute inflation must be correctly modeled. This requires a special type of kinematic constraint in which a mesh node is forced to lie on a cable element, while the constrained node need not coincide with the nodes of the cable. This is a complex problem which was not properly resolved in former developments and requires further investigation. Moreover, a contact model must be incorporated into the solver to reproduce the parachute deployment sequence. This will prevent the fabric surface from intersecting itself, which would yield non-physical behaviour. An efficient way of implementing the reefing and contact models must also be investigated in order to retain the good performance of the structural solver. As far as the aerodynamics is concerned, the adopted panel technique has limitations in the treatment of manoeuvres involving large deformations and in the simulation of round-type conventional parachutes.
Partially inflated canopy configurations, which are often encountered in flight, are another situation that the present model cannot simulate accurately. Consequently, new approaches should be evaluated, always keeping computational efficiency as a priority. In this sense, boundary methods could still be applied, and certain techniques, such as vortex methods, are promising as they can handle a wide range of parachute configurations and flight conditions with low computational cost, even when massive flow separation must be considered (e.g. [18]).

In relation to parachute deployment and inflation, at present the software allows specified constant or time-varying pressure forces to be applied on the canopy and suspension lines for an approximate simulation of these processes. In order to address the complexity of the flow behaviour in these situations, at a first stage it is planned to investigate the application of semi-empirical deployment and inflation models such as filling distance, kinetics and momentum methods, which have been extensively applied with quite satisfactory results [12].

Finally, the tool package for trajectory and control analysis is being expanded. Current efforts are focused on designing a procedure for estimating parachute aerodynamic parameters based on the coupled fluid-structural code implemented. Future developments will include standard/customizable PID control strategies, Monte Carlo simulations for sensitivity analysis and a parameter estimation technique for flight-test data reduction.

# REFERENCES

[1] Guerra, A. Development and validation of a numerical code for analyzing parachutes. Undergraduate thesis, Escola Tècnica Superior d'Enginyers Industrial i Aeronàutica de Terrassa, Technical University of Catalonia (in Spanish), 2009.
[2] PARACIMSA. New simulation tools for parachute design improvements. REF. CIT-020400-2005-30. Programa PROFIT, Ministerio de Educación y Ciencia, 01/01/2005 - 31/12/2005.
[3] CIMSA Ingeniería en Sistemas. Web page: http://www.cimsa.com/ (22 March 2010).
[4] Flores, R., Ortega, E., and Oñate, E. Explicit dynamic analysis of thin membrane structures. CIMNE publication, 2010.
[5] Ortega, E., Flores, R., and Oñate, E. A 3D low-order panel method for unsteady aerodynamic problems. CIMNE publication, 2010.
[6] Katz, J. and Plotkin, A. Low-Speed Aerodynamics: From Wing Theory to Panel Methods. McGraw-Hill, 1991.
[7] Ashby, D. L. Potential flow theory and operation guide for the panel code PMARC_14. NASA TM-1999-209582, 1999.
[8] GiD. The personal pre and post processor. Web page: http://www.gidhome.com (17 March 2010).
[9] Hollestelle, P. The FASTWing Project: Wind tunnel test - Realisation and results. 18th AIAA Aerodynamic Decelerator Systems Technology Conference and Seminar. AIAA paper 2005-1641, 2005.
[10] Benolol, S. and Zapirain, F. The FASTWing project, parafoil development and manufacturing. 18th AIAA Aerodynamic Decelerator Systems Technology Conference and Seminar. AIAA paper 2005-1639, 2005.
[11] Scher, S. H. and Young, I. G. Drag coefficients for partially inflated flat circular parachutes. NASA Technical Note D-6423, 1971.
[12] Cockrell, D. J. The aerodynamics of parachutes. AGARDograph Nº 295. AGARD-AG-295, 1987.
[13] Burk, S. M. and Ware, G. M. Static Aerodynamic Characteristics of Three Ram-Air Inflated Low Aspect Ratio Fabric Wings. NASA Report NASA-TN-D-4182, 1967.
[14] Ware, G. M. and Hassell, J. L. J. Wind-Tunnel Investigation of Ram-Air-Inflated All-Flexible Wings of Aspect Ratios 1.0 to 3.0.
NASA Report NASA-TM-SX-1923, 1969.
[15] Slegers, N. and Costello, M. Use of variable incidence angle for glide slope control of autonomous parafoils. Journal of Guidance, Control and Dynamics, 31: 585-596, 2008.
[16] Barrows, T. Apparent mass of parafoils with spanwise camber. AIAA Paper 2001-2006, 2001.
[17] Nicolescu, M. Lateral track control law for Aerosonde UAV. AIAA Paper 2001-0016, 2001.
[18] Strickland, J. H., Homicz, G. F., Porter, V. L., and Gossler, A. A. A 3-D vortex code for parachute flow predictions: version 1.0. Sandia National Laboratories, Report SAND2002-2174, 2002.
## Whitman College: David Guichard's "Calculus, Chapter 6: Applications of the Derivative, Section 6.4: Linear Approximations" Read Section 6.4 (pages 139 and 140). In this reading, you will see how tangent lines can be used to locally approximate functions.
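As a quick, optional illustration of the idea (not part of the assigned reading), the tangent-line approximation f(x) ≈ f(a) + f′(a)(x − a) can be checked numerically in R, for example for f(x) = √x near a = 4:

```r
f      <- sqrt
fprime <- function(x) 1 / (2 * sqrt(x))   # derivative of sqrt(x)
a <- 4; x <- 4.1
approx_val <- f(a) + fprime(a) * (x - a)  # tangent-line (linear) approximation
c(approximation = approx_val, exact = f(x))
# approximation ~ 2.025, exact ~ 2.0248
```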
# 1 What is LDA?

Latent Dirichlet Allocation (LDA) is a generative probabilistic model for collections of discrete data developed by Blei, Ng, and Jordan (Blei, Ng, and Jordan 2003). The most common use of LDA is for modeling collections of text, also known as topic modeling.

A topic is a probability distribution over words (Steyvers and Griffiths 2007). Imagine you have a bag with a bunch of little squares in it, each with a word printed on it (similar to the game Scrabble, but with a word on each chip instead of a single letter). Any word not in the bag has a probability of being drawn equal to zero, while every word in the bag has a probability greater than zero. Let's say we have 2 chips with the word 'Philadelphia' and 1 with the word 'Eagles'. We would say you have a 1/3 chance of drawing 'Eagles', a 2/3 chance of drawing 'Philadelphia', and 0 for any other word. This is effectively what a topic is; it provides us with the probabilities of a set of words for the given topic.

The general idea of LDA is that each document is generated from a mixture of topics and each of those topics is a mixture of words. This can be used as a mechanism for generating new documents (i.e. we know the topics a priori) or for inferring the topics present in a set of documents we already have. In regards to the model name, you can think of it as follows:

• Latent: Topic structures in a document are latent, meaning they are hidden structures in the text.
• Dirichlet: The Dirichlet distribution determines the mixture proportions of the topics in the documents and the words in each topic.
• Allocation: Allocation of words to a given topic.

To review: we have latent structures in a corpus (topics), with topic distributions in each document and word distributions in each topic based on the Dirichlet distribution, to allocate words to a given topic and topics to a given document.

I realize some readers may be unfamiliar with a portion of the terminology and distributions mentioned in the opening paragraphs. Please keep reading; all the nuts and bolts will be addressed in the following chapters, but to help you get an understanding of what LDA is and why it is useful, I will offer a quick example. We will get to the math and technical concepts in the following chapters.

## 1.1 Animal Generator

The majority of this book is about words, topics, and documents, but let's start with something a bit different: animals and where they live. One of the ways you can classify animals is by where they spend the majority of their time - land, air, or sea. Obviously there are some animals that only dwell in one place; a cow only lives on land and a fish only lives in the sea. However, there are other animals, such as some birds, that split their time between land, sea, and air. You are probably asking yourself 'where is he going with this?'. We can think of land, air, and sea as topics that contain a distribution of animals. In this case we can equate animals with words. For example, on land I am much more likely to see a cow than a whale, but in the sea it would be the reverse. If I quantify these probabilities into a distribution over all the animals (words) for each type of habitat (land, sea, air - topics), I can use them to generate sets of animals (words) to populate a given location (document), which may contain a mix of land, sea, and air (topics). So let's move on to generating a specific location. We know that different locations will vary in terms of which habitats are present.
For example, a beach contains land, sea, and air, but some areas inland may only contain air and land like a desert. We can define the mixture of these types of habitats in each location. For example, a beach is 1/3 land, 1/3 sea, and 1/3 air. We can think of the beach as a single document. To review: a given location (document) contains a mixture of land, air, and sea (topics) and each of those contain different mixtures of animals (words). Let’s work through some examples using our animals and habitats. The examples provided in this chapter are oversimplified so that we can get a general idea how LDA works. We’ll start by generating our beach location with 1/3 land animals, 1/3 sea animals, and 1/3 air animals. Below you can see our collection of animals and their probability in each topic. Note that some animals have zero probabilities in a given topic, i.e. a cow is never in the ocean, where some have higher probabilities than others; a crab is in the sea sometimes, but a fish is always in the sea. You may notice that there is only 1 animal in the air category. There are several birds, but only 1 of them is cabable of flight in our vocabulary. These are the probability of a word given the topic and therefore the probabilities of each habitat (column) sum to 1. Table 1.1: Animal Distributions in Each Habitat vocab land sea air 🐋 0.00 0.12 0 🐳 0.00 0.12 0 🐟 0.00 0.12 0 🐠 0.00 0.12 0 🐙 0.00 0.12 0 🦀 0.05 0.06 0 🐊 0.05 0.06 0 🐢 0.05 0.06 0 🐍 0.05 0.06 0 🐓 0.10 0.00 0 🦃 0.10 0.00 0 🐦 0.05 0.06 1 🐧 0.05 0.06 0 🐿 0.10 0.00 0 🐘 0.10 0.00 0 🐂 0.10 0.00 0 🐑 0.10 0.00 0 🐪 0.10 0.00 0 To generate a beach (document) based off the description we would use those probabilities in a straightforward manner: words_per_topic <- 3 equal_doc <- c(vocab[sample.int(length(vocab),words_per_topic, prob=phi_ds$land, replace = T)], vocab[sample.int(length(vocab),words_per_topic, prob=phi_ds$sea, replace = T)], vocab[sample.int(length(vocab),words_per_topic, prob=phi_ds$air, replace = T)]) cat(equal_doc) ## 🐘 🐂 🐪 🐧 🐧 🐙 🐦 🐦 🐦 In the above example the topic mixtures are static and equal, so each habitat (topic) contributes 3 animals to the beach. Before proceeding, I want to take a moment to give recognition to Tim Hopper for his presentation utilizing emoji to shed some light on how generative LDA works (Hopper 2016). Ok, now let’s make an ocean setting. In the case of the ocean we only have sea and air present, so our topic distribution in the document would be 50% sea, 50% air, and 0% land. words_per_topic <- 3 ocean_doc <- c(vocab[sample.int(length(vocab),words_per_topic, prob=phi_ds$sea, replace = T)], vocab[sample.int(length(vocab),words_per_topic, prob=phi_ds\$air, replace = T)]) cat(ocean_doc) ## 🐋 🐳 🐢 🐦 🐦 🐦 In the example above only the air and land contribute to the ocean location. Therefore they both contribute an equal number of animals to the location. ### 1.1.1 Generating the Mixtures It is important to note the examples above use static word and topic mixtures that were predetermined, but these mixtures could just as easily be created by sampling from a Dirichlet distribution. This is an important distinction to make as it is the foundation of how we can use LDA to infer topic structures in our documents. The Dirichlet distribution and it’s role in LDA is discussed in detail in the coming chapters. ## 1.2 Inference We have seen that we can generate collections of animals that are representative of the given location. 
What if we have thousands of locations and we want to know the mixture of land, air, and sea that are present? And what if we had no idea where each animal spends its time? LDA allows us to infer both of these peices of information. Similar to the locations (documents) generated above, I will create 100 random documents with varying length and various habitat mixtures. Table 1.2: Animals at the First Two Locations Document Animals 1 🐪 🐘 🐪 🐘 🐦 🐓 🐪 🐪 🐍 🐪 🐧 🦀 🦀 🐪 🐓 🐍 🐘 🐓 🐍 🐿 🐧 🐢 🐧 🦃 🦃 🐧 🐦 🐑 🐑 🐊 🐳 🦀 🦀 🐿 🐢 🐢 🐿 🐓 🐪 🐊 🐘 🐦 🐪 🐂 🐍 🐓 🐍 🐓 🐦 🐍 🦃 🐦 🐪 🐍 🐿 🐦 🐦 🐂 🐿 🐍 🐂 🐿 🐦 🐦 🐑 🐂 🐓 🐓 🐧 🐑 🐦 🐪 🐦 🐧 🐿 🐪 🐦 🐢 🦃 🐿 🦃 🐦 🐑 🐊 🦃 🐪 🦃 🐓 🐂 🐊 🐊 🐂 🦃 2 🐙 🐂 🐦 🐦 🐓 🐧 🐪 🐙 🐧 🐙 🐪 🐘 🐋 🐂 🐦 🐧 🐦 🐙 🐦 🐳 🐟 🐊 🐟 🐢 🐠 🐠 🐪 🐢 🐦 🐘 🐍 🐳 🦃 🐟 🐙 🦀 🐊 🐳 🐪 🐠 🐧 🐢 🐢 🐦 🐍 🐧 🐿 🐢 🐪 🐢 🐧 🐓 🐑 🐳 🐧 🐍 🐊 🐂 🦃 🐋 🐪 🐓 🐿 🐟 🐙 🐋 🦀 🐂 🐦 🐳 🐢 🐟 🐦 🐘 🐊 🐓 🐓 🐧 🐊 🐢 🐪 🐓 🐊 🐢 🐑 🐢 🐙 🐊 🐢 🐧 🐪 The topic word distributions shown in Table 1.1 were used to generate our sample documents. The true habitat (topic) mixtures used to generate the first couple of documents are shown in Table 1.3: Table 1.3: Distribution of Habitats in the First Two Locations Document air land sea 1 0.09 0.90 0.01 2 0.09 0.51 0.39 With the help of LDA we can go through all of our documents and estimate the topic/word (habitat/animal) distributions and the topic/document (habitat/location) distributions. The true and estimated topic word distributions are shown in Table 1.4. Table 1.4: True and Estimated Word Distribution for Each Topic air estimated air land estimated land sea estimated sea 🐋 0.00 0 0.00 0.00 0.11 0.12 🐳 0.01 0 0.01 0.00 0.10 0.12 🐟 0.00 0 0.00 0.00 0.11 0.12 🐠 0.00 0 0.00 0.00 0.12 0.12 🐙 0.00 0 0.00 0.00 0.12 0.12 🦀 0.01 0 0.04 0.05 0.06 0.06 🐊 0.00 0 0.05 0.05 0.07 0.06 🐢 0.01 0 0.05 0.05 0.06 0.06 🐍 0.00 0 0.06 0.05 0.05 0.06 🐓 0.00 0 0.10 0.10 0.00 0.00 🦃 0.00 0 0.10 0.10 0.00 0.00 🐦 0.95 1 0.02 0.05 0.11 0.06 🐧 0.00 0 0.05 0.05 0.06 0.06 🐿 0.00 0 0.10 0.10 0.01 0.00 🐘 0.00 0 0.10 0.10 0.00 0.00 🐂 0.00 0 0.10 0.10 0.00 0.00 🐑 0.00 0 0.10 0.10 0.01 0.00 🐪 0.00 0 0.11 0.10 0.00 0.00 The document topic mixtures and the estimated mixtures are shown below for the first 5 documents: Table 1.5: The Estimated Topic Distributions for the First 5 Documents Location air estimated air land estimated land sea estimated sea 1 0.09 0.09 0.90 0.90 0.01 0.01 2 0.05 0.09 0.46 0.51 0.48 0.39 3 0.41 0.36 0.54 0.64 0.05 0.00 4 0.53 0.50 0.45 0.48 0.02 0.01 5 0.47 0.41 0.25 0.40 0.27 0.19 6 0.03 0.02 0.35 0.38 0.62 0.59 The results of our estimations of both the word topic distributions and the document topic distributions have some variation from the true distributions used to generate the documents. The cosine similarity between the estimated and true topic proportions in each document are shown below. Table 1.6: Cosine Similarity between Estimated Document Topic Distributions and Real Distributions air land sea 0.99 0.99 0.99 Table 1.7: Cosine Similarity between Estimated Topic Word Distributions and Real Distributions air land sea 1.00 0.99 0.98
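As noted in Section 1.1.1, the habitat (topic) mixtures do not have to be fixed by hand; they can be drawn from a Dirichlet distribution. The base-R sketch below does exactly that for a tiny made-up vocabulary and topic-word matrix; these toy values are illustrative stand-ins, not the emoji table used above.

```r
# Draw a topic mixture for one location from a Dirichlet, then generate words.
rdirichlet <- function(n, alpha) {
  x <- matrix(rgamma(n * length(alpha), shape = alpha),
              nrow = n, byrow = TRUE)
  x / rowSums(x)                        # normalise each draw so it sums to 1
}

set.seed(1)
vocab <- c("cow", "camel", "crab", "fish", "whale", "gull")
phi <- cbind(land = c(0.4, 0.4, 0.1, 0.0, 0.0, 0.1),   # word distribution per topic
             sea  = c(0.0, 0.0, 0.2, 0.4, 0.3, 0.1),
             air  = c(0.0, 0.0, 0.0, 0.0, 0.0, 1.0))
rownames(phi) <- vocab

theta   <- rdirichlet(1, alpha = c(1, 1, 1))[1, ]      # topic mixture for one location
n_words <- 10
topics  <- sample(colnames(phi), n_words, replace = TRUE, prob = theta)
words   <- sapply(topics, function(k) sample(vocab, 1, prob = phi[, k]))
cat(words)
```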
## Introduction Light is a prominent tool to probe the properties of materials and their electronic structure, as evidenced by the widespread use of light-based spectroscopies across the physical sciences1,2. Among these tools, far-field optical techniques are particularly prevalent, but are constrained by the diffraction limit and the mismatch between optical and electronic length scales to probe the response of materials only at large length scales (or, equivalently, at small momenta). Plasmon polaritons—hybrid excitations of light and free carriers—provide a mean to overcome these constraints through their ability to confine electromagnetic radiation to the nanoscale3. Graphene, in particular, supports gate-tunable plasmons characterized by an unprecedentedly strong confinement of light4,5,6. When placed near a metal, graphene plasmons (GPs) are strongly screened and acquire a nearly linear (acoustic-like) dispersion7,8,9,10 (contrasting with the square-root-type dispersion of conventional GPs). Crucially, such acoustic graphene plasmons (AGPs) in graphene–dielectric–metal (GDM) structures have been shown to exhibit even higher field confinement than conventional GPs with the same frequency, effectively squeezing light into the few-nanometer regime8,9,10,11. Recently, using scanning near-field optical microscopy, these features were exploited to experimentally measure the conductivity of graphene, σ(q,ω), across its frequency (ω) and momentum (q) dependence simultaneously8. The observation of momentum dependence implies a nonlocal response (i.e., response contributions at position r from perturbations at $${\bf{r}}^{\prime}$$), whose origin is inherently quantum mechanical. Incidentally, traditional optical spectroscopic tools cannot resolve nonlocal response in extended systems due to the intrinsically small momenta k0 ≡ ω/c carried by far-field photons. Acoustic graphene plasmons, on the other hand, can carry large momenta—up to a significant fraction of the electronic Fermi momentum kF and with group velocities asymptotically approaching the electron’s Fermi velocity vF—and so can facilitate explorations of nonlocal (i.e., q-dependent) response not only in graphene itself but also, as we detail in this Article, in nearby materials. So far, however, only aspects related to the quantum response of graphene have been addressed8, leaving any quantum nonlocal aspects of the adjacent metal’s response unattended, despite their potentially substantial impact at nanometric graphene–metal separations12,13,14,15,16. Here, we present a theoretical framework that simultaneously incorporates quantum nonlocal effects in the response of both the graphene and of the metal substrate for AGPs in GDM heterostructures. Further, our approach establishes a concrete proposal for experimentally measuring the low-frequency nonlocal electrodynamic response of metals. Our model treats graphene at the level of the nonlocal random-phase approximation (RPA)4,9,17,18,19 and describes the quantum aspects of the metal’s response—including nonlocality, electronic spill-out/spill-in, and surface-enabled Landau damping—using a set of microscopic surface-response functions known as the Feibelman d-parameters12,13,15,16,20,21. These parameters, d and d, measure the frequency-dependent centroids of the induced charge density and of the normal derivative of the tangential current density, respectively (Supplementary Note 1). 
Using a combination of numerics and perturbation theory, we show that the AGPs are spectrally shifted by the quantum surface-response of the metal: toward the red for $${\rm{Re}} {\,}{d}_{\perp } > \,0$$ (associated with electronic spill-out of the induced charge density) and toward the blue for $${\rm{Re}} {\,}{d}_{\perp } < \,0$$ (signaling an inward shift, or “spill-in”). Interestingly, these shifts are not accompanied by a commensurately large quantum broadening nor by a reduction of the AGP’s quality factor, thereby providing the theoretical support explaining recent experimental observations11. Finally, we discuss how state-of-the-art measurements of AGPs could be leveraged to map out the low-frequency quantum nonlocal surface response of metals experimentally. Our findings have significant implications for our ability to optimize photonic designs that interface far- and mid-infrared optical excitations—such as AGPs—with metals all the way down to the nanoscale, with pursuant applications in, e.g., ultracompact nanophotonic devices, nanometrology, and in the surface sciences more broadly. ## Results ### Theory We consider a GDM heterostructure (see Fig. 1) composed of a graphene sheet with a surface conductivity σ ≡ σ(q,ω) separated from a metal substrate by a thin dielectric slab of thickness t and relative permittivity ϵ2 ≡ ϵ2(ω); finally, the device is covered by a superstrate of relative permittivity ϵ1 ≡ ϵ1(ω). While the metal substrate may, in principle, be represented by a nonlocal and spatially non-uniform (near the interface) dielectric function, here we abstract its contributions into two parts: a bulk, local contribution via $${\epsilon }_{\text{m}}\equiv {\epsilon }_{\text{m}}(\omega )={\epsilon }_{\infty }(\omega )-{\omega }_{\text{p}}^{2}/({\omega }^{2}+\text{i}\omega {\gamma }_{\text{m}})$$, and a surface, quantum contribution included through the d-parameters. These parameters are quantum-mechanical surface-response functions, defined by the first moments of the microscopic induced charge (d) and of the normal derivative of the tangential current (d); see Fig. 1 (Supplementary Note 1 gives a concise introduction). They allow the leading-order corrections to classicality to be conveniently incorporated via a surface dipole density ( d) and a surface current density ( d)9,15,16, and can be obtained either by first-principles computation20,21, semiclassical models, or experiments15. The electromagnetic excitations of any system can be obtained by analyzing the poles of the (composite) system’s scattering coefficients. For the AGPs of a GDM structure, the relevant coefficient is the p-polarized reflection (or transmission) coefficient, whose poles are given by $$1\ -\ {r}_{p}^{2|{\mathrm{g}}| 1}\ {r}_{p}^{2|{\mathrm{m}}}\ {\text{e}}^{{\text{i}}2{k}_{z,2}t}=0$$ (ref. 22). Here, $${r}_{p}^{2| \text{g}| 1}$$ and $${r}_{p}^{2| \text{m}}$$ denote the p-polarized reflection coefficients for the dielectric–graphene–dielectric and the dielectric–metal interface (detailed in Supplementary Note 2), respectively. Each coefficient yields a material-specific contribution to the overall quantum response: $${r}_{p}^{2|\text{g}| 1}$$ incorporates graphene’s via σ(q,ω) and $${r}_{p}^{2| \text{m}}$$ incorporates the metal’s via the d-parameters (see Supplementary Note 2). The complex exponential [with $${k}_{z,2}\equiv {({\epsilon }_{2}{k}_{0}^{2}-{q}^{2})}^{1/2}$$, where q denotes the in-plane wavevector] incorporates the effects of multiple reflections within the slab. 
The electromagnetic excitations of any system can be obtained by analyzing the poles of the (composite) system’s scattering coefficients. For the AGPs of a GDM structure, the relevant coefficient is the p-polarized reflection (or transmission) coefficient, whose poles are given by $$1\ -\ {r}_{p}^{2|{\mathrm{g}}| 1}\ {r}_{p}^{2|{\mathrm{m}}}\ {\text{e}}^{{\text{i}}2{k}_{z,2}t}=0$$ (ref. 22). Here, $${r}_{p}^{2| \text{g}| 1}$$ and $${r}_{p}^{2| \text{m}}$$ denote the p-polarized reflection coefficients for the dielectric–graphene–dielectric and the dielectric–metal interfaces (detailed in Supplementary Note 2), respectively. Each coefficient yields a material-specific contribution to the overall quantum response: $${r}_{p}^{2|\text{g}| 1}$$ incorporates graphene’s contribution via σ(q,ω) and $${r}_{p}^{2| \text{m}}$$ incorporates the metal’s via the d-parameters (see Supplementary Note 2). The complex exponential [with $${k}_{z,2}\equiv {({\epsilon }_{2}{k}_{0}^{2}-{q}^{2})}^{1/2}$$, where q denotes the in-plane wavevector] incorporates the effects of multiple reflections within the slab.

Thus, using the above-noted reflection coefficients (defined explicitly in Supplementary Note 2), we obtain a quantum-corrected AGP dispersion equation:
$$\left[\frac{\epsilon_{1}}{\kappa_{1}}+\frac{\epsilon_{2}}{\kappa_{2}}+\frac{\text{i}\,\sigma }{\omega \epsilon_{0}}\right]\left[\epsilon_{\text{m}}\kappa_{2}+\epsilon_{2}\kappa_{\text{m}}-\left(\epsilon_{\text{m}}-\epsilon_{2}\right)\left({q}^{2}d_{\perp }-\kappa_{2}\kappa_{\text{m}}d_{\parallel }\right)\right] \\ =\left[\frac{\epsilon_{1}}{\kappa_{1}}-\frac{\epsilon_{2}}{\kappa_{2}}+\frac{\text{i}\,\sigma }{\omega \epsilon_{0}}\right]\left[\epsilon_{\text{m}}\kappa_{2}-\epsilon_{2}\kappa_{\text{m}}+\left(\epsilon_{\text{m}}-\epsilon_{2}\right)\left({q}^{2}d_{\perp }+\kappa_{2}\kappa_{\text{m}}d_{\parallel }\right)\right]{\text{e}}^{-2{\kappa }_{2}t},$$ (1)
for in-plane AGP wavevector q and out-of-plane confinement factors $${\kappa }_{j}\equiv ( {q}^{2}-{\epsilon }_{j}{k}_{0}^{2} )^{1/2}$$ for j ∈ {1, 2, m}. Since AGPs are exceptionally subwavelength (with confinement factors up to almost 300)8,10,11, the nonretarded limit (wherein κj → q) constitutes an excellent approximation. In this regime, and for encapsulated graphene, i.e., where ϵd ≡ ϵ1 = ϵ2, Eq. (1) simplifies to
$$\left[1+\frac{2{\epsilon }_{\text{d}}}{q}\,\frac{\omega {\epsilon }_{0}}{\text{i}\,\sigma }\right]\left[\frac{{\epsilon }_{\text{m}}+{\epsilon }_{\text{d}}}{{\epsilon }_{\text{m}}-{\epsilon }_{\text{d}}}-q\left({d}_{\perp }-{d}_{\parallel }\right)\right]=\left[1+q\left({d}_{\perp }+{d}_{\parallel }\right)\right]{\text{e}}^{-2qt}.$$ (2)
For simplicity and concreteness, we will consider a simple jellium treatment of the metal such that d∥ vanishes due to charge neutrality21,23, leaving only d⊥ nonzero. Next, we exploit the fact that AGPs typically span frequencies across the terahertz (THz) and mid-infrared (mid-IR) spectral ranges, i.e., well below the plasma frequency ωp of the metal. In this low-frequency regime, ω ≪ ωp, the frequency dependence of d⊥ (and d∥) takes the universal asymptotic form
$${d}_{\perp }(\omega )\simeq \zeta +\,\text{i}\frac{\omega }{{\omega }_{\text{p}}}\xi \,\qquad (\text{for}\,\,\omega \ll {\omega }_{\text{p}}),$$ (3)
as shown by Persson et al.24,25 by exploiting Kramers–Kronig relations. Here, ζ is the so-called static image-plane position, i.e., the centroid of induced charge under a static, external field26, and ξ defines a phase-space coefficient for low-frequency electron–hole pair creation, whose rate is ∝ qωξ (ref. 21): both are ground-state quantities. In the jellium approximation of the interacting electron liquid, the constants ζ ≡ ζ(rs) and ξ ≡ ξ(rs) depend solely on the carrier density ne, here parameterized by the Wigner–Seitz radius $${r}_{s}{a}_{\text{B}}\equiv {\left(3/(4\pi {n}_{\text{e}})\right)}^{1/3}$$ (Bohr radius, aB). In the following, we exploit the simple asymptotic relation in Eq. (3) to calculate the dispersion of AGPs with metallic (in addition to graphene’s) quantum response included.
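To make the preceding relations concrete, the following minimal Python sketch finds the AGP wavevector at a fixed frequency from Eq. (2) together with the low-frequency form (3). It is not the authors' code: it replaces graphene's nonlocal RPA conductivity by a lossless local (Drude) expression, sets d∥ = 0, takes a purely real d⊥ = ζ, and uses purely illustrative parameter values.

```python
# Minimal sketch (not the authors' code): solves the nonretarded AGP dispersion
# relation, Eq. (2), for a real wavevector q at fixed frequency. Simplifying
# assumptions NOT made in the article: a lossless local (Drude) conductivity for
# graphene instead of the nonlocal RPA, a lossless Drude metal, d_par = 0 and a
# purely real d_perp = zeta (its static limit). All parameter values are
# illustrative only.
import numpy as np
from scipy.optimize import brentq

hbar = 1.054571817e-34    # J s
e    = 1.602176634e-19    # C
eps0 = 8.8541878128e-12   # F/m
c    = 2.99792458e8       # m/s

EF      = 0.3 * e             # graphene Fermi level (0.3 eV), J
eps_d   = 4.0                 # encapsulating dielectric
t       = 5e-9                # graphene-metal separation, m
omega_p = 9.0 * e / hbar      # metal plasma frequency (~9 eV), rad/s
eps_inf = 1.0
zeta    = -4e-10              # static d_perp (m); negative = inward shift ("spill-in")
lam0    = 11.28e-6            # free-space wavelength, m
omega   = 2 * np.pi * c / lam0

def sigma_graphene(w):
    """Lossless intraband (Drude) conductivity of graphene: a stand-in for the RPA."""
    return 1j * e**2 * EF / (np.pi * hbar**2 * w)

def eps_metal(w):
    """Local Drude permittivity of the metal substrate (gamma_m -> 0)."""
    return eps_inf - omega_p**2 / w**2

def dispersion(q, w, d_perp):
    """Residual of Eq. (2) with d_par = 0; a zero in q marks an AGP mode."""
    s, em = sigma_graphene(w), eps_metal(w)
    lhs = (1 + 2 * eps_d * w * eps0 / (1j * s * q)) * ((em + eps_d) / (em - eps_d) - q * d_perp)
    rhs = (1 + q * d_perp) * np.exp(-2 * q * t)
    return (lhs - rhs).real   # purely real for the lossless inputs above

q_classical = brentq(dispersion, 1e5, 1e9, args=(omega, 0.0))    # d_perp = 0
q_quantum   = brentq(dispersion, 1e5, 1e9, args=(omega, zeta))   # with metal quantum response

print(f"classical  : q = {q_classical * 1e-6:.1f} um^-1 "
      f"(confinement ~ {q_classical * lam0 / (2 * np.pi):.0f})")
print(f"with d_perp: q = {q_quantum * 1e-6:.1f} um^-1, "
      f"relative shift = {(q_quantum / q_classical - 1) * 100:.1f}%")
print(f"perturbative estimate d_perp/(2t) = {zeta / (2 * t) * 100:.1f}%")
```

With these simplifications the dispersion residual is real, so a simple bracketing root-finder suffices; retaining losses and the full nonlocal σ(q,ω) would instead require a complex root search in q.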
### Quantum corrections in AGPs due to metallic quantum surface-response

The spectrum of AGPs calculated classically and with quantum corrections is shown in Fig. 2. Three models are considered: one, a completely classical, local-response approximation treatment of both the graphene and the metal; and two others, in which graphene’s response is treated by the nonlocal RPA4,9,17,18,19 while the metal’s response is treated either classically or with quantum surface-response included (via the d⊥-parameter). As noted previously, we adopt a jellium approximation for the d⊥-parameter. Figure 2a shows that—for a fixed wavevector—the AGP’s resonance blueshifts upon inclusion of graphene’s quantum response, followed by a redshift due to the quantum surface-response of the metal (since $${\rm{Re}} {\,}{d}_{\perp } > \,0$$ for jellium metals; electronic spill-out)13,15,16,21,27,28. This redshifting due to the metal’s quantum surface-response is opposite to that predicted by the semiclassical hydrodynamic model (HDM), where the result is always a blueshift14 (corresponding to $${\rm{Re}}{\,}{d}_{\perp }^{\text{HDM}} < \,0$$; electronic “spill-in”) due to the neglect of spill-out effects29. The imaginary part of the AGP’s wavevector (which characterizes the mode’s propagation length) is shown in Fig. 2b: the net effect of the inclusion of d⊥ is a small, albeit consistent, increase of this imaginary component. Notwithstanding this, the modification of $${\rm{Im}}\, q$$ is not independent of the shift in $${\rm{Re}}{\,}q$$; as a result, an increase in $${\rm{Im}}{\,}q$$ does not necessarily imply the presence of a significant quantum decay channel [e.g., an increase of $${\rm{Im}}{\,}q$$ can simply result from increased classical loss (i.e., arising from local response alone) at the newly shifted $${\rm{Re}}\, q$$ position]. Because of this, we inspect the quality factor $$Q\equiv {\rm{Re}}{\,}q/{\rm{Im}}{\,}q$$ (or “inverse damping ratio”30,31) instead32 (Fig. 2c), which provides a complementary perspective that emphasizes the effective (or normalized) propagation length rather than the absolute length. The incorporation of quantum mechanical effects, first in graphene alone, and then in both graphene and metal, reduces the AGP’s quality factor. Still, the impact of metal-related quantum losses in the latter is negligible, as evidenced by the nearly overlapping black and red curves in Fig. 2c. To better understand these observations, we treat the AGP’s q-shift due to the metal’s quantum surface-response as a perturbation: writing q = q0 + q1, we find that the quantum correction from the metal is $${q}_{1}\simeq {q}_{0}\,{d}_{\perp }/(2t)$$, for a jellium adjacent to vacuum in the $${\omega }^{2}/{\omega }_{\text{p}}^{2}\ll {q}_{0}t\ll 1$$ limit (Supplementary Note 3). This simple result, together with Eq. (3), provides a near-quantitative account of the AGP dispersion shifts due to metallic quantum surface-response: for ω ≪ ωp, (i) $${\rm{Re}}\, {d}_{\perp }$$ tends to a finite value, ζ, which increases (decreases) $${\rm{Re}}\, q$$ for ζ > 0 (ζ < 0); and (ii) $${\rm{Im}}{\,}{d}_{\perp }$$ is ∝ ω and therefore asymptotically vanishing as ω/ωp → 0, and so only negligibly increases $${\rm{Im}}\, q$$. Moreover, the preceding perturbative analysis implies $${\rm{Re}}{\,}{q}_{1}/{\rm{Re}}{\,}{q}_{0} \approx {\rm{Im}}{\,}{q}_{1}/{\rm{Im}}{\,}{q}_{0}$$ (Supplementary Note 3), which explains why the AGP’s quality factor remains essentially unaffected by the inclusion of metallic quantum surface-response. Notably, these results explain recent experimental observations that found appreciable spectral shifts but negligible additional broadening due to metallic quantum response10,11.
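A one-line consistency check of the last statements (simple first-order algebra, not taken from the article's Supplementary Note 3): writing $$\delta \equiv {\rm{Re}}\,{q}_{1}/{\rm{Re}}\,{q}_{0}\approx {\rm{Im}}\,{q}_{1}/{\rm{Im}}\,{q}_{0}$$, the quality factor of the shifted mode reads
$$Q=\frac{{\rm{Re}}\,q}{{\rm{Im}}\,q}=\frac{{\rm{Re}}\,{q}_{0}\,(1+\delta )}{{\rm{Im}}\,{q}_{0}\,(1+\delta )}=\frac{{\rm{Re}}\,{q}_{0}}{{\rm{Im}}\,{q}_{0}}={Q}_{0},$$
so the metal-induced shift cancels in the ratio and Q is unchanged to first order.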
Next, by considering the separation between graphene and the metallic interface as a renormalizable parameter, we find a complementary and instructive perspective on the impact of metallic quantum surface-response. Specifically, within the spectral range of interest for AGPs (i.e., ω ≪ ωp), we find that the “bare” graphene–metal separation t is effectively renormalized due to the metal’s quantum surface-response from t to $$\tilde{t}\equiv t-s$$, where $$s\simeq {\rm{Re}}{\,}{d}_{\perp }\approx \zeta$$ (see Supplementary Note 4), corresponding to a physical picture where the metal’s interface lies at the centroid of its induced density (i.e., $${\rm{Re}}{\,}{d}_{\perp }$$) rather than at its “classical” jellium edge. With this approach, the form of the dispersion equation is unchanged but references the renormalized separation $$\tilde{t}$$ instead of its bare counterpart t, i.e.:
$$1+\frac{2{\epsilon }_{\text{d}}}{q}\frac{\omega {\epsilon }_{0}}{\text{i}\sigma }=\frac{{\epsilon }_{\text{m}}-{\epsilon }_{\text{d}}}{{\epsilon }_{\text{m}}+{\epsilon }_{\text{d}}}\ {\text{e}}^{-2q\tilde{t}}.$$ (4)
This perspective, for instance, has substantial implications for the analysis and understanding of plasmon rulers33,34,35 at nanometric scales. Furthermore, our findings suggest an interesting experimental opportunity: as all other experimental parameters can be well-characterized by independent means (including the nonlocal conductivity of graphene), high-precision measurements of the AGP’s dispersion can enable the characterization of the low-frequency metallic quantum response—a regime that has otherwise been inaccessible in conventional metal-only plasmonics. The underlying idea is illustrated in Fig. 3; depending on the sign of the static asymptote ζ ≡ d⊥(0), the AGP’s dispersion shifts toward larger q (smaller ω; redshift) for ζ > 0 and toward smaller q (larger ω; blueshift) for ζ < 0. As noted above, the q-shift is ~q0ζ/(2t). Crucially, despite the ångström-scale of ζ, this shift can be sizable: the inverse scaling with the spacer thickness t effectively amplifies the attainable shifts in q, reaching up to several μm−1 for few-nanometer t. We stress that these regimes are well within current state-of-the-art experimental capabilities8,10,11, suggesting a new path toward the systematic exploration of the static quantum response of metals.
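Continuing the numerical sketch given after Eq. (3) (same assumptions and helper functions, again an illustration rather than the authors' implementation), the thickness-renormalization picture can be checked by comparing the classical root evaluated at the renormalized separation with the quantum-corrected root at the bare one:

```python
# Checks Eq. (4) numerically: solving the *classical* relation (d_perp = 0) with
# the renormalized separation t_tilde = t - zeta should land very close to the
# quantum-corrected root obtained with the bare separation t.
def dispersion_t(q, w, d_perp, t_sep):
    """Same residual as `dispersion`, but with the separation as an argument."""
    s, em = sigma_graphene(w), eps_metal(w)
    lhs = (1 + 2 * eps_d * w * eps0 / (1j * s * q)) * ((em + eps_d) / (em - eps_d) - q * d_perp)
    return (lhs - (1 + q * d_perp) * np.exp(-2 * q * t_sep)).real

q_renorm = brentq(dispersion_t, 1e5, 1e9, args=(omega, 0.0, t - zeta))  # classical, t -> t - zeta

print(f"quantum-corrected root (bare t): {q_quantum * 1e-6:.1f} um^-1")
print(f"classical root at t - zeta     : {q_renorm * 1e-6:.1f} um^-1")
print(f"shift estimate q0*zeta/(2t)    : {q_classical * zeta / (2 * t) * 1e-6:+.1f} um^-1")
```

For few-nanometer spacers the estimated shift is indeed of the order of a few μm−1, consistent with the magnitude quoted above.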
### Probing the quantum surface-response of metals with AGPs

The key parameter that regulates the impact of quantum surface corrections stemming from the metal is the graphene–metal separation, t (analogously to the observations of nonclassical effects in conventional plasmons at narrow metal gaps13,36,37); see Fig. 4. For the experimentally representative parameters indicated in Fig. 4, these come into effect for t ≲ 5 nm, growing rapidly upon decreasing the graphene–metal separation further. Chiefly, ignoring the nonlocal response of the metal leads to a consistent overestimation (underestimation) of the AGP’s wavevector (group velocity) for d⊥ < 0, and vice versa for d⊥ > 0 (Fig. 4a); this behavior is consistent with the effective renormalization of the graphene–metal separation mentioned earlier (Fig. 4b). Finally, we analyze the interplay of both t and EF and their joint influence on the magnitude of the quantum corrections from the metal (we take d⊥ = −4 Å, which is reasonable for the Au substrate used in recent AGP experiments7,8,11); in Fig. 4c we show the relative wavevector quantum shift (excited at λ0 = 11.28 μm; ref. 32). In the few-nanometer regime, the quantum corrections to the AGP wavevector approach 5%, increasing further as t decreases—for instance, in the extreme, one-atom-thick limit (t ≈ 0.7 nm (ref. 11), which also approximately coincides with the edge of validity of the d-parameter framework, i.e., t ≳ 1 nm (ref. 15)) the AGP’s wavevector can change by as much as 10% for moderate graphene doping. The pronounced Fermi level dependence exhibited in Fig. 4c also suggests a complementary approach for measuring the metal’s quantum surface-response even if an experimental parameter is unknown (although, as previously noted, all relevant experimental parameters can in fact be characterized using currently available techniques8,10,11,15): such an unknown variable can be fitted at low EF using the “classical” theory (i.e., with d⊥ = d∥ = 0), since the impact of metallic quantum response is negligible in that regime. A parameter-free assessment of the metal’s quantum surface-response can then be carried out by increasing EF (and with it, the metal-induced quantum shift). We emphasize that this can be accomplished in the same device by doping graphene using standard electrostatic gating8,10,11.

## Discussion

In this Article, we have presented a theoretical account that establishes and quantifies the influence of the metal’s quantum response for AGPs in hybrid GDM structures. We have demonstrated that the nanoscale confinement of electromagnetic fields inherent to AGPs can be harnessed to determine the quantum surface-response of metals in the THz and mid-IR spectral ranges (which is typically inaccessible with traditional metal-based plasmonics). Additionally, our findings elucidate and contextualize recent experiments10,11 that have reported the observation of nonclassical spectral shifting of AGPs due to metallic quantum response but without a clear concomitant increase of damping, even for atomically thin graphene–metal separations. Our results also demonstrate that the metal’s quantum surface-response needs to be rigorously accounted for—e.g., using the framework developed here—when searching for signatures of many-body effects in the graphene electron liquid imprinted in the spectrum of AGPs in GDM systems8, since the metal’s quantum surface-response can lead to qualitatively similar dispersion shifts, as shown here. In passing, we emphasize that our framework can be readily generalized to more complex graphene–metal hybrid structures either by semi-analytical approaches (e.g., the Fourier modal method38 for periodically nanopatterned systems) or by direct implementation in commercially available numerical solvers (see refs. 15,39), simply by adopting d-parameter-corrected boundary conditions15,16. Further, our formalism provides a transparent theoretical foundation for guiding experimental measurements of the quantum surface-response of metals using AGPs. The quantitative knowledge of the metal’s low-frequency, static quantum response is of practical utility in a plethora of scenarios, enabling, for instance, the incorporation of leading-order quantum corrections to the classical electrostatic image theory of particle–surface interaction20 as well as to the van der Waals interaction21,25,40 affecting atoms or molecules near metal surfaces. Another prospect suggested by our findings is the experimental determination of ζ ≡ d⊥(0) through measurements of the AGP’s spectrum.
This highlights a new metric for comparing the fidelity of first-principles calculations of different metals (inasmuch as ab initio methods can yield disparate results depending on the chosen scheme or functional)41,42 against explicit measurements. Our results also highlight that AGPs can be extremely sensitive probes for nanometrology as plasmon rulers, while simultaneously underscoring the importance of incorporating quantum response in the characterization of such rulers at (sub)nanometric scales. Finally, the theory introduced here suggests additional directions for exploiting the AGPs' high sensitivity, e.g., to explore the physics governing the complex electron dynamics at the surfaces of superconductors43 and other strongly correlated systems.
## Rocky Mountain Journal of Mathematics

### Functions analytic in the unit ball having bounded $L$-index in a direction

#### Abstract

We propose a generalization of a concept of bounded index for analytic functions in the unit ball. Use of directional derivatives gives us a possibility to deduce the necessary and sufficient conditions of boundedness of $L$-index in a direction for analytic functions of several variables, namely, we obtain an analog of Hayman's theorem and a logarithmic criteria for this class. The criteria describe the behavior of the directional logarithmic derivative outside the zero set and a uniform distribution of zeros in some sense. The criteria are useful for studying analytic solutions of partial differential equations and estimating their growth. We present a scheme of this application.

#### Article information

Source: Rocky Mountain J. Math., Volume 49, Number 4 (2019), 1063-1092.
Dates: First available in Project Euclid: 29 August 2019. https://projecteuclid.org/euclid.rmjm/1567044028
Digital Object Identifier: doi:10.1216/RMJ-2019-49-4-1063
Mathematical Reviews number (MathSciNet): MR3998910
Zentralblatt MATH identifier: 07104706
Subjects: Primary: 32A10: Holomorphic functions. Secondary: 32A17: Special families of functions; 35B08: Entire solutions

#### Citation

Bandura, Andriy; Skaskiv, Oleh. Functions analytic in the unit ball having bounded $L$-index in a direction. Rocky Mountain J. Math. 49 (2019), no. 4, 1063--1092. doi:10.1216/RMJ-2019-49-4-1063. https://projecteuclid.org/euclid.rmjm/1567044028
# Free resolution of short exact sequence coherent sheaves

I'm trying to prove the following: Let $X$ be a non-singular scheme; then it is known that there exists a finite free resolution (resolution by locally free sheaves) for any coherent sheaf on $X$. Given a short exact sequence of coherent sheaves on $X$: $$0 \rightarrow \mathcal{F}' \rightarrow \mathcal{F} \rightarrow \mathcal{F}'' \rightarrow 0.$$ If we have finite free resolutions $E'_* \rightarrow \mathcal{F}' \rightarrow 0, E_* \rightarrow \mathcal{F} \rightarrow 0$ and $E''_* \rightarrow \mathcal{F}'' \rightarrow 0$, then we have the following equality in $K(X)$ (the Grothendieck group of locally free sheaves on $X$): $$\sum_{i=0}^\infty (-1)^i[E_i] = \sum_{i=0}^\infty (-1)^i[E'_i] + \sum_{i=0}^\infty (-1)^i[E''_i]$$ (these are finite sums, of course). My first thought/attempt on this: This statement would be easy to prove if we had a Horseshoe lemma for sheaves. But as far as I know, a locally free sheaf $E$ doesn't have to be a projective object in the category of coherent sheaves (unless $X$ is affine or something, I think; please correct me if I'm wrong here). Trying to read B.8.3 in Fulton (Intersection Theory): Fulton gives exactly this proof in B.8.3. I was trying to understand it, but the proof is too brief for me. The first step is to construct this diagram (the rightmost vertical map is missing in the book; I think this is by design) \begin{array}{llllllllllllll} 0 & \rightarrow & E_n & \rightarrow & ... & \rightarrow & E_0 & \rightarrow & \mathcal{F} & \rightarrow & 0\\ & & \downarrow & & & & \downarrow & & & & \\ 0 & \rightarrow & E''_n & \rightarrow & ... & \rightarrow & E''_0 & \rightarrow & \mathcal{F}'' & \rightarrow & 0\\ & & \downarrow & & & & \downarrow & & & & \\ & & 0 & & ... & & 0 & & & & \\ \end{array} where $E_0$ is mapped surjectively onto the kernel of the canonical map $E''_0 \oplus \mathcal{F} \rightarrow \mathcal{F}''$. I'm not sure what this means, but I will assume the map is the projection followed by the horizontal map, $E''_0 \oplus \mathcal{F} \rightarrow E''_0 \rightarrow \mathcal{F}''$. Then I don't see why $E_0 \rightarrow E''_0$ should be surjective. If I look at stalks, at any $x \in X$, shouldn't the kernel of $E''_{0,x} \rightarrow \mathcal{F}''_x$ be a submodule of $E''_{0,x}$, hence not surject onto it? The book repeats a similar construction to get the rest of the resolution $E_* \rightarrow \mathcal{F}$, so I don't understand those steps either. Could anyone please guide me through this proof, or an alternative one? Thank you. First prove that if there are two locally free resolutions of the same sheaf and a morphism between these resolutions, inducing the identity on the sheaf, then the alternating sums for the two resolutions coincide. Second, prove the same without an assumption about a morphism (for this construct a third resolution with maps to the first two). Third, construct a compatible system of locally free resolutions for the exact sequence. This will give an equality for these resolutions, and then by the previous observation also for the original resolutions. EDIT (some details on the first part). Assume $\{E'_k\}$ and $\{E''_k\}$ are the resolutions and $f_k \colon E'_k \to E''_k$ is a morphism of resolutions. Consider it as a bicomplex $\{E_{k,l}\}$ with $E_{k,0} = E'_k$, $E_{k,1} = E''_k$, and with the other terms being zero (just two lines). Then the totalization $Tot(E)_n = \oplus_{k+l = n}E_{k,l}$ of this bicomplex is acyclic, hence $\sum (-1)^n[Tot(E)_n] = 0$. 
This means that $\sum (-1)^k[E'_k] = \sum (-1)^k[E''_k]$. (A worked version of this totalization computation is sketched after the comment thread below.)
• This is exactly what I don't know how to do. Could you provide more detail, please? – user113988 Dec 4 '16 at 1:56
• Which of the parts? – Sasha Dec 4 '16 at 12:59
• I'm inclined to say every part, but I guess with the first part I might get an idea of how to do the rest. Thank you. – user113988 Dec 4 '16 at 14:35
• OK, I edited the answer. – Sasha Dec 4 '16 at 15:32
• Thank you for your answer. I will have time to look through it more carefully in a few days and might ask you more questions. – user113988 Dec 5 '16 at 16:27
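An editorial expansion of the last step above, sketched under the answer's assumptions (that $f$ is a morphism of bounded locally free resolutions of the same sheaf, so the totalization of the two-row bicomplex is acyclic): since $Tot(E)_n = E_{n,0}\oplus E_{n-1,1} = E'_n\oplus E''_{n-1}$, and a bounded acyclic complex of locally free sheaves has vanishing class in $K(X)$,
$$0=\sum_n(-1)^n[Tot(E)_n]=\sum_n(-1)^n\big([E'_n]+[E''_{n-1}]\big)=\sum_k(-1)^k[E'_k]-\sum_k(-1)^k[E''_k],$$
which rearranges to the equality quoted above.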
Opuscula Math. 36, no. 5 (2016), 563-574, http://dx.doi.org/10.7494/OpMath.2016.36.5.563

# Criticality indices of 2-rainbow domination of paths and cycles

Abstract. A $$2$$-rainbow dominating function ($$2$$RDF) of a graph $$G\left(V(G),E(G)\right)$$ is a function $$f$$ that assigns to each vertex a set of colors chosen from the set $$\{1,2\}$$ so that for each vertex with $$f(v)=\emptyset$$ we have $${\textstyle\bigcup_{u\in N(v)}} f(u)=\{1,2\}$$. The weight of a $$2$$RDF $$f$$ is defined as $$w\left( f\right)={\textstyle\sum\nolimits_{v\in V(G)}} |f(v)|$$. The minimum weight of a $$2$$RDF is called the $$2$$-rainbow domination number of $$G$$, denoted by $$\gamma_{2r}(G)$$. The vertex criticality index of a $$2$$-rainbow domination of a graph $$G$$ is defined as $$ci_{2r}^{v}(G)=(\sum\nolimits_{v\in V(G)}(\gamma_{2r}\left(G\right) -\gamma_{2r}\left( G-v\right)))/\left\vert V(G)\right\vert$$, the edge removal criticality index of a $$2$$-rainbow domination of a graph $$G$$ is defined as $$ci_{2r}^{-e}(G)=(\sum\nolimits_{e\in E(G)}(\gamma_{2r}\left(G\right)-\gamma_{2r}\left( G-e\right)))/\left\vert E(G)\right\vert$$ and the edge addition criticality index of a $$2$$-rainbow domination of $$G$$ is defined as $$ci_{2r}^{+e}(G)=(\sum\nolimits_{e\in E(\overline{G})}(\gamma_{2r}\left(G\right)-\gamma_{2r}\left( G+e\right)))/\left\vert E(\overline{G})\right\vert$$, where $$\overline{G}$$ is the complement graph of $$G$$. In this paper, we determine the criticality indices of paths and cycles. Keywords: 2-rainbow domination number, criticality index. Mathematics Subject Classification: 05C69.
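As a concrete illustration of the definition above (an editorial sketch, not part of the article), the following brute-force Python snippet computes $$\gamma_{2r}$$ for small paths and cycles directly from the definition, by enumerating every assignment $$f(v)\subseteq\{1,2\}$$; all names are illustrative.

```python
# Brute-force computation of the 2-rainbow domination number gamma_2r for small
# paths P_n and cycles C_n, straight from the definition in the abstract.
from itertools import product

def gamma_2r(n, cycle=False):
    edges = [(i, i + 1) for i in range(n - 1)] + ([(n - 1, 0)] if cycle else [])
    nbrs = [set() for _ in range(n)]
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    labels = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
    best = None
    for f in product(labels, repeat=n):            # all 4^n assignments
        # every vertex with f(v) empty must see both colors among its neighbours
        valid = all(f[v] or set().union(*(f[u] for u in nbrs[v])) == {1, 2}
                    for v in range(n))
        if valid:
            weight = sum(len(s) for s in f)        # weight of the 2RDF
            best = weight if best is None else min(best, weight)
    return best

for n in range(3, 9):
    print(n, gamma_2r(n), gamma_2r(n, cycle=True))  # path vs cycle
```

Only small $$n$$ are feasible this way (the search is exponential), but it provides a convenient check against the closed-form values derived in the paper.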
# Mathematical Logic Algorithm

• Apr 7th 2009, 08:19 PM — VENI

I need to write an algorithm with appropriate assertions and input and output specifications that gives the sum of n given numbers. Then I need to prove it. This is what I have, is it right?

Input: $n_{1},n_{2},...,n_{m}$
Output: $s$

Algorithm SUM
$\{n_{1},n_{2},...,n_{m}$ integers$, -\infty < n_{1},...,n_{m} < \infty \}$
$s \leftarrow 0; i \leftarrow 0$
Loop Invariant: $\{s=s+n_{i}, i=1,2,...,m \}$
Repeat until: $i=m$
$s \leftarrow s + n_{i}$
$i \leftarrow i + 1$
End repeat
Loop termination condition: $\{s=s+n_{i} \wedge i=m \}$
Output specification: $\{s= n_{1},n_{2},...,n_{m}\}$
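For comparison, here is an editorial sketch (not part of the original post) of the same algorithm with a loop invariant that can actually be carried through an induction proof; the notation is Python rather than the textbook's assertion language, and all names are illustrative.

```python
# Sum of m given integers, with the assertions written as comments.
def summation(numbers):
    s, i = 0, 0
    # Loop invariant: s == numbers[0] + ... + numbers[i-1], i.e. s is the sum of
    # the first i inputs. It holds before the loop (i = 0, s = 0, empty sum).
    while i < len(numbers):
        s += numbers[i]
        i += 1
        # Invariant restored: s is again the sum of the first i inputs.
    # Termination: i == len(numbers) = m, so the invariant gives
    # s == numbers[0] + ... + numbers[m-1], which is the output specification.
    return s

assert summation([3, 1, 4, 1, 5]) == 14
```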
# Java Program to check if all digits of a number divide it

To check if all digits of a number divide it, the Java code is as follows:

## Example

```java
import java.io.*;

public class Demo {
    // Returns true when 'digit' is nonzero and divides 'val' exactly.
    static boolean divisibility_check(int val, int digit) {
        return (digit != 0 && val % digit == 0);
    }

    // Extracts the digits of 'val' one by one and checks each of them.
    static boolean divide_digits(int val) {
        int temp = val;
        while (temp > 0) {
            int digit = temp % 10;   // current (rightmost) digit of temp
            if (!divisibility_check(val, digit))
                return false;
            temp /= 10;              // drop the digit just checked
        }
        return true;
    }

    public static void main(String args[]) {
        int val = 150;
        if (divide_digits(val))
            System.out.println("All the digits of the number divide the number completely.");
        else
            System.out.println("Not all digits of the number divide it completely.");
    }
}
```

## Output

Not all digits of the number divide it completely.

A class named Demo contains a function named ‘divisibility_check’ with two parameters: the number and one of its digits. It returns true only when the digit is nonzero (guarding against division by zero) and divides the number exactly. Another function named ‘divide_digits’ is a Boolean function that takes the number as its parameter; it extracts the digits of the number one at a time from the running value temp and checks each of them with ‘divisibility_check’, returning false as soon as one digit fails (a zero digit, for example, can never divide the number). In the main function, a value for the number is defined and the function is called with this value; if it returns ‘true’, a message saying so is displayed, otherwise a message states that not all digits divide the number.
# HD

## HD Definition / HD Means

HD is an acronym/abbreviation that stands for “High-Definition”.
Net Ionic

I am supposed to write the net ionic equation for the following exercise: "Complete the right side of the following molecular equation (but do not enter). Then enter the net ionic equation. Assume all salts formed are soluble. Acid salts are possible. Use = instead of ==>. Do not use spaces or subscripts. You may use brackets."

$$Ca(OH)_2(aq) + 2H_2SO_4(aq) ==>$$

I get that the products would be $$H_2O(l)+CaSO_4(s)$$

So do I just have to balance this equation, then separate the two (aq) compounds on the left into ions and leave the right side as it is?

GCT

$$Ca^{2+}_{(aq)} + SO_4^{2-}_{(aq)} ---> CaSO_{4(s)}$$
# A question on polynomials.

Let $f\in\mathbb{R}[x,y]$ be a polynomial of the form $f(x,y)=(x^2+y^2)p(x,y)^2-q(x,y)^2$, where $p,q$ are coprime to each other. When do $f$, $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ share a non-trivial common factor?
# SSS Similarity

## Triangles are similar if their corresponding sides are proportional.

Similarity Test (True or False, Teacher Contributed): When you compare two triangles, if the corresponding sides are in ratios of $\dfrac{5}{10}$ and $\dfrac{8}{20}$, then the triangles are similar by the SSS similarity test.
How to resize partitions?

Previously, I installed Windows 7 on my 320 GB laptop with three partitions of 173, 84 and 63 GB. The 63 GB partition was where Windows was installed. The rest were for file storage. Now I have changed my OS to Ubuntu 12.04 LTS. I installed Ubuntu by replacing the entire Windows 7 installation on the 63 GB partition. The other partitions remain as NTFS Windows partitions and I can still access them both (the 173 and 84 GB partitions). Now I want to change the two Windows partitions into Ubuntu-format partitions and, most importantly, I want to extend the 63 GB partition to more than 100 GB because at the moment I am running out of disk space. Whenever I try to install any application, especially using wine, it always complains about a shortage of space. How do I do this extension without entirely formatting my laptop again and losing all the important files on my partitions?

- (Step 0:) Back up anything really valuable. This is a pretty tried and tested formula but things can go wrong. A power cut at the wrong moment could really ruin your day if you haven't backed up. 1. Boot to a LiveCD or LiveUSB drive in "try me" mode. 2. Load gparted (should be installed by default, you can apt-get it if it's not). 3. Resize/move the partitions as needed (shrink or delete the NTFS partitions, then grow the Ubuntu partition into the freed space). 4. Click apply and sit back while it does the job. 5. Reboot, taking out the USB stick or CD when it tells you to.
- If you have already installed Ubuntu 12.04, then install GParted with sudo apt-get install gparted. Launch it using Alt+F2, and typing gparted. In order to expand the 63GB partition, you must have free space in front of or after it. So first you will have to use GParted to resize a partition above or below your 63GB partition. Refer to the following figure: When you click on resize, a window will open where you can easily drag and resize your partition. Once the free space is made available, resize your 63GB partition just like the above, covering that free space. Hope this works for you.
Thanks man, appreciate it. – Chance Jun 29 '12 at 19:45
You can't do it from your installed system since the partition will be in use. – psusi Feb 11 '14 at 21:19
Good point. He will have to use Gparted from a live CD as Oli's answer suggests. – harisibrahimkv Feb 12 '14 at 3:21
- 2. Burn it on CD and follow the steps to resize your partition
- You can only re-partition unmounted partitions. I have a gParted live disc ready for things like that. You can find it here: http://gparted.sourceforge.net/livecd.php Basic features: GParted enables you to easily manage your disk partitions: • Create partition tables, (e.g., msdos or gpt) • Create, move, copy, resize, check, label, set new UUID, and delete partitions • Enable and disable partition flags, (e.g., boot or hidden) • Align partitions to mebibyte (MiB) or traditional cylinder boundaries • Attempt data rescue from lost partitions Resizing is explained in the documentation of gParted. In short (the link has some extra information and tips): Resizing and moving a partition can be performed by a single gparted operation. To resize a partition: • Select an unmounted partition. See the section called "Selecting a Partition". • Choose: Partition → Resize/Move. The application displays the Resize/Move /path-to-partition dialog. • Adjust the size of the partition. See the section called "Specifying Partition Size and Location". • Specify the alignment of the partition. See the section called "Specifying Partition Alignment". • Click Resize/Move.
- One way that you can shrink partitions without losing data is by using GParted. A very good application, but be careful with it. Edit: Boot from a live CD so you will be able to do the resizing. Install gparted with Ubuntu Software center, or any other way (synaptic etc) you prefer, if it isn't already installed. It will ask you to authenticate when you run it, as it has access to things that can damage your installation badly. Realise that by altering partitions on your hard drive(s) you can potentially stop your system booting completely. As I said, be careful. It will then search devices it can see and display the partitions on the first one (probably /dev/sda, if not try different devices from the pull down at the top right). You should be able to see that one of them contains your root (/) mount point. When you are sure you have the correct one (the size itself is a good indicator), right-click on that partition and choose Resize/Move. If it is greyed out, you might need to unmount it first (make sure you have booted off a live CD, and not your installed Linux system). Reduce the size in the middle (New Size) edit box to what you want (make sure it's still large enough for your system's needs). Click on resize/move, then use the big green tick to apply the changes. If it reports success, then you should be able to shut down the live CD and reboot into your main system. Edit2: I just googled a tutorial you might look at http://www.dedoimedo.com/computers/gparted.html
- You may want to expand this answer with more information about how to use GParted, a link to more information, or at least minimal information about how to install it and properly run it (and perhaps also what the basic precautions are). – Eliah Kagan Jul 22 '12 at 3:06
## Friday, January 8, 2016

### ZeroAccess 3 Analysis

After the takedown attempt in December 2013, the current status of ZeroAccess (alias Sirefef) has been much disputed. Initially the botmaster had released a command to the remaining infrastructure which contained nothing but the message "White Flag", signaling they may have given up on the botnet; but between March 21st 2014 and July 2nd 2014, SecureWorks reported that the ZeroAccess infrastructure had resumed distributing commands. Ever since the takedown ZeroAccess has not attempted to infect new hosts; instead it simply uses the large number of remaining infections to perform tasks—until now, that is. On January 3rd R136a1 came across a previously unseen sample, which FireF0X confirmed to be a variant of ZeroAccess. It's uncertain how long this sample has been in the wild, but it certainly points towards the botnet being a little less dead than was previously claimed.

## Dropper

The first thing I noticed was that the dropper has a resource file which contains a 61 KB PNG. The image may look corrupted, which is because the developer decided to store the encrypted PE file in a valid PNG format. Rather than using steganography to hide the code in an existing image, the image is the code. The dropper uses the native Ldr API to find and acquire the resource (LdrFindResource_U, LdrAccessResource) followed by the GdiPlus API to convert the PNG to a bitmap (GdipCreateBitmapFromStream), get the size of the code (GdipGetImageWidth × GdipGetImageHeight × 4), then decrypts it by xor'ing it with the key 'wDT:'. Execution is then passed to some shellcode which is pointed to by the e_res field of the decrypted PE's header. The shellcode resides directly after the .reloc section but is not part of the PE and therefore will not be present if the executable is later dumped after it's loaded.

A unique and interesting thing about this shellcode is the way in which it handles resolving strings. Because shellcode is designed to be executed at whatever address it's allocated, it can't have the addresses of its strings hardcoded in; instead, it has to find their absolute addresses somehow. In this shellcode the author places a relative call directly before the string, which makes a relative call to a stub after it; the stub then pops the return address into eax and returns to the shellcode. As a result, when string_ZwMapViewOfSection is called, the return address of call loc_22E (the first byte of the string 'ZwMapViewOfSection') ends up in the eax register.

The shellcode itself seems pretty mundane: it simply sets a hardware breakpoint on ZwMapViewOfSection using ZwSetContextThread with the flag set to CONTEXT_DEBUG_REGISTERS, then loads shellstyle.dll (a legitimate system DLL). What actually happens, though, is that when the PE loader tries to map shellstyle into memory, the breakpoint is hit and the shellcode's exception handler is called. The exception handler then modifies the return address of ZwMapViewOfSection to point to a function which replaces the module with the one we saw decrypted earlier, as a result loading the malicious DLL in the place of shellstyle.dll. Once the shellcode has replaced shellstyle.dll, execution is returned to the dropper, which then uses LdrGetProcedureAddress to resolve and call the payload's entrypoint. The payload consists of a DLL file with an embedded Microsoft Cabinet file (.cab), which can be found by searching the binary for the string 'MSCF'. The cabinet contains 6 files: 3 for 32-bit and 3 for 64-bit.
File  Description
s32   Bootstrap list containing IPs and ports for 32-bit nodes
s64   Bootstrap list containing IPs and ports for 64-bit nodes

Next the payload checks a couple of events to see if the system is already infected; if not, it creates the event \BaseNamedObjects\Restricted\{12E9D947-EDF5-4191-AADB-F51815F004D8}. A semi-unique identifier for the bot is created by MD5 hashing the volume creation time of \SystemRoot, then formatting it as a 32-bit GUID (just like with older versions of the bot).

This version of ZeroAccess stores itself in a folder named #., which is contained within the directory %localappdata%\Google\Desktop\Install\<BotGUID>. The folder name #. is special because it makes use of a difference in operation between the Native API (ntdll) and the Windows API (kernel32). Using Native API functions, a folder with a dot at the end of the name is entirely valid and functional; however, under the Windows API it is not. The result is a usable folder that the malware can freely use to store and execute its components, but that cannot be accessed or modified by Explorer or any application not using the Native API. For further protection, the folder is set as an NTFS Reparse Point which redirects to nowhere, preventing access by any means that is susceptible to NTFS re-parsing.

The payload extracts the bootstrap list (s32 or s64 depending on architecture) from the cabinet file, then decrypts and stores it inside the install directory in a file named @. The dropper is also moved to the same directory with the name GoogleUpdate.exe. GoogleUpdate.exe is added to autostart by creating a key under HKCU\Software\Microsoft\Windows\CurrentVersion\RunOnce with the bot's GUID as its name; the key name is prefixed with a null byte and written as Unicode, preventing Regedit from displaying it.

Throughout its installation process the bot sends status updates to a C&C via UDP on port 53 (DNS) with geolocation information obtained from the freegeoip.net API and an affiliate id stored in the bot (ZeroAccess uses an affiliate scheme in which people can sign up to be paid a small amount of money for each computer they infect with the malware). All status update communication is encrypted with the key 0x4C4F4E47 ('LONG'), which is the same key as in previous versions.

Finally the main payloads l32 and k32 or l64 and k64 are extracted from the cabinet (my hypothesis is 'l' stands for 'Loader' and 'k' stands for 'Kernel'). The ZeroAccess kernel is a DLL containing the main code for communicating with the peer-to-peer network, and the loader is used to load the DLL in the same way the dropper loaded shellstyle.dll, except this time the DLL loaded and replaced is comres.dll. If the process is UAC elevated it injects into svchost.exe; otherwise it will attempt to inject into explorer and bypass UAC using the same method as the previous version (detailed in FireF0X's UACMe), before finally injecting into svchost.exe. The injection method works by writing the kernel, followed by the loader, to the remote process using ZwAllocateVirtualMemory and ZwWriteVirtualMemory, then executing the loader by calling ZwQueueApcThread. In the case of 64-bit, the payload will use Heaven's Gate to switch into 64-bit mode before calling the injection routine, allowing it to inject into 64-bit processes from WOW64.

## Peer to Peer Communications

The new ZeroAccess variant largely preserves the communication protocol of its previous versions, with some notable changes.
For completeness, the UDP packet format for C&C communication has a common header of the form

struct zeroaccess_packet {
    uint32_t crc32;
    uint32_t type;
    uint32_t id;
    uint32_t length : 16;
    uint32_t packet_number : 15;
    /* the remaining bit appears to be the reply flag described below */
};

The packet_number field appears to be new. It is used to ensure incoming packets match earlier requests, possibly to prevent 'retL' flooding by takedown attempts. The type field can be one of 'getL' and 'retL'; the 'newL' value of some earlier versions is now gone. id is a 32-bit bot session id, generated at startup and used to identify sessions. The reply field indicates whether the receiving node should reply with a getL of its own. Every packet is encrypted—as before—with a very simple rolling-xor algorithm:

void zeroaccess_encrypt(unsigned char * m, size_t bytes)
{
    uint32_t k = 0x31323334UL; /* '1234' */
    assert(bytes % 4 == 0);
    for (size_t i = 0; i < bytes; i += 4) {
        store_le32(m + i, load_le32(m + i) ^ k);
        k = (k << 1) | (k >> 31);
    }
}

Note the new key, '1234', instead of the old one 'ftp2'. This same algorithm is used with the 'LONG' key mentioned above. The getL packet is empty and consists only of the above header. The retL packet format is slightly changed due to one of the most noteworthy changes in the protocol—UDP ports are now randomly-generated, and thus the retL packets must include port information. The retL packet is of the form

struct peer_entry {
    uint32_t ip;
    uint32_t port : 14;
    uint32_t timestamp : 18;
};

struct file_entry {
    uint32_t id;
    uint32_t timestamp;
    uint32_t length;
    uint8_t signature[128];
};

struct retL {
    peer_entry peers[16];
    file_entry files[/*variable*/];
};

On 32-bit Windows hosts, the port number is computed as peers[i].port + 16384, whereas on 64-bit Windows, it is computed as peers[i].port + 32768. It is therefore simple to figure out an individual node's platform by its UDP port number. The timestamp is computed as (GetTime() - 0x41000000) / 3600, where GetTime() is

ULONG GetTime()
{
    FILETIME time;
    ULONG stamp;
    GetSystemTimeAsFileTime(&time);
    RtlTimeToSecondsSince1980((LARGE_INTEGER *)&time, &stamp);
    return stamp;
}

We can see that the base timestamp 0x41000000, i.e., July 22 2014, lower-bounds the age of this variant. (A small decoding sketch for these peer fields is given further below, after the statistics discussion.) The file list format remains unchanged (modulo timestamp format), albeit with a distinct RSA-1024 verification key compared to the previous botnets. The bot must also remember what its port number is. This information is stored in the Extended Attributes of the @ file. The list of known peers is stored in this file, as remarked above.

## Statistics

We have some early crawling numbers for this variant. But first, it may be useful to compare with the previous ZeroAccess 2 botnet. We see that for ZeroAccess 2 the daily distribution seemed rather consistent. The total confirmed connections were approximately 715,000 for the month. Looking deeper into a sub-aggregation of country and unique IP counts, we can see a high level in the US on Comcast, although this could be explained by IP churning due to DHCP for the cable provider. Nonetheless, this is a high number of uniques compared to any other ISP. Finally, we see a rather healthy distribution of nodes and countries with a heavy weight in Japan and the US. Additionally, we observe an unusually high number of unique IPs the day after Christmas. In comparison, for the latest ZeroAccess 3 variant we see that the IP distribution is primarily focused in Russia. Whether this team is preparing the infrastructure for a larger planned botnet or it is focusing its efforts in the area is still unknown.
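Returning to the retL format described in the Peer to Peer Communications section above, here is a minimal Python sketch of how a single peer_entry could be decoded by a crawler. It is illustrative only: the helper name is mine, and the exact packing (little-endian dwords, port in the low 14 bits and timestamp in the high 18 bits of the second dword, and the IP byte order) is an assumption about how the compiler lays out the bitfields, not something taken from the sample.

```python
import struct
from datetime import datetime, timedelta

def decode_peer_entry(raw, is_64bit):
    """Decode one 8-byte peer_entry from a decrypted retL payload (sketch)."""
    ip, packed = struct.unpack("<II", raw[:8])   # two little-endian dwords (assumed)
    port = packed & 0x3FFF                       # low 14 bits (assumed position)
    ts = packed >> 14                            # high 18 bits (assumed position)
    real_port = port + (32768 if is_64bit else 16384)
    # stored value was (seconds_since_1980 - 0x41000000) / 3600
    seconds_since_1980 = ts * 3600 + 0x41000000
    last_seen = datetime(1980, 1, 1) + timedelta(seconds=seconds_since_1980)
    dotted = ".".join(str((ip >> s) & 0xFF) for s in (24, 16, 8, 0))  # byte order assumed
    return dotted, real_port, last_seen
```

The first peer would then be decoded from the 8 bytes immediately following the 16-byte common header, after the rolling-xor decryption shown above has been applied.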
We can potentially hypothesize that this botnet has been retooled to allow botnet subgroups using the generation of new encryption keys and RSA binary signing keys. The result of this would mean multiple actors could use the same malware, similarly to trends that have been observed with other seed-based malware, like Tiny Banker (aka Tinba). This would make takedowns more difficult in that there is no one central peer to peer network, but rather a cluster of peer to peer networks. Another plausible hypothesis is that the ZeroAccess code base has been sold (or stolen) to a third party, one who is more interested in targeting Eastern Europe. These types of networks are becoming more and more resilient to takedowns, as observed with Dridex, and even ZeroAccess itself is still maintaining its presence. With multiple authors and actors involved, the difficulty of takedown attempts by technical means is also compounded by the difficulty of prosecuting multiple authors (and actors) in multiple geopolitically diverse jurisdictions. We would like to thank MalwareTech for his invaluable contributions to this research.
How to make genotype from allele data

Hi, I have data composed of allele information (homo, hetero; normal, disease, etc.). Each patient has this information; a table with several columns looks like:

Patient  SNP1  SNP2
P1       AA    0
P2       Ab    1
P3       BB    0

My question is: does R have any packages for calculating genotype groups and allele frequency? Thanks

Tags: SNP, R, gene

What is the last column in your input with 0 and 1? Why R, and what questions are you going to study with it? plink is a popular tool for genotype data analysis. There is a way to use plink from inside R =) As always with R there are packages dedicated to very specific tasks, and many more.

Thanks @Petr Ponomarenko. The last column is another representation of my SNP columns. And about your second question, I'm more familiar with R; I have no information about PLINK. Can I use PLINK for Sanger SNP data like this:

This appears to me as a screenshot with no structure, just numbers in a box =) What are the columns, rows and data types? Please describe the dataset. R is great, but plink might be easier to get past reviewers since it has a lot of different statistical models and tests inside and ways to format data in popular ways.
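If it helps to see the counting the question is asking about spelled out, here is a small sketch in Python/pandas rather than R, purely to illustrate the logic; the column names come from the example table, and treating lower-case 'b' and upper-case 'B' as the same allele is my assumption.

```python
import pandas as pd

# Toy data shaped like the table in the question
df = pd.DataFrame({
    "Patient": ["P1", "P2", "P3"],
    "SNP1": ["AA", "Ab", "BB"],
})

# Genotype group counts (AA / AB / BB)
genotype_counts = df["SNP1"].str.upper().value_counts()

# Allele frequencies: each genotype contributes two alleles
alleles = df["SNP1"].str.upper().apply(list).explode()
allele_freq = alleles.value_counts(normalize=True)

print(genotype_counts)
print(allele_freq)
```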
### Example 15.1 MPS-Format Data Set for a Product Mix Problem Consider a simple product mix problem where a furniture company tries to find an optimal product mix of four items: a desk (), a chair (), a cabinet (), and a bookcase (). Each item is processed in a stamping department (STAMP), an assembly department (ASSEMB), and a finishing department (FINISH). The time each item requires in each department is given in the input data. Because of resource limitations, each department has an upper limit on the time available for processing. Furthermore, because of labor constraints, the assembly department must work at least 300 hours. Finally, marketing tells you not to make more than 75 chairs, to make at least 50 bookcases, and to find the range over which the selling price of a bookcase can vary without changing the optimal product mix. This problem can be expressed as the following linear program: The following DATA step saves the problem specification as an MPS-format SAS data set: data prodmix; infile datalines; input field1 $field2$ field3$field4 field5$ field6; datalines; NAME . PROD_MIX . . . ROWS . . . . . MAX PROFIT . . . . L STAMP . . . . L ASSEMB . . . . L FINISH . . . . COLUMNS . . . . . . DESK STAMP 3.0 ASSEMB 10 . DESK FINISH 10.0 PROFIT 95 . CHAIR STAMP 1.5 ASSEMB 6 . CHAIR FINISH 8.0 PROFIT 41 . CABINET STAMP 2.0 ASSEMB 8 . CABINET FINISH 8.0 PROFIT 84 . BOOKCSE STAMP 2.0 ASSEMB 7 . BOOKCSE FINISH 7.0 PROFIT 76 RHS . . . . . . TIME STAMP 800.0 ASSEMB 1200 . TIME FINISH 800.0 . . RANGES . . . . . . T1 ASSEMB 900.0 . . BOUNDS . . . . . UP BND CHAIR 75.0 . . LO BND BOOKCSE 50.0 . . ENDATA . . . . . ;
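For reference, written out with DESK, CHAIR, CABINET, and BOOKCSE as the decision variables (my choice of symbols, matching the column names in the data), the linear program encoded by the MPS entries above is the following; the RANGES value of 900 on the ASSEMB row is what expresses the "at least 300 hours" labor requirement:

$$\begin{aligned} \text{maximize}\quad & 95\,\text{DESK} + 41\,\text{CHAIR} + 84\,\text{CABINET} + 76\,\text{BOOKCSE} \\ \text{subject to}\quad & 3.0\,\text{DESK} + 1.5\,\text{CHAIR} + 2.0\,\text{CABINET} + 2.0\,\text{BOOKCSE} \le 800 \quad (\text{STAMP})\\ & 300 \le 10\,\text{DESK} + 6\,\text{CHAIR} + 8\,\text{CABINET} + 7\,\text{BOOKCSE} \le 1200 \quad (\text{ASSEMB})\\ & 10\,\text{DESK} + 8\,\text{CHAIR} + 8\,\text{CABINET} + 7\,\text{BOOKCSE} \le 800 \quad (\text{FINISH})\\ & \text{CHAIR} \le 75,\qquad \text{BOOKCSE} \ge 50,\qquad \text{all variables} \ge 0. \end{aligned}$$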
What is the radiation resistance of an antenna which is radiating a power of 100 kW and draws a current of 100 A?
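For reference, under the usual assumption that the quoted 100 A is the RMS current at the antenna terminals, radiated power and radiation resistance are related by $P = I^2 R_r$, which gives:

$$R_r = \frac{P}{I^2} = \frac{100\,000\ \text{W}}{(100\ \text{A})^2} = 10\ \Omega$$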
雅思备考前的冲刺对考生来说是十分重要的,在冲刺阶段一定要坚持练习。本文智课网为大家带来的是雅思阅读考前练习:Striking Back at Lightning,一起来具体的了解一下吧。 点击》》了解雅思阅读考前练习汇总 Striking Back at Lightning With Lasers Seldom is the weather more dramatic than when thunderstorms strike. Their electrical fury inflicts death or serious injury on around 500 people each year in the United States alone. As the clouds roll in, a leisurely round of golf can become a terrifying dice with death — out in the open, a lone golfer may be a lightning bolt’s most inviting target. And there is damage to property too. Lightning damage costs American power companies more than $100 million a year. But researchers in the United States and Japan are planning to hit back. Already in laboratory trials they have tested strategies for neutralising the power of thunderstorms, and this winter they will brave real storms, equipped with an armoury of lasers that they will be pointing towards the heavens to discharge thunderclouds before lightning can strike. The idea of forcing storm clouds to discharge their lightning on command is not new. In the early 1960s, researchers tried firing rockets trailing wires into thunderclouds to set up an easy discharge path for the huge electric charges that these clouds generate. The technique survives to this day at a test site in Florida run by the University of Florida, with support from the Electrical Power Research Institute (EPRI), based in California. EPRI, which is funded by power companies, is looking at ways to protect the United States’ power grid from lightning strikes. ‘We can cause the lightning to strike where we want it to using rockets,’ says Ralph Bernstein, manager of lightning projects at EPRI. The rocket site is providing precise measurements of lightning voltages and allowing engineers to check how electrical equipment bears up. Bad behaviour But while rockets are fine for research, they cannot provide the protection from lightning strikes that everyone is looking for. The rockets cost around$1,200 each, can only be fired at a limited frequency and their failure rate is about 40 per cent. And even when they do trigger lightning, things still do not always go according to plan. ‘Lightning is not perfectly well behaved,’ says Bernstein. ‘Occasionally, it will take a branch and go someplace it wasn’t supposed to go.’ And anyway, who would want to fire streams of rockets in a populated area? ‘What goes up must come down,’ points out Jean-Claude Diels of the University of New Mexico. Diels is leading a project, which is backed by EPRI, to try to use lasers to discharge lightning safely — and safety is a basic requirement since no one wants to put themselves or their expensive equipment at risk. With around $500,000 invested so far, a promising system is just emerging from the laboratory. The idea began some 20 years ago, when high-powered lasers were revealing their ability to extract electrons out of atoms and create ions. If a laser could generate a line of ionisation in the air all the way up to a storm cloud, this conducting path could be used to guide lightning to Earth, before the electric field becomes strong enough to break down the air in an uncontrollable surge. To stop the laser itself being struck, it would not be pointed straight at the clouds. Instead it would be directed at a mirror, and from there into the sky. The mirror would be protected by placing lightning conductors close by. 
Ideally, the cloud-zapper (gun) would be cheap enough to be installed around all key power installations, and portable enough to be taken to international sporting events to beam up at brewing storm clouds. A stumbling block However, there is still a big stumbling block. The laser is no nifty portable: it’s a monster that takes up a whole room. Diels is trying to cut down the size and says that a laser around the size of a small table is in the offing. He plans to test this more manageable system on live thunderclouds next summer. Bernstein says that Diels’s system is attracting lots of interest from the power companies. But they have not yet come up with the$5 million that EPRI says will be needed to develop a commercial system, by making the lasers yet smaller and cheaper. ‘I cannot say I have money yet, but I’m working on it,’ says Bernstein. He reckons that the forthcoming field tests will be the turning point — and he’s hoping for good news. Bernstein predicts ‘an avalanche of interest and support‘ if all goes well. He expects to see cloud-zappers eventually costing $50,000 to$100,000 each. Other scientists could also benefit. With a lightning ‘switch’ at their fingertips, materials scientists could find out what happens when mighty currents meet matter. Diels also hopes to see the birth of ‘interactive meteorology’ — not just forecasting the weather but controlling it. ‘If we could discharge clouds, we might affect the weather,’ he says. And perhaps, says Diels, we’ll be able to confront some other meteorological menaces. ‘We think we could prevent hail by inducing lightning,’ he says. Thunder, the shock wave that comes from a lightning flash, is thought to be the trigger for the torrential rain that is typical of storms. A laser thunder factory could shake the moisture out of clouds, perhaps preventing the formation of the giant hailstones that threaten crops. With luck, as the storm clouds gather this winter, laser-toting researchers could, for the first time, strike back. Questions 1-3 Choose the correct letter, A, B, C or D. 1 The main topic discussed in the text is A the damage caused to US golf courses and golf players by lightning strikes. B the effect of lightning on power supplies in the US and in Japan. C a variety of methods used in trying to control lightning strikes. D a laser technique used in trying to control lightning strikes. 2 According to the text, every year lightning A does considerable damage to buildings during thunderstorms. B kills or injures mainly golfers in the United States. C kills or injures around 500 people throughout the world. D damages more than 100 American power companies. 3 Researchers at the University of Florida and at the University of New Mexico A receive funds from the same source. B are using the same techniques. C are employed by commercial companies. D are in opposition to each other. Questions 4-6 Complete the sentences below. Choose NO MORE THAN TWO WORDS from the passage for each answer. 4 EPRI receives financial support from ..................... . 5 The advantage of the technique being developed by Diels is that it can be used..................... 6 The main difficulty associated with using the laser equipment is related to its..................... Questions 7-10 Complete the summary using the list of words, A-I, below. Write the correct letter, A-I, in boxes 7-10 on your answer sheet. In this method, a laser is used to create a line of ionization by removing electrons from 7 ..................... . 
This laser is then directed at 8 ..................... in order to control electrical charges, a method which is less dangerous than using 9..................... . As a protection for the lasers, the beams are aimed firstly at 10 ..................... . A cloud-zappers B atoms C storm clouds D mirrors E technique F ions G rockets H conductors I thunder Questions 11-13 Do the following statements agree with the information given in Reading Passage 1? YES if the statement agrees with the claims of the writer No if the statement contradicts the claims of the writer 11 Power companies have given Diels enough money to develop his laser. 12 Obtaining money to improve the lasers will depend on tests in real storms. 13 Weather forecasters are intensely interested in Diels’s system. 下面是本篇阅读的答案解析: Question 1 答案: D 关键词: main topic 定位原文: 文章标题 解题思路: 通过标题知道整篇文章的主旨是“通过激光来回击闪电”,因此答案是 D 选项,意思为 “一种用于控制闪电袭击的激光技术”,属于对标题的同义替换。 Question 2 答案: A 关键词: every year lightening 定位原文: 第1段内容 解题思路: 本题考查关于每年闪电情况的细节,可 定位于第一段。B 选项可以通过 golfer 一词来定 位,也在第一段,原文意思是“孤单的高尔夫球 手或许将是闪电之箭最为有吸引力的目标”,选 项 B“在美国主要杀死或者伤害高尔夫球手”改 变了原意 ;C 和 D 选项可以分别通过 500,100 这两个数字来定位到第一段,但是 C 选项中将原 文 in the United States 偷换成了 throughout the world,因此不对;D中将原文的$100 million 偷换成 100 companies,也不对。通过对第一段 的概括,可以知道闪电带来的影响是非常大的, 因此答案是 A。 Question 3 答案: A 关键词: University of Florida, University of New Mexico 定位原文: 第三段和第五段内容 解题思路: 题目问的是 University of Florida 和 University of New Mexico 的研究员的关系。通 过 University of Florida 和 University of New Mexico 分别定位至第三段和第五段。对两处论 述进行对比,不难得出两者共同之处是“从同一来源获得经费”,都是 EPRI。答案是 A。 Question 4 答案: power companies 关键词: EPRI, financial support 对应原文: 第3段第4句“EPRI, which is funded…” 解题思路: 用EPRI定位到文章第三段,EPRI第一次出现之后即指出其是由电力公司资助的,原文中的funded 等同于题干中的 receives financial support from, 因此答案应该填power companies。注意不要写成单数。 Question 5 答案: safely 关键词: Diels, advantage 定位原文: 第5段第3句“...to try to use lasers to…” 解题思路: 用人名Diels在文中定位到第五段,从题目看出这里应填入一个副词,所以可以在人名周围寻找 use或者use的替换词,并且在其周围找带有-ly形式的词,这样正确答案safely很快就能浮出水面了。 Question 6 答案: size 关键词: difficulty, laser equipment 定位原文: 第7段第1、2句“…The laser is no nifty…” 解题思路: 这道题目的定位稍微有一些困难,需要将 difficulty一词与文章中的stumbling block联系起来,进而找到第七段中的laser一词。文中提到,该激光设备并不方便携带,它是个体积占据了一整间房间的庞然大物。看到这里,通过理解,考生们可以想到激光设备最大的问题就是体积太大,不好携带,所以正确答案是size。 Question 7 答案: B 关键词: removing electrons 定位原文: 第6段第1句“...to extract electrons out…” 解题思路: 本题关键是要理解题目中的remove...from...与文中的extract...out of...属于同义替换,这里要表达的是从原子(atoms)中提取电荷(electrons)。 Question 8 答案: C 关键词: then, control electrical charges 定位原文: 第6段第2句“If a laser could generate a line of ionization in the air all the way up to a storm cloud...” 解题思路: 注意文中generate是“产生”的意思;directed at对应文中的 all the way up to,其后的 a storm cloud即对应空格处要填的内容。因此正确答案是C。 Question 9 答案: G 关键词: less dangerous than 定位原文: 第4段和第5段内容 解题思路:解答本题需要对文章有一个提炼,第 9 题问的是激光是相对于哪种方式更加有安全 的技术。根据第四段和第五段可以知道,第四段说火箭发射的缺点,第五段说出于安全性的考虑开始使用激光,因此答案应该是火箭(rockets)。 Question 10 答案: D 关键词: protection, aimed firstly 定位原文: 第6段第3、4句“To stop the laser…” 解题思路: protection对应文中的 stop...being struck; at是解题关键词,即使不知道文中的directed和题目中的aimed是同义词,也可以从词组的形式上看出来两者是同位的,其后的名词即为答案。由此可知答案是D。 Question 11 答案: NO 关键词: Diels, enough money 定位原文: 第8段第3句“‘I cannot say I have…” 解题思路: “I cannot say I have money yet, but I am working on it”( “我还不能说我已经拿到钱了,但是我正在为之努力。”)看到这句话,再联系上句:Bernstein says that Diels’ system is attracting lots of interest from the power companies. 
But they have not yet come up with the $5 million that EPRI says will be needed to develop a commercial system... (Bernstein says that Diels's laser system is attracting wide interest from the power companies, but they have not yet put up the $5 million that EPRI says is needed to develop a commercial system.) These two sentences are enough to show that Diels's system has not yet received sufficient funding. Question 12 Answer: YES Keywords: depend on tests in real storms Location in the passage: paragraph 8, sentence 4, "He reckons…" Reasoning: From Bernstein's words in paragraph 8 we know that he reckons the forthcoming field tests will be the turning point, and he is hoping for good news. If all goes well, Bernstein predicts that interest and support will come flooding in. The statement agrees with the passage. Question 13 Answer: NOT GIVEN Keywords: Diels, weather forecasters Location in the passage: paragraph 9, last two sentences, "Diels also hopes…" Reasoning: Although paragraph 9 does mention weather forecasting, namely that Diels hopes to see "interactive meteorology" in the future, not just forecasting the weather but also controlling it, it never mentions the attitude of weather forecasters; they might be interested or they might not, so there is no way to judge. The above is the IELTS reading pre-exam practice "Striking Back at Lightning" answers and analysis shared by Zhike; we hope it is helpful.
# Linear Search in Python

Written by Himani Kohli

In this program, we will learn to search for an element in a given array using the linear search technique. A linear or sequential search, as the name suggests, is done by inspecting each item in a list one by one from one end to the other to find a match for what you are searching for. It is the simplest searching technique, and also one of the slowest, with worst-case time complexity O(n).

Algorithm:

1. Input the number to be searched from the user and store it in variable n.
2. Array a is initialized.
3. Using a for loop, perform the linear search.
4. Check if n == a[i]; if true, print "Number found".
5. Also, print its index or position.
6. Iterate until the desired number, which is asked of the user, is found.
7. Exit.

Code:

n = int(input("Enter the number to be searched (1-10):"))
a = [1, 2, 4, 3, 5, 7, 9, 8, 6, 10]
for i in range(len(a)):          # start at index 0 so the first element is also checked
    if n == a[i]:
        print("Number found at", i + 1)   # 1-based position
        break                             # stop once the number is found

Output:

Enter the number to be searched (1-10):7
Number found at 6
Assign a table or an expression result to a R symbol in the Datashield R session. Note that usage of usage of respectively datashield.assign.table or datashield.assign.expr should be preferred for readability. datashield.assign( conns, symbol, value, variables = NULL, missings = FALSE, identifiers = NULL, id.name = NULL, async = TRUE, success = NULL, error = NULL ) ## Arguments conns DSConnection-class object or a list of DSConnection-classs. symbol Name of the R symbol. value Fully qualified name of a table reference in data repositories (see datashield.assign.table for more details) OR a R expression with allowed assign functions calls (see datashield.assign.expr for more details). variables List of variable names or Javascript expression that selects the variables of a table (ignored if value does not refere to a table). See javascript documentation: http://opaldoc.obiba.org/en/latest/magma-user-guide/variable/ missings If TRUE, missing values will be pushed from data repository to R, default is FALSE. Ignored if value is an R expression. identifiers Name of the identifiers mapping to use when assigning entities to R (if supported by data repository). id.name Name of the column that will contain the entity identifiers. If not specified, the identifiers will be the data frame row names. When specified this column can be used to perform joins between data frames. async Whether the result of the call should be retrieved asynchronously (TRUE means that calls are parallelized over the connections, when the connection supports that feature, with an extra overhead of requests). success Callback function that will be called each time an assignment is successful. The expected function signature is the connection/study name. Default is NULL (no callback). error Callback function that will be called each time the assignment request has failed. The expected function signature is the connection/study name and the error message. Default is NULL (no callback). ## Examples if (FALSE) { # assign a list of variables from table CNSIM1 datashield.assign(conn, symbol="D", value="CNSIM.CNSIM1", variables=list("GENDER","LAB_GLUC")) # assign all the variables matching 'LAB' from table CNSIM1 datashield.assign(conn, symbol="D", value="CNSIM.CNSIM1", variables="name().matches('LAB_')") # do assignment with callback functions datashield.assign(conns, "D", list(server1="CNSIM.CNSIM1", server2="CNSIM.CNSIM2"), success = function(server) { # do something with server's success }, error = function(server, error) { # do something with server's error message }) }
## Fitting 4 parameter distributions in S-Plus (or R)

Hi, I am trying to fit sample data to a Johnson SU distribution in S-Plus. It seems not many people use S-Plus, so if you are familiar with R then you could help as well. The code that I have is:

f.Jsu.fun.takeslist(x,g,l,r,e)

which is a function I have made that calculates the PDF of each value of the list x and has parameters g,l,r,e corresponding to the Johnson SU distribution. I know this PDF works because I have used it to plot graphs.

fitdistr(turn$all.turn.y.obs,f.Jsu.fun.takeslist,list(g=0.5,r=3,l=3000, e=-200))

is then what I am trying to use to fit the Johnson SU distribution. fitdistr is a native S-Plus function. Now, the fitdistr doesn't work on this function. What I have done previously though is fitted the Gumbel distribution using the same approach by creating my own PDF function, and fitdistr worked and provided a good fit. The Gumbel distribution is 2 parameter and the Johnson SU is 4 parameter, so I am thinking it is just too many parameters for it to handle, although no limits are specified for fitdistr. I am by no means a master of S-Plus or R so I would appreciate anything you guys have to say on this. Thanks!

Can you guarantee that turn$all.turn.y.obs follows a Johnson SU? If you can't, then maybe the fit is just not good; anyhow, it might also be worth looking for outliers in your data. The number of parameters should not be the problem; if anything, you could also try to increase the number of max iterations for the optimizers used by fitdistr.
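For a cross-check outside S-Plus/R, the same four-parameter maximum-likelihood fit is available in Python's SciPy; this is only a sketch, and the file name is a placeholder for the turn$all.turn.y.obs vector exported from S-Plus.

```python
import numpy as np
from scipy import stats

# 1-D array exported from S-Plus; "all_turn_y_obs.txt" is a placeholder name
turn_data = np.loadtxt("all_turn_y_obs.txt")

# Johnson SU has two shape parameters (a, b) plus loc and scale;
# fit() performs maximum-likelihood estimation of all four.
a, b, loc, scale = stats.johnsonsu.fit(turn_data)
print(a, b, loc, scale)

# Rough goodness-of-fit check against the fitted distribution
ks_stat, p_value = stats.kstest(turn_data, "johnsonsu", args=(a, b, loc, scale))
print(ks_stat, p_value)
```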
# Is every NP-hard problem computable? Is it required that a NP-hard problem must be computable? I don't think so, but I am not sure. No, an $NP$-hard problem need not be computable. The definition is fairly complete: a problem $L$ is $NP$-hard if that problem having a poly-time solution implies every problem in $NP$ has a poly-time solution (that is, a reduction to $L$ exists for every problem in $NP$.). Uncomputable problems are then vacuously hard: suppose we could solve one in polynomial time. Then we use the proof that it's uncomputable to derive that it's both computable and uncomputable, a contradiction. From this falsehood, we can derive anything, namely that there is a polynomial time algorithm for whatever $NP$ problem we are looking at. For example, consider the halting problem $H$. We can reduce any $NP$ language $A$ to $H$ as follows, assuming we have a polytime checker $f(s,c)$ which checks if $c$ is a certificate for $s\in A$: • Given input $s$ • Construct (but don't run) Turing Machine $M$ which takes input $x$ tries every certificate $c$ and halts if $c$ is a certificate verifying that $s\in A$. • Return $H(M,x)$ (that is, return true iff $M$ halts on input $x$) Thus, with a single call to a poly-time algorithm solving the Halting Problem, we can solve any $NP$ problem in polynomial time. Such a reduction is not useful, because all it does is tells if "if false then something". We already know that there's no polytime algorithm for uncomputable problems. • "The definition is fairly complete", but is not what follows that quote in your answer. ​ ​ – user12859 Nov 7 '16 at 0:02 • I have a question about this. I can imagine a function that solves the halting problem for the largest set of programs possible under some appropriate constraints, but I can imagine this function still not being computable (in the sense that we would never find it even given an infinite amount of time). Yet if we somehow did have the solution to it, it's not even clear to me that it should solve all NP-hard problems necessarily. So either the logic in this answer doesn't follow (meaning undecidable != uncomputable), or my reasoning is flawed (likely). So what is the flaw? – user541686 Nov 7 '16 at 1:27 • Most of this answer is incorrect, including your definition of NP hard: problem A is NP hard if, "for every NP problem B, there is a poly-time reduction of B to A." That is not the same thing as "if A is poly-time, then P = NP." (The latter is a consequence of the former, but not vice versa.) In particular, there are almost certainly non-computable problems that also fail to be NP hard. I haven't worked out the details, but problem of membership in a sufficiently generic set (in the sense of forcing) should do the trick. The halting set, specifically, is NP-hard, however, by your reduction. – user61012 Nov 7 '16 at 4:35 • Think about a poly-time reduction from A to B like this: it is a program that runs in polynomial time, but it has the special ability to query, in a single step, an oracle that answers instances of problem B. Regardless of whether there is a poly-time algorithm for B, or even whether B is computable, it still makes sense to ask the following question: assuming that the oracle correctly answers the questions asked of it (in a single step), does the program in question run in polynomial time and correctly solve instances of problem A? 
– user61012 Nov 7 '16 at 4:41 • @MikeHaskel Your oracle analogy is only accurate if, after querying the oracle, the program must stop with the same answer as that oracle. Otherwise, co-SAT reduces to SAT: query the oracle and negate. In some reduction notions e.g. Turing reduction, this would be acceptable, but in standard poly-time reduction, or even in many-one reduction, it is not. – chi Nov 7 '16 at 20:18 There appears to be some considerable confusion in this community regarding this question. I'll give a detailed answer in the hope of clearing up the water and illuminating the relationship between computability and NP-hardness. First, I believe that being clear and explicit about the various definitions involved will resolve a lot of the confusion. A string is a finite sequence of characters from some fixed finite alphabet. A decision problem is a set of strings. (This set is typically infinite.) Think of the decision problem as testing strings for some property: the strings with the property are in the set, and the strings without the property are not. Assume we have two decision problems, $$A$$ and $$B$$. Say $$A$$ is polynomial-time reducible to $$B$$ if there is some polynomial $$p(x)$$ and algorithm some algorithm $$M$$ such that, for all strings $$s$$, • If you provide $$M$$ with input $$s$$, $$M$$ halts in fewer than $$p(|s|)$$ steps (where $$|s|$$ is the length of the string $$s$$) and outputs a string $$M(s)$$. • $$s$$ is in $$A$$ if and only if $$M(s)$$ is in $$B$$. A decision problem $$B$$ is NP-hard if, for every NP decision problem $$A$$, $$A$$ is polynomial-time reducible to $$B$$. A decision problem is computable if there is an algorithm $$M$$, that, for all strings $$s$$, • If you provide $$M$$ with input $$s$$, $$M$$ halts and outputs either "yes" or "no". • The output is "yes" if $$s$$ is in $$A$$ and "no" otherwise. With the above definitions, we can immediately clarify what I think might be the root confusion in your question: nothing in the definitions of decision problem, reductions, or NP-hardness requires the decision problems to be computable. The definitions make perfect sense thinking of decisions problems as arbitrary sets of strings, and these sets can be very nasty indeed. That leaves two questions on the table: 1. The definitions leave open the possibility that non-computable functions might be NP-hard. Are there actually non-computable, NP-hard functions? 2. There is an intuition that saying a problem is NP-hard is saying that it is hard to solve. Saying that it is non-computable is like saying it's "really hard" to solve. So, are all non-computable problems NP-hard? Question 1 is easier to answer. There are two particularly important ways to find non-computable decision problems that are NP-hard. The first is the halting problem: the halting problem, $$H$$, has the property that every computable decision problem is polynomial-time reducible to $$H$$. Since NP problems are computable, every NP problem is polynomial-time reducible to $$H$$, so $$H$$ is NP-hard. The other important way to build a non-computable, NP-hard problem is to observe that we can combine any known NP-hard problem with any known non-computable problem. Let $$A$$ be NP-hard and $$B$$ be non-computable. Form the decision problem $$A \oplus B$$ as follows: $$A \oplus B$$ contains those strings of the form "0, followed by a string in $$A$$" and those of the form "1, followed by a string in $$B$$". 
$$A \oplus B$$ is NP-hard because we can turn any reduction (of any problem) to $$A$$ into a reduction to $$A \oplus B$$: just tweak the algorithm to output an extra "0" at the front of its output string. $$A \oplus B$$ is non-computable, since computing $$A \oplus B$$ requires deciding which strings that start with "1" are in the set; this is impossible, since $$B$$ is non-computable. Question 2 is considerably tricker, but in fact there are non-computable decision problems that are not NP-hard (assuming P $$\neq$$ NP). Yuval's fine answer constructs such a decision problem explicitly. (For any computability theorists in the room, any "Cohen $$\Pi^0_1$$-generic" will do the trick, as well.) I'll break down why the intuition that "NP-hard problems are hard, non-computable problems are harder" is wrong. NP-hardness and non-computability both say that a problem is "hard" in a very general sense, but they are very different and shouldn't be lumped together as the same kind of phenomenon. Specifically, NP-hardness is a "positive" property: an NP-hard problem $$A$$ is hard in the sense that, given access to a cheat sheet for $$A$$, you can solve a hard class of problems. On the other hand, non-computability is a "negative" property: a non-computable problem $$A$$ hard in the sense that you cannot solve $$A$$ with a given class of resources. ("Forcing," by the way, is the technique used to produce the "Cohen $$\Pi^0_1$$ generic" that I mentioned. To be very very vague, forcing is a general way to produce things that are "generic" in that they have no positive properties and every negative property. That's why forcing can directly produce a problem that's neither computable nor NP-hard.) • Can't you construct an undecidable language which isn't NP-hard by diagonalization? Diagonalize against all deciders and all polytime reductions from SAT. – Yuval Filmus Nov 7 '16 at 23:22 • @YuvalFilmus That probably works, yeah. I think writing out the details for why diagonalizing against polytime reductions from SAT is possible amounts is similar in flavor to showing that forcing works, though, so I didn't think about it in those terms. – user61012 Nov 8 '16 at 1:26 • @YuvalFilmus I also added the clarification just now that you have to assume P $\neq$ NP: there was definitely a step in my proof that read "take some problem in NP but not in P." – user61012 Nov 8 '16 at 1:28 • @aelguindy I'm not sure what the most accessible method to prove it is. I mentioned the technique of forcing, which is very general and powerful. I learned it from people, not textbooks, so I don't personally know of a great reference on forcing. As Yuval pointed out, however, forcing is probably overkill: some more direct argument involving diagonalization probably works. Soare's "Recursively Enumerable Sets and Degrees" is a textbook that covers a lot of that style of argument if you want to become familiar with it. Again, most of it is probably overkill, though. ... – user61012 Nov 8 '16 at 1:45 • @aelguindy Also, if you think of the set of of decision problems as a topological space, you can probably massage the Baire Category theorem to produce a proof. This theorem is closely related to forcing, but is older and more straightforward. – user61012 Nov 8 '16 at 1:46 Nope. NP-Hard means it is as hard, or harder, than the hardest NP-problems. Intuitively, being uncomputable will make it a lot more difficult than NP. Wikipedia: There are decision problems that are NP-hard but not NP-complete, for example the halting problem. 
Everyone knows that is not computable • Note that, while some non-computable problems (like the halting problem) are NP-hard, that does not mean that all non-computable problems are NP-hard. See my comments on jmite's answer. NP-hardness is a positive property: it says that answers to your problem can help solve NP problems. Being NP-hard implies that the problem is, to some degree, difficult. Not all difficult problems are NP-hard. – user61012 Nov 7 '16 at 4:45 • @MikeHaskel: Possessing the solution to the halting problem reduces all problems to P * difficulty of the halting problem.. – Joshua Nov 7 '16 at 17:05 • @Joshua: That makes no sense. It's like a fragment of a non-proof. What do you even mean for a problem to have a finite number of bits in its solution, and why do you think this applies to all uncomputable problems? What do you mean by "P * halts"? What's the rest of "reduce via the nth bit of ..."? – user2357112 supports Monica Nov 7 '16 at 20:36 • @Joshua: Looks like the core issue is that you're assuming that every problem corresponds to a Turing machine. Not every problem corresponds to a Turing machine. There's no problem() function we can call. – user2357112 supports Monica Nov 8 '16 at 5:33 • You should probably move this to chat or something – Destructible Lemon Nov 8 '16 at 5:37 For completeness, let us prove the following theorem: There exists an uncomputable language which is not NP-hard if and only if P$\neq$NP. If P=NP then any non-trivial language (one which differs from $\emptyset,\{0,1\}^*$) is NP-hard (exercise), and in particular any uncomputable language is NP-hard. Now suppose that P$\neq$NP. Let $T_i$ be some enumeration of all Turing machines. We will construct the required language $L$ in stages. At each stage we will keep a $\{0,1,?\}$ coloring of $\{0,1\}^*$ which we also denote by $L$; here $0$ means that we have decided that the string is not in $L$, $1$ means that we have decided that the string is in $L$, and $?$ means that we haven't decided yet. All but finitely many strings will be colored $?$. In step $2i$, we think of $T_i$ as a machine which either accepts its input, rejects it, or never halts. If $T_i$ doesn't always halt then we don't do anything. If $T_i$ always halts then we find a string $x$ such that $L(x) = ?$, and set $L(x) := 0$ if $T_i(x)$ accepts and $L(x) := 1$ if $T_i(x)$ rejects. In step $2i+1$, we think of $T_i$ as a machine computing a (possibly) partial function on its input. If $T_i$ isn't total, or if it is total but doesn't run in polynomial time, or if it is total but its range is finite, we don't do anything. If $T_i$ is total, runs in polynomial time, and has infinite range, then we find a string $x$ such that $L(T_i(x)) = ?$. If $x \in \mathrm{SAT}$ (that is, if $x$ encodes a satisfiable CNF) then we set $L(x) := 0$, and otherwise we set $L(x) := 1$. After infinitely many steps, we get a $\{0,1,?\}$ coloring of $\{0,1\}^*$ which we complete to an actual language in an arbitrary way. The resulting language $L$ isn't computable: step $2i$ ensures that $T_i$ doesn't compute it. It also isn't NP-hard, but here the reasoning is a bit more delicate. Suppose that $T_i$ is a polytime reduction from SAT to $L$. If the range of $T_i$ is finite then we can turn $T_i$ into a polytime machine deciding SAT, by listing the truth table of $L$ on the range of $T_i$. This is impossible by the assumption P$\neq$NP. Thus $T_i$ has infinite range, but then step $2i+1$ rules out its being a reduction from SAT to $L$. 
A language $L$ is NP-hard if for every $L' \in \mathrm{NP}$ we have that $L'$ is polynomial-time reducible to $L$. The acceptance problem for nondeterministic Turing machines $$A_{\mathsf{NTM}} = \{ \langle M,w \rangle \mid M \text{ is a nondeterministic Turing machine that accepts } w \}$$ is undecidable and is NP-hard. For, consider an $L' \in \mathrm{NP}$. $L'$ is decided by some nondeterministic Turing machine $M'$ with polynomial time complexity. A poly-time reduction $f$ from $L'$ to $A_{\mathsf{NTM}}$ is given by $$f(x) = \langle M',x \rangle$$ I think what causes people to think there is no uncomputable NP-hard problem is that they miss the point that NP-hardness is a lower bound on the hardness of a problem, not an upper bound on its hardness like P or NP. A language $L$ being NP-hard means that it is at least as hard as every language in NP, and that is all. Once you understand this, what we need is to show that there are arbitrarily harder problems. Let $A$ be a language. Consider algorithms augmented with a black box that they can use to decide membership in $A$. Let's denote them by $\mathsf{C}^A$. It is easy to see that the halting problem for $\mathsf{C}^A$, $Halt_{\mathsf{C}^A}$, is not decidable by anything in $\mathsf{C}^A$. In computability theory this is called the jump of $A$ and is denoted by $A'$. So $A < A'$ strictly. And nothing stops us from repeating this: $A<A'<A''<A'''<...$
# Need help protecting your anti-exploit scripts? [closed]

So a lot of people tend to get stuck wondering how in the world they will be able to protect a LocalScript. I thought I'd try to help anyone in need of an answer to this. One way you can do this is by having two LocalScripts. So, there's LocalScript1, and LocalScript2. Let's say LocalScript1 has the anti-exploit code in it. We will use LocalScript2 to detect if LocalScript1 (anti-exploit) is deleted. In LocalScript1 we will do the same thing, the only difference is that it will detect if LocalScript2 is deleted. Doing this will keep them both protected by repeatedly checking each other with a loop. In this case we will just use a while loop.

Code Example: LocalScript1

local LocalPlayer = game:GetService("Players").LocalPlayer
local LocalScript2 = script.Parent:WaitForChild("LocalScript2") --Reference to the other script; adjust the path to wherever you placed it.

--Anti-Exploit before the while loop.
while wait() do --Make sure this while loop is at the end of your script!
    if LocalScript2.Parent == nil then --Check if the script was deleted (a held reference never becomes nil, so check its Parent instead).
        LocalPlayer:Kick("Tried to delete Anti-Exploit") --Kick the player if they deleted LocalScript2.
    elseif LocalScript2.Disabled then --Check if the script is Disabled.
        LocalScript2.Disabled = false --Here you could Kick the player, but we will just enable it again in this example.
    end
end

So basically, make 2 scripts check if they are deleted and if one gets deleted the other kicks the player, and if one is disabled, the other enables it. Make sure to put them in a place where their parents won't get deleted or have the script check its parent. Hope this was helpful!

Comments:
- "This isn't that kind of site." (DesertusX)
- "This is not a place where you should release scripts." (VitroxVox)
- "Use https://devforum.roblox.com instead." (PrismaticFruits)

Answer (AspectW): First of all, the exploiters can destroy/disable both LocalScripts at once. Second of all, never kick on the client, they could easily bypass that kick. If you really want to get rid of an exploiter on the client then try crashing them with something like:

while true do end
Correlation is a measure of the extent to which two variables change together.  A positive correlation means that when we increase one variable, the other increases as well.  It also means that when we decrease one variable, the other decreases as well.  We are moving in the same direction!  A negative correlation means that when we change one variable, the other moves in the opposite direction.  A correlation of zero means that there is no linear relationship between the variables - changing one tells us nothing (linearly) about the other.

### How to calculate correlation?

Let's say that we have two variables, x and y.  Correlation is basically the covariance(x,y) divided by the standard deviation(x) multiplied by the standard deviation(y).  It's easy to remember because the equations for covariance and standard deviation are pretty much the same except for the variables that you plug in.  To calculate standard deviation, we subtract the mean from each value of x or y, square those values, add them up, and take the square root. You can see that we do this calculation for both x and y on the bottom of the fraction below.  Covariance (on the top!) is exactly the same except we subtract each of x and y from their respective means, and multiply those two results together.  We don't take the square root to calculate covariance (see my post comparing the two):

$$\mathrm{Cor}(X,Y) = \frac{\sum_i (x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_i (x_i-\bar{x})^2}\,\sqrt{\sum_i (y_i-\bar{y})^2}}$$

Again, the top of this equation is covariance(x,y), and the bottom is standard deviation(x) multiplied by standard deviation(y).  I think that this makes sense, because if x and y were exactly the same, the denominator would simplify to be the same as the numerator, and we would get Cor(X,Y) = 1.

One cool thing that I learned recently is that squared correlation is equal to the R (squared) statistic in the case of two variables, x and y!  I think that also means that we could take the square root of the R (squared) statistic and get the correlation.  This means that we could also use correlation as a metric for goodness of fit.  This works nicely in the case of our two variables; however, since correlation is only defined between two variables, we can't use it as this metric for multivariate regression.  In this case, Mr. R (squared) wins.
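As a quick check of the formula above, here is a small Python sketch that computes the correlation from the covariance and the two standard deviations, and confirms the squared-correlation / R-squared remark for a simple linear fit; the data values are made up for the example.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# correlation = covariance(x, y) / (std(x) * std(y))
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
corr = cov_xy / (x.std() * y.std())

print(corr)        # matches np.corrcoef(x, y)[0, 1]
print(corr ** 2)   # equals the R-squared of a simple linear regression of y on x
```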
# Tag Info ## Hot answers tagged vorticity 14 The equation of motion for a fluid parcel in the atmosphere (in Cartesian space) is $$\dfrac{D\mathbf u}{Dt} = -\dfrac{1}{\rho}\nabla p-2 \mathbf \Omega \times \mathbf u + \mathbf g + \mathbf F,$$ where $\mathbf u$ is the wind, $\rho$ is density, $p$ is pressure, $\mathbf\Omega$ is the angular velocity of the Earth, $\mathbf g$ is gravity and $\mathbf F$ ... 11 There are two key intermittent mid latitude circulation patterns during boreal winter. One is blocking flows, leading to the formation of blocking anticyclones. One can gauge the location of blocking events through the CPC blocking index. The other one is sudden stratospheric warming, where the winds cause a disturbance such that the temperature in the ... 9 Tornadoes, land/waterspouts and supercell thunderstorm mesocyclones are examples of vortices where Coriolis is unimportant. Tornadoes are in cyclostrophic balance. Land and waterspouts (as well as non-supercell tornadoes) arise from horizontal convergence of vertical vorticity. The supercell mesocyclone spin originates as horizontal vorticity that is ... 5 I am not not an expert in meteorology, but do study the chemistry involved in these types of events. My understanding is that the folds in the tropopause generally occur below the front of the jet stream, when the potential vorticity is strong enough to transport stratospheric air down through the tropopause/inversion. Please see the relevant quote from Q.... 2 It is more of a consistency argument: if a constant-depth ocean with much less relative vorticity than planetary vorticity is to stay that way, the flow must be zonal. If there were significant meridional velocities, they would cause significant meridional displacements of water masses, which would then, by conservation of angular momentum, no longer have ... 2 Based on an answer from ECMWF support which OP can confirm to my best understanding relative vorticity in the ECMWF model is not calculated using grid points and finite differences (centered and forward and backward). Instead it is calculated using spectral approaches in meteorology. There is a package in Fortran called spherepack and a python wrapper as ... 1 This one has not been answered for a long time and I am going to summarize what I wrote in the comments. If I understood OP's question correctly I believe it is asking why Potential Vorticity is not shown in weather maps in a operational sense. From this old(but still very useful reference) Isentropic Potential Vorticity presuming isentropic surface are ... Only top voted, non community-wiki answers of a minimum length are eligible
## Intermediate Algebra for College Students (7th Edition) $-200$ The nth term can be obtained by $a_n=a_1\cdot r^{n-1}$ where $a_1$ is the first term and $r$ is the common ratio. Hence here: $a_6=6400\cdot(\frac{-1}{2})^5=6400\cdot\frac{-1}{32}=-200$
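Not part of the textbook solution, but as a quick check, a short Python sketch (using the same first term 6400 and common ratio $-\frac{1}{2}$) lists the first six terms of the sequence:

```python
a1, r = 6400, -0.5
terms = [a1 * r ** (n - 1) for n in range(1, 7)]
print(terms)  # [6400.0, -3200.0, 1600.0, -800.0, 400.0, -200.0] -> a_6 = -200
```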
# serpentTools.data.getFile

`serpentTools.data.getFile(path)`

Retrieve the path to one of the reference or example files.

- **Parameters:** `path` (str) – The name of the file without any additional directory information
- **Returns:** Path to a data file that can be read with one of the readers or with `serpentTools.read()`
- **Return type:** str
- **Raises:** `IOError` – If no match for `path` exists
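A minimal usage sketch follows; the file name `'ref_dep.m'` is hypothetical and stands in for whichever bundled reference file you want, and an unknown name raises `IOError` as documented:

```python
import serpentTools
from serpentTools.data import getFile

# 'ref_dep.m' is a placeholder name used purely for illustration
try:
    path = getFile('ref_dep.m')        # resolve the full path to the bundled data file
    results = serpentTools.read(path)  # hand the path to the generic reader
except IOError:
    print("No bundled file by that name")
```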
# Solving Naive Bayes By Hand

This is part two in a series on classification with Naive Bayes.

## Learning Naive Bayes Through Repeated Interceptions

On the whole, the Naive Bayes class of algorithms tends to be pretty easy to understand, which is a part of that class's popularity. So let's see how easy it really is by solving a problem with Naive Bayes.

The 2018 NFL season is officially over (playoffs? What are those?) so let's take a look at a team which is most glad the Cleveland Browns exist so that they don't get all of the smack talk: the Buffalo Bills. This year, the Bills went 6-10. I'm going to use Naive Bayes techniques to build a predictor for whether the Bills will lose yet again. This will be useful if ever I spend an eternity having to re-watch the 2018 NFL season; I have already resolved to clean up my life to avoid this fate and I haven't even built the predictor yet…

To make this easy, I'm going to pick four variables:

- Who was the starting quarterback? My set is { Josh Allen, Somebody Else }, as that gives a fairly nice split of 11 games versus 5 games.
- Was this a home game or an away game? My set is { Home, Away } and naturally breaks down 8-8.
- Did the team score at least 14 points? My set is { Yes, No } and sadly, 13-14 points was the median points scored.
- Who was the top Bills receiver in terms of yardage? My potential set is { Zay Jones, Chris Ivory, LeSean McCoy, Charles Clay, Kelvin Benjamin, Robert Foster }. Yep, on three separate occasions, running backs led the team in receiving yardage, and we aren't talking about elite receiving backs like Alvin Kamara. But I'm not going with that full set because 3 of those 6 top receivers were 100% winners or (generally) 100% losers—I'll explain why this matters later. So let's bundle Ivory + McCoy and Clay + Benjamin, leaving Jones and Foster alone.

So with these four variables in mind, we're going to try to predict whether the Bills would be favored to win or lose a game. Here's how things stack up:

| Game | QB | H/A | 14 Points? | Top Receiver | W/L |
| --- | --- | --- | --- | --- | --- |
| 1 | Other | A | N | Zay Jones | L |
| 2 | Allen | H | Y | Zay Jones | L |
| 3 | Allen | A | Y | Chris Ivory | W |
| 4 | Allen | A | N | Charles Clay | L |
| 5 | Allen | H | N | LeSean McCoy | W |
| 6 | Allen | A | N | Kelvin Benjamin | L |
| 7 | Other | A | N | Kelvin Benjamin | L |
| 8 | Other | H | N | LeSean McCoy | L |
| 9 | Other | H | N | Kelvin Benjamin | L |
| 10 | Other | A | Y | Robert Foster | W |
| 11 | Allen | H | Y | Robert Foster | W |
| 12 | Allen | A | Y | Zay Jones | L |
| 13 | Allen | A | Y | Robert Foster | L |
| 14 | Allen | H | Y | Robert Foster | W |
| 15 | Allen | A | N | Zay Jones | L |
| 16 | Allen | H | Y | Zay Jones | W |

Now that we have our data, it's time to solve the problem. At least the problem of predicting victory, not the problem of scoring two touchdowns per game.

## Trust the Process

There are three steps to the process of solving the simplest of Naive Bayes algorithms. They are:

1. Find the probability of winning a game (that is, our prior probability).
2. Find the probability of winning given each input variable: whether Josh Allen starts the game, whether the team is home or away, whether the team scores 14 points, and who the top receiver was.
3. Plug in values from our new data into the formula to obtain the posterior probability.

So let's get to it!

## Prior Probability

Our prior probability is the likelihood of a win or a loss independent of any other conditions. The Bills went 6-10, so their probability of a win was 6/16 or 0.375.

## Per-Variable Probabilities

Now things get a little busier, but we'll look at it step by step. We want to get the probability of a victory for each value of each variable independent of all other variables.
It sounds complicated but it really isn't. Let's see how easy it is with an example.

### By Quarterback

| QB | W | L | P(x \| W) | P(x \| L) |
| --- | --- | --- | --- | --- |
| Allen | 5 | 6 | 5/6 = .833 | 6/10 = .600 |
| Other | 1 | 4 | 1/6 = .167 | 4/10 = .400 |
| Total: | 6 | 10 | 100% | 100% |

### Home or Away?

| Location | W | L | P(x \| W) | P(x \| L) |
| --- | --- | --- | --- | --- |
| Home | 4 | 4 | 4/6 = .667 | 4/10 = .400 |
| Away | 2 | 6 | 2/6 = .333 | 6/10 = .600 |
| Total: | 6 | 10 | 100% | 100% |

### Scored 14+ Points?

| "Big" Offense | W | L | P(x \| W) | P(x \| L) |
| --- | --- | --- | --- | --- |
| 14+ Points | 5 | 3 | 5/6 = .833 | 3/10 = .300 |
| < 14 Points | 1 | 7 | 1/6 = .167 | 7/10 = .700 |
| Total: | 6 | 10 | 100% | 100% |

| Top Receiver | W | L | P(x \| W) | P(x \| L) |
| --- | --- | --- | --- | --- |
| Clay + Benjamin | 0 | 4 | 0/6 = .000 | 4/10 = .400 |
| Ivory + McCoy | 2 | 1 | 2/6 = .333 | 1/10 = .100 |
| Robert Foster | 3 | 1 | 3/6 = .500 | 1/10 = .100 |
| Zay Jones | 1 | 4 | 1/6 = .167 | 4/10 = .400 |
| Total: | 6 | 10 | 100% | 100% |

As I mentioned above, I combined some receivers together so that I didn't end up with 100% probabilities. Well, Clay + Benjamin work well together as "guys the team gave up on and/or vice versa" and I think it's fitting they belong together. Meanwhile, Ivory & McCoy were the running backs, so there's something that feels right about combining them. Foster and Jones had both wins and losses so I could leave them be.

## Plug In Some Values

We now have our independent probability tables, so we can estimate whether the team will win or lose given these relevant inputs. Our formula for victory is:

$P(W|x) = \dfrac{P(QB_x|W) \cdot P(LOC_x|W) \cdot P(OFF_x|W) \cdot P(TR_x|W) \cdot P(W)}{P(x)}$

We have a formula for victory, but we also need a formula for a loss. Technically a team can tie, but the Bills didn't have any ties this year, so I'm treating it as a two-class problem to make things easier to follow.

$P(L|x) = \dfrac{P(QB_x|L) \cdot P(LOC_x|L) \cdot P(OFF_x|L) \cdot P(TR_x|L) \cdot P(L)}{P(x)}$

There's one more thing I need to bring up here: we don't really know the true probability of our particular set of variables coming true, the P(x) in our examples. But when we're classifying, we technically don't need to know P(x) because that part of the formula cancels out when doing an inequality comparison between P(W|x) and P(L|x). This is great for us because it makes our problem tractable, but it comes at the cost that the "probabilities" we output aren't truly probabilities, so there's another step if we really want to get those values.

With those formulas in mind, let's test some situations.

### Scenario 1: The Big Win

I'm going to throw out my ringer lineup here.

- Josh Allen is the starter.
- The team is playing at home.
- The team scores at least 14 points.
- Robert Foster is the leading receiver.

So we plug in the values of our two formulas based on the tables above. Let's start with winning. With appropriate subscripts we have:

$P(W|x) = \dfrac{P(QB_a|W) \cdot P(LOC_h|W) \cdot P(OFF_y|W) \cdot P(TR_r|W) \cdot P(W)}{P(x)}$

Plugging in values from the table we have our partial probability for victory:

$P(W|x_1) = \dfrac{5}{6} \cdot \dfrac{4}{6} \cdot \dfrac{5}{6} \cdot \dfrac{3}{6} \cdot \dfrac{6}{16} = 0.0868$

And for a loss:

$P(L|x_1) = \dfrac{4}{10} \cdot \dfrac{4}{10} \cdot \dfrac{3}{10} \cdot \dfrac{1}{10} \cdot \dfrac{10}{16} = 0.003$

As you can see, a win is much more likely than a loss in this scenario. As I mentioned above, the two outcomes are not really probabilities (even though I still call it "P"), but we can calculate that the probability of a win is approximately 97% by taking the partial probability of victory (0.0868) and dividing it by the total pool of partial probabilities (0.0868 + 0.003).
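If you would rather let code do the multiplication, here is a minimal Python sketch of the same hand calculation. The counts come straight from the tables above; the function and variable names are mine and purely illustrative:

```python
# (wins, losses) counts for each feature value, copied from the tables above
counts = {
    "qb":       {"Allen": (5, 6), "Other": (1, 4)},
    "location": {"Home": (4, 4), "Away": (2, 6)},
    "offense":  {"14+": (5, 3), "<14": (1, 7)},
    "receiver": {"Clay+Benjamin": (0, 4), "Ivory+McCoy": (2, 1),
                 "Robert Foster": (3, 1), "Zay Jones": (1, 4)},
}
total_w, total_l = 6, 10  # the 6-10 season

def partial_probabilities(qb, location, offense, receiver):
    """Numerators of P(W|x) and P(L|x); the shared P(x) denominator cancels out."""
    p_w = total_w / (total_w + total_l)   # prior for a win
    p_l = total_l / (total_w + total_l)   # prior for a loss
    for feature, value in (("qb", qb), ("location", location),
                           ("offense", offense), ("receiver", receiver)):
        w, l = counts[feature][value]
        p_w *= w / total_w
        p_l *= l / total_l
    return p_w, p_l

# Scenario 1: Allen starting, at home, 14+ points, Foster leading receiver
p_w, p_l = partial_probabilities("Allen", "Home", "14+", "Robert Foster")
print(p_w, p_l)           # roughly 0.0868 and 0.003
print(p_w / (p_w + p_l))  # roughly 0.97
```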
Most of the time, though, we don’t need to know the percentages—we just need to know that the Bills are likely to win this game, and it’s not close. Scenario 2: The Big Push In this scenario, we’ll change the inputs a little bit: • Nathan Barkerson is the quarterback. • The team is still playing at home. • The team does not score 14 points. • A running back is the leading receiver. I won’t do the LaTeX formulas for each step in the process, just the probabilities. Hopefully you get it at this point. Here’s the partial probability of victory: $P(W|x_2) = \dfrac{1}{6} \cdot \dfrac{4}{6} \cdot \dfrac{1}{6} \cdot \dfrac{2}{6} \cdot \dfrac{6}{16} = 0.0023$ And again, for a loss: $P(L|x_2) = \dfrac{4}{10} \cdot \dfrac{4}{10} \cdot \dfrac{7}{10} \cdot \dfrac{1}{10} \cdot \dfrac{10}{16} = 0.007$ This is a Buffalo Push: a 75% chance of losing. Speaking of losing… Scenario 3: The Big Loser This final scenario will hit on an issue with Naive Bayes that we’ll solve in a future post. • Josh Allen is the quarterback. • The team is playing at home. • The team scores 14 or more points. • Charles Clay and Kelvin Benjamin fight over the ball like two junkyard dogs, combining for an awe-inspiring 35 yards of receiving between the two of them, with Benjamin’s 18 yards of receiving good enough for first place. Let’s plug the values into our formula once more, starting with a victory: $P(W|x_3) = \dfrac{5}{6} \cdot \dfrac{4}{6} \cdot \dfrac{5}{6} \cdot \dfrac{0}{6} \cdot \dfrac{6}{16} = 0.000$ And for a loss: $P(L|x_3) = \dfrac{4}{10} \cdot \dfrac{4}{10} \cdot \dfrac{3}{10} \cdot \dfrac{4}{10} \cdot \dfrac{10}{16} = 0.012$ So this is pretty interesting. The likelihood of victory was looking great, but Benjamin and Clay never led the team in receiving for a victory, so our expected probability of success is 0. Because Naive Bayes has us perform a cross product of these independent probabilities, if one of the component probabilities is 0, the whole thing is 0. That’s an interesting problem, and one we’ll look at in our next post as I move from predicting Bills victories to classifying words as belonging to particular categories. Conclusion In today’s post, we created a Naive Bayes model by hand and populated it with several scenarios. We also discovered that when a component has an outcome probability of 0—particularly common in sparse data sets like the one we have—we can end up with an unexpected result. In next week’s installment, we will take this algorithm one step further and classify text and figure out this zero-probability outcome problem in the process. Upcoming Events: SQL Saturday Nashville I’ve decided to do some quick blogging about upcoming events a few days out. I’ll keep doing this until I forget or the townspeople with pitchforks and torches storm my manor. Key Details What: SQL Saturday Nashville Where: Middle Tennessee State University, Murfreesboro, Tennessee When: Saturday, January 12th, all day Admission is free. Sign up at the SQL Saturday website. What I’m Presenting 9:45 AM — 10:45 AM — Launching a Data Science Project: Cleaning is Half the Battle Classification With Naive Bayes: An Introduction This is part one in a series on classification with Naive Bayes. What Is Naive Bayes? Let me fix that right away: Naive Bayes isn’t a thing; it’s a class of things. You do realize that collective nouns are typically referred to in the singular, right? You probably shouldn’t do editorializing in the headings, me. 
Nonetheless, let's talk about the Naive Bayes class of algorithms and whistle past the pluralization of collective nouns. If you want, every time I write "Naive Bayes," assume I wrote "the class of algorithms which we have defined as Naive Bayes."

So what is Naive Bayes? [Isn't that what I asked? – ed] Naive Bayes is a class of algorithms designed to solve classification problems. Naive Bayes algorithms share three primary characteristics:

1. They are probabilistic. This means that we calculate the probability of each output category and, to determine the best, we choose the one with the highest likelihood.
2. Probabilities are derived from Bayes' Theorem. That's the "Bayes" in Naive Bayes, and we'll get into it in a bit.
3. Features are independent. That's the "naive" part of "Naive Bayes."

The plan in this post is to cover the basics of this class of algorithms, and in follow-up posts, we will look at implementations by hand and in code. But first, the \$64,000 question:

## Why Should We Use Naive Bayes? Is It the Best Classifier Out There?

Probably not, no. In fact, it's typically a mediocre classifier—it's the one you strive to beat with your fancy algorithm. So why even care about this one? Because it's fast, easy to understand, and it works reasonably well. In other words, this is the classifier you start with to figure out if it's worth investing your time on a problem. If you need to hit 90% category accuracy and Naive Bayes is giving you 70%, you're probably in good shape; if it's giving you 20% accuracy, you might need to take another look at whether you have a viable solution given your data.

## The Foundation of Naive Bayes: Bayes' Theorem

Bayes' theorem is pretty simple:

$P(B|A) = \dfrac{P(A|B) * P(B)}{P(A)}$

In case you're not familiar with the notation, here's a quick breakdown:

- P(B|A) is called the posterior probability. It represents the probability of hypothesis B being correct given input data A. Let's cover the other elements and loop back to this at the end.
- P(A|B) is the probability of us seeing input data A if we know that hypothesis B is true. In other words, if Bob always wears red hats on Thursdays, P(Bob is wearing a red hat | Today is Thursday) is 1. If Bob wears red hats on alternating Thursdays, P(Bob is wearing a red hat | Today is Thursday) is 0.5. And so on.
- P(B) is called the prior probability. It represents the likelihood that hypothesis B is true regardless of any specific input data. Continuing with the example above, it is the probability that today is Thursday, which is 1/7 unless you are re-living Groundhog Day.
- P(A) is the probability of us seeing data A in our sample. In our simple example, it is the likelihood that Bob will wear a red hat.

Now that I've gone into the elements, let me wrap back around to the term posterior probability. Continuing with the example, our goal is to figure out what day it is based solely on Bob's headgear. In other words, we want P(Today is Thursday | Bob is wearing a red hat), read as "the probability that today is Thursday given that Bob is wearing a red hat." We started out with a prior: the information we expect (or know) beforehand. Our prior is that, knowing nothing else, it is equally likely to be any day of the week, so P(Today is Thursday) is 1/7. From there, we update our prior by adding new information into the mix.
We know that the probability of Bob wearing a red hat is 50% when it is Thursday—for lore-building, let’s say that Bob is a high school football nut and always wears his kids’ high school cap for some Thursday night action. Oh, and that there are 26 games a year because shaddup. So P(Bob is wearing a red hat | Today is Thursday) is 0.5, or 50%. But knowing that Bob wears red hats on half of all Thursdays isn’t enough for us—we need a bit more. We need to know how often Bob wears his red hat in general. So let’s say that we have 52*7=364 data points and Bob wore his red hat 36 times. That means P(Bob is wearing a red hat) is 0.099 in our data set. Armed with this information, we can calculate the posterior probability: $\dfrac{(0.5)(0.14)}{(0.099)} = .722$ In other words, given what we know, there is a 72% chance that, if we see Bob wearing a red hat, it is Thursday. If you decided to ignore all of the rest of the details and just focus on the fact that Bob wore his hat on 26 Thursdays and 36 times in total, you could also take 26/36 and get to .722 as well. So this seems like a lot of calculation for nothing, at least until we introduce the real kicker. One Naive Assumption What makes Naive Bayes so useful is something which seems absurd at first blush: we assume that all inputs are independent. This sounds like a bad idea: there are complex interactions between different inputs, so simply ignoring them seems like we’re throwing away valuable information. But it turns out that this simplification mechanism still retains most of our valuable information while making it easier for us to calculate. Here’s the new formula: $P(B|A) = \dfrac{P(x_1|B) * P(x_2|B) * \ldots * P(x_n|B) * P(B)}{P(A)}$ Now A is a specific collection of {x1, x2, …, xn} inputs. Let’s say this could be things like Bob’s head covering, how surly your co-workers are today (on a scale from 1 to 5 so now it’s totally scientific), and the amount of junk mail you received the day before. Some of these things may have some overlap, but for the purposes of Naive Bayes, we assume that all away and calculate these probabilities (red hat, blue hat, no hat for Bob; 1-5 on the Surl Scale; pounds of junk mail) in our data set separately. We’ve also changed the nature of B a little bit too. Now B is a collection of {B1, B2, … BN} different choices. So now, instead of asking if today is Thursday, we try to figure out which day of the week it is based on the specific values of our inputs. We might end up with a 45% chance of it being Thursday, 29% chance of Tuesday, 17% chance of Wednesday, 6% chance of it being Friday, and 3% chance of it being Monday. In that case, our model predicts that a day with those particular inputs is Thursday. That’s what we mean by the model being probabilistic: even though we end up with a result like “The model predicts that the day is Thursday given those inputs,” we really end up with a series of likelihoods and it’s up to us to figure out any cutoffs between “This day is Thursday” versus “This day could be Tuesday or Thursday” to “I have no idea what day it is and basing that off of some random garbage you pulled together is absurd.” Naturally, these algorithms tend to be a bit too polite to give you the third answer directly, but you can infer it if you look hard enough. Conclusion In today’s post, we looked at the basics of a Naive Bayes model. We walked through Bayes’ Theorem and solved a simple probability question. Finally, we introduced the concept of independence. 
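Before moving on, here is a tiny arithmetic check of the Bob example in Python, using the same made-up counts from above (26 red-hat Thursdays out of 52 Thursdays, and 36 red-hat days out of 364 total days):

```python
# Counts from the example above
p_red_given_thu = 26 / 52   # 0.5
p_thu = 52 / 364            # 1/7
p_red = 36 / 364            # ~0.099

posterior = p_red_given_thu * p_thu / p_red
print(posterior)   # ~0.722
print(26 / 36)     # same answer via the direct count
```

Both routes land on the same 0.722, which is the point of the exercise.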
In the next post (which actually will drop on a Thursday), we will solve a simple but realistic two-class classification problem by hand with Naive Bayes. R Training In Cleveland CANCELLED UPDATE Unfortunately, the R training has been cancelled due to venue limitations. I’ll still be at SQL Saturday Cleveland but will not be able to give my full-day training. To the tens of thousands of people (that estimate might be slightly high) who signed up, SQL Saturday Cleveland crew will be in touch. I’ve taken the link down from this page (and deleted the rest of the text just because) to make sure that nobody accidentally signs up. A Sneak Preview One of my presentation goals for 2019 is to get into video recording. I did a couple of recordings in 2017 but wasn’t that happy with them. Since then, I’ve upped the amount of equipment (mostly lights—you can never have enough lights, apparently) and am getting prepared. Here’s a sneak preview: I still have a long way to go with this, but soon enough I will have this down. Then, my ultimate goal will come true: become a television meteorologist push free content to a couple dozen people. Finding Max Concurrent Operations With T-SQL (Part 2) Last Time On 36 Chambers To give you a quick reminder of the problem we’re trying to solve, we can have multiple concurrent work items processing for a customer, but we want to limit that number to 2.  The development team put in a check to throttle customers and want us to write a query to ensure that they wrote their check correctly. *Narrator’s voice*:  They didn’t. Meanwhile, Back At The Wrench Because this is a difficult problem, I immediately looked for an Itzik Ben-Gan solution and found one.  Here’s my take on his solution, but definitely read his article. Based on yesterday’s work, we can generate an arbitrary number of values which approximate a normal distribution, so we’re going to build 5 million rows worth of start and stop times for customers.  
Here’s the code we will use: DROP TABLE IF EXISTS #WorkItems; CREATE TABLE #WorkItems ( WorkItemID INT NOT NULL PRIMARY KEY CLUSTERED, CustomerID INT NOT NULL, WorkItemStart DATETIME2(0) NOT NULL, WorkItemEnd DATETIME2(0) NULL ); DECLARE @NumberOfRecords INT = 5000000, @NumberOfCustomers INT = 150, @StartDate DATETIME2(0) = '2018-12-18 15:00:00', @MeanRunLengthInSeconds DECIMAL(5,2) = 90.0, @StdDevRunLengthInSeconds DECIMAL(5,2) = 18.8, @MeanLengthBetweenRunsInSeconds DECIMAL(5,2) = 128.0, @StdDevLengthBetweenRunsInSeconds DECIMAL(5,2) = 42.3, @Precision INT = 1; WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER(ORDER BY (SELECT 0)) AS n FROM L5), Vals AS ( SELECT TOP (@NumberOfRecords) n, r.CustomerID, s.NumberOfSecondsToRun, s.NumberOfSecondsBeforeNextRunBegins FROM Nums CROSS APPLY ( SELECT RAND(CHECKSUM(NEWID())) AS rand1, RAND(CHECKSUM(NEWID())) AS rand2, CAST(@NumberOfCustomers * RAND(CHECKSUM(NEWID())) AS INT) + 1 AS CustomerID ) r CROSS APPLY ( SELECT ROUND((SQRT(-2.0 * LOG(r.rand1)) * COS(2 * PI() * r.rand2)) * @StdDevRunLengthInSeconds, @Precision) + @MeanRunLengthInSeconds AS NumberOfSecondsToRun, ROUND((SQRT(-2.0 * LOG(r.rand1)) * SIN(2 * PI() * r.rand2)) * @StdDevLengthBetweenRunsInSeconds, @Precision) + @MeanLengthBetweenRunsInSeconds AS NumberOfSecondsBeforeNextRunBegins ) s ), records AS ( SELECT v.n AS WorkItemID, v.CustomerID, v.NumberOfSecondsToRun, v.NumberOfSecondsBeforeNextRunBegins, SUM(v.NumberOfSecondsBeforeNextRunBegins) OVER (PARTITION BY v.CustomerID ORDER BY v.n ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) - v.NumberOfSecondsBeforeNextRunBegins AS TotalNumberOfSeconds FROM vals v ) INSERT INTO #WorkItems ( WorkItemID, CustomerID, WorkItemStart, WorkItemEnd ) SELECT r.WorkItemID, r.CustomerID, s.WorkItemStart, DATEADD(SECOND, r.NumberOfSecondsToRun, s.WorkItemStart) AS WorkItemEnd FROM records r CROSS APPLY ( SELECT DATEADD(SECOND, r.TotalNumberOfSeconds, @StartDate) AS WorkItemStart ) s; On the ridiculously overpowered server that I’m abusing to do this blog post, it took about 32 seconds to generate 5 million rows.  We are generating data for 150 customers using a uniform distribution, so we’d expect somewhere around 33,333 records per customer.  In my sample, customer work item counts range from 32,770 to 33,747 so there’s a little bit of variance, but not much. Building A Solution:  A Single Customer Just like before, we’re going to squeeze a few hundred more words out of this post build up a query step by step.  Our first step will be to get the starting times and ending times for each of our selected customer’s work items.  Each starting point and each ending point will take a row, so we’ll use UNION ALL to unpivot this data. 
DECLARE @CustomerID INT = 1; SELECT wi.CustomerID, wi.WorkItemStart AS TimeUTC, 1 AS IsStartingPoint, ROW_NUMBER() OVER (ORDER BY wi.WorkItemStart) AS StartOrdinal FROM #WorkItems wi WHERE wi.CustomerID = @CustomerID UNION ALL SELECT wi.CustomerID, wi.WorkItemEnd AS TimeUTC, 0 AS IsStartingPoint, NULL AS StartOrdinal FROM #WorkItems wi WHERE wi.CustomerID = @CustomerID For my data set, I get something like the following: As a quick note, the start times are all in order but the end times are arbitrarily ordered, as ordering won’t matter where we’re going. Then, we want to build out step two, where we add an ordering based on time for each of the start and stop points: DECLARE @CustomerID INT = 1; WITH StartStopPoints AS ( SELECT wi.CustomerID, wi.WorkItemStart AS TimeUTC, 1 AS IsStartingPoint, ROW_NUMBER() OVER (ORDER BY wi.WorkItemStart) AS StartOrdinal FROM #WorkItems wi WHERE wi.CustomerID = @CustomerID UNION ALL SELECT wi.CustomerID, wi.WorkItemEnd AS TimeUTC, 0 AS IsStartingPoint, NULL AS StartOrdinal FROM #WorkItems wi WHERE wi.CustomerID = @CustomerID ) SELECT s.CustomerID, s.TimeUTC, s.IsStartingPoint, s.StartOrdinal, ROW_NUMBER() OVER (ORDER BY s.TimeUTC, s.IsStartingPoint) AS StartOrEndOrdinal FROM StartStopPoints s; Running this gives us the following results for my data: You can probably see by this point how the pieces are coming together:  each time frame has a starting point and an ending point.  If there were no overlap at all, we’d see in the fourth column a number followed by a NULL, followed by a number followed by a NULL, etc.  But we clearly don’t see that:  we see work item ordinals 3 and 4 share some overlap:  item 3 started at 3:06:15 PM and ended after item 4’s start of 3:07:20 PM.  This means that those two overlapped to some extent.  Then we see two NULL values, which means they both ended before 5 began.  So far so good for our developers! The final calculation looks like this: DECLARE @CustomerID INT = 1; WITH StartStopPoints AS ( SELECT wi.CustomerID, wi.WorkItemStart AS TimeUTC, 1 AS IsStartingPoint, ROW_NUMBER() OVER (ORDER BY wi.WorkItemStart) AS StartOrdinal FROM #WorkItems wi WHERE wi.CustomerID = @CustomerID UNION ALL SELECT wi.CustomerID, wi.WorkItemEnd AS TimeUTC, 0 AS IsStartingPoint, NULL AS StartOrdinal FROM #WorkItems wi WHERE wi.CustomerID = @CustomerID ), StartStopOrder AS ( SELECT s.CustomerID, s.TimeUTC, s.IsStartingPoint, s.StartOrdinal, ROW_NUMBER() OVER (ORDER BY s.TimeUTC, s.IsStartingPoint) AS StartOrEndOrdinal FROM StartStopPoints s ) SELECT MAX(2 * s.StartOrdinal - s.StartOrEndOrdinal) AS MaxConcurrentWorkItems FROM StartStopOrder s WHERE s.IsStartingPoint = 1; Because we know that each start (probably) has an end, we need to multiply StartOrdinal by 2.  Then, we want to compare the result of that calculation to StartOrEndOrdinal.  If you go back up to the previous image, you can see how this gives us 1 concurrent item for the first three work items, but as soon as we add in work item 4, 2*4-6 = 2, so we now have two concurrent work items.  By the way, this ran in about 26ms for me and took 182 reads if you use an index that I create below.  But don’t skip that far ahead yet; we have more to do! Voila.  Or viola, whichever. But wait, there’s more! Where Am I Overlapping The Most? Naturally, if I know that my max overlap window is 3, I’d be curious about when that happened—maybe I can correlate it with logs and figure out which intern to blame (protip:  have interns in different time zones so you always have a likely culprit). 
Getting the results is easy, for some definition of “easy” which doesn’t really fit with the normal definition of “easy.” First, we will create a temp table called #MaxConcurrentItems.  This will hold all of the work items which are equal to the max concurrent item count. DROP TABLE IF EXISTS #MaxConcurrentItems; CREATE TABLE #MaxConcurrentItems ( WorkItemID INT PRIMARY KEY CLUSTERED, CustomerID INT, MaxConcurrentWorkItems INT ); DECLARE @CustomerID INT = 1, @MaxConcurrentWorkItems INT = 0; WITH StartStopPoints AS ( SELECT wi.WorkItemID, wi.CustomerID, wi.WorkItemStart AS TimeUTC, 1 AS IsStartingPoint, ROW_NUMBER() OVER (ORDER BY wi.WorkItemStart) AS StartOrdinal FROM #WorkItems wi WHERE wi.CustomerID = @CustomerID UNION ALL SELECT wi.WorkItemID, wi.CustomerID, wi.WorkItemEnd AS TimeUTC, 0 AS IsStartingPoint, NULL AS StartOrdinal FROM #WorkItems wi WHERE wi.CustomerID = @CustomerID ), StartStopOrder AS ( SELECT s.WorkItemID, s.CustomerID, s.TimeUTC, s.IsStartingPoint, s.StartOrdinal, ROW_NUMBER() OVER (ORDER BY s.TimeUTC, s.IsStartingPoint) AS StartOrEndOrdinal FROM StartStopPoints s ), MaxConcurrency AS ( SELECT MAX(2 * s.StartOrdinal - s.StartOrEndOrdinal) AS MaxConcurrentWorkItems FROM StartStopOrder s WHERE s.IsStartingPoint = 1 ) INSERT INTO #MaxConcurrentItems ( WorkItemID, CustomerID, MaxConcurrentWorkItems ) SELECT s.WorkItemID, s.CustomerID, M.MaxConcurrentWorkItems FROM StartStopOrder s CROSS JOIN MaxConcurrency m WHERE 2 * s.StartOrdinal - s.StartOrEndOrdinal = m.MaxConcurrentWorkItems; So let’s explain this step by step: 1. StartStopPoints and StartStopOrder are the same as before. 2. Instead of returning MaxConcurrentWorkItems, we’re dropping that into a CTE called MaxConcurrency. 3. Once we have the max level of concurrency, we want to go back to StartStopOrder and get all of the cases where the number of concurrent work items is equal to the maximum number of concurrent work items.  We insert this into #MaxConcurrentWorkItems. From there, I populate @MaxConcurrentWorkItems with the max value we retrieve above (so I don’t need to calculate it again) and I want to get the record which pushed us to the max level of concurrency as well as all of the prior records in that grouping.  We already know a technique for turning one row into multiple rows:  a tally table. I do need to go back and get the start ordinals for each row in #WorkItems for that customer ID; that way, I can join against the tally table and get prior records, defined where MaxRow.StartOrdinal = WorkItem.StartOrdinal + n - 1.  The -1 at the end is because our tally table starts from 1 instead of 0; if yours has a 0 value, then you can change the inequality operation in the WHERE clause below and include just the n rows without any subtraction involved. 
Here is the solution in all its glory: DROP TABLE IF EXISTS #MaxConcurrentItems; CREATE TABLE #MaxConcurrentItems ( WorkItemID INT PRIMARY KEY CLUSTERED, CustomerID INT, MaxConcurrentWorkItems INT ); DECLARE @CustomerID INT = 1, @MaxConcurrentWorkItems INT = 0; WITH StartStopPoints AS ( SELECT wi.WorkItemID, wi.CustomerID, wi.WorkItemStart AS TimeUTC, 1 AS IsStartingPoint, ROW_NUMBER() OVER (ORDER BY wi.WorkItemStart) AS StartOrdinal FROM #WorkItems wi WHERE wi.CustomerID = @CustomerID UNION ALL SELECT wi.WorkItemID, wi.CustomerID, wi.WorkItemEnd AS TimeUTC, 0 AS IsStartingPoint, NULL AS StartOrdinal FROM #WorkItems wi WHERE wi.CustomerID = @CustomerID ), StartStopOrder AS ( SELECT s.WorkItemID, s.CustomerID, s.TimeUTC, s.IsStartingPoint, s.StartOrdinal, ROW_NUMBER() OVER (ORDER BY s.TimeUTC, s.IsStartingPoint) AS StartOrEndOrdinal FROM StartStopPoints s ), MaxConcurrency AS ( SELECT MAX(2 * s.StartOrdinal - s.StartOrEndOrdinal) AS MaxConcurrentWorkItems FROM StartStopOrder s WHERE s.IsStartingPoint = 1 ) INSERT INTO #MaxConcurrentItems ( WorkItemID, CustomerID, MaxConcurrentWorkItems ) SELECT s.WorkItemID, s.CustomerID, M.MaxConcurrentWorkItems FROM StartStopOrder s CROSS JOIN MaxConcurrency m WHERE 2 * s.StartOrdinal - s.StartOrEndOrdinal = m.MaxConcurrentWorkItems; SELECT @MaxConcurrentWorkItems = MAX(mci.MaxConcurrentWorkItems) FROM #MaxConcurrentItems mci; WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), Nums AS(SELECT ROW_NUMBER() OVER(ORDER BY (SELECT 0)) AS n FROM L2), WorkItems AS ( SELECT wi.WorkItemID, wi.CustomerID, wi.WorkItemStart, wi.WorkItemEnd, ROW_NUMBER() OVER (ORDER BY wi.WorkItemStart) AS StartOrdinal FROM #WorkItems wi WHERE wi.CustomerID = @CustomerID ), MaxWorkItems AS ( SELECT wi.WorkItemID, wi.CustomerID, wi.WorkItemStart, wi.WorkItemEnd, wi.StartOrdinal, ROW_NUMBER() OVER (ORDER BY mci.WorkItemID) AS WorkItemGrouping FROM #MaxConcurrentItems mci INNER JOIN WorkItems wi ON mci.WorkItemID = wi.WorkItemID ) SELECT wi.WorkItemID, wi.CustomerID, wi.WorkItemStart, wi.WorkItemEnd, M.WorkItemGrouping FROM MaxWorkItems M CROSS JOIN Nums n INNER JOIN WorkItems wi ON M.StartOrdinal = (wi.StartOrdinal + n.n - 1) WHERE n.n <= @MaxConcurrentWorkItems ORDER BY wi.WorkItemID ASC; And here’s the results for customer 1: For bonus wackiness, the same work item can be in multiple groupings.  That makes sense:  we can see from this mess that 194483 finished at 13:51:22 and 197740 began at 13:52:35, but 194736 and 194513 were both active during that entire stretch, so our pattern of active work items was 1-2-3-2-3. So we get our results, but at what cost?  Here’s the execution plan for the whole thing: Despite this awful query plan, this ran in about 1.5 seconds, but had a worktable with 129K reads.  I could definitely tune this query by saving my slice of work items with ordinals off in its own temp table and querying that…but my editor says I have to wrap up this section now, so let’s jump straight to the moral of the story. The moral of the story?  I already told you:  have lots of interns so you always have someone to blame. What Is My Overlap Per Customer? Now I’d like to see what the overlap is across all of my 150 customers.  
Here goes: WITH StartStopPoints AS ( SELECT wi.CustomerID, wi.WorkItemStart AS TimeUTC, 1 AS IsStartingPoint, ROW_NUMBER() OVER (PARTITION BY wi.CustomerID ORDER BY wi.WorkItemStart) AS StartOrdinal FROM #WorkItems wi UNION ALL SELECT wi.CustomerID, wi.WorkItemEnd AS TimeUTC, 0 AS IsStartingPoint, NULL AS StartOrdinal FROM #WorkItems wi ), StartStopOrder AS ( SELECT s.CustomerID, s.TimeUTC, s.IsStartingPoint, s.StartOrdinal, ROW_NUMBER() OVER (PARTITION BY s.CustomerID ORDER BY s.TimeUTC, s.IsStartingPoint) AS StartOrEndOrdinal FROM StartStopPoints s ) SELECT s.CustomerID, MAX(2 * s.StartOrdinal - s.StartOrEndOrdinal) AS MaxConcurrentWorkItems FROM StartStopOrder s WHERE s.IsStartingPoint = 1 GROUP BY s.CustomerID ORDER BY MaxConcurrentWorkItems DESC, CustomerID ASC; At our worst, we have 5 concurrent work items for customer 139: This query took several seconds to run, so let’s check out the execution plan: We have two separate scans of the WorkItems table, and we can see the Segment-Sequence combo which represents a window function show up twice.  The biggest pain point is a major Sort operation which in my case spilled at level 1. So let’s create an index: CREATE NONCLUSTERED INDEX [IX_WorkItems] ON #WorkItems ( CustomerID, WorkItemStart, WorkItemEnd ) WITH(DATA_COMPRESSION = PAGE); As a quick note, there’s nothing here which prevents users from running two files at the same time, so I can’t make this a unique index. Doing this didn’t change the execution plan much, but did change IO from 53,368 reads to 26,233 reads and reduced overall execution time on this server from 6925ms (17,406ms CPU time) down to 3650ms (8203ms CPU time) so that’s not awful. Wrapping Up Over the course of this two-post series, we have generated a lot of artificial data in order to find overlapping time ranges. Along the way, we learned many important things, such as having plenty of cannon fodder interns around. Finding Max Concurrent Operations With T-SQL (Part 1) Not too long ago, a co-worker had an issue that he asked me about.  The gist of it is, we can have multiple concurrent work items processing for a customer, but we want to limit that number to 2.  The development team wanted to make sure that their code was working as expected, but they weren’t sure how they could test it, as work items don’t all start and stop at the same time. Build Some Sample Data In order to show you the solution, I want to build up a reasonable sized sample.  Any solution looks great when reading five records, but let’s kick that up a notch.  Or, more specifically, a million notches:  I’m going to use a CTE tally table and load 5 million rows. I want some realistic looking data, so I’ve adapted Dallas Snider’s strategy to build a data set which approximates a normal distribution. Because this is a little complicated, I wanted to take the time and explain the data load process in detail in its own post, and then apply it in the follow-up post.  We’ll start with a relatively small number of records for this demonstration:  50,000.  The reason is that you can generate 50K records almost instantaneously but once you start getting a couple orders of magnitude larger, things slow down some. Give Me Rows Getting an arbitrary number of records is easy with a CTE-based tally table.  
Here we get 50,000 uniquely numbered rows in a trivial amount of time: DECLARE @NumberOfRecords INT = 50000; WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER(ORDER BY (SELECT 0)) AS n FROM L5) SELECT TOP(@NumberOfRecords) n FROM Nums; The output of this is 50,000 rows with one column called n.  It’s not the most creative name but does the trick. Give Me Random Values The next thing I need is two random numbers for each of the rows.  We’ll get into why I need them in just a moment.  For now, I want to point out that calling RAND() twice isn’t a safe bet: Calling RAND() for each record in a result set doesn’t quite work the way you’d expect. Although that’s not workable, you can still generate random values following a uniform distribution between 0 and 1 using the combination RAND(CHECKSUM(NEWID())): GUIDs are occasionally useful for something. Because NEWID() gives us a new value for each row, we have a different seed for reach row and a result which looks much better for us. Converting Uniform To Normal As mentioned above, Dallas Snider has a great technique for generating random values pulled from something which approximates a normal distribution using T-SQL.  But that technique uses a loop and generates two values per loop iteration.  In the end, I’m going to want to do this 5 million times, so I’d much rather build a set-based solution. First, let’s look at Dallas’s solution and see what we need.  I’ve simplified the code a bit and formatted it the way I like: DECLARE @randNum1 FLOAT = RAND(), @randNum2 FLOAT = RAND(), @mean FLOAT = 75.0, @stdDev FLOAT = 5.0, --standard deviation @precision INT = 1; --number of places to the right of the decimal point SELECT ROUND((SQRT(-2.0 * LOG(@randNum1)) * COS(2 * PI() * @randNum2)) * @stdDev, @precision) + @mean AS Value1, ROUND((SQRT(-2.0 * LOG(@randNum1)) * SIN(2 * PI() * @randNum2)) * @stdDev, @precision) + @mean AS Value2; We end up with something like the following: Generating values off of a (nearly) normal distribution. Your numbers will, of course differ, but you knew that because you get the idea of random numbers. So how can we adapt this to our code?  Let’s see how. Getting To The Crux Of Our Problem Here’s the gist:  we have customers with customer IDs.  These customers have some data processed.  Each time they ask for processing, it takes some number of seconds, but that number can change (maybe the files are different sizes, maybe there’s resource contention, whatever).  There will also be delays between executions for each customer, but here’s the important part:  a customer can have more than one process running at a time. So how do we model this idea?  With several variables. DECLARE @NumberOfRecords INT = 50000, @NumberOfCustomers INT = 150, @MeanRunLengthInSeconds DECIMAL(5,2) = 90.0, @StdDevRunLengthInSeconds DECIMAL(5,2) = 18.8, @MeanLengthBetweenRunsInSeconds DECIMAL(5,2) = 128.0, @StdDevLengthBetweenRunsInSeconds DECIMAL(5,2) = 42.3, @Precision INT = 1; In addition to our 50K records, we’ve created 150 customers out of thin air.  
We have, through studious analysis of our data and totally not just picking a number out of thin air, determined that the mean number of seconds it takes to do this data processing is exactly 90.0, and our standard deviation is 18.8.  This data approximates a normal distribution. In addition, the mean length between runs is 128.0 seconds (again, people in lab coats with Very Serious Glasses and beakers and test tubes and stuff helped us determine this) and the standard deviation for time between runs is 42.3 seconds, and once more, this approximates a normal distribution.  Finally, we will ask for precision down to 1 spot after the decimal. Once I’ve defined those variables and collected those values, I can write a query which gives us realistic-looking values: DECLARE @NumberOfRecords INT = 50000, @NumberOfCustomers INT = 150, @StartDate DATETIME2(0) = '2018-12-18 15:00:00', @MeanRunLengthInSeconds DECIMAL(5,2) = 90.0, @StdDevRunLengthInSeconds DECIMAL(5,2) = 18.8, @MeanLengthBetweenRunsInSeconds DECIMAL(5,2) = 128.0, @StdDevLengthBetweenRunsInSeconds DECIMAL(5,2) = 42.3, @Precision INT = 1; WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER(ORDER BY (SELECT 0)) AS n FROM L5) SELECT TOP (@NumberOfRecords) n, r.CustomerID, s.NumberOfSecondsToRun, s.NumberOfSecondsBeforeNextRunBegins FROM Nums CROSS APPLY ( SELECT RAND(CHECKSUM(NEWID())) AS rand1, RAND(CHECKSUM(NEWID())) AS rand2, CAST(@NumberOfCustomers * RAND(CHECKSUM(NEWID())) AS INT) + 1 AS CustomerID ) r CROSS APPLY ( SELECT ROUND((SQRT(-2.0 * LOG(r.rand1)) * COS(2 * PI() * r.rand2)) * @StdDevRunLengthInSeconds, @Precision) + @MeanRunLengthInSeconds AS NumberOfSecondsToRun, ROUND((SQRT(-2.0 * LOG(r.rand1)) * SIN(2 * PI() * r.rand2)) * @StdDevLengthBetweenRunsInSeconds, @Precision) + @MeanLengthBetweenRunsInSeconds AS NumberOfSecondsBeforeNextRunBegins ) s And here are some sample results: This rabbit hole is certainly starting to get deep. The complexity jumped up just a little bit, so let’s walk our way through this.  First, I used the CROSS APPLY operator to give me values for rand1 and rand2, as well as a third random value for our Customer ID.  I don’t mind Customer ID following a uniform distribution—that might not be perfectly realistic, but it’s close enough for our work.  Given 50K records and 150 customers, we’d expect about 333 rows per customer. Next, I chain the first CROSS APPLY with a second because one good function deserves another.  This second function takes Dallas’s calculations and converts them to my own nefarious purposes.  Note that instead of having one mean and one standard deviation like in his example, I have two means and two standard deviations. At this point, I have a few variables for each customer:  an implicit ordering (based on n), the number of seconds this particular job will run (NumberOfSecondsToRun), and the number of seconds from now before the next job begins (NumberOfSecondsBeforeNextRunBegins).  So now we need to convert this into a stream of time rather than thinking of things as just individual points in time. Turning Numbers To Time Streams Let’s pretend that each customer started at time 0.  Again, that’s not totally realistic but for our purposes it’ll work just fine.  
Here’s a quick way to generate a time stream for customer 1: DECLARE @NumberOfRecords INT = 50000, @NumberOfCustomers INT = 150, @StartDate DATETIME2(0) = '2018-12-18 15:00:00', @MeanRunLengthInSeconds DECIMAL(5,2) = 90.0, @StdDevRunLengthInSeconds DECIMAL(5,2) = 18.8, @MeanLengthBetweenRunsInSeconds DECIMAL(5,2) = 128.0, @StdDevLengthBetweenRunsInSeconds DECIMAL(5,2) = 42.3, @Precision INT = 1; WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER(ORDER BY (SELECT 0)) AS n FROM L5), Vals AS ( SELECT TOP (@NumberOfRecords) n, r.CustomerID, s.NumberOfSecondsToRun, s.NumberOfSecondsBeforeNextRunBegins FROM Nums CROSS APPLY ( SELECT RAND(CHECKSUM(NEWID())) AS rand1, RAND(CHECKSUM(NEWID())) AS rand2, CAST(@NumberOfCustomers * RAND(CHECKSUM(NEWID())) AS INT) + 1 AS CustomerID ) r CROSS APPLY ( SELECT ROUND((SQRT(-2.0 * LOG(r.rand1)) * COS(2 * PI() * r.rand2)) * @StdDevRunLengthInSeconds, @Precision) + @MeanRunLengthInSeconds AS NumberOfSecondsToRun, ROUND((SQRT(-2.0 * LOG(r.rand1)) * SIN(2 * PI() * r.rand2)) * @StdDevLengthBetweenRunsInSeconds, @Precision) + @MeanLengthBetweenRunsInSeconds AS NumberOfSecondsBeforeNextRunBegins ) s ) SELECT v.n, v.CustomerID, v.NumberOfSecondsToRun, v.NumberOfSecondsBeforeNextRunBegins FROM Vals v WHERE v.CustomerID = 1 ORDER BY n ASC And here are some sample results so we can follow along: The sordid history of Customer 1. Let’s start at time t=0.  If you want to imagine t=0 as a date like 2018-12-18 08:00:00, that’s also fine, but I’m going to stick with numbers representing seconds for the moment. So we’re at t=0.  Then event 20 happens.  Customer 1 wants us to process a data file.  It’s going to end up taking us 118.7 seconds to process that first data file. In the meantime, at t=49 seconds, event 88 happens and we process a second file.  This one takes 58.4 seconds to run, so it completes at time 107.4. Then, at t=(49+187.2) or t=236.2, the third file starts.  If it helps, I’ve built a number line image below to visualize this: Visualizing the first three executions for Customer 1. The main thing we want to see is if there is overlap, and if so, how many concurrent executions we see.  In this case, we can see the red and blue lines overlap, but no overlap with green.  Therefore, our max number of concurrent executions is 2.  We’d want to look at this over the entire stream of time. 
The way we can get this is to add one more level of common table expression and introduce a window function with a cutoff: DECLARE @NumberOfRecords INT = 50000, @NumberOfCustomers INT = 150, @StartDate DATETIME2(0) = '2018-12-18 15:00:00', @MeanRunLengthInSeconds DECIMAL(5,2) = 90.0, @StdDevRunLengthInSeconds DECIMAL(5,2) = 18.8, @MeanLengthBetweenRunsInSeconds DECIMAL(5,2) = 128.0, @StdDevLengthBetweenRunsInSeconds DECIMAL(5,2) = 42.3, @Precision INT = 1; WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER(ORDER BY (SELECT 0)) AS n FROM L5), Vals AS ( SELECT TOP (@NumberOfRecords) n, r.CustomerID, s.NumberOfSecondsToRun, s.NumberOfSecondsBeforeNextRunBegins FROM Nums CROSS APPLY ( SELECT RAND(CHECKSUM(NEWID())) AS rand1, RAND(CHECKSUM(NEWID())) AS rand2, CAST(@NumberOfCustomers * RAND(CHECKSUM(NEWID())) AS INT) + 1 AS CustomerID ) r CROSS APPLY ( SELECT ROUND((SQRT(-2.0 * LOG(r.rand1)) * COS(2 * PI() * r.rand2)) * @StdDevRunLengthInSeconds, @Precision) + @MeanRunLengthInSeconds AS NumberOfSecondsToRun, ROUND((SQRT(-2.0 * LOG(r.rand1)) * SIN(2 * PI() * r.rand2)) * @StdDevLengthBetweenRunsInSeconds, @Precision) + @MeanLengthBetweenRunsInSeconds AS NumberOfSecondsBeforeNextRunBegins ) s ) SELECT v.n AS WorkItemID, v.CustomerID, v.NumberOfSecondsToRun, v.NumberOfSecondsBeforeNextRunBegins, SUM(v.NumberOfSecondsBeforeNextRunBegins) OVER ( PARTITION BY v.CustomerID ORDER BY v.n ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW ) - v.NumberOfSecondsBeforeNextRunBegins AS TotalNumberOfSeconds FROM vals v We’ve now included a SUM of NumberOfSecondsBeforeNextRunBegins over a window from the beginning of time up to the current row.  But because we’re getting the next run’s start time, we need to subtract out the current row’s value, and that gives us the TotalNumberOfSeconds result which represents where things begin. Here’s a picture, though note that all of the numbers changed because it’s randomly generated data: Customer 1 has always been at war with Eastasia. Tying It All Together So now, armed with these details, we can take the total number of seconds and add it to our start date to get when a work item begins.  We can take the work item begin time and add the number of seconds to run, which gives us the work item’s end time. 
DROP TABLE IF EXISTS #WorkItems; CREATE TABLE #WorkItems ( WorkItemID INT NOT NULL PRIMARY KEY CLUSTERED, CustomerID INT NOT NULL, WorkItemStart DATETIME2(0) NOT NULL, WorkItemEnd DATETIME2(0) NULL ); DECLARE @NumberOfRecords INT = 50000, @NumberOfCustomers INT = 150, @StartDate DATETIME2(0) = '2018-12-18 15:00:00', @MeanRunLengthInSeconds DECIMAL(5,2) = 90.0, @StdDevRunLengthInSeconds DECIMAL(5,2) = 18.8, @MeanLengthBetweenRunsInSeconds DECIMAL(5,2) = 128.0, @StdDevLengthBetweenRunsInSeconds DECIMAL(5,2) = 42.3, @Precision INT = 1; WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER(ORDER BY (SELECT 0)) AS n FROM L5), Vals AS ( SELECT TOP (@NumberOfRecords) n, r.CustomerID, s.NumberOfSecondsToRun, s.NumberOfSecondsBeforeNextRunBegins FROM Nums CROSS APPLY ( SELECT RAND(CHECKSUM(NEWID())) AS rand1, RAND(CHECKSUM(NEWID())) AS rand2, CAST(@NumberOfCustomers * RAND(CHECKSUM(NEWID())) AS INT) + 1 AS CustomerID ) r CROSS APPLY ( SELECT ROUND((SQRT(-2.0 * LOG(r.rand1)) * COS(2 * PI() * r.rand2)) * @StdDevRunLengthInSeconds, @Precision) + @MeanRunLengthInSeconds AS NumberOfSecondsToRun, ROUND((SQRT(-2.0 * LOG(r.rand1)) * SIN(2 * PI() * r.rand2)) * @StdDevLengthBetweenRunsInSeconds, @Precision) + @MeanLengthBetweenRunsInSeconds AS NumberOfSecondsBeforeNextRunBegins ) s ), records AS ( SELECT v.n AS WorkItemID, v.CustomerID, v.NumberOfSecondsToRun, v.NumberOfSecondsBeforeNextRunBegins, SUM(v.NumberOfSecondsBeforeNextRunBegins) OVER (PARTITION BY v.CustomerID ORDER BY v.n ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) - v.NumberOfSecondsBeforeNextRunBegins AS TotalNumberOfSeconds FROM vals v ) INSERT INTO #WorkItems ( WorkItemID, CustomerID, WorkItemStart, WorkItemEnd ) SELECT r.WorkItemID, r.CustomerID, s.WorkItemStart, DATEADD(SECOND, r.NumberOfSecondsToRun, s.WorkItemStart) AS WorkItemEnd FROM records r CROSS APPLY ( SELECT DATEADD(SECOND, r.TotalNumberOfSeconds, @StartDate) AS WorkItemStart ) s; SELECT wi.WorkItemID, wi.CustomerID, wi.WorkItemStart, wi.WorkItemEnd FROM #WorkItems wi ORDER BY wi.CustomerID, wi.WorkItemID; There are two things of note here.  First, I added a temp table called #WorkItems.  Note that if you are not running SQL Server 2016 SP1 or later, you’ll want to comment out the first line.  Then, I added an insert statement at the bottom.  Here, I start with my records CTE (which we saw above) and used CROSS APPLY to get the work item’s start date.  I did that in a CROSS APPLY​ operation so that I could make the WorkItemEnd calculation easier to follow, saying that we run a certain number of seconds from each work item’s starting point. This gives us a data set that looks a bit like the following: Trying to schedule a date with customer 1. In this run of the job, we can see that work item 296 ends at 3:06 PM, but work item 403 started at 3:05 PM, meaning that there is some overlap.  If you briefly glance through these, you’ll see some occasions of overlap but nothing absurd.  However, glancing through isn’t enough.  That’s where Part 2 comes into play. Next Time On Matlock Up to this point, we’ve spent all of our time building up a realistic-enough data set.  
In tomorrow’s post, we will take this structure, expand it to 5 million rows, and look at a way of finding the max level of concurrency by customer.  Stay tuned!
# zbMATH — the first resource for mathematics Sampled universality of timed automata. (English) Zbl 1195.68052 Seidl, Helmut (ed.), Foundations of software science and computational structures. 10th international conference, FOSSACS 2007, held as part of the joint European conferences on theory and practice of software, ETAPS 2007, Braga, Portugal, March 24 – April 1, 2007. Proceedings. Berlin: Springer (ISBN 978-3-540-71388-3/pbk). Lecture Notes in Computer Science 4423, 2-16 (2007). Summary: Timed automata can be studied in not only a dense-time setting but also a discrete-time setting. The most common example of discrete-time semantics is the so called sampled semantics (i.e., discrete semantics with a fixed time granularity $$\epsilon )$$. In the real-time setting, the universality problem is known to be undecidable for timed automata. In this work, we study the universality question for the languages accepted by timed automata with sampled semantics. On the negative side, we show that deciding whether for all sampling periods $$\epsilon$$ a timed automaton accepts all timed words in $$\epsilon$$-sampled semantics is as hard as in the real-time case, i.e., undecidable. On the positive side, we show that checking whether there is a sampling period such that a timed automaton accepts all untimed words in $$\epsilon$$-sampled semantics is decidable. Our proof uses clock difference relations, developed to characterize the reachability relation for timed automata in connection with sampled semantics. For the entire collection see [Zbl 1116.68009]. ##### MSC: 68Q45 Formal languages and automata Full Text:
# On the number of ways to make $50n$ cents out of pennies, nickels, dimes and quarters Let $f(n)$ be the number of ways to make n cents out of pennies, nickels, dimes and quarters (1c, 5c, 10c, 25c). Prove that $f(50n) = an^3 + bn^2 + cn + 1$ for some constants $a, b, c.$ I have found the generating function of $f$ is $$f(x)=\frac{1}{(1-x)(1-x^5)(1-x^{10})(1-x^{25})}.$$ I can produce a generating function $g(x)$ whose terms are just the terms of $f$ whose powers are multiples of 50 using the formula $$g(x) = \frac{1}{50}\sum\limits_{k=1}^{50}{f(x e^{2\pi i k/50})},$$ but in practice it seems hard to simplify this expression for $g$ and extract a recurrence for $f(50n)$ from it. Is there a better way to approach this problem? Let $A(n)$ be the number of ways of making $5n$ cents from pennies and nickels. Let $B_1(n)$ be the number of ways of making $10n$ cents from pennies, nickels and dimes. Let $B_2(n)$ be the number of ways of making $10n+5$ cents from pennies, nickels and dimes. Let $C_1(n)$ be the number of ways of making $50n$ cents from pennies, nickels, dimes and quarters. Let $C_2(n)$ be the number of ways of making $50n+25$ cents from pennies, nickels, dimes and quarters. $A(n)$ is simple. It is linear. You can calculate $B_1(n)$ and $B_2(n)$ as a sum of $A(m)$ by adding over the number of dimes. They are quadratic. The formulas for $C_1(n)$ and $C_2(n)$ will be sums of $B_1(m)$ and $B_2(m)$.
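Not part of the original exchange, but as a numerical sanity check of the claim, here is a short Python sketch (assuming nothing beyond the coin denominations in the problem) that computes $f$ by dynamic programming and confirms that the third differences of $f(50n)$ are constant, as you would expect from a cubic in $n$:

```python
# Count the ways to make each amount from coins 1, 5, 10, 25 cents,
# then check that f(50n) behaves like a cubic (constant third differences).
def ways(limit, coins=(1, 5, 10, 25)):
    f = [1] + [0] * limit
    for c in coins:
        for amount in range(c, limit + 1):
            f[amount] += f[amount - c]
    return f

f = ways(50 * 8)
values = [f[50 * n] for n in range(9)]   # f(0), f(50), ..., f(400)

diffs = values
for _ in range(3):                       # take first differences three times
    diffs = [b - a for a, b in zip(diffs, diffs[1:])]

print(values)  # starts 1, 49, 242, ...
print(diffs)   # the third differences come out constant
```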
# Bug in the mdframed package (workaround?) I've discovered what appears to be a bug in the mdframed package. My goal was to use a package which allows me to create shaded theorem environments which span multiple pages (thmtools does not allow this). Run this minimal example: \documentclass{memoir} \usepackage[english]{babel} \usepackage{amsthm} \usepackage{mdframed} \usepackage{blindtext} \newmdtheoremenv{theorem}{Theorem} \begin{document} \chapter{Hello} \marginpar{\vspace*{9pt}\chapnumfont\thechapter} % COMMENT IN/OUT \begin{theorem} \blindtext[2] \end{theorem} \begin{theorem} \blindtext[3] \end{theorem} \end{document} When the marked line is commented out, then the theorems render as expected -- that is, the second theorem spans the first and second page. However, when the marked line is commented in, then the two theorems are generated on separate pages. The marked line simply adds in the chapter number on the side. Does anybody know the source of the bug? Moreover, is there a way to work around this? I want the theorems to span the pages, as the package was designed! - Although I don't know exactly what's going on, I don't think it's a bug in mdframed, but a bad interaction of \marginpar. As a workaround you can use \marginnote from the marginnote package: \documentclass{memoir} \usepackage[english]{babel} \usepackage{amsthm} \usepackage{mdframed} \usepackage{marginnote} \usepackage{blindtext} \newmdtheoremenv{theorem}{Theorem} \begin{document} \chapter{Hello} \marginnote{\chapnumfont\thechapter} \begin{theorem} \blindtext[2] \end{theorem} \begin{theorem} \blindtext[3] \end{theorem} \end{document} -
`qwc_complement_adj_matrix(binary_observables)`

Obtains the adjacency matrix for the complementary graph of the qubit-wise commutativity graph for a given set of observables in the binary representation.

The qubit-wise commutativity graph for a set of Pauli words has a vertex for each Pauli word, and two nodes are connected if and only if the corresponding Pauli words are qubit-wise commuting.

- **Parameters:** `binary_observables` (array[array[int]]) – a matrix whose rows are the Pauli words in the binary vector representation
- **Returns:** the adjacency matrix for the complement of the qubit-wise commutativity graph
- **Return type:** array[array[int]]
- **Raises:** `ValueError` – if input binary observables contain components which are not strictly binary

**Example**

```python
>>> binary_observables
array([[1., 0., 1., 0., 0., 1.],
       [0., 1., 1., 1., 0., 1.],
       [0., 0., 0., 1., 0., 0.]])
>>> qwc_complement_adj_matrix(binary_observables)
array([[0., 1., 1.],
       [1., 0., 0.],
       [1., 0., 0.]])
```
0 1458 # Mirror Image Questions For SSC MTS Download Top-20 SSC MTS Mirror Image Questions PDF. Mirror Image questions based on asked questions in previous year exam papers very important for the SSC MTS exam. Question 1: Observe the figures and find a mirror image of the same from the alternatives given. Imagine that MN is a mirror. a) b) c) d) Question 2: Which of the following is the mirror image of “Jingoist”? a) b) c) d) Question 3: Choose the option that most closely resembles the mirror image of the given figure when mirror is placed at the right side. a) b) c) d) Question 4: Find the mirror image of the clock when the time is 06:18 a) 05:42 b) 04:37 c) 05:38 d) 04:38 Question 5: If a mirror is placed on the line AB, then which of the answer figures is the right image of the given figure? a) b) c) d) Question 6: Observe the figures and find a mirror image of the same from the alternatives given. Imagine that MN is a mirror. a) b) c) d) Question 7: Find the mirror image of the clock when the time is 04:45 a) 06:15 b) 07:18 c) 06:12 d) 07:15 Question 8: In the following question, a mirror is placed on the line MN, then which of the answer figures is the right image of the given figure ? a) b) c) d) Question 9: If a mirror is placed on the line AB, then which of the answer figures is the right image of the given figure ? a) b) c) d) Question 10: If a mirror is placed on the line AB, then which of the answer figures is the right image of the given figure ? a) b) c) d) Question 11: If a mirror is placed on the line AB, then which of the answer figures is the right image of the given figure ? a) b) c) d) Instructions Select the related word/letters/number from the given alternatives. Question 12: RORRIM : MIRROR : : TNESERP : ? a) PRESENT b) TNERESP c) STNERPE d) CRESENT Question 13: Which of the following is the mirror image of ‘Rejuvenation’? a) b) c) d) Question 14: Which of the following is the mirror image of ‘Democracy’? a) b) c) d) Question 15: Which of the following is the mirror image of “Aluminium”? a) mυinimυlA b) mυimimυlA c) mυininυlA d) mυinlmυiA A horizontal mirror is placed, so the object on the top will appear at the bottom in reverse position and vice-versa. So the two triangles at the top will now appear at the bottom, thus the third option will be eliminated. Also, the mouth is at the bottom right side of face, which will move to top right side, hence second option is also eliminated. Now among the remaining options, first image correctly resembles the mirror image in the question figure, hence first option is the right image. => Ans – (A) In option A, ‘t’ and ‘i’ are interchanged. In option C, ‘n’ is incorrect. In option D, ‘i’ and ‘n’ are interchanged. In option B, all alphabets are correct. Hence, option B is the correct answer. When a mirror is placed at the right side, the shape of the circle will not change, hence the last two options are eliminated. Also, the position of the stars will be shifted towards right side, hence first option is the correct image. => Ans – (A) We need to subtract from 12:00 or 11:60 to get mirror image time Mirror image of 06:18 = 11:60 – 06:18 = 05:42 A vertical mirror is placed, so the object on the left will appear right in reverse position and vice-versa. So the arrow in the middle will be reversed and now point leftwards, thus the first option will be eliminated. Also, in the question figure, the two dots are at the extreme ends, which will remain as it is, hence third option is the right image. 
=> Ans – (C) A horizontal mirror is placed, so the object on the top will appear at the bottom in reverse position and vice-versa. So the three vertical line at bottom right will now appear at top right side of image, thus the last two options will be eliminated. Also, in the question figure, all the alphabets will turn upside down, hence first option is the right image. => Ans – (A) We need to subtract from 12:00 or 11:60 to get mirror image time Mirror image of 04:45 = 11:60 – 04:45 = 07:15 A vertical mirror is placed, so the object on the left will appear right in reverse position and vice-versa. So the square with ‘X’ sign at top left will now appear at top right, thus the third option will be eliminated. Also, the arrow underneath it will still face upwards, and thus the second option is also eliminated. Also, in the question figure, in the middle row, at rightmost side, ‘(‘ will be changed to ‘)’, hence fourth option is the right image. => Ans – (D) A vertical mirror is placed, so the object on the left will appear right in reverse position and vice-versa. So the triangle at right side (with vertical lines) will be reversed and now will appear at left side, thus the first and third options will be eliminated. Also, in the question figure, the black triangle at the top will still stay at the top pointing downwards, hence fourth option is the right image. => Ans – (D) A horizontal mirror is placed, so the object on the top will appear at the bottom in reverse position and vice-versa. So the triangle at the top will now appear at the bottom facing towards top, thus the first and last options will be eliminated. Also, in the question figure, the vertical lines are at top left side of triangle, hence they will appear at bottom left side, hence third option is the right image. => Ans – (C) A horizontal mirror is placed, so the object on the top will appear at the bottom in reverse position and vice-versa. So the flag in vertical position in the middle will stay vertical but it will be turned upside down and thus will face towards left, hence second option is the right image. => Ans – (B) Expression = RORRIM : MIRROR : : TNESERP : ? The letters are written in revere order, i.e. first letter is written at end, 2nd at 2nd last. Similarly, TNESERP : PRESENT => Ans – (A) In option B, ‘i’ and ‘a’ are interchanged. In option C, ‘e’ and ‘u’ are interchanged. In option D, ‘n’ is replaced with ‘m’. In option A, all the alphabets are correct. Hence, option A is the correct answer. In option B, ‘r’ and ‘a’ are interchanged. In option C, ‘e’ and ‘o’ are interchanged. In option D, ‘m’ and ‘o’ are interchanged. In option A, all the alphabets are correct. Hence, option A is the correct answer.
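The clock questions above all use the same subtract-from-11:60 rule; here is a small sketch of that rule, purely for illustration:

```python
def mirror_time(h, m):
    """Mirror image of a clock time: subtract the time from 11:60 (i.e. 12:00)."""
    mh, mm = 11 - h, 60 - m
    if mm == 60:                 # nothing to borrow from the minutes
        mh, mm = mh + 1, 0
    return (mh % 12 or 12, mm)

print(mirror_time(6, 18))   # (5, 42)
print(mirror_time(4, 45))   # (7, 15)
```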
## Cryptology ePrint Archive: Report 2008/171 Binary Edwards Curves Daniel J. Bernstein and Tanja Lange and Reza Rezaeian Farashahi Abstract: This paper presents a new shape for ordinary elliptic curves over fields of characteristic 2. Using the new shape, this paper presents the first complete addition formulas for binary elliptic curves, i.e., addition formulas that work for all pairs of input points, with no exceptional cases. If n >= 3 then the complete curves cover all isomorphism classes of ordinary elliptic curves over F_2^n. This paper also presents dedicated doubling formulas for these curves using 2M + 6S + 3D, where M is the cost of a field multiplication, S is the cost of a field squaring, and D is the cost of multiplying by a curve parameter. These doubling formulas are also the first complete doubling formulas in the literature, with no exceptions for the neutral element, points of order 2, etc. Finally, this paper presents complete formulas for differential addition, i.e., addition of points with known difference. A differential addition and doubling, the basic step in a Montgomery ladder, uses 5M + 4S + 2D when the known difference is given in affine form. Category / Keywords: public-key cryptography / Elliptic curves, Edwards curves, binary fields, complete addition law, Montgomery ladder, countermeasures against side-channel attacks Date: received 15 Apr 2008, last revised 11 Jun 2008 Contact author: tanja at hyperelliptic org Available format(s): PDF | BibTeX Citation
###### Example 3.4.6 Solve for $$b$$ in $$A=\frac{1}{2}bh\text{.}$$ (This is the area formula for a triangle.) Explanation: multiply both sides by $$2$$ to get $$2A=bh\text{,}$$ then divide both sides by $$h$$ (assuming $$h\neq 0$$) to obtain $$b=\frac{2A}{h}\text{.}$$
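A quick symbolic check of the rearrangement above (illustrative only; any computer algebra system would do):

```python
from sympy import symbols, Eq, solve

A, b, h = symbols("A b h", positive=True)
print(solve(Eq(A, b * h / 2), b))   # [2*A/h]
```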
# Tag Info 58 It travels forwards instead of backwards in an accelerating car for the same reason that a helium balloon travels upwards instead of downwards under the influence of gravity. Why is that? In an accelerating car, for all intents and purposes the acceleration can be considered a change in the amount and direction of gravity, from pointing straight down to ... 47 The balls are entering the water well below the surface. The pressure there is much higher than at the surface. The work needed to push the balls into the water at this depth cancels the work gained when they float back up. We can ignore the gravitational force on the balls since gravity pulls down as much as up as you traverse the loop. Mathematically, ... 40 When your car accelerates forward, the air inside moves back relative to the car. This creates a slightly high pressure in the rear of the vehicle and a low pressure up front. Since helium is lighter than air, it moves away from the region of high pressure. A similar balloon filled with $CO_2$ would move back, since it is heavier than the surrounding air 30 Good question. Assume we have one cube of ice in a glass of water. The ice displaces some of that water, raising the height of the water by an amount we will call $h$. Archimedes principles states that the weight of water displaced will equal the upward buoyancy force provided by that water. In this case, $$\text{Weight of water displaced} = ... 24 Here is an explanation that needs no calculations. Consider the following diagram, in which part1 and part2 represent the ice. The displaced water volume equals part2 volume and has as much mass as (part1+part2) Now look at what happens when both part1 and part2 melt: their mass does not change, it is (part1+part2) it becomes water. And we just said ... 17 Fun question. Here's my "me-too" answer. Suppose the car has just emerged from a river, so there's a lot of water in it, and the balloon is tied to the floor. Then you drive away. The air in the car is just like a bunch of water :) 12 The energy needed to submerge a ball is equal to the energy gain from other ball to emerge from the water on the other side, so any waste on friction drives the process impossible. 11 It acts precisely like water in a cup. Or, more specifically, like the air in the cup. Since the helium is a much lower density than the nitrogen and other gasses in your car, it can be visualized like an air bubble in a bottle. The container for the helium(the balloon) has negligible mass. When you accelerate forward, the water in a bottle will move ... 10 As the comments above indicate, factors like density, pressure and temperature are important for a Jupiter submariner. Of course nobody yet has the exact details of Jupiter's interior structure, but there's a diagram in this LASP page[WebCite archived version] that indicates the following: ... 10 This diagram is my attempt to show the situation first when the rock is in the boat and secondly when you've chucked the rock over the side. The mass of the boat of M and the mass of the rock is m. The density of water is \rho_w and the density of the rock is \rho_r. In the first case Archimedes' principle tells us that the volume of water ... 9 What is probably being referred to here indirectly is the fact that air with moisture in it is less dense that dry air. The question becomes, is the buoyancy force of an empty egg with the optimal moisture content of air sufficient to overcome it's weight? 
Searching around I see that water vapor has a density of 0.804g/Land dry air has a density of 1.27 ... 8 The problem is in the seal. The amount of work to move the seal against the water pressure is the same amount of energy that is gained by the balls when they are pushed up by the water. Even if we remove the seal and we imagine a magic "one-way pass-through" wall, the ball would still need to displace the same volume of water as itself in order to get into ... 7 Trivially the density of the water increases identically to the object and so bouyancy is maintained... 6 Would hot hydrogen (in the same sense as hot air) be able to lift even more mass? Yes. Though I suppose the fire danger goes up, and you certainly can't use a propane burner to warm it... Would a higher or lower density of hydrogen in a ballon lift more? Lower density always means higher buoyancy. If you could have a balloon which had ... 6 The Costa Concordia wasn't sunken, it was aground. Which meant that part of it was still above the water. The ping pong solution can only be used if the ship is completely underwater. Otherwise, it will have an opposite effect. The buoyant force (force by which a fluid pushes up on a body, thus keeping it afloat), is proportional to the mass of the fluid ... 6 Submerging objects in a liquid does not change the mass of those objects. It does effect the weight they would register on a scale, though. The bouyant force a fluid exerts upwards on a body submerged in it,$$F=\rho Vg$$where \rho is the density of the fluid, V is the volume of the fluid displaced, and g is the acceleration due to gravity. The ... 6 Balloons are buoyant because the air pushes on them. The air doesn't know what's in the balloon, though. It pushes on everything the same, so the buoyant force is the same on all balloons of the same size. If the "balloon" is just a lump of air with an imaginary boundary, then the lump won't go anywhere because the air isn't moving on average. So the ... 6 For a given volume, light things float and heavy things sink. The cup sinks when you fill it with water because it becomes heavier, and therefore more dense. When the cup becomes more dense than water, it sinks. The cup would sink just as well if you filled it with rocks, lead, etc. The condition for the cup to sink is that its weight must be greater ... 6 Just a tad about how bouyancy works. Any fluid in a gravitational field possesses a pressure gradient, (which if the gas/liquid is in equilibrium) counterbalances the effect of gravity. Gravity acting on such a fluid creates this pressure, which is referred to a hydrostatic pressure. To make a long story short, the external pressure (of the air) is greater ... 6 Actually, the answer is a bit more subtle than just density. The principle that is behind floating objects is Archimedes' principle: A fluid (liquid or gas) exerts a buoyant force, opposite apparent gravity (i.e. gravity + acceleration of fluid) on an immersed object that is equal to the weight of the displaced fluid. Thus, if you have an object fully ... 5 dmckee's answer is a great not-too-technical description of buoyancy. Read that first. But in case you're interested, I thought I would go into some more detail. The buoyant force on a submerged object (e.g. a balloon submerged in air) is equal to the weight of the displaced fluid,$$F_b = \rho_f g V$$as dmckee said. The physical origin of this force is ... 5 Helium balloons are pulled by gravity, as are all objects with mass. 
The reason they don't fall is that there is another force acting on them, a buoyant force from air pressure that is equal to the weight of the air displaced by the balloon. The reason you don't float is that the weight of the air you displace is quite a bit less than your weight (a person ... 5 DENSITY It is because of densities of the object that is floating and the liquid in which it is floating. If an object have density lower than a fluid it will float otherwise it will sink. Density of entire object [mass / volume] should be taken into account and not merely the density of material it is made up of. A ship made up of iron floats ... 5 I assume you are asking why we are not drawing air out of a balloon like container so as to create the lower density that helium or hot air gives us. The answer is that it is hard to maintain a vacuum with a thin enough, so as to be almost weightless, rigid contaning surface. A balloon with gas inside equalizing the atmospheric pressure with the gas ... 4 If the object floats: water level stays the same If the object sinks: water level decreases Consider the force balance. The Earth exerts an upward force on the lake. Anything floating on the water is included in the weight of the lake. Since water is constant density, the upward force on the lake is a direct function of the water level - a higher level ... 4 Assuming they are filled to the same volume, and the air surrounding them is of the same density, the buoyant force acting on the balloons will be the same. Buoyant force is simply equal to the weight of the amount of surrounding fluid that would occupy the space filled by the balloon, if the balloon were not there. It has nothing to do with the contents or ... 4 If you fill the cockpit with water, the pilot will feel a buoyant force. Humans have about the same density as water, so ignoring the scuba suit, the pilot will feel a buoyant force about equal to his own weight. The plane's maneuvers don't change this result much. By the equivalence principle, when the plane accelerates, the water in the cockpit and the ... 4 An object floats if its upthrust (buoyancy) is in equilibrium with its downwards gravitational force. In other words (as stated by the wiki page),$$F_net = 0 = mg - \rho_f V_{disp} g (where all the constants are pretty self-explanatory.) Clearly then, the properties of the object that determine whether/how it floats are its mass and volume. More ... 4 The comparison is viable, here's why: Let's choose the positive $x$-direction to point upward, perpendicular to the water's surface. By Archimedes' principle, the magnitude of the buoyant force on an object of volume $V$ equals the weight of the displaced water; $F_B = \rho_w V g$ where $\rho_w$ here denotes the density of water. The buoyant force ... 4 I am not good at explaining things but here it goes try to understand it this way assume the rectangle to be you car, now the dotted area is filled with normal air and their is a helium balloon in between So when you accelerate everything in the car tries move backward with respect to the car even the helium balloon, so then why the balloon goes ... Only top voted, non community-wiki answers of a minimum length are eligible
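Several of the answers above lean on the same relation, $F_b = \rho_f V g$; a tiny numerical sketch with made-up values:

```python
RHO_WATER = 1000.0   # kg/m^3
G = 9.8              # m/s^2

def buoyant_force(fluid_density, displaced_volume):
    """Archimedes' principle: buoyancy equals the weight of the displaced fluid."""
    return fluid_density * displaced_volume * G

def floats_when_submerged(mass, volume, fluid_density=RHO_WATER):
    """A fully submerged body rises if buoyancy exceeds its weight,
    i.e. if its mean density is below the fluid's."""
    return buoyant_force(fluid_density, volume) > mass * G

print(floats_when_submerged(0.5, 0.001))   # 0.5 kg per litre (500 kg/m^3): True, it rises
print(floats_when_submerged(2.0, 0.001))   # 2 kg per litre (2000 kg/m^3): False, it sinks
```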
# Homework Help: Physics

Posted by Lindsey: Assume you construct a new water tower for a town water supply. Assume the tank is in the shape of a sphere with a diameter of 21 ft. Assume the tank is made of formed and welded 0.5 in steel plate that weighs 150 lb per ft³. What is the weight on each of the four support legs when the tank is 85% full of water? Neglect the weight of the support legs and water piping.

Not sure how to get the volume of the 0.5 in thick steel tank (just the steel material). I can calculate the water volume of the tank. I'm assuming the volume of the water is determined from the 21 ft diameter of the sphere.

• Physics - ajayb: You can consider the surface area of the sphere (4·π·r²) as the surface area of the steel plate. Multiply it by the thickness, i.e. 0.5 in, to get the volume of the steel material.

• Physics - Elena: I've calculated in SI units: D = 21 ft = 6.4 m => R = 3.2 m (outer radius of the tank); h = 0.5 in = 0.0127 m; ρ1 = 150 lb/ft³ = 2403 kg/m³ (this is not steel — it may be duralumin); ρ2 = 1000 kg/m³ (water); water volume V = 0.85·V2.
V1 = 4πR³/3 = 4π·3.2³/3 = 137.26 m³
V2 = 4π(R−h)³/3 = 4π·(3.2−0.0127)³/3 = 135.63 m³
ΔV = V1 − V2 = 137.26 − 135.63 = 1.63 m³
Mass of the tank: m1 = ρ1·ΔV = 2403·1.63 = 3916.9 kg.
Mass of the water: m2 = ρ2·0.85·V2 = 1000·0.85·135.63 = 115285.5 kg.
The weight of the tank with the water is (m1+m2)·g. The weight on each of the four support legs is (m1+m2)·g/4 = 119202.4·9.8/4 = 292045.9 N.
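Elena's calculation is easy to reproduce; here is a short sketch using the same assumptions (outer radius from the 21 ft diameter, plate density 2403 kg/m³, 85% of the inner volume filled with water):

```python
from math import pi

FT, INCH = 0.3048, 0.0254              # metres
R = 21 * FT / 2                        # outer radius of the spherical tank, ~3.2 m
t = 0.5 * INCH                         # plate thickness
rho_plate = 150 * 0.45359237 / FT**3   # 150 lb/ft^3 in kg/m^3, ~2403
rho_water = 1000.0
g = 9.8

V_outer = 4 / 3 * pi * R**3
V_inner = 4 / 3 * pi * (R - t)**3
m_shell = rho_plate * (V_outer - V_inner)   # mass of the plate shell
m_water = rho_water * 0.85 * V_inner        # mass of the water at 85% full
per_leg = (m_shell + m_water) * g / 4
print(round(per_leg))                       # roughly 2.9e5 N per leg, matching Elena's estimate
```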
# Show Tag: algorithm Select Other Tags The Hartigans' Dip Statistic measures unimodality in a sample: Specifically, it measures the greatest difference between the empirical cumulative distribution function and that unimodal cumulative distribution function which minimizes that greatest difference. In SOM learning, shrinking of the neighborhood size and decreasing update strength usually follow predefined schedules i.e. they only depend on the update step. In the PLSOM algorithm, update strength depends on the difference between a data point and the best-matching unit's weight vector, the quantization error. A large distance, indicating a bad representation of that data point in the SOM, leads to a stronger update than a small distance. The distance is scaled relative to the largest quantization error encountered so far. PLSOM reduces the number of parameters of the SOM algorithm from four to two. PLSOM overreacts to outliers: data points which are very unrepresentative of the data in general will change the network more strongly than they should. PLSOM2 addresses the problem of PLSOM overreacting to outliers. Viola and Jones presented a fast and robust object detection system based on 1. a computationally fast way to extract features from images, 2. the AdaBoost machine learning algorithm, 3. cascades of weak classifiers with increasing complexities. If we know which kind of output we want to have and if each neuron's output is a smooth function of its input, then the change in weights to get the right output from the input can be computed using calculus. Following this strategy, we get backpropagation One problem with backpropagation is that one usually starts with small weights which will be far away from optimal weights. Due to the size of the combinatorial space of weights, learning can therefore take a long time. In the wake-sleep algorithm, (at least) two layers of neurons are fully connected to each other. In the wake phase, the lower level drives the upper layer through the bottom-up recognition weights. The top-down generative weights are trained such that they will generate the current activity in the lower level given the current activity in the output level. In the sleep phase, the upper layer drives activity in the lower layer through the generative weights and the recognition weights are learned such that they induce the activity in the upper layer given the activity in the lower layer. Learning in RBMs is competitive but without explicit inhibition (because the RBM is restricted in that it does not have recurrent connections). Neurons learn different things due to random initialization and stochastic processing. The SOM is an asymptotically optimal vector quantizer. There is no cost function that the SOM algorithm follows exactly. Quality of order in SOMs is a difficult issue because there is no unique definition of `order' in for the $n$-dimensional case if $n>2$. Nevertheless, there have been a number of attempts. There have been many extensions of the original SOM ANN, like • (Growing) Neural Gas
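A minimal sketch of the PLSOM-style update described above (illustrative only, not the published algorithm verbatim): the learning rate is the current quantization error scaled by the largest error seen so far, and the neighbourhood width is driven by that scaled error rather than by a time schedule.

```python
import numpy as np

def plsom_step(weights, x, grid, neighbourhood_range, state):
    """One PLSOM-style update (sketch).

    weights: (n_units, dim) float array; grid: (n_units, 2) unit coordinates;
    state: dict carrying the largest quantization error seen so far.
    """
    dists = np.linalg.norm(weights - x, axis=1)
    bmu = np.argmin(dists)                        # best-matching unit
    state["max_err"] = max(state.get("max_err", 1e-12), dists[bmu])
    eps = dists[bmu] / state["max_err"]           # scaled quantization error in [0, 1]
    sigma = neighbourhood_range * eps + 1e-12     # neighbourhood shrinks when the map fits well
    h = np.exp(-np.sum((grid - grid[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
    weights += eps * h[:, None] * (x - weights)   # stronger update for badly represented inputs
    return weights
```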
Inspired by this challenge and related to this one. ### Background Badugi [bæduːɡiː] is a low-ball draw-poker variant. “A+KYTE”yḲONŒPÇ€ṢṪµ€⁼€Ṁ$A monadic link taking a list of two lists of characters - each being a space separated representation of the hand (e.g. "Ac 2d 4s 3h") returning a list of two numbers identifying the winner(s) with 1 and any loser with 0 - i.e. [1, 0] = left wins; [0, 1] = right wins; [1, 1] = draw. Try it online! or see the test-suite. ### How? ẎŒQȦ;L;Ṗ€Ṣ$ - Link 1, sortKey: list of lists of numbers representing some cards (see Main) Ẏ - flatten into a single list of numbers ŒQ - distinct sieve (1 at first occurrence of anything, 0 at the rest) Ȧ - Any & All? zero if any are 0 or if empty; 1 otherwise (i.e. playable hand?) L - length of input (number of cards in the hand) ; - concatenate $- last two links as a monad: Ṗ€ - pop each (get just the rank portions) Ṣ - sort (Main's translation & negation of ordinals ensures A>2>3>...>Q>K) ; - concatenate (now we have [isPlayable; nCards; [lowToHighCards]]) “A+KYTE”yḲONŒPÇ€ṢṪµ€⁼€Ṁ$ - Main link: list of lists of characters, hands µ€ - for €ach of the two hands: “A+KYTE” - literal list of characters "A+KYTE" (compressing doesn't help - lower case would be “£Ḅṁ⁽>» though -- I'll stick with kyte though it's kind of nice.) y - translate - change As to +s, Ks to Ys and Ts to Es - note the ranks are now in ordinal order: - +<2<3<4<5<6<7<8<9<E<J<Q<Y Ḳ - split at spaces - split the four cards up O - to ordinals '+'->43, '2'->50, ... N - negate - effectively reverse the ordering ŒP - power-set - get all combinations of 0 to 4 cards inclusive Ç€ - call the last link (1) as a monad for €ach such selection Ṣ - sort these keys Ṫ - tail - get (one of) the maximal keys - (the key of a best, playable selection) Ṁ - maximum (the better of the two best, playable selection keys) ⁼€ - equals? for €ach (1 if the hand is a winner, 0 if not) # Python 3, 207 204 bytes lambda i,j:L(h(i))-L(h(j))if L(h(i))!=L(h(j))else(h(i)<h(j))-(h(i)>h(j)) L=len def h(l):s=set();return[x[0]for x in sorted(y.translate({65:49,75:90,84:65})for y in l)if not(s&set(x)or s.update(*x))][::-1] Try it online! Saved 3 bytes thanks to Jonathan Frech Returns 1 if the first hand wins, -1 if the second hand wins and 0 in case of a draw. The function h computes a list that represents the hand. The lambda compares two representations of hand. I think it might be shortened, but I wanted to return only three values and didn't find a simpler way to do comparison. • You can save two bytes by defining L=len and replacing all other occurrences of lenwith L. Sep 14 '17 at 21:10 • Also, you can probably replace s=set() with s={0} and set(x)&s or with s&set(x)or Sep 14 '17 at 21:13
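For readers who prefer an ungolfed reference, here is a sketch of the same brute-force idea the Jelly answer uses: score every subset of the four cards, keep only "playable" ones with pairwise distinct ranks and suits, and prefer more cards, then lower high cards. Card strings are assumed to be rank followed by suit, e.g. "Ac".

```python
from itertools import combinations

RANK = {r: i for i, r in enumerate("A23456789TJQK", start=1)}

def best_badugi(hand):
    """Best playable selection of a 4-card hand, as a comparable key."""
    cards = hand.split()
    best = (0, ())
    for k in range(1, 5):
        for sub in combinations(cards, k):
            ranks = [RANK[c[0]] for c in sub]
            suits = [c[1] for c in sub]
            if len(set(ranks)) == k and len(set(suits)) == k:   # playable: no repeated rank or suit
                key = (k, tuple(-r for r in sorted(ranks, reverse=True)))
                best = max(best, key)
    return best

def winners(hand1, hand2):
    """[1, 0] if hand1 wins, [0, 1] if hand2 wins, [1, 1] on a draw."""
    b1, b2 = best_badugi(hand1), best_badugi(hand2)
    return [int(b1 >= b2), int(b2 >= b1)]

print(winners("Ac 2d 4s 3h", "Kc Kd Ks Kh"))   # [1, 0]: a 4-card badugi beats a bare king
```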
# WeylCharacterRing and coroots / Dynkin labels This might be a stupid question coming from a poor physicist, but there we go: I would like to work with representations of Lie algebras, having the weights expressed in terms of the times they contain each of the fundamental weights, what we call 'Dynkin labels'. It seems to correspond to the style='coroot' when declaring a WeylCharacterRing: if I want the representation whose highest weight is 3 times the 1st fundamental weight, I ask for sage: WCR = WeylCharacterRing("A2",style='coroots') sage: WCR(3,0,0,...,0) That said, I get: sage: A2 = WeylCharacterRing("A2",style='coroots') sage: rep = A2(1,0) sage: print rep.weight_multiplicities() {(2/3, -1/3, -1/3): 1, (-1/3, 2/3, -1/3): 1, (-1/3, -1/3, 2/3): 1} but I would like to get sage: A2 = WeylCharacterRing("A2",style='coroots') sage: rep = A2(1,0) sage: print rep.weight_multiplicities() {( 1, 0): 1, ( -1, 1): 1, ( 0, -1): 1} which is the right answer in Dynkin labels. It would be easy to go from one place to the other if I could get the fundamental weights in the same ambient space as the weights: in this case (2/3, -1/3, -1/3) and (1/3, 1/3, -2/3). But I find no way to get them in general. From the 1st example: sage: A2.fundamental_weights() Finite family {1: (1, 0, 0), 2: (1, 1, 0)} which is not the answer I would expect, and seems inconsistent to me, since the highest weight of the representation, (2/3, -1/3, -1/3), is precisely the 1st fundamental weight. edit retag close merge delete Sort by » oldest newest most voted You can get the coroot vector components from the inner product with the simple coroots, of course: sage: B5 = WeylCharacterRing('B5',style='coroots') sage: Rep = 2*B5(1,0,0,0,0) + B5(0,1,2,3,4) sage: Rep.degree() # dimension 3777283147 sage: for highest_weight, multiplicity in Rep: ....: coroots = [ highest_weight.inner_product(coroot) ....: for coroot in list(B5.simple_coroots()) ] ....: print coroots, highest_weight, multiplicity ....: [1, 0, 0, 0, 0] (1, 0, 0, 0, 0) 2 [0, 1, 2, 3, 4] (8, 8, 7, 5, 2) 1 Note that I used a more complicated group where simple roots do not coincide with the simple coroot vectors as in SU(4). more I answer myself, just in case someone finds this and have the same problem: My bad: the solution to the "strange" fundamental weights is explained in http://www.sagemath.org/doc/thematic_tutorials/lie/weyl_character_ring.html#sl-versus-gl. Following the example: sage: [fw.coerce_to_sl() for fw in A2.fundamental_weights()] [(2/3, -1/3, -1/3), (1/3, 1/3, -2/3)] Anyway, I am considering proposing to add a style='coroots' option to weight_multiplicities() to get the output in 'Dynkin labels'. Cheers! more
# Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 2 (P2-9709/02) | Year 2002 | Oct-Nov | (P2-9709/02) | Q#2 Question The polynomial  is denoted by . It is given that  is a factor of , and that  when  is divided by  the remainder is -5. Find the values of  and . Solution We are given that; We are also given that  is a factor of . When a polynomial, , is divided by , and  is factor of , then the remainder is ZERO i.e. . We can write factor in standard form as; Therefore; We are also given that when  is divided by  the remainder is -5. When a polynomial, , is divided by a , the remainder is the constant We can write divisor in standard form as; Therefore; From  we can substitute  in above equation; Substitution of  in any of these two equations yields value of . We choose; Hence;
# The front of a 6-foot-by-8-foot rectangular door has brass

[Figure: Door.png — the door with its shaded brass trim]

The front of a 6-foot-by-8-foot rectangular door has brass rectangular trim, as indicated by the shading in the figure above. If the trim is uniformly 1 foot wide, what fraction of the door's front surface is covered by the trim?

(A) 13/48
(B) 5/12
(C) 1/2
(D) 7/12
(E) 5/8

### Solution 1

Segment the shaded region in two, as shown below:

[Figure: Door.png — trim split into two vertical side strips and three horizontal strips]

Yellow region area = 2 * 8*1 = 16
Pink region area = 3 * 4*1 = 12
Total frame area = 8*6 = 48
Ratio $$= \frac{28}{48} = \frac{7}{12}$$

### Solution 2 (Bunuel)

First let's find the area of the unshaded region. The two unshaded regions have a width of 4 feet: 6 feet minus the 1-foot-wide trim on each side. Together they have a height of 5 feet: 8 feet minus 1 foot at the top, 1 foot in the middle and 1 foot at the bottom. Thus the area of the two unshaded regions is 4*5=20 square feet. Therefore, the area of the trim is 6*8-20=28 square feet, which is 28/48=7/12 of the total area.

Answer: (D). Note that (B) 5/12 is the trap answer: it is the fraction of the door's front NOT covered by the trim.
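A brute-force check of the answer on a 1-foot grid, assuming (consistent with the two solutions above) that the trim is a 1-foot border plus a 1-foot horizontal band across the middle:

```python
W, H = 6, 8   # door width and height in feet
trim = [[x in (0, W - 1) or y in (0, H - 1) or y == 4 for x in range(W)] for y in range(H)]
covered = sum(map(sum, trim))
print(covered, f"{covered}/{W*H}")   # 28 of 48 unit squares, i.e. 7/12
```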
## The State of the Art Jesse Singal writes: This was presented, in Jennifer Eberhardt’s book Biased, as evidence to support the idea that even positive portrayals of black characters could be spreading and exacerbating unconscious antiblack bias. I did not see evidence to support that idea. I replied: I don’t understand what you’re saying here. I clicked thru and the article seems reasonable enough, for what it is. As you probably know, I’m not a big fan of these implicit bias tests. But I didn’t think the article was making any statements about positive portrayals of black characters. I thought they were saying that even for shows for which viewers perceived the black characters as being portrayed positively, a more objective measure showed the black characters being portrayed more negatively than the whites. I didn’t go thru all the details so maybe there’s something off in how they did their statistical adjustment, but the basic point seemed reasonable, no? Singal responded: Yeah, I didn’t include much detail. Basically it is this thing I see a ton of in social-priming-related research where people extrapolate, from results that appear to me to be fairly unimpressive, rather big claims about the ostensible impact of priming stuff on human behavior/attitudes in the real world. I think this table is key: This was from when they edited out black and white characters and asked people unfamiliar with the shows how they perceived the characters in question. The researchers appear to have tested six different things, found one that statistically significant (but only barely), and gone all-in on that one, explanations-wise. Then by the time the finding is translated to Eberhardt’s book, where all the nuance is taken out (we don’t hear that in five of the six things they tested they found nothing), we’re told that it could be that even black characters who are portrayed positively on TV—the subject of this story—could be spreading implicit bias throughout the land. I don’t really have a strong take on all this, but I thought it could be useful to post on this, just because sometimes maybe it’s a good idea to express this sort of uncertainty in judgment. In any sort of writing there is a pressure to come to a strong conclusion—less pressure in blogging than on other media, perhaps, but still there’s some pull toward certainty. In this case I’ll just leave the discussion where we have it here. Tomorrow’s Post: Bank Shot ## “Suppose that you work in a restaurant…” In relation to yesterday’s post on Monty Hall, Josh Miller sends along this paper coauthored with the ubiquitous Adam Sanjurjo, “A Bridge from Monty Hall to the Hot Hand: The Principle of Restricted Choice,” which begins: Suppose that you work in a restaurant where two regular customers, Ann and Bob, are equally likely to come in for a meal. Further, you know that Ann is indifferent among the 10 items on the menu, whereas Bob strictly prefers the hamburger. While in the kitchen, you receive an order for a hamburger. Who is more likely to be the customer: Ann or Bob? I just love this paper, not so much for its content (which is fine) but for its opening. “Suppose that you work in a restaurant…” I get the feeling that econ papers always take the perspective of people who are more likely to be owners, or at least consumers, not employees, in restaurants. 
Sure, there was that one famous paper about taxicab drivers, but I feel like most of the time you’ll hear economists talking about why it’s rational to tip, or how much a restaurant should charge its customers, or ways of ramping up workers’ performance, etc. Lots about Ray Kroc, not so much about the people who prepare the fries. (When my sister worked at McDonalds, they let her serve customers and make fries—but not burgers. Only the boys were allowed to do that.) Look. I’m not trying to pull out my (nonexistent) working-class credentials. I’ve been lucky and have never had to work a crap job in my life. It’s just refreshing to read an econ paper that takes the employee’s perspective, not to make an economic point and not to make a political point, but just cos why not. Kind of like Night of the Living Dead. ## Challenge of A/B testing in the presence of network and spillover effects Gaurav Sood writes: There is a fun problem that I recently discovered: Say that you are building a news recommender that lists which relevant news items in each person’s news feed. Say that your first version of the news recommender is a rules-based system that uses signals like how many people in your network have seen the news, how many people in total have read the news, the freshness of the news, etc., and sums up the signals in an arbitrary way to rank news items. Your second version uses the same signals but uses a supervised model to decide on the optimal weights. Say that you find that the recommendations vary a fair bit between the two systems. But which one is better? To suss that, you conduct an A/B test. But a naive experiment will produce biased estimates of the effect and the s.e. because: 1. The signals on which your control group ranking system on is based are influenced by the kinds of news articles that people in treatment group see. And vice versa. 2. There is an additional source of stochasticity in recommendations that people see: the order in which people arrive matters. The effect of the first concern is that our estimates are likely attenuated. To resolve the first issue, show people in the Control Group news articles based on predicted views of news articles based on historical data or pro-rated views of people assigned to control group alone. (This adds a bit of noise to the Control Group estimates.) And keep a separate table of input data for the treatment group and apply the ML model to the pro-rated data from that table. The consequence of the second issue is that our s.e. is very plausibly much larger than what we will get with the split world testing (each condition gets its own table of counts for views, etc.). The sequence in which people arrive matters as it intersects with “social influence world.” To resolve the second issue, you need to estimate how the sequence of arrival affects outcomes. But given the number of pathways, the best we can probably do is bound. We could probably estimate the effect of ranking the least downloaded item first as a way to bound the effects. The phrase ‘social influence world’ is linked to: https://www.princeton.edu/~mjs3/salganik_watts08.pdf Tomorrow’s Post: The State of the Art ## Dan’s Paper Corner: Can we model scientific discovery and what can we learn from the process? Jesus taken serious by the many Jesus taken joyous by a few Jazz police are paid by J. Paul Getty Jazzers paid by J. 
Paul Getty II Leonard Cohen So I’m trying a new thing because like no one is really desperate for another five thousand word essay about whatever happens to be on my mind on a Thursday night in a hotel room in Glasgow. Also, because there’s a pile of really interesting papers that I think it would be good and fun for people to read and think about. And because if you’re going to do something, you should jump right into an important topic, may I present for your careful consideration Berna Devezer, Luis G. Nardin, Bert Baumgaertner,  and Erkan Ozge Buzbas’ fabulous paper Scientific discovery in a model-centric framework: Reproducibility, innovation, and epistemic diversity. (If we’re going to talk about scientific discovery and reproducibility, you better believe I’m going to crack out the funny Leonard Cohen.) I am kinda lazy so I’m just going to pull out the last paragraph of the paper as a teaser. But you should read the whole thing. You can also watch Berna give an excellent seminar on the topic. Regardless, here is that final paragraph. Our research also raises questions with regard to reproducibility of scientific results. If reproducibility can be uncorrelated with other possibly desirable properties of scientific discovery, optimizing the scientific process for reproducibility might present trade-offs against other desirable properties. How should scientists resolve such trade-offs? What outcomes should scientists aim for to facilitate an efficient and proficient scientific process? We leave such considerations for future work. I like this paper for a pile of reasons. A big one is that a lot of discussion that I have seen around scientific progress is based around personal opinions (some I agree with, some I don’t) and proposed specific interventions. Both of these things are good, but they are not the only tools we have. This paper proposes a mechanistic model of discovery encoding some specific assumptions and investigates the consequences. Broadly speaking, that is a good thing to do. Some random observations: • The paper points out that the background information available for a replicated experiment is explicitly different from the background information from the original experiment in that we usually know the outcome of the original. That the set of replications is not a random sample of all experiments is very relevant when making statements like x% of experiments in social psychology don’t replicate. • One of the key points of the paper is that reproducibility is not the only scientifically relevant properties of an experiment. Work that doesn’t reproduce may well lead to a “truth” discovery (or at least a phenomenological model that is correct within the precision of reasonable experiments) faster than work that does reproduce. An extremely nerdy analogy would be that reproducibility will be like a random walk towards the truth, while work that doesn’t reproduce can help shoot closer to the truth. • Critically, proposals that focus on reproducibility of single experiments (rather than stability of experimental arcs) will most likely be inefficient. (Yes, that includes preregistration, the current Jesus taken serious by the many) • This is a mathematical model so everything is “too simple”, but that doesn’t mean it’s not massively informative. Some possible extensions would be to try to model more explicitly the negative effect of persistent-but-wrong flashy theories. Also the effect of incentives. 
Also the effect of QRPs, HARKing, Hacking, Forking, and other deviations from The Way The Truth and The Life. I’ll close out with a structurally but not actually related post from much-missed website The Toast: Permission To Play Devil’s Advocate Denied by the exceptional Daniel Mallory Ortberg (read his books. They’re excellent!) Our records indicate that you have requested to play devil’s advocate for either “just a second here” or “just a minute here” over fourteen times in the last financial quarter. While we appreciate your enthusiasm, priority must be given to those who have not yet played the position. We would like to commend you for the excellent work you have done in the past year arguing for positions you have no real interest or stake in promoting, including: • Affirmative Action: Who’s the Real Minority Here? • Maybe Men Score Better In Math For A Reason • Well, They Don’t Have To Live Here • I Think You’re Taking This Too Personally • Would It Be So Bad If They Did Die? • If You Could Just Try To See It Objectively, Like Me ## Josh Miller’s alternative, more intuitive, formulation of Monty Hall problem Here it is: Three tennis players. Two are equally-matched amateurs; the third is a pro who will beat either of the amateurs, always. You blindly guess that Player A is the pro; the other two then play. Player B beats Player C. Do you want to stick with Player A in a Player A vs. Player B match-up, or do you want to switch? And what’s the probability that Player A will beat Player B in this match-up? And here’s the background. It started when Josh Miller proposed this alternative formulation of the Monty Hall problem: Three boxers. Two are equally matched; the other will beat either them, always. You blindly guess that Boxer 1 is the best; the other two fight. Boxer 2 beats Boxer 3. Do you want to stick with Boxer 1 in a Boxer 1 vs. Boxer 2 match-up, or do you want to switch? I liked the formulation in terms of boxers (of course, and see data-based followup here), but Josh’s particular framing above bothered me. My first thought was confusion about how this relates to the Monty Hall problem. In that problem, Monty opens a door, he doesn’t compare two doors (in his case, comparing 2 boxers). There’s no “Monty” in the boxers problem. Then Josh explained: When Monty chooses between the items you can think of it as a “fight.” The car will run over the goat, and Monty reveals the goat. With two goats, they are evenly matched, so the unlucky one gets gored and is revealed. And I pieced it together. But I was still bothered: Now I see it. The math is the same (although I think it’s a bit ambiguous in your example). Pr(boxer B beats boxer C) = 1 if B is better than C, or 1/2 if B is equal in ability to C. Similarly, Pr(Monty doesn’t rule out door B) = 1 if B has the car and C has the goat, or 1/2 if B and C both have goats. It took me awhile to understand this because I had to process what information is given in “Boxer 2 beats Boxer 3.” My first inclination is that if 2 beats 3, then 2 is better than 3, but your model is that there are only two possible bouts: good vs. bad (with deterministic outcome) or bad vs. bad (with purely random outcome). My guess is that the intuition on the boxers problem is apparently so clear to people because they’re misunderstanding the outcome, “Boxer 2 beats Boxer 3.” My guess is that they think “Boxer 2 beats Boxer 3” implies that boxer 2 is better than boxer 3. (Aside: I prefer calling them A, B, C so we don’t have to say things like “2 > 3”.) 
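Here's a quick simulation of the boxers/tennis setup under my reading of it (the pro always wins, the two evenly matched amateurs are a coin flip, and we only keep trials in which B beats C); it gives the 2/3 answer for switching:

```python
import random

def trial():
    pro = random.choice("ABC")
    def beats(p, q):                      # True if p beats q
        if pro in (p, q):
            return p == pro               # the pro always wins
        return random.random() < 0.5      # two amateurs: coin flip
    if not beats("B", "C"):
        return None                       # condition on the observed "B beats C"
    return beats("A", "B")                # does sticking with A win the next match?

results = [r for r in (trial() for _ in range(200000)) if r is not None]
print(sum(results) / len(results))        # ~1/3: sticking with A wins a third of the time, so switch
```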
To put it another way, yes, in your form of the problem, people easily pick the correct “door.” But my guess is that they will get the probability of the next bout wrong. What is Pr(B>A), given the information supplied to us so far? Intuitively from your description, Pr(B>A) is something close to 1. But the answer you want to get is 2/3. My problem with the boxers framing is that the information “B beats C” feels so strong that it overwhelms everything else. Maybe also the issue is that our intuition is that boxers are in a continuous range, which is different than car >> goat. I then suggested switching to tennis players, framing as “two amateurs who are evenly matched.” The point is that boxing evokes this image of a knockout, so once you hear that B beat C, you think of B as the powerhouse. With tennis, it seems more clear somehow that you can win and just be evenly matched. Josh and I went back and forth on this for awhile and we came up with the tennis version given above. I still think the formulation of “You blindly guess that Player 1 is the pro” is a bit awkward, but maybe something like that is needed to draw the connection to the Monty Hall problem. Ummm, here’s an alternative: You’re betting on a tennis tournament involving three players. Two are equally-matched amateurs; the third is a pro who will beat either of the amateurs, always. You have no idea who is the pro, and you randomly place your bet on Player A. The first match is B vs. C. Player B wins. Players A and B then compete. Do you want to keep your bet on Player A, or do you want to switch? And what’s the probability that Player A will beat Player B in this match-up? This seems cleaner to me, but maybe it’s too far away from the Monty Hall problem. Remember, the point here is not to create a new probability problem; it’s to demystify Monty Hall. Which means that the problem formulation, the correct solution, and the isomorphism to Monty Hall should be as transparent as possible. P.S. Josh noted that the story was also discussed by Alex Tabarrok, and a similar form of the problem was studied by Bruce Burns and Marieke Wieth in 2004. ## Laplace Calling Laplace calling to the faraway towns Now war is declared and battle come down Laplace calling to the underworld Come out of the sample, you boys and girls Laplace calling, now don’t look to us Phony Bayesmania has bitten the dust Laplace calling, see we ain’t got no swing Except for the ring of that probability thing The asymptote is coming, inference a farce Meltdown expected, the data’s growin’ sparse Stan stops running, but I have no fear ‘Cause Laplace is drowning, and I, I live by the prior Laplace calling to the replication zone Forget it, brother, you can go it alone Laplace calling to the zombies of death Quit holding out and draw another breath Laplace calling and I don’t want to shout But when we were talking I saw you nodding out Laplace calling, see we ain’t got no high Except for that one with the yellowy eye The asymptote’s coming, inference a farce Stan stops running, the data’s growin’ sparse A parallel era, but I have no fear ‘Cause Laplace is drowning, and I, I live by the prior The asymptote is coming, inference a farce Stan stops running, the data’s growin’ sparse A parallel era, but I have no fear ‘Cause Laplace is drowning, and I, I live by the prior Now get this Laplace calling, yes, I was there, too And you know what they said? Well, some of it was true! Laplace calling, two hundred years hence And after all this, won’t you have confidence? 
I never felt so much exchangeable (Apologies to you know who.) Tomorrow’s post: Challenge of A/B testing in the presence of network and spillover effects ## All the names for hierarchical and multilevel modeling The title Data Analysis Using Regression and Multilevel/Hierarchical Models hints at the problem, which is that there are a lot of names for models with hierarchical structure. Ways of saying “hierarchical model” hierarchical model a multilevel model with a single nested hierarchy (note my nod to Quine’s “Two Dogmas” with circular references) multilevel model a hierarchical model with multiple non-nested hierarchies random effects model Item-level parameters are often called “random effects”; reading all the ways the term is used on the Wikipedia page on random effects illustrates why Andrew dislikes the term so much (see also here; both links added by Andrew)—it means many different things to different communities. mixed effects model that’s a random effects model with some regular “fixed effect” regression thrown in; this is where lme4 is named after linear mixed effects and NONMEM after nonlinear mixed effects models. empirical Bayes Near and dear to Andrew’s heart, because regular Bayes just isn’t empirical enough. I jest—it’s because “empirical Bayes” means using maximum marginal likelihood to estimate priors from data (just like lme4 does). regularized/penalized/shrunk regression common approach in machine learning where held out data is used to “learn” the regularization parameters, which are typically framed as shrinkage or regularization scales in penalty terms rather than as priors automatic relevance determination (ARD) Radford Neal’s term in his thesis on Gaussian processes and now widely adopted in the GP literature This one’s common in the machine-learning literature; I think it came from Hal Daumé III’s paper, “Frustratingly easy domain adaptation” in which he rediscovered the technique; he also calls logistic regression a “maximum entropy classifier”, like many people in natural language processing (and physics) variance components model I just learned this one on the Wikipedia page on random effects models cross-sectional (time-series) model apparently a thing in econometrics nested data model, split-plot design, random coefficient iterated nested Laplace approximation (INLA), expectation maximization (EM), … Popular algorithmic approaches that get confused with the modeling technique. I’m guessing the readers of the blog will have more items to add to the list. If you liked this post You might like my earlier post, Logistic regression by any other name. ## Brief summary notes on Statistical Thinking for enabling better review of clinical trials. This post is not by Andrew. Now it was spurred by Andrew’s recent post on Statistical Thinking enabling good science. The day of that post, I happened to look in my email’s trash and noticed that it went back to 2011. One email way back then had an attachment entitled Learning Priorities of RCT versus Non-RCTs. I had forgotten about it. It was one of the last things I had worked on when I last worked in drug regulation. It was draft of summary points I was putting together for clinical reviewers (clinicians and biologists working in a regulatory agency) to give them a sense of (hopefully good) statistical thinking in reviewing clinical trials for drug approval. I though it brought out many of the key points that were in Andrew’s post and in the paper by Tong that Andrew was discussing. 
Now my summary points are in terms of statistical significance, type one error and power, but was 2011. Additionally, I do believe (along with David Spiegelhalter) that regulatory agencies do need to have lines drawn in the sand or set cut points. They have to approve or not approve.  As the seriousness of the approval increases, arguably these set cut points should move from  being almost automatic defaults, to inputs into a weight of evidence evaluation that may overturn them. Now I am working on a post to give an outline of what usually happens in drug regulation. I have received some links to material from a former colleague to help update my 2011 experience base. In this post, I have made some minor edits, it is not meant to be polished prose but simply summary notes. I thought it might of interest to some and hey I have not posted in over a year and this one was quick and easy. What can you learn from randomized versus non-randomized comparisons? What You Can’t Learn (WYCL); How/Why That’s Critical (HWTC); Anticipate How To Lessen these limitations (AHTL) ## “Superior: The Return of Race Science,” by Angela Saini “People so much wanted the story to be true . . . that they couldn’t look past it to more mundane explanations.” – Angela Saini, Superior. I happened to be reading this book around the same time as I attended the Metascience conference, which was motivated by the realization during the past decade or so of the presence of low-quality research and low-quality statistical methods underlying some subfields of the human sciences. I like Saini’s book a lot. In some sense it seems too easy, as she points at one ridiculous racist after another, but a key point here is that, over the years, prominent people who should know better have been suckers for junk science offering clean stories to support social prejudices. From Theodore Roosevelt in the early 20th century to David Brooks and the Freakonomics team a hundred years later, politicians, pundits, and scientists have lapped up just-so stories of racial and gender essentialism, without being too picky about the strength of the scientific evidence being offered. Superior even tells some of the story of Satoshi Kanazawa, but focusing on his efforts regarding racial essentialism rather than his gender essentialist work that we’ve discussed on this blog. As Saini discusses, race is an available explanation for economic and social inequality. We discussed this a few years ago in response to a book by science journalist Nicholas Wade. As Saini points out (and as I wrote in the context of my review of Wade’s book), the fact that many racist claims of the past and present have been foolish and scientifically flawed, does not mean that other racist scientific claims are necessarily false (or that they’re true). The fact that Satoshi Kanazawa misuses statistics has no bearing on underlying reality; rather, the uncritical reaction to Kanazawa’s work in many quarters just reveals how receptive many people are to crude essentialist arguments. A couple weeks ago some people asked why I sometimes talk about racism here—what does it have to do with “statistical modeling, causal inference, and social science”? I replied that racism is a sort of pseudoscientific or adjacent-to-scientific thinking that comes up a lot in popular culture and also in intellectual circles, and also of course it’s related to powerful political movements. 
So it’s worth thinking about, just as it’s worth thinking about various other frameworks that people use to understand the world. You might ask why I don’t write about religion so much; maybe that’s because, in the modern context, religious discourse is pretty much separate from scientific discourse so it’s not so relevant to our usual themes on this blog. When we talk about religion here it’s mostly from a sociology or political-science perspective (for example here) without really addressing the content of the beliefs or the evidence offered in their support.

Tomorrow’s post: Laplace Calling

## “Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science”

As promised, let’s continue yesterday’s discussion of Christopher Tong’s article, “Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science.”

First, the title, which makes an excellent point. It can be valuable to think about measurement, comparison, and variation, even if commonly-used statistical methods can mislead. This reminds me of the idea in decision analysis that the most important thing is not the solution of the decision tree but rather what you decide to put in the tree in the first place, or even, stepping back, what your goals are. The idea is that the threat of decision analysis is more powerful than its execution (as Chrissy Hesse might say): the decision-analytic thinking pushes you to think about costs and uncertainties and alternatives and opportunity costs, and that’s all valuable even if you never get around to performing the formal analysis. Similarly, I take Tong’s point that statistical thinking motivates you to consider design, data quality, bias, variance, conditioning, causal inference, and other concerns that will be relevant, whether or not they all go into a formal analysis.

That said, I have one concern, which is that “the threat is more powerful than the execution” only works if the threat is plausible. If you rule out the possibility of the execution, then the threat is empty. Similarly, while I understand the appeal of “Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science,” I think this might be good static advice, applicable right now, but not good dynamic advice: if we do away with statistical inference entirely (except in the very rare cases when no external assumptions are required to perform statistical modeling), then there may be less of a sense of the need for statistical thinking. Overall, though, I agree with Tong’s message, and I think everybody should read his article. Now let me go through some points where I disagree, or where I feel I can add something.

– Tong discusses “exploratory versus confirmatory analysis.” I prefer to think of exploratory and confirmatory analysis as two aspects of the same thing. (See also here.) In short: exploratory data analysis is all about learning the unexpected. This is relative to “the expected,” that is, some existing model. So, exploratory data analysis is most effective when done in the context of sophisticated models. Conversely, exploratory data analysis is a sort of safety valve that can catch problems with your model, thus making confirmatory data analysis more effective. Here, I think of “confirmatory data analysis” not as significance testing and the rejection of straw-man null hypotheses, but rather as inference conditional on models of substantive interest.
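[To make the “safety valve” idea a little more concrete, here is a tiny sketch of my own, not from Tong’s article: fit a deliberately too-simple model to some fake data, simulate replicated datasets from the fitted model, and check whether a simple feature of the observed data looks plausible under those replications. The negative-binomial data, the Poisson model, and the variance test statistic are all made up for illustration.]

```r
# Toy predictive check: exploratory analysis in the context of a fitted model.
set.seed(1)
y <- rnbinom(200, mu = 5, size = 1)   # fake counts, much more dispersed than Poisson

fit <- glm(y ~ 1, family = poisson)   # deliberately too-simple model
lambda_hat <- exp(coef(fit)[1])

# Simulate replicated datasets from the fitted model and compare their variance
# to the variance actually observed.
var_rep <- replicate(1000, var(rpois(length(y), lambda_hat)))
mean(var_rep >= var(y))   # near 0: the fitted model cannot reproduce the spread in the data
```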
– Tong: There is, of course, one arena of science where the exploratory/confirmatory distinction is clearly made, and attitudes toward statistical inferences are sound: the phased experimentation of medical clinical trials. I think this is a bit optimistic, for two reasons, First, I doubt the uncertainty in exploratory, pre-clinical analyses is correctly handled when it comes time to make decisions in designing clinical trials. Second, I don’t see statistical significance thresholds in clinical trials as being appropriate for deciding drug approval. – Tong: Medicine is a conservative science and behavior usually does not change on the basis of one study. Sure, but the flip side of formal conservatism is that lots of informal decisions will be made based on noisy data. Waiting for conclusive results from a series of studies . . . that’s fine, but in the meantime, decisions need to be made, and are being made, every day. This is related to the Chestertonian principle that extreme skepticism is a form of credulity. – Tong quotes Freedman (1995): I wish we could learn to look at the data more directly, without the fictional models and priors. On the same wish list: We should stop pretending to fix bad designs and inadequate measurements by modeling. I have no problem with this statement as literally construed: it represents someone’s wish. But to the extent it is taken as a prescription or recommendation for action, I have problems with it. First, in many cases it’s essentially impossible to look at the data without “fictional models.” For example, suppose you are doing a psychiatric study of depression: “the data” will strongly depend on whatever “fictional models” are used to construct the depression instrument. Similarly for studies of economic statistics, climate reconstruction, etc. I strongly do believe that looking at the data is important—indeed, I’m on record as saying I don’t believe statistical claims when their connection to the data is unclear—but, rather than wishing we could look at the data without models (just about all of which are “fictional”), I’d prefer to look at the data alongside, and informed by, our models. Regarding the second wish (“stop pretending to fix bad designs and inadequate measurements by modeling”), I guess I might agree with this sentiment, depending on what is meant by “pretend” and “fix”—but I do think it’s a good idea to adjust bad designs and inadequate measurements by modeling. Indeed, if you look carefully, all designs are bad and all measurements are inadequate, so we should adjust as well as we can. To paraphrase Bill James, the alternative to “inference using adjustment” is not “no inference,” it’s “inference not using adjustment.” Or, to put it in specific terms, if people don’t use methods such as our survey adjustment here, they’ll just use something cruder. I wouldn’t want criticism of the real flaws of useful models to be taken as a motivation for using worse models. – Tong quotes Feller (1969): The purpose of statistics in laboratories should be to save labor, time, and expense by efficient experimental designs. Design is one purpose of statistics in laboratories, but I wouldn’t say it’s the purpose of statistics in laboratories. In addition to design, there’s analysis. A good design can be made even more effective with a good analysis. And, conversely, the existence of a good analysis can motivate a more effective design. 
This is not a new point; it dates back at least to split-plot, fractional factorial, and other complex designs in classical statistics. – Tong quotes Mallows (1983): A good descriptive technique should be appropriate for its purpose; effective as a mode of communication, accurate, complete, and resistant. I agree, except possibly for the word “complete.” In complex problems, it can be asking too much to expect any single technique to give the whole picture. – Tong writes: Formal statistical inference may only be used in a confirmatory setting where the study design and statistical analysis plan are specified prior to data collection, and adhered to during and after it. I get what he’s saying, but this just pushes the problem back, no? Take a field such as survey sampling where formal statistical inference is useful, both for obtaining standard errors (which give underestimates of total survey error, but an underestimate can still be useful as a starting point), for adjusting for nonresponse (this is a huge issue in any polling), and for small-area estimation (as here). It’s fair for Tong to say that all this is exploratory, not confirmatory. These formal tools are still useful, though. So I think it’s important to recognize that “exploratory statistics” is not just looking at raw data; it also can include all sorts of statistical analysis that is, in turn, relevant for real decision making. – Tong writes: A counterargument to our position is that inferential statistics (p-values, confidence intervals, Bayes factors, and so on) could still be used, but considered as just elaborate descriptive statistics, without inferential implications (e.g., Berry 2016, Lew 2016). We do not find this a compelling way to salvage the machinery of statistical inference. Divorced from the probability claims attached to such quantities (confidence levels, nominal Type I errors, and so on), there is no longer any reason to privilege such quantities over descriptive statistics that more directly characterize the data at hand. I’ll just say, it depends on the context. Again, in survey research, there are good empirical and theoretical reasons for model-based adjustment as an alternative to just looking at the raw data. I do want to see the data, but if I want to learn about the population, I will do my best to adjust for known problems with the sample. I won’t just say that, because my models aren’t perfect, I shouldn’t use them at all. To put it another way, I agree with Tong that there’s no reason to privilege such quantities as “p-values, confidence intervals, Bayes factors, . . . confidence levels, nominal Type I errors, and so on,” but I wouldn’t take this as a reason to throw away “the machinery of statistical inference.” Statistical inference gives us all sorts of useful estimates and data adjustments. Please don’t restrict “statistical inference” to those particular tools listed in that above paragraph! – Tong writes: A second counterargument is that, as George Box (1999) reminded us, “All models are wrong, but some are useful.” Statistical inferences may be biased per the Optimism Principle, but they are reasonably approximate (it might be claimed), and paraphrasing John Tukey (1962), we are concerned with approximate answers to the right questions, not exact answers to the wrong ones. This line of thinking also fails to be compelling, because we cannot safely estimate how large such approximation errors can be. I think the secret weapon is helpful here. 
You can use inferences as they come up, but it’s hard to interpret them one at a time. Much better to see a series of estimates as they vary over space or time, as that’s the right “denominator” (as we used to say in the context of classical Anova) for comparison. Summary I like Tong’s article. The above discussion is intended to offer some modifications or clarifications of his good ideas. Tomorrow’s post: “Superior: The Return of Race Science,” by Angela Saini ## Harking, Sharking, Tharking Bert Gunter writes: You may already have seen this [“Harking, Sharking, and Tharking: Making the Case for Post Hoc Analysis of Scientific Data,” John Hollenbeck, Patrick Wright]. It discusses many of the same themes that you and others have highlighted in the special American Statistician issue and elsewhere, but does so from a slightly different perspective, which I thought you might find interesting. I believe it provides some nice examples of what Chris Tong called “enlightened description” in his American Statistician piece. I replied that Hollenbeck and Wright’s claims seem noncontroversial. I’ve tharked in every research project I’ve ever done. I also clicked through and read the Tong paper, “Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science.” The article is excellent—starting with its title—and it brings up many thoughts. I’ll devote an entire post to it. Also I was amused by this, the final sentence of Tong’s article: More generally, if we had to recommend just three articles that capture the spirit of the overall approach outlined here, they would be (in chronological order) Freedman (1991), Gelman and Loken (2014), and Mogil and Macleod (2017). If Freedman were to see this sentence, he’d spin in his grave. He absolutely despised me, and he put in quite a bit of effort to convince himself and others that my work had no value. Tomorrow’s post: “Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science” ## “Boston Globe Columnist Suspended During Investigation Of Marathon Bombing Stories That Don’t Add Up” I came across this news article by Samer Kalaf and it made me think of some problems we’ve been seeing in recent years involving cargo-cult science. Here’s the story: The Boston Globe has placed columnist Kevin Cullen on “administrative leave” while it conducts a review of his work, after WEEI radio host Kirk Minihane scrutinized Cullen’s April 14 column about the five-year anniversary of the Boston Marathon bombings, and found several inconsistencies. . . . Here’s an excerpt of the column: I happened upon a house fire recently, in Mattapan, and the smell reminded me of Boylston Street five years ago, when so many lost their lives and their limbs and their sense of security. I can smell Patriots Day, 2013. I can hear it. God, can I hear it, whenever multiple fire engines or ambulances are racing to a scene. I can taste it, when I’m around a campfire and embers create a certain sensation. I can see it, when I bump into survivors, which happens with more regularity than I could ever have imagined. And I can touch it, when I grab those survivors’ hands or their shoulders. Cullen, who was part of the paper’s 2003 Pulitzer-winning Spotlight team that broke the stories on the Catholic Church sex abuse scandal, had established in this column, and in prior reporting, that he was present for the bombings. . . . But Cullen wasn’t really there. And his stories had lots of details that sounded good but were actually made up. 
Including, horrifyingly enough, made-up stories about a little girl who was missing her leg. OK, so far, same old story. Mike Barnicle, Janet Cooke, Stephen Glass, . . . and now one more reporter who prefers to make things up than to do actual reporting. For one thing, making stuff up is easier; for another, if you make things up, you can make the story work better, as you’re not constrained by pesky details. What’s the point of writing about this, then? What’s the connection to statistical modeling, causal inference, and social science? Here’s the point: 1. What’s the reason for journalism? To convey information, to give readers a different window into reality. To give a sense of what it was like to be there, for those who were not there. Or to help people who were there, to remember. 2. What does good journalism look like? It’s typically emotionally stirring and convincingly specific. And here’s the problem. The reason for journalism is 1, but some journalists decide to take a shortcut and go straight to the form of good journalism, that is, 2. Indeed, I suspect that many journalists think that 2 is the goal, and that 1 is just some old-fashioned traditional attitude. Now, to connect to statistical modeling, causal inference, and social science . . . let’s think about science: 1. What’s the reason for science? To learn about reality, to learn new facts, to encompass facts into existing and new theories, to find flaws in our models of the world. 2. And what does good science look like? It typically has an air of rigor. And here’s the problem. The reason for science is 1, but some scientists decide to take a shortcut and go straight to the form of good science, that is, 2. The problem is not scientists don’t care about the goal of learning about reality; the problem is that they think that if they follow various formal expressions of science (randomized experiments, p-values, peer review, publication in journals, association with authority figures, etc.) that they’ll get the discovery for free. It’s a natural mistake, given statistical training with its focus on randomization and p-values, an attitude that statistical methods can yield effective certainty from noisy data (true for Las Vegas casinos where the probability model is known; not so true for messy real-world science experiments), and scientific training that’s focused on getting papers published. Summary What struck me about the above-quoted Boston Globe article (“I happened upon a house fire recently . . . I can smell Patriots Day, 2013. I can hear it. God, can I hear it . . . I can taste it . . .”) was how it looks like good journalism. Not great journalism—it’s too clichéd and trope-y for that—but what’s generally considered good reporting, the kind that sometimes wins awards. Similarly, if you look at a bunch of the fatally flawed articles we’ve seen in science journals in the past few years, they look like solid science. It’s only when you examine the details that you start seeing all the problems, and these papers disintegrate like a sock whose thread has been pulled. Ok, yeah yeah sure, you’re saying: Once again I’m reminded of bad science. Who cares? I care, because bad science Greshams good science in so many ways: in scientists’ decision of what to work on and publish (why do a slow careful study if you can get a better publication with something flashy?), in who gets promoted and honored and who decides to quit the field in disgust (not always, but sometimes), and in what gets publicized. 
The above Boston marathon story struck me because it had that same flavor. P.S. Tomorrow’s post: Harking, Sharking, Tharking. ## I think that science is mostly “Brezhnevs.” It’s rare to see a “Gorbachev” who will abandon a paradigm just because it doesn’t do the job. Also, moving beyond naive falsificationism Sandro Ambuehl writes: I’ve been following your blog and the discussion of replications and replicability across different fields daily, for years. I’m an experimental economist. The following question arose from a discussion I recently had with Anna Dreber, George Loewenstein, and others. You’ve previously written about the importance of sound theories (and the dangers of anything-goes theories), and I was wondering whether there’s any formal treatment of that, or any empirical evidence on whether empirical investigations based on precise theories that simultaneously test multiple predictions are more likely to replicate than those without theoretical underpinnings, or those that test only isolated predictions. Specifically: Many of the proposed solutions to the replicability issue (such as preregistration) seem to implicitly assume one-dimensional hypotheses such as “Does X increase Y?” In experimental economics, by contrast, we often test theories. The value of a theory is precisely that it makes multiple predictions. (In economics, theories that explain just one single phenomenon, or make one single prediction are generally viewed as useless and are highly discouraged.) Theories typically also specify how its various predictions relate to each other, often even regarding magnitudes. They are formulated as mathematical models, and their predictions are correspondingly precise. Let’s call a within-subjects experiment that tests a set of predictions of a theory a “multi-dimensional experiment”. My conjecture is that all the statistical skulduggery that leads to non-replicable results is much harder to do in a theory-based, multi-dimensional experiment. If so, multi-dimensional experiment should lead to better replicability even absent safeguards such as preregistration. The intuition is the following. Suppose an unscrupulous researcher attempts to “prove” a single prediction that X increases Y. He can do that by selectively excluding subjects with low X and high Y (or high X and low Y) from the sample. Compare that to a researcher who attempts to “prove”, in a within-subject experiment, that X increases Y and A increases B. The latter researcher must exclude many more subjects until his “preferred” sample includes only subjects that conform to the joint hypothesis. The exclusions become harder to justify, and more subjects must be run. A similar intuition applies to the case of an unscrupulous researcher who tries to “prove” a hypothesis by messing with the measurements of variables (e.g. by using log(X) instead of X). Here, an example is a theory that predicts that X increases both Y and Z. Suppose the researcher finds a Null if he regresses X on Y, but finds a positive correlation between f(X) on Y for some selected transformation f. If the researcher only “tested” the relation between X and Y (a one-dimensional experiment), the researcher could now declare “success”. In a multi-dimensional experiment, however, the researcher will have to dig for an f that doesn’t only generate a positive correlation between f(X) and Y, but also between f(X) and Z, which is harder. A similar point applies if the researcher measures X in different ways (e.g. 
through a variety of related survey questions) and attempts to select the measurement that best helps “prove” the hypothesis. (Moreover, such a theory would typically also specify something like “If X increases Y by magnitude alpha, then it should increase Z by magnitude beta”. The relation between Y and Z would then present an additional prediction to be tested, yet again increasing the difficulty of “proving” the result through nefarious manipulations.) So if there is any formal treatment relating to the above intuitions, or any empirical evidence on what kind of research tends to be more or less likely to replicate (depending on factors other than preregistration), I would much appreciate if you could point me to it. I have two answers for you. First, some colleagues and I recently published a preregistered replication of one of our own studies; see here. This might be interesting to you because our original study did not test a single thing, so our evaluation was necessarily holistic. In our case, the study was descriptive, not theoretically-motivated, so it’s not quite what you’re talking about—but it’s like your study in that the outcomes of interest were complex and multidimensional. This was one of the problems I’ve had with recent mass replication studies, that they treat a scientific paper as if it has a single conclusion, even though real papers—theoretically-based or not—typically have many conclusions. My second response is that I fear you are being too optimistic. Yes, when a theory makes multiple predictions, it may be difficulty to select data to make all the predictions work out. But on the other hand you have many degrees of freedom with which to declare success. This has been one of my problems with a lot of social science research. Just about any pattern in data can be given a theoretical explanation, and just about any pattern in data can be said to be the result of a theoretical prediction. Remember that claim that women were three times more likely to wear red or pink clothing during a certain time of the month? The authors of that study did a replication which failed–but they declared it a success after adding an interaction with outdoor air temperature. Or there was this political science study where the data went in the opposite direction of the preregistration but were retroactively declared to be consistent with the theory. It’s my impression that a lot of economics is like this too: If it goes the wrong way, the result can be explained. That’s fine—it’s one reason why economics is often a useful framework for modeling the world—but I think the idea that statistical studies and p-values and replication are some sort of testing ground for models, the idea that economists are a group of hard-headed Popperians, regularly subjecting their theories to the hard test of reality—I’m skeptical of that take. I think it’s much more that individual economists, and schools of economists, are devoted to their theories and only rarely abandon them on their own. That is, I have a much more Kuhnian take on the whole process. Or, to put it another way, I try to be Popperian in my own research, I think that’s the ideal, but I think the Kuhnian model better describes the general process of science. Or, to put it another way, I think that science is mostly “Brezhnevs.” It’s rare to see a “Gorbachev” who will abandon a paradigm just because it doesn’t do the job. 
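[Here is a small simulation of my own, not part of the original exchange, just to put rough numbers on Ambuehl’s intuition about one- versus multi-dimensional tests. An analyst facing no true effects tries a handful of transformations of a predictor and keeps the most favorable result; requiring the same transformation to “work” for two outcomes at once makes the fishing much less productive. The transformations, sample size, and number of simulations are arbitrary choices for illustration.]

```r
# How much does requiring two outcomes to "work" at once restrain fishing?
set.seed(123)
n_sims <- 2000
n <- 50
transforms <- list(identity, log1p, sqrt, function(x) x^2)

p_one <- p_two <- numeric(n_sims)
for (s in 1:n_sims) {
  x <- runif(n, 1, 10)
  y <- rnorm(n)   # no true relation to x
  z <- rnorm(n)   # no true relation to x
  # p-value for the slope under each transformation of x
  py <- sapply(transforms, function(f) summary(lm(y ~ f(x)))$coefficients[2, 4])
  pz <- sapply(transforms, function(f) summary(lm(z ~ f(x)))$coefficients[2, 4])
  p_one[s] <- min(py)              # best transformation for y alone
  p_two[s] <- min(pmax(py, pz))    # best transformation that must work for y and z
}
mean(p_one < 0.05)  # inflated well above the nominal 0.05
mean(p_two < 0.05)  # far smaller: both outcomes have to cooperate
```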
Ambuehl responded:

Anna did have a similar reaction to you—and I think that reaction depends much on what passes as a “theory”. For instance, you won’t find anything in a social psychology textbook that an economic theorist would call a “theory”. You’re certainly right about the issues pertaining to hand-wavy ex-post explanations as with the clothes and ovulation study, or “anything-goes theories” such as the Himicanes that might well have turned out the other way. By contrast, the theories I had in mind when asking the question are mathematically formulated theories that precisely specify their domain of applicability. An example of the kind of theory I have in mind would be Expected Utility theory, tested in countless papers (e.g., here). Another example of such a theory is the Shannon model of choice under limited attention (tested, e.g., here). These theories are in an entirely different ballpark than vague ideas like, e.g., self-perception theory or social comparison theory that are so loosely specified that one cannot even begin to test them unless one is willing to make assumptions on each of the countless researcher degrees of freedom they leave open. In fact, economic theorists tend to regard the following characteristics as virtues, or even necessities, of any model: precision (can be tested without requiring additional assumptions), parsimony (and hence, makes it hard to explain “uncomfortable” results by interactions etc.), generality (in the sense that they make multiple predictions, across several domains). And they very much frown upon ex post theorizing, ad-hoc assumptions, and imprecision. For theories that satisfy these properties, it would seem much harder to fudge empirical research in a way that doesn’t replicate, wouldn’t it? (Whether the community will accept the results or not seems orthogonal to the question of replicability, no?) Finally, to the extent that theories in the form of precise, mathematical models are often based on wide bodies of empirical research (economic theorists often try to capture “stylized facts”), wouldn’t one also expect higher rates of replicability because such theories essentially correspond to well-informed priors? So my overall point is, doesn’t (good) theory have a potentially important role to play regarding replicability? (Many current suggestions for solving the replication crisis, in particular formulaic ones such as pre-registration, or p<0.005, don't seem to recognize those potential benefits of sound theory.)

I replied: Well, sure, but expected utility theory is flat-out false. Much has been written on the way that utilities only exist after the choices are given. This can even be seen in simple classroom demonstrations, as in section 5 of this paper from 1998. No statistics are needed at all to demonstrate the problems with that theory!

Ambuehl responded with some examples of more sophisticated, but still testable, theories such as reference-dependent preferences, various theories of decision making under ambiguity, and perception-based theories, and I responded with my view that all these theories are either vague enough to be adaptable to any data or precise enough to be evidently false with no data collection needed. This was what Lakatos noted: any theory is either so brittle that it can be destroyed by collecting enough data, or flexible enough to fit anything. This does not mean we can’t do science, it just means we have to move beyond naive falsificationism.

P.S.
Tomorrow’s post: “Boston Globe Columnist Suspended During Investigation Of Marathon Bombing Stories That Don’t Add Up.” ## Deterministic thinking (“dichotomania”): a problem in how we think, not just in how we act This has come up before: And it came up again recently. Epidemiologist Sander Greenland has written about “dichotomania: the compulsion to replace quantities with dichotomies (‘black-and-white thinking’), even when such dichotomization is unnecessary and misleading for inference.” I’d avoid the misleadingly clinically-sounding term “compulsion,” and I’d similarly prefer a word that doesn’t include the pejorative suffix “mania,” hence I’d rather just speak of “deterministic thinking” or “discrete thinking”—but I agree with Greenland’s general point that this tendency to prematurely collapse the wave function contributes to many problems in statistics and science. Often when the problem of deterministic thinking comes up in discussion, I hear people explain it away, arguing that decisions have to be made (FDA drug trials are often brought up here), or that all rules are essentially deterministic (the idea that confidence intervals are interpreted as whether they include zero), or that this is a problem with incentives or publication bias, or that, sure, everyone knows that thinking of hypotheses as “true” or “false” is wrong, and that statistical significance and other summaries are just convenient shorthands for expressions of uncertainty that are well understood. But I’d argue, with Eric Loken, that inappropriate discretization is not just a problem with statistical practice; it’s also a problem with how people think, that the idea of things being on or off is “actually the internal working model for a lot of otherwise smart scientists and researchers.” This came up in some of the recent discussions on abandoning statistical significance, and I want to use this space to emphasize one more time the problem of inappropriate discrete modeling. ## My math is rusty When I’m giving talks explaining how multilevel modeling can resolve some aspects of the replication crisis, I mention this well-known saying in mathematics: “When a problem is hard, solve it by embedding it in a harder problem.” As applied to statistics, the idea is that it could be hard to analyze a single small study, as inferences can be sensitive to the prior, but if you consider this as one of a large population or long time series of studies, you can model the whole process, partially pool, etc. In math, examples of embedding into a harder problem include using the theory of ideals to solve problems in prime numbers (ideals are a general class that includes primes as a special case, hence any theory on ideals is automatically true on primes but is more general), using complex numbers to solve problems with real numbers, and using generating functions to sum infinite series. That last example goes like this. You want to compute S = sum_{n=1}^{infinity} a_n, but you can’t figure out how to do it. So you write the generating function, G(x) = sum_{n=1}^{infinity} a_n x^n, you then do some analysis to figure out G(x) as a function of x, then your series is just S = G(1). And it really works. Cool. Anyway, I thought that next time I mention this general idea, it would be fun to demonstrate with an example, so one day when I was sitting in a seminar with my notebook, I decided to try to work one out. S = 1/1^2 + 1/2^2 + 1/3^2 + 1/4^2 + . . . 
That is, S = sum_{n=1}^{infinity} n^{-2}. Then the generating function is, G(x) = sum_{n=1}^{infinity} n^{-2} x^n. To solve for G(x), we take some derivatives until we can get to something we can sum directly. First one derivative: dG/dx = sum_{n=1}^{infinity} n^{-1} x^{n-1}. OK, taking the derivative again will be a mess, but we can do this: x dG/dx = sum_{n=1}^{infinity} n^{-1} x^n. And now we can differentiate again! d/dx (x dG/dx) = sum_{n=1}^{infinity} x^{n-1}. Hey, that one we know! It’s 1 + x + x^2 + . . . = 1/(1-x). So now we have a differential equation: x G''(x) + G'(x) = 1/(1-x). Or maybe better to write as, x(1-x) G''(x) + (1-x) G'(x) - 1 = 0. Either way, it looks like we’re close to done. Just solve this second-order differential equation. Actually, even easier than that. Let h(x) = G'(x), then we just need to solve, x(1-x) h'(x) + (1-x) h(x) - 1 = 0. Hey, that’s just h(x) = -log(1-x) / x. I can’t remember how I figured that one out—it’s just there in my notes—but there must be some easy derivation. In any case, it works: h'(x) = log(1-x)/x^2 + 1/(x(1-x)), so x(1-x) h'(x) = log(1-x)*(1-x)/x + 1, and (1-x) h(x) = -log(1-x)*(1-x)/x. So, yeah, x(1-x) h'(x) + (1-x) h(x) - 1 = 0. We’ve solved the differential equation! And now we have the solution: G(x) = integral dx (-log(1-x) / x). This is an indefinite integral but that’s not a problem: we can see that, trivially, G(0) = 0, so we just have to do the integral starting from 0. At this point, I was feeling pretty good about myself, like I’m some kind of baby Euler, racking up these sums using generating functions. All I need to do is this little integral . . . OK, I don’t remember integrals so well. It must be easy to do it using integration by parts . . . oh well, I’ll look it up when I come into the office, it’ll probably be an arcsecant or something like that. But then . . . it turns out there’s no closed-form solution! Here it is in Wolfram alpha (OK, I take back all the things I said about them): the integral comes back as Li_2(x). OK, what’s Li_2(x)? Look up the definition, and . . . Hey—that’s no help at all, it’s just the infinite series again. So my generating-function trick didn’t work. Next step is to sum the infinite series by integrating it in the complex plane and counting the poles. But I really don’t remember that! It’s something I learned . . . ummm, 35 years ago. And probably forgot about 34 years ago. So, yeah, my math is rusty. But I still like the general principle: When a problem is hard, solve it by embedding it in a harder problem.

P.S. We can use this example to teach a different principle of statistics: the combination of numerical and analytic methods. How do you compute S = sum_{n=1}^{infinity} n^{-2}? The simplest approach is to add a bunch of terms; for example, in R: S_approx_1 <- sum((1:1000000)^(-2)). This brute-force method works fine in this example but it would have trouble if the function to evaluate is expensive. Another approach is to approximate the sum by an integral; thus: S_approx_2 <- integral_{from x=0.5 to infinity} dx x^{-2} = 2. (The indefinite integral is just -1/x, so the definite integral is -1/infinity - (-1/0.5) = 0 + 2 = 2.) You have to start the integral at 0.5 because the sum starts at 1, so the little bars to sum are [0.5,1.5], [1.5,2.5], etc. That second approximation isn't so great at the low end of x, though, where the curve 1/x^2 is far from locally linear. So we can do an intermediate approximation: S_approx_3 <- sum((1:N)^(-2)) + integral_{from x=(N+0.5) to infinity} dx x^{-2} = sum((1:N)^(-2)) + 1/(N+0.5).
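[If you want to try this at home, here is the whole comparison collected into a few lines of R; it just restates the three approximations above, with the known exact value pi^2/6 thrown in purely as a reference point.]

```r
# The three approximations to S = sum 1/n^2, side by side.
exact <- pi^2 / 6                            # known closed form, for reference

S_approx_1 <- sum((1:1000000)^(-2))          # brute force: add a lot of terms
S_approx_2 <- 2                              # integral of x^-2 from 0.5 to infinity
N <- 3
S_approx_3 <- sum((1:N)^(-2)) + 1/(N + 0.5)  # exact head of the sum + integral tail

round(c(exact = exact, brute = S_approx_1, integral = S_approx_2, hybrid = S_approx_3), 3)
```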
That last approximation is fun because it combines numerical and analytic methods. And it works! Just try N=3: S_approx = 1 + 1/4 + 1/9 + 1/3.5 = 1.647. The exact value, to three decimal places, is 1.644. Not bad. There are better approximation methods out there; the point is that even a simple approach of this sort can do pretty well. And I’ve seen a lot of simulation studies that are done using brute force where the answers just don’t make sense, and where just a bit of analytical work at the end could’ve made everything work out.

P.P.S. Tomorrow’s post: Deterministic thinking (“dichotomania”): a problem in how we think, not just in how we act.

P.P.P.S. [From Bob Carpenter] MathJax is turned on for posts, but not comments, so that $latex e^x$ renders as $e^x$.

## The uncanny valley of Malcolm Gladwell

Gladwell is a fun writer, and I like how he plays with ideas. To my taste, though, he lives in an uncanny valley between nonfiction and fiction, or maybe I should say between science and storytelling. I’d enjoy him more, and feel better about his influence, if he’d take the David Sedaris route and go all the way toward storytelling (with the clear understanding that he’s telling us things because they sound good or they make a good story, not because they’re true), or conversely become a real science writer and evaluate science and data claims critically. Instead he’s kind of in between, bouncing back and forth between stories and science, and that makes me uncomfortable. Here’s an example, from a recent review by Andrew Ferguson, “Malcolm Gladwell Reaches His Tipping Point.” I haven’t read Gladwell’s new book, so I can’t really evaluate most of these criticisms, but of course I’m sympathetic to Ferguson’s general point. Key quote:

Gladwell’s many critics often accuse him of oversimplification. Just as often, though, he acts as a great mystifier, imposing complexity on the everyday stuff of life, elevating minor wrinkles into profound conundrums. This, not coincidentally, is the method of pop social science, on whose rickety findings Gladwell has built his reputation as a public intellectual.

In addition, Ferguson has a specific story regarding some suspiciously specific speculation (the claim that “of every occupational category, [poets] have far and away the highest suicide rates—as much as five times higher than the general population.”) which reminds me of some other such items we’ve discussed over the years, including:

– That data scientist’s unnamed smallish town where 75 people per year died “because of the lack of information flow between the hospital’s emergency room and the nearby mental health clinic.”
– That billionaire’s graph purporting to show “percentage of slaves or serfs in the world.”
– Those psychologists’ claim that women were three times more likely to wear red or pink during certain times of the month.
– That claim from “positive psychology” of the “critical positivity ratio” of 2.9013.
– That psychologist’s claim that he could predict divorces with 83 percent accuracy, after meeting with a couple for just 15 minutes.

And lots more. There’s something hypnotizing about those numbers. Too good to check, I guess.

## Let’s try this again: It is nonsense to say that we don’t know whether a specific weather event was affected by climate change. It’s not just wrong, it’s nonsensical.

This post is by Phil Price, not Andrew. If you write something and a substantial number of well-intentioned readers miss your point, the problem is yours.
Too many people misunderstood what I was sayinga few days ago in the post “There is no way to prove that [an extreme weather event] either was, or was not, affected by global warming” and that’s my fault.  Let me see if I can do better. Forget about climate and weather for a moment. I want to talk about bike riding. You go for a ride with a friend. You come to a steep, winding climb and you ride up side by side. You are at the right side of the road, with your friend to your left, so when you come to a hairpin turn to the right you have a much steeper (but shorter) path than your friend for a few dozen feet. Later you come to a hairpin to the left, but the situation isn’t quite reversed because you are both still in the right lane so your friend isn’t way over where the hairpin is sharpest and the slope is steepest. You ride to the top of the hill and get to a flat section where you are riding side-by-side.  There is some very minor way in which you can be said to have experienced a ‘different’ climb, because even though you were right next to each other you experienced different slopes at different times, and rode slightly different speeds in order to stay next to each other as the road curved, and in fact you didn’t even end up at exactly the same place because your friend is a few feet from you.  You haven’t done literally the same climb, in the sense that a man can’t literally step twice in the same river (because at the time of the second step the river is not exactly the same, and neither is the man) but if someone said ‘how was your climb affected by your decision to ride on the right side of the lane rather than the middle of the lane’ we would all know what you mean; no reasonable person would say ‘if I had done the climb in the middle rather than the right it would have been a totally different climb.’ 1 is just wrong (*).  If you had gone north instead of south you might still had a steep climb  around hour 3, maybe it would have even been a steeper climb the one you are on now, but there is no way it could have been the same climb…and the difference is not a trivial one like the “twice in the same river” example. 3 is not the right answer to the question that was asked, but maybe it’s the right answer to what the questioner had in mind. Maybe when they said “how would this climb have been different” they really meant something like, if you had gone the other way, “what would the biggest climb have been like”, or “what sort of hill would be climbing just about now”? I think you see where I’m going with this (since I doubt you really forgot all about climate and weather like I asked you to).  On a bike ride you are on a path through physical space, but suppose we were talking about paths through parameter space instead. In this parameterization, long steep climbs correspond to hurricane conditions, and going south instead of north corresponds to experiencing a world with global warming instead of one without. In the global warming world, we don’t experience ‘the same’ weather events that we would have otherwise, but in a slightly different way — like climbing the same hill in the middle of the lane rather than at the side of the lane — we experience entirely different weather events — like climbing different hills. The specific quote that I cited in my previous post was about Hurricane Katrina. 
It makes no sense to say we don’t know whether Hurricane Katrina was affected by global warming, just as it would make no sense to say we don’t know whether our hill climb was affected by our decision to go south instead of north. In the counterfactual world New Orleans might have still experienced a hurricane, maybe even on the same day, but it would not have been the same hurricane, just as we might encounter a hill climb on our bike trip at around the three-hour mark whether we went south or north, but it would not have been the same climb. No analogy is perfect, so please don’t focus on ways in which the analogy isn’t ‘right’. The point is that we are long past the point where global warming is a ‘butterfly effect’ and we can reasonably talk about how individual weather events are affected by it. We aren’t riding up the same road but in a slightly different place, we are in a different part of the territory. (*) I’m aware that if you had ridden north instead of south you could have circled back and climbed this same climb. Also, it’s possible in principle that some billionaire could have paid to duplicate ‘the same’ climb somewhere to the north — grade the side of a mountain to make this possible, shape the land and the road to duplicate the southern climb, etc.  But get real. And although these are possible for a bike ride, at least in principle, they are not possible for the parameter space of weather and climate that is the real subject of this post. This post is by Phil, not Andrew. ## Exchange with Deborah Mayo on abandoning statistical significance The philosopher wrote: The big move in the statistics wars these days is to fight irreplication by making it harder to reject, and find evidence against, a null hypothesis. Mayo is referring to, among other things, the proposal to “redefine statistical significance” as p less than 0.005. My colleagues and I do not actually like that idea, so I responded to Mayo as follows: I don’t know what the big moves are, but my own perspective, and I think that of the three authors of the recent article being discussed, is that we should not be “rejecting” at all, that we should move beyond the idea that the purpose of statistics is to reject the null hypothesis of zero effect and zero systematic error. I don’t want to ban speech, and I don’t think the authors of that article do, either. I’m on record that I’d like to see everything published, including Bem’s ESP paper data and various other silly research. My problem is with the idea that rejecting the null hypothesis tells us anything useful. Mayo replied: I just don’t see that you can really mean to say that nothing is learned from finding low-p values, especially if it’s not an isolated case but time and again. We may know a hypothesis/model is strictly false, but we do not yet know in which way we will find violations. Otherwise we could never learn from data. As a falsificationist, you must think we find things out from discovering our theory clashes with the facts–enough even to direct a change in your model. Even though inferences are strictly fallible, we may argue from coincidence to a genuine anomaly & even to pinpointing the source of the misfit.So I’m puzzled. I hope that “only” will be added to the statement in the editorial to the ASA collection. Doesn’t the ASA worry that the whole effort might otherwise be discredited as anti-science? 
My response: The problem with null hypothesis significance testing is that rejection of straw-man hypothesis B is used as evidence in favor of preferred alternative A. This is a disaster. See here.

Then Mayo: I know all this. I’ve been writing about it for donkey’s years. But that’s a testing fallacy. N-P and Fisher couldn’t have been clearer. That does not mean we learn nothing from a correct use of tests. N-P tests have a statistical alternative and at most one learns, say, about a discrepancy from a hypothesized value. If a double-blind RCT clinical trial repeatedly shows statistically significant (small p-value) increase in cancer risks among exposed, will you deny that’s evidence?

Me: I don’t care about the people, Neyman, Fisher, and Pearson. I care about what researchers do. They do something called NHST, and it’s a disaster, and I’m glad that Greenland and others are writing papers pointing this out.

Mayo: We’ve been saying this for years and years. Are you saying you would no longer falsify models because some people will move from falsifying a model to their favorite alternative theory that fits the data? That’s crazy. You don’t give up on correct logic because some people use illogic. The clinical trials I’m speaking about do not commit those crimes. Would you really be willing to say that they’re all bunk because some psychology researchers do erroneous experiments and make inferences to claims where we don’t even know we’re measuring the intended phenomenon? Ironically, by the way, the Greenland argument only weakens the possibility of finding failed replications.

Me: I pretty much said it all here. I don’t think clinical trials are all bunk. I think that existing methods, NHST included, can be adapted to useful purposes at times. But I think the principles underlying these methods don’t correspond to the scientific questions of interest, and I think there are lots of ways to do better.

Mayo: And I’ve said it all many times in great detail. I say drop NHST. It was never part of any official methodology. That is no justification for endorsing official policy that denies we can learn from statistically significant effects in controlled clinical trials among other legitimate probes. Why not punish the wrong-doers rather than all of science that uses statistical falsification? Would critics of statistical significance tests use a drug that resulted in statistically significant increased risks in patients time and again? Would they recommend it to members of their family? If the answer to these questions is “no”, then they cannot at the same time deny that anything can be learned from finding statistical significance.

Me: In those cases where NHST works, I think other methods work better. To me, the main value of significance testing is: (a) when the test doesn’t reject, that tells you your data are too noisy to reject the null model, and so it’s good to know that, and (b) in some cases as a convenient shorthand for a more thorough analysis, and (c) for finding flaws in models that we are interested in (as in chapter 6 of BDA). I would not use significance testing to evaluate a drug, or to prove that some psychological manipulation has a nonzero effect, or whatever, and those are the sorts of examples that keep coming up. In answer to your previous email, I don’t want to punish anyone, I just think statistical significance is a bad idea and I think we’d all be better off without it. In your example of a drug, the key phrase is “time and again.” No statistical significance is needed here.

Mayo: One or two times would be enough if they were well controlled. And the ONLY reason they have meaning even if it were time and time again is because they are well controlled. I’m totally puzzled as to how you can falsify models using p-values & deny p-value reasoning. As I discuss through my book, Statistical Inference as Severe Testing, the most important role of the severity requirement is to block claims—precisely the kinds of claims that get support under other methods be they likelihood or Bayesian. Stop using NHST—there’s a speech ban I can agree with. In many cases the best way to evaluate a drug is via controlled trials. I think you forget that for me, since any claim must be well probed to be warranted, estimations can still be viewed as tests. I will stop trading in biotechs if the rule to just report observed effects gets passed and the responsibility that went with claiming a genuinely statistically significant effect goes by the board. That said, it’s fun to be talking with you again.

Me: I’m interested in falsifying real models, not straw-man nulls of zero effect. Regarding your example of the new drug: yes, it can be solved using confidence intervals, or z-scores, or estimates and standard errors, or p-values, or Bayesian methods, or just about anything, if the evidence is strong enough. I agree there are simple problems for which many methods work, including p-values when properly interpreted. But I don’t see the point of using hypothesis testing in those situations either—it seems to make much more sense to treat them as estimation problems: how effective is the drug, ideally for each person, or else just estimate the average effect if you’re ok fitting that simpler model. I can blog our exchange if you’d like.

And so I did.

P.S. Tomorrow’s post: My math is rusty.

## I hate Bayes factors (when they’re used for null hypothesis significance testing)

Oliver Schultheiss writes:

I am a regular reader of your blog. I am also one of those psychology researchers who were trained in the NHST tradition and who is now struggling hard to retrain himself to properly understand and use the Bayes approach (I am working on my first paper based on JASP and its Bayesian analysis options). And then tonight I came across this recent blog by Uri Simonsohn, “If you think p-values are problematic, wait until you understand Bayes Factors.” I assume that I am not the only one who is rattled by this (or I am the only one, and this just reveals my lingering deeper ignorance about the Bayes approach) and I was wondering whether you could comment on Uri’s criticism of Bayes Factors on your own blog.

My reply: I don’t like Bayes factors; see here. I think Bayesian inference is very useful, but Bayes factors are based on a model of point hypotheses that typically does not make sense. To put it another way, I think that null hypothesis significance testing typically does not make sense. When Bayes factors are used for null hypothesis significance testing, I generally think this is a bad idea, and I don’t think it typically makes sense to talk about the probability that a scientific hypothesis is true. More discussion here: Incorporating Bayes factor into my understanding of scientific information and the replication crisis. The problem is not so much with the Bayes factor as with the idea of null hypothesis significance testing.

## Was Thomas Kuhn evil? I don’t really care.
OK, I guess I care a little . . . but when it comes to philosophy, I don’t really care about Kuhn’s personality or even what exactly he said in his books. I use Kuhn in my work, by which I mean that I use an idealized Kuhn, I take the best from his work (as I see it), the same way I use an idealized Lakatos and Popper, and the same way that Lakatos famously used an idealized Popper (Lakatos called him Popper2, I think it was). Here’s what Shalizi and I wrote in our article: We focus on the classical ideas of Popper and Kuhn, partly because of their influence in the general scientific culture and partly because they represent certain attitudes which we believe are important in understanding the dynamic process of statistical modelling. Actually, we said “modeling,” but someone translated our article into British for publication. Anyway . . . we continue: The two most famous modern philosophers of science are undoubtedly Karl Popper (1934/1959) and Thomas Kuhn (1970), and if statisticians (like other non-philosophers) know about philosophy of science at all, it is generally some version of their ideas. . . . We do not pretend that our sketch fully portrays these figures, let alone the literatures of exegesis and controversy they inspired, or even how the philosophy of science has moved on since 1970. . . . To sum up, our views are much closer to Popper’s than to Kuhn’s. The latter encouraged a close attention to the history of science and to explaining the process of scientific change, as well as putting on the agenda many genuinely deep questions, such as when and how scientific fields achieve consensus. There are even analogies between Kuhn’s ideas and what happens in good data-analytic practice. Fundamentally, however, we feel that deductive model checking is central to statistical and scientific progress, and that it is the threat of such checks that motivates us to perform inferences within complex models that we know ahead of time to be false. My point here is that, as applied statisticians rather than philosophers or historians, we take what we can use from philosophy, being open about our ignorance of most of the literature in that field. Just as applied researchers pick and choose statistical methods in order to design and analyze their data, we statisticians pick and choose philosophical ideas to help us understand what we are doing. For example, we write: In some way, Kuhn’s distinction between normal and revolutionary science is analogous to the distinction between learning within a Bayesian model, and checking the model in preparation to discarding or expanding it. Just as the work of normal science proceeds within the presuppositions of the paradigm, updating a posterior distribution by conditioning on new data takes the assumptions embodied in the prior distribution and the likelihood function as unchallengeable truths. Model checking, on the other hand, corresponds to the identification of anomalies, with a switch to a new model when they become intolerable. Even the problems with translations between paradigms have something of a counterpart in statistical practice; for example, the intercept coefficients in a varying-intercept, constant-slope regression model have a somewhat different meaning than do the intercepts in a varying-slope model. 
This is all fine, but we recognize: We do not want to push the analogy too far, however, since most model checking and model reformulation would by Kuhn have been regarded as puzzle-solving within a single paradigm, and his views of how people switch between paradigms are, as we just saw, rather different. We’re trying to make use of the insights that Kuhn brought to bear, without getting tied up in what Kuhn’s own position was on all this. Kuhnianism without Kuhn, one might say. Anyway, this all came up because Mark Brown pointed me to this article by John Horgan reporting that Errol Morris thinks that Kuhn was, in Horgan’s words, “a bad person and bad philosopher.” Errol Morris! He’s my hero. If he hates Kuhn, so do I. Or at least that’s my default position, until further information comes along. Actually, I do have further information about Kuhn. I can’t say I knew the guy personally, but I did take his course at MIT. Actually, I just came to the first class and dropped it. Hey . . . didn’t I blog this once? Let me check . . . yeah, here it is, from 2011—and I wrote it in response to Errol Morris’s story, the first time I heard about it! I’d forgotten this entirely. There’s one thing that makes me a little sad. Horgan writes that Morris’s book features “interviews with Noam Chomsky, Steven Weinberg and Hilary Putnam, among other big shots.” I think there must be people with more to say than these guys. This may be a problem that once an author reaches the celebrity stratosphere, he will naturally mingle with other celebrities. If I’m reading a book about philosophy of science, I’d rather see an interview with Steve Stigler, or Josh Miller, or Deborah Mayo, or Cosma Shalizi, or various working scientists with historical and philosophical interests. But it can be hard to find such people, if you’re coming from the outside.
# Simple Power Formula (Integral Calculus)

**The Area Problem.** Find the area of a given region under a curve. This problem leads to the definite integral, which can be evaluated from its definition in terms of Riemann sums; for example, $\int_0^1 x^2 \, dx$ can be computed directly as a limit of Riemann sums. Classical applications of the definite integral include areas, Cavalieri's principle and volumes of solids, volumes of solids of revolution, and the definite integral considered as a function of its integration bounds.

**Indefinite integrals.** Indefinite integrals are functions that do the opposite of what derivatives do. If we know $f'$ of a function which is differentiable in its domain, we can then calculate $f$. In differential calculus we call $f'$ the derivative of the function $f$; here, in integral calculus, we call $f$ the anti-derivative or primitive of the function $f'$. The process of finding the anti-derivative is known as anti-differentiation or integration; differentiation and integration are reverse processes of each other. To be truthful, there is a bit more to this reciprocal relationship than that, but the basic idea to grasp is that integration "un-does" differentiation, and vice versa. Integration can be used to find areas, volumes, central points and many other useful things. (The word "integral" can also be used as an adjective meaning "related to integers".)

**The General Power Formula.** As shown in Chapter 1, the formula is

$\displaystyle \int u^n \, du = \dfrac{u^{n+1}}{n+1} + C; \quad n \neq -1.$

Thus far integration has been confined to polynomial functions; the general power formula also applies to integrals involving trigonometric, logarithmic and exponential functions. Finding the integral of a polynomial involves applying the power rule, along with some other properties of integrals (the sum rule and the difference rule). Using rules for integration, students should be able to find indefinite integrals of polynomials as well as to evaluate definite integrals of polynomials over closed and bounded intervals. The Differential Calculus splits up an area into small parts to calculate the rate of change; the Integral Calculus joins small parts to calculate the area or volume. Sometimes integration is a simple problem, since it will be apparent that the function you wish to integrate is a derivative in some straightforward way.

**Useful formulas (each up to an additive constant $C$).**

Logarithms: $\ln xy = \ln x + \ln y$, $\ln x^a = a\ln x$, $\ln 1 = 0$, $e^{\ln x} = x$, $\ln e^y = y$, $a^x = e^{x\ln a}$.

Trigonometry: $\cos 0 = \sin\tfrac{\pi}{2} = 1$, $\sin 0 = \cos\tfrac{\pi}{2} = 0$, $\cos^2\theta + \sin^2\theta = 1$, $\cos(-\theta) = \cos\theta$, $\sin(-\theta) = -\sin\theta$, $\cos(A+B) = \cos A\cos B - \sin A\sin B$, $\cos 2\theta = \cos^2\theta - \sin^2\theta$.

Integrals of trigonometric functions: $\int \sin x \, dx = -\cos x$, $\int \cos x \, dx = \sin x$, $\int \sin^2 x \, dx = \tfrac{x}{2} - \tfrac{\sin 2x}{4}$, $\int \cos^2 x \, dx = \tfrac{x}{2} + \tfrac{\sin 2x}{4}$, $\int \sin^3 x \, dx = \tfrac{\cos^3 x}{3} - \cos x$, $\int \cos^3 x \, dx = \sin x - \tfrac{\sin^3 x}{3}$, $\int \tfrac{dx}{\sin x} = \ln\left|\tan\tfrac{x}{2}\right|$.

If the power of the sine is odd and positive, the goal is the substitution $u = \cos x$: save one sine factor and convert the remaining factors to cosines using $\sin^2 x = 1 - \cos^2 x$.

Integration by parts is the integration version of the product rule for differentiation: if $u$ and $v$ are two functions of $x$, then the integral of their product is given by $\int u \, dv = uv - \int v \, du$.

**Other notes from the source material.** Techniques of integration: over the next few sections we examine some techniques that are frequently successful when seeking antiderivatives of functions. Let $f(x, y, z)$ be a continuous function in a simply connected, closed bounded volume $V$. If $p > 0$, then the graph starts at the origin and continues to rise to infinity. Power series are used in calculators and computers. An online calculus solver can solve a wide range of math problems and can show the steps involved, including the power rule, sum rule and difference rule; a derivative (differentiation) calculator is included at the end of the lesson. Related chapters from "Calculus for Beginners and Artists": Chapter 0: Why Study Calculus?; Chapter 1: Numbers; Chapter 2: Using a Spreadsheet; Chapter 3: Linear Functions; Chapter 4: Quadratics and Derivatives of Functions; Chapter 5: Rational Functions and the Calculation of Derivatives; Chapter 6: Exponential Functions, Substitution and the Chain Rule. Product and Quotient Rule: in this section we will look at differentiating products and quotients of functions. Other references: Calculus I Formulas, MAC 2311; Applications of Integration (Professor: Dr. Mohammad Shakil, Co-Author: Jeongmin Correa, Mathematics Department).
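As a quick illustration of the ideas above, here is a minimal Python sketch (not part of the original lesson; it assumes numpy and sympy are available) that checks the general power formula symbolically and evaluates $\int_0^1 x^2\,dx$ by a Riemann sum:

```python
# Minimal sketch: verifying the power formula symbolically and numerically.
import numpy as np
import sympy as sp

# Symbolic check of the general power formula for one instance (n = 3):
# integral of u**3 du should be u**4/4 (up to the constant of integration).
u = sp.symbols('u')
print(sp.integrate(u**3, u))          # -> u**4/4

# Numeric check of the definite integral of x**2 from 0 to 1 via a left Riemann sum.
x = np.linspace(0.0, 1.0, 100_001)    # fine grid on [0, 1]
dx = x[1] - x[0]
riemann = np.sum(x[:-1]**2) * dx      # left Riemann sum
print(riemann, 1/3)                   # approx 0.3333 vs the exact value 1/3
```

Refining the grid makes the Riemann sum converge to the exact value 1/3 given by the power formula.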
# Macroscopic fluorescence-lifetime imaging of NADH and protoporphyrin IX improves the detection and grading of 5-aminolevulinic acid-stained brain tumors

## Abstract

Maximal safe tumor resection remains the key factor for improved prognosis in brain tumor patients. Despite 5-aminolevulinic acid-based fluorescence guidance, however, the neurosurgeon is not able to visualize most low-grade gliomas (LGG) or the infiltration zone of high-grade gliomas (HGG). To address the need for a more sensitive visualization, we investigated the potential of macroscopic, wide-field fluorescence lifetime imaging of nicotinamide adenine dinucleotide (NADH) and protoporphyrin IX (PPIX) in selected human brain tumors. For future intraoperative use, the imaging system offered a square field of view of 11 mm at 250 mm free working distance. We performed imaging of tumor tissue ex vivo, including LGG and HGG as well as brain metastases obtained from 21 patients undergoing fluorescence-guided surgery. Half of all samples showed visible fluorescence during surgery, which was associated with a significant increase in PPIX fluorescence lifetime. While the PPIX lifetime was significantly different between specific tumor tissue types, the NADH lifetimes did not differ significantly among them. However, predominantly necrotic areas exhibited significantly lower NADH lifetimes compared to compact tumor in HGG. Our pilot study indicates that combined fluorescence lifetime imaging of NADH/PPIX represents a sensitive tool to visualize brain tumor tissue not detectable with conventional 5-ALA fluorescence.

## Introduction

Although brain tumors comprise less than 2% of cancer prevalence, the affected patients suffer from severe symptoms and have a poor prognosis1. Despite increased efforts for improved post-operative therapy2, maximal safe tumor resection remains the most significant predictor of survival in the vast majority of brain tumor patients3. Hence, there is an urgent need for improved intraoperative tumor visualization using advanced methods such as fluorescent dyes4 and real-time imaging systems5 to achieve the surgical goal of maximal safe tumor resection. A rather well-known and well-tolerated fluorescent dye for neurosurgery is 5-aminolevulinic acid (5-ALA)6. This dye results in accumulation of fluorescing protoporphyrin IX7 (PPIX), an endogenous fluorophore. PPIX-based fluorescence guidance eliminates the shortcoming of brain rearrangement during tumor resection (brain shift) that arises when relying only on magnetic resonance imaging (MRI)-based neuronavigation8,9. Originally, this 5-ALA fluorescence approach was solely used for glioblastomas10, but studies over the last decade have shown its additional potential for detecting anaplastic foci in suspected low-grade gliomas (LGG)11 as well as brain metastases (BM)12,13. Furthermore, visible 5-ALA fluorescence can be found in nearly every fifth low-grade glioma, where it is associated with a worse prognosis compared to patients whose tumor did not exhibit any fluorescence14.
However, this fluorescence method cannot visualize all brain tumor tissue, such as parts of the infiltration zone of high-grade gliomas (HGG), most pure LGG and a large subgroup of BM15. To overcome the limitation of low PPIX visibility, spectroscopic15 and hyperspectral imaging16 for PPIX quantification have been exploited with promising results17. With these advanced methods it is possible to detect increased PPIX accumulations in LGG not visible to the naked eye18. However, these intensity-based approaches rely on the knowledge of optical tissue properties, like absorption and scattering, making the quantification more complex19. Additionally, the fluorescence spectra of 5-ALA-labeled tumors seem to shift towards shorter wavelengths in low-grade gliomas, for reasons which are still under investigation20. As a potential alternative, whilst working on glioma tissue, Erkkilä et al.21 quantified the fluorescence lifetime of PPIX instead of its intensity, using a long working distance imaging system with similar specifications to current surgical microscopes. Although its detection efficiency is limited compared to state-of-the-art photon counting techniques22, this approach was able to show an increased sensitivity for the detection of brain tumors. Fluorescence lifetime imaging (FLIM)22 measures the time delay between the excitation of a fluorophore and its fluorescent response. In brief, the fluorescent molecule absorbs a photon, which in turn leads to an excitation from the ground state into a higher energetic state. After excitation, the electrons in the molecule decay back to the ground state via multiple intermediate energy levels. While most of these transitions occur within picoseconds and only generate heat, some energy levels are more stable, allowing sufficient time in the nanosecond range for fluorescence to be emitted. Due to this cascade over multiple energy levels, the fluorescence is always red-shifted in relation to the excitation light, which is known as the Stokes shift. Depending on surrounding molecules, different environmental conditions, or internal conversion and quenching, the electrons decay through alternative paths that alter the time delay until fluorescence is observed or even return to the ground state without fluorescence emission. Hence, the fluorescence lifetime is extremely sensitive to its molecular environment while remaining inherently independent of the fluorophore concentration23. This holds true for most applications where the concentration of the pure dye is relatively low compared to the solvent or environment. At higher concentrations, however, one needs to consider additional effects like self-quenching, which will alter the fluorescence lifetime depending on the concentration24. Furthermore, the lifetime might become concentration-dependent if multiple fluorophores are present or the fluorophore exists in two different aggregates with different fluorescence lifetimes. The average lifetime measured is then shifted towards the lifetime of the fluorophore or aggregate with the higher overall concentration23. FLIM is extensively used to investigate endogenous fluorophores like nicotinamide adenine dinucleotide (NADH)25, which shows significantly elevated lifetime values in brain tumors in comparison to non-tumorous brain parenchyma26,27,28. While oxidized NAD+ does not show any luminescence, free NADH emits autofluorescence with a short lifetime (around 400 ps).
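To make the concentration-weighting argument concrete, here is a small illustrative Python sketch (not from the paper; the component lifetimes and the simple amplitude-weighted mixing rule are textbook assumptions, roughly 0.4 ns for free NADH and 2.5 ns for a protein-bound fraction) showing how the measured average lifetime of a two-component decay shifts toward the more abundant component:

```python
import numpy as np

# Illustrative two-component decay model (assumed values, not data from this study):
# free NADH ~0.4 ns, protein-bound NADH ~2.5 ns.
TAU_FREE, TAU_BOUND = 0.4, 2.5   # lifetimes in ns

def mean_lifetime(frac_bound: float) -> float:
    """Amplitude-weighted average lifetime of a bi-exponential decay."""
    return (1.0 - frac_bound) * TAU_FREE + frac_bound * TAU_BOUND

for fb in (0.1, 0.5, 0.9):
    print(f"bound fraction {fb:.1f} -> average lifetime {mean_lifetime(fb):.2f} ns")
# Shifting from mostly free to mostly bound NADH moves the average lifetime
# from ~0.6 ns toward ~2.3 ns, i.e. toward the more abundant component.
```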
NADH is fundamental for cell metabolism and is ubiquitously involved in the glycolytic pathway as well as the citric acid (TCA) cycle and the oxidative phosphorylation in the mitochondria. Upon binding to proteins, the fluorescence lifetime of NADH increases to 1–4 ns, depending on the specific protein composition. This observation is used to monitor the reduction of NAD+ during glycolysis, as this increases the amount of free NADH, leading to a faster fluorescence decay. Oxidative metabolism, on the other hand, is characterized by increased bound NADH, prolonging the average fluorescence lifetime. Understanding the energy metabolism in tumors29 is becoming increasingly important for developing therapies targeted at specific glycolytic or mitochondrial pathways30. Tumor cells differ from normal cells through their preferential use of anaerobic glycolysis under aerobic conditions, which is known as the Warburg effect31. However, several studies also observed increased oxidative phosphorylation in HGG as well as strong intratumoral heterogeneity32,33,34. Since the TCA cycle feeds heme synthesis by generating succinyl-CoA, one of the precursors that is metabolized together with glycine to 5-ALA in the mitochondria, one might hypothesize that PPIX production also depends on enhanced oxidative metabolism, in addition to the decreased ferrochelatase35 found in specific brain tumors. To address the question of PPIX fluorescence visibility and its connection to the energy metabolism of brain tumors, we performed both NADH and PPIX lifetime imaging ex vivo on 42 human tissue samples retrieved from 21 patients with LGG, HGG and BM obtained during resection after prior 5-ALA administration. In contrast to a previous feasibility study21 on six patients, which explored the technical feasibility of a wide-field fluorescence lifetime imaging system for intraoperative PPIX lifetime mapping, in this pilot study we extended our method to include NADH imaging and tested it on a larger study cohort (21 patients) including a broader spectrum of brain tumors. We then correlated the observed NADH and PPIX lifetimes with histology to analyze whether both fluorophores can be used together to allow improved tumor and tissue type classification based on lifetime imaging alone. Finally, we used the phasor approach36 to study the decay dynamics of NADH and PPIX in brain tumor samples from 5-ALA guided surgery.

## Results

PPIX fluorescence visibility is highly dependent on the tumor type15,17. To investigate the fluorescence lifetime of NADH and PPIX for various pathologies, we analyzed a total of 42 samples obtained from 21 patients with 3 LGG (WHO II), 14 HGG (WHO III/IV) as well as 4 BM (see Table 1). Of these 42 samples, 40 specimens contained tumor tissue (n = 35) or reactive tissue (n = 5) according to histopathological analysis. In detail, compact tumor was present in 16 samples, infiltrative tumor tissue in 12 samples and tumor tissue with large necrotic areas in 7 samples. Furthermore, 5 non-tumor samples obtained during resection of BM were classified as reactive brain parenchyma due to the enhanced presence of immune cells. In contrast, two non-fluorescing samples that were collected during the approach to deeper-seated tumors showed no distinct tumor cells and no signs of reactive changes (non-pathological brain tissue).
### Visible PPIX fluorescence is correlated with increased PPIX fluorescence lifetime

To explore the ability of FLIM to overcome the limitations of visual PPIX fluorescence assessment, we imaged all 42 tissue samples, which were classified by the surgeon as “fluorescing” (ALA+) in 21 cases and “non-fluorescing” (ALA−) in the other 21 cases (see Table 1). In the following, “non-fluorescing” tissue only describes that the surgeon could not identify any fluorescence by the naked eye during surgery. The samples were inspected using the fluorescence lifetime imaging system, and both fluorescence intensity as well as fluorescence lifetime maps for NADH and PPIX were acquired sequentially. Finally, the samples were sent in for histological evaluation and each sample received a unique label describing the type of tumor tissue found (see Table 1). In the following, fluorescence lifetime values are reported as the subgroup’s median followed by the 25% and 75% quantiles in parentheses. Visible PPIX fluorescence during surgery (ALA+) was correlated with an increased PPIX fluorescence lifetime of 11.0 ns (8.2; 12.9) compared to non-fluorescent tissue (ALA−) with 3.0 ns (2.3; 5.0) lifetime (p < 0.001). Although the NADH lifetime was also elevated in samples with visible PPIX fluorescence (ALA+), at 2.0 ns (1.7; 2.5), the difference to samples where the surgeon reported no visual PPIX fluorescence (ALA−), at 1.7 ns (1.4; 2.1), was not statistically significant (p = 0.14). As shown in Fig. 1, the lifetime maps offered an enhanced contrast in low-intensity regions, which is most apparent for the infiltration zone with increased cell density. Very high NADH lifetime values (see Fig. 1, region of interest marked with *) were identified as vessels in histology, while very short lifetimes below 1 ns were only observed in tissue samples with extensive necrosis (see Fig. 1, region of interest marked with **). As can be seen from Fig. 2, PPIX lifetime was also found to be reduced in necrotic areas compared to the surrounding tumor tissue. On the other hand, reactive brain parenchyma, which we only found beyond the tumor margins in brain metastases, showed long NADH lifetimes well above 2 ns. Although these samples exhibited visible PPIX fluorescence (ALA+) in over 80% of all cases, the histological evaluation did not find any tumor cell infiltration, only an increased density of immune cells. The non-pathological samples yielded median fluorescence lifetimes of 1.2 ns (1.2; 1.3) and 1.9 ns (1.6; 2.1) for NADH and PPIX, respectively (Fig. 1 and Table 2). In summary, we found that fluorescence lifetimes of NADH and PPIX were highly heterogeneous between different tissue types.

### PPIX lifetime imaging distinguishes between tumor tissue types

In the next analysis, we determined whether different tissue types of brain tumors could be distinguished from each other using PPIX lifetime imaging. As can be seen in Fig. 3a and Table 2, significant differences in PPIX lifetime were found between LGG and HGG (p = 0.006) as well as between LGG and BM (p < 0.001). In general, the PPIX fluorescence lifetime shown in Figs. 2 and 3b was significantly higher in compact tumor tissue compared to areas with extensive necrosis (p = 0.021). Infiltration zones were characterized by a large variance, with samples ranging from 2.0 ns up to 13.8 ns, and overall showed significantly shorter lifetimes than compact tumor tissue (p = 0.006).
Similarly, reactive parenchyma was found to be significantly different from infiltration zones (p = 0.037). Note that we were also able to image non-tumorous samples with PPIX lifetimes below 2 ns, which indicates that our system was sensitive enough to detect the pure autofluorescence of the tissue. Thus, PPIX fluorescence lifetime was elevated in tumor tissue and we were eager to investigate whether we could obtain similar findings in the NADH fluorescence decay.

### Altered NADH lifetimes are characteristic of reactive brain parenchyma and necrotic tumor tissue

Furthermore, we analyzed whether we could also find differences in NADH fluorescence lifetimes in our samples. According to our data (see Fig. 2, Fig. 3c and Table 2), HGG showed similar NADH fluorescence lifetimes compared to BM (p = 0.21). In contrast, LGG, with a median lifetime of 1.7 ns (1.4; 1.8), were found to differ significantly from BM (p = 0.027). Interestingly, we could not find any statistically significant difference between the different tumor tissue types (see Fig. 3d) when all tumor grades/entities were included. However, HGG samples with necrotic areas were characterized by a wide range of NADH fluorescence lifetimes, including values below 1 ns. These very short NADH fluorescence lifetimes indicate a higher ratio of free NADH relative to bound NADH, which is typically associated with glycolytic energy uptake in those areas25. These regions were often surrounded by compact tumor tissue with significantly elevated NADH and PPIX fluorescence lifetimes (p = 0.01), suggesting an increased oxidative metabolism. Additionally, we observed that the NADH fluorescence lifetime of reactive parenchyma, at 2.5 ns (1.8; 3.0), was elevated compared to compact tumor tissue in BM, at 1.8 ns (1.7; 2.1), which might be an indication of an altered, oxidative energy metabolism. However, this observation was not found to be statistically significant (p = 0.27). In general, we could demonstrate that the NADH fluorescence lifetime was significantly altered between BM and LGG as well as in necrotic areas, which led us to the idea that the combined NADH/PPIX fluorescence lifetimes could be used for improved classification of the samples into the corresponding grades and tissue types.

### Combined NADH/PPIX lifetime imaging allows classification of tissue grade and types

The current practice of sending suspicious tumor tissue for intraoperative neuropathological evaluation during surgery is time-consuming and is not always available. Therefore, we used RUS boosted decision trees to classify the tumor entity/grade as well as the tissue type solely based on the NADH and PPIX fluorescence lifetime maps. To evaluate the benefit of using both NADH and PPIX fluorescence lifetime values in contrast to only relying on the pure PPIX fluorescence lifetime maps, we first performed the classification on the PPIX data alone and finally extended it to the combined NADH-PPIX dataset. All classifiers were trained on the partitioned data (shown in Fig. 2) using a fivefold cross-validation and achieved a classification accuracy of 61.1% for the tumor entity/grade and 58.3% for the tissue types when only relying on the PPIX lifetime data (see Fig. 4a and Fig. 5a). The classification became more accurate when adding the NADH lifetime values, improving the overall accuracy to 70.8% and 67.9% for the tumor entity and tissue type classifier, respectively (see Fig. 4b and Fig. 5b).
Using this improved classifier, LGG and BM were correctly classified in 93% and 82% of all cases, respectively. HGG were mostly misclassified as brain metastases (28%), which could be expected as they share similar NADH and PPIX lifetime values. Furthermore, LGG were very rarely classified as HGG (1%). While the PPIX-only classifier did not perform substantially worse for detecting LGG tissue, the combined NADH/PPIX classifier mainly contributed to an improved discrimination between LGG and BM and thereby to fewer false negatives. This indicated that our classifier could be used for effective grading of gliomas. In the case of the tissue type classifier, necrotic areas were correctly labeled in 67% of all observations, with 19% misclassified as infiltration zones. Although the true-positive rate for compact tumor tissue was only 66%, it had only 17% cross-talk with the infiltration zone class, which made it a sensitive method to distinguish between these different tissues. As the NADH lifetimes of necrotic and reactive tissue areas were fairly different from one another, the main improvement in relation to the PPIX-only classifier was found in these two classes. Hence, it would seem that combined NADH/PPIX fluorescence lifetime imaging enabled the distinction of tissue types as well as glioma grading with improved accuracy compared to relying only on the PPIX fluorescence lifetime values.

### Phasor analysis reveals bi-exponential decay of PPIX in tissue

While NADH fluorescence lifetime is known to be dependent on the ratio of free and bound NADH and exhibits a mostly bi-exponential decay25, the fluorescence lifetime of PPIX is not well understood37. For a more detailed investigation, we performed a qualitative phasor analysis based on the raw data of our fluorescence lifetime measurements for both NADH and PPIX. Note that the modulation frequency of 10 MHz was optimized for the long decay of PPIX. Therefore, we expected an increased variance in the NADH phasor plots (see Fig. 6). PPIX fluorescence showed a linear relationship between a fast lifetime component, with values similar to the non-pathological samples at 1.9 ns, and a longer decaying component with a lifetime of 16 ns, which is usually found for pure PPIX in solvent37,38. This observation revealed that the measured fluorescence lifetime of PPIX in tissue is, in fact, a bi-exponential decay and would, thus, be dependent on the concentration ratio of the longer- and faster-decaying components. Taking into account the exposure times shown in the fluorescence intensity maps in Fig. 1, there is also evidence that an increased PPIX lifetime is associated with increased PPIX fluorescence intensity. On the other hand, the phasor plot of NADH resembled an elongated ellipse with one of the focal points lying on the universal circle. This point indicated the presence of a short lifetime component below 1 ns, which would likely be free NADH (400 ps25). The other focal point lay within the circle and had a larger variance, as expected. These longer fluorescence lifetime components are most likely due to various proteins in the brain tissue binding to NADH and altering its fluorescence lifetime. Hence, our analysis revealed that both NADH and PPIX fluorescence decayed with short and long lifetime components. While the bi-exponential decay of PPIX was fairly evident, the decay of NADH we measured might also include more than two contributing components.
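As a rough illustration of how such phasor coordinates arise, here is a small Python sketch (an illustrative toy under stated assumptions, not the paper's analysis pipeline): for an ideal mono-exponential decay measured at modulation frequency f, the phasor coordinates follow g = m·cos(φ) and s = m·sin(φ) with m = 1/√(1+(2πfτ)²) and φ = arctan(2πfτ), so single lifetimes fall on the universal semicircle while a two-component mixture lies on the chord connecting its components:

```python
import numpy as np

F_MOD = 10e6  # modulation frequency in Hz (10 MHz, as used for PPIX here)

def phasor_mono(tau_ns: float, f: float = F_MOD) -> tuple[float, float]:
    """Phasor (g, s) of an ideal mono-exponential decay with lifetime tau (ns)."""
    w_tau = 2 * np.pi * f * tau_ns * 1e-9
    g = 1.0 / (1.0 + w_tau**2)       # = m * cos(phi)
    s = w_tau / (1.0 + w_tau**2)     # = m * sin(phi)
    return g, s

def phasor_mixture(taus_ns, fractions, f: float = F_MOD):
    """Phasor of a multi-exponential decay: intensity-weighted sum of component phasors."""
    pts = np.array([phasor_mono(t, f) for t in taus_ns])
    w = np.asarray(fractions, dtype=float) / np.sum(fractions)
    return tuple(w @ pts)

# Pure components: ~1.9 ns (autofluorescence-like) and ~16 ns (PPIX-like).
print(phasor_mono(1.9), phasor_mono(16.0))
# A 50/50 intensity mixture lies on the straight line connecting the two points.
print(phasor_mixture([1.9, 16.0], [0.5, 0.5]))
```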
## Discussion

Although PPIX fluorescence-guided neurosurgery has been widely adopted within the last two decades9, the method of evaluating the fluorescence by the naked eye has remained unchanged. This approach, however, limits the detection of small PPIX accumulations in LGG and infiltration zones of HGG. Here we show for the first time, using a novel time-of-flight-camera-based time-resolved fluorescence imaging system, that specific brain tumors, including LGG, with no visible 5-ALA fluorescence during resection (ALA−) exhibited elevated PPIX fluorescence lifetimes compared to values of physiological brain parenchyma measured here and reported in the literature39. Furthermore, NADH fluorescence lifetime was found to be significantly altered in necrotic tumor tissue compared to compact tumor tissue. By combining both NADH and PPIX fluorescence lifetime information, we could show that different tumor entities as well as tumor tissue types have distinct NADH/PPIX fluorescence lifetime features, enabling an improved classification. Finally, we could demonstrate that the observed PPIX fluorescence lifetime is, in fact, a weighted sum of a long lifetime component at around 16 ns, most likely due to pure PPIX, and a previously unknown shorter lifetime component below 2 ns. To our knowledge, we are the first to obtain parallel NADH and PPIX fluorescence lifetime measurements using an imaging system offering a macroscopic square field of view (11 mm) combined with a suitable working distance (250 mm) and a real-time preview within seconds for future simultaneous resection. Therefore, our results strongly indicate that FLIM of NADH and PPIX is technically feasible and could be used for intraoperative guidance as well as tumor grading. Our observations on NADH fluorescence lifetimes in brain tumors were in good agreement with the measurements of Sun et al. on glioblastomas (lifetimes ranging from 1.2 ns in cortex up to 2.6 ns in tumor)26. Similarly, our PPIX fluorescence lifetimes ranged from 1.9 ns for non-pathological tissue up to almost 16 ns in high-grade tumors, which reflects the large variability of PPIX fluorescence lifetimes reported in the literature21,38. As both BM and HGG are rapidly proliferating tumors, it is likely that the increased NADH lifetimes compared to LGG are indicative of their mutated and upregulated mitochondrial metabolism. We also observed NADH lifetime values in HGG and BM below 1 ns, representative of glycolytic energy uptake. It is, however, known that glioblastomas switch between glycolytic and oxidative metabolism depending on the availability of oxygen33,40. As we did not observe any NADH lifetimes below 1 ns in LGG, these less aggressive tumors do not seem to use anaerobic/glycolytic pathways. Therefore, we hypothesize that as LGG form, the oxidative metabolism increases first, which subsequently generates more succinyl-CoA and then 5-ALA in the mitochondria. This leads to an upregulation of heme synthesis, generating more PPIX. Our hypothesis would imply that the PPIX in LGG is mainly produced by endogenous ALA, in contrast to high-grade tumors, where the administered ALA can penetrate through the disrupted blood-brain barrier and short-circuit the TCA cycle. This would confirm the findings of Yang et al.35 that isocitrate dehydrogenase (IDH) mutation, which is mainly found in LGG, leads to an increased accumulation of TCA cycle metabolites and enhanced production of mitochondrial NADH.
In contrast to HGG with their mutated ferrochelatase, LGG are still able to convert PPIX into heme, so the elevated endogenous ALA would be quickly metabolized and would therefore only generate a minor elevation of PPIX levels in the tumor cells. We suspect this could be a reason why LGG do not exhibit visible PPIX fluorescence (ALA+) and have much lower PPIX fluorescence lifetimes compared to HGG. In the future, we thus want to correlate the NADH and PPIX fluorescence lifetime with the IDH mutation status to further understand this ambiguity and investigate whether this genetic factor can be predicted. We also found high NADH and PPIX lifetime values in reactive brain parenchyma of BM that did not show tumor cell infiltration. The increase in PPIX lifetime confirms the reports of Kamp et al.12 and Utsuki et al.13 on visible PPIX fluorescence (ALA+) beyond the tumor margins in these secondary tumors. Furthermore, the high NADH lifetime indicates an enhanced oxidative energy uptake in these areas. Although the NADH lifetimes in compact tumor tissue of BM were reduced compared to reactive parenchyma, they were still rather long, which might be linked to the highly proliferative nature of these tumors. Given that PPIX fluorescence visibility during surgery was linked to an enhanced PPIX fluorescence lifetime, combined with the fact that the lifetime showed a bi-exponential decay, it is likely that our measured lifetimes depend on the concentration of PPIX in the tissue. While we hypothesized in our previous publication21 that quenching would be the main driver for the reduction of PPIX fluorescence lifetime in tissue, our observations suggest that the ratio between the native tissue autofluorescence and the PPIX fluorescence, and thus the PPIX concentration in tissue itself, is the key to explaining the high variability in the measured fluorescence lifetimes. However, as discussed by Rueck et al.41, depending on the different enzymes which control 5-ALA metabolism, shorter fluorescence lifetimes of other porphyrins such as uroporphyrin and coproporphyrin could also explain this observation. This also includes photoproducts of PPIX, which might be produced during longer illumination in surgery and have fluorescence lifetimes in the range of 2–4 ns37. Furthermore, the detection bandwidth of 580–730 nm is rather large and could also include additional endogenous fluorophores besides porphyrins, for example lipofuscin. The fluorescence lifetime of lipofuscin has been shown to consist of a short component at 390 ps and a longer component at 2.2 ns42 and would thus be in the same range of values that we observed for non-pathological tissue. Spectroscopic time-resolved measurements would be the future tool of choice to analyze the exact underlying mechanism of our measured PPIX fluorescence lifetimes. The findings of this pilot study have to be seen in light of four main limitations. (1) First, our study cohort is highly heterogeneous, with a high number of HGG samples compared to LGG and BM. This was expected, as the primary indication for 5-ALA fluorescence-guided surgery is HGG, and therefore resections of LGG and BM under 5-ALA fluorescence guidance are performed less often. While our study population reflects this behavior, it overall leads to highly imbalanced classes of data, which is the main reason for the poor accuracy of our classifiers with regard to the determination of the tumor type and grade.
We already partly tackled this problem by employing specialized classification algorithms which handle class imbalance. Otherwise, most classifiers would completely misclassify the smallest group, as the largest group would always be correctly detected, and therefore the accuracy would be maximized. The lifetime heterogeneity can also be seen within individual samples, especially for tumor tissue that was later found to be of infiltrative nature. Although these specimens showed also signs of non-pathological and compact tumor tissue, we labeled these samples according to the main contributing tissue type. Therefore, the median fluorescence lifetime values might deviate from more homogeneous samples and thus lead to a less significant difference between the tissue type classes. Future studies will need to address these limitations by only relying on fairly homogeneous samples or creating co-registered histopathological slices for each individual sample. However, this will not resolve the fact that the primary indication for 5-ALA fluorescence guided surgery currently remains HGG. For this point, larger studies are needed to collect sufficient number of samples from other tumor types which might take several years. In contrast, this pilot study was intended to evaluate the feasibility of this method rather than providing clinical evidence for the efficacy of NADH/PPIX fluorescence lifetime imaging. (2) In addition, we only included two samples with non-pathological brain parenchyma and thus this study lacks a reliable control group. Although the measured lifetime values of these two non-tumorous samples were in agreement with literature stating an average NADH fluorescence lifetime of 1.3 ns26,43 as well as 1.8 ns for PPIX in physiological brain tissue39, we did not include these specimens in our statistical analysis due to the small sample size. Therefore, future series are needed with a larger number of tissue samples in order to generate a statistically valid control group. Note that this step is non-trivial as non-pathological tissue is mostly obtained from the access route to deeper seated tumors and removal of tissue for research purpose is minimized due to ethical reasons. Therefore, collecting a sufficiently large control group might take several years, which was not the intent of this first pilot study. Nonetheless, we are targeting these issues in an extended, on-going clinical study. (3) Third, our fluorescence lifetime imaging system was primarily designed to optimally resolve the long fluorescence lifetime of PPIX and the use of our 405 nm laser is less optimal for NADH imaging. Conventionally NADH is excited at 375 nm32,44 which maximizes the fluorescence yield and prevents the excitation of other endogenous fluorophores like flavin adenine dinucleotide (FAD). However, several studies have shown that NADH remains the main fluorescence contributor in tissue when using one-photon excitation at 405 nm45,46 or two-photon excitation at 800 nm47,48 despite additional excitation of FAD. Although our results revealed reasonable lifetime values for NADH and, thus, confirmed that we mainly excited NADH, switching to a higher modulation frequency (> 40 MHz) as well as to shorter excitation wavelengths (< 400 nm) would have significantly enhanced the lifetime precision for NADH. 
However, our study specifically employed a blue-violet laser as this wavelength range is equivalent to the blue-light illumination found in commercial neurosurgical microscopes with a 5-ALA fluorescence option, thus simplifying the regulatory constraints towards an investigational device for future in vivo use. In addition, the time-of-flight camera in our system requires an additional calibration step using a reference target when changing the modulation frequency. Therefore, we would have needed to reposition the samples between the PPIX and NADH fluorescence lifetime acquisitions, which would have hindered their co-registration. (4) Finally, our field of view is certainly macroscopic, as found in similar imaging systems44; however, it is still rather small considering its potential application of imaging the whole surgical cavity. To increase the field of view even further, one could either develop a camera with a larger sensor or reduce the focal length of the camera lens. In the latter case, the light throughput should optimally be identical to the current setup. For example, a 50 mm f/1.0 lens would be equivalent in throughput to the current 100 mm f/2 lens (both have a 50 mm entrance pupil diameter) while doubling the field of view at the same working distance. To conclude, we presented a first pilot study on combined macroscopic fluorescence lifetime imaging of PPIX and NADH, enabling enhanced brain tumor visualization as well as reliable assessment of tumor type and grade. PPIX fluorescence lifetime imaging enabled a more robust visualization of infiltration zones and low-grade gliomas in general, whilst the time-resolved imaging of NADH offered additional insight to distinguish necrotic tissue areas. In comparison to pure intensity imaging, the fluorescence lifetime offered better contrast in highly heterogeneous tissue areas like infiltration zones and could thus become a valuable addition to conventional 5-ALA fluorescence guided surgery. While further in vivo studies are needed to confirm the clinical benefit of FLIM to delineate tumor borders, our results pave the way towards future intraoperative fluorescence lifetime imaging of PPIX and NADH for improved resection of brain tumors.

## Materials and methods

### Study design

The investigation of the fluorescence lifetime of NADH and PPIX was performed on freshly resected tumor samples ex vivo according to national regulations as approved by the ethics committee of the Medical University of Vienna under the approval number EK419/2008—Amendment 04/2018. Adult patients with either diffuse gliomas (WHO grades II–IV) or BM undergoing tumor resection after preoperative 5-ALA administration (20 mg/kg bodyweight; approximately 3 h before anesthesia) were included in the study after obtaining informed consent. During surgery, a state-of-the-art neurosurgical microscope with violet-blue illumination was used for excitation of PPIX fluorescence, and tissue samples were safely collected in the suspected tumor region as well as during the approach to deeper located tumors. The fluorescence status of each collected tissue sample was subjectively classified by the neurosurgeon as visible (ALA+) or no visible fluorescence (ALA−) based on observation by the naked eye. The collected tissue samples were stored in artificial cerebrospinal fluid to maintain cell viability49.
Although NADH fluorescence lifetime has been shown to be fairly constant within the first 8 h after resection when stored in nutrient solution50, we immediately transferred the samples to the microscopy lab for imaging within an hour after resection to avoid any degradation of the cell metabolism. After imaging, the samples were directly transferred to the neuropathology department for histological evaluation. The pathologists established the tumor diagnosis for each patient according to the current World Health Organization (WHO) criteria51. The samples were further classified into (i) compact tumor tissue, (ii) necrosis, (iii) infiltration zones, i.e., brain parenchyma with diffusely infiltrating tumor cells, (iv) reactive parenchyma, i.e., brain parenchyma with reactive changes such as astrogliosis or significant macrophage infiltration but without clear-cut tumor cell infiltration, and (v) non-pathological parenchyma, i.e., brain parenchyma without significant reactive changes (other than mild edema) and without tumor cell infiltration.

### Fluorescence lifetime imaging

Imaging was performed on a custom-built long-working-distance (250 mm) fluorescence lifetime microscope using a modulated 405 nm laser and a dedicated time-of-flight camera (pco.FLIM, pco AG, Germany), as previously reported by Erkkilä et al.21. The camera was controlled by proprietary microscopy software (NIS Elements, Nikon Instruments Europe BV, Netherlands) with an integrated pco.FLIM plugin offered by the camera manufacturer. The large field of view of 11 × 11 mm² as well as the extended working distance were specifically chosen to meet the specifications of commercial neurosurgical microscopes. In brief, the modulated continuous-wave laser excites fluorophores like NADH and PPIX, which respond by emitting fluorescence at the same frequency but with a lifetime-dependent phase delay. The camera detects this phase delay by storing the generated photo-electrons in two separate charge bins depending on whether the laser is on or off over the full exposure time. The ratio between both bins can then be used to reconstruct the phase delay as well as the amplitude ratio (known as modulation depth) between the excitation laser and the fluorescence signal. The camera acquires a total of 16 frames at different time delays of the excitation laser to reconstruct the fluorescence phase delay. A reference target with a known lifetime is then used to reference the system and convert the phase into lifetime maps. Note that we only computed the fluorescence lifetime from the phase delay and did not include the modulation depth. The fluorescence was filtered using a (466 ± 20) nm and (665 ± 75) nm bandpass filter for NADH and PPIX, respectively. Both PPIX and NADH lifetime maps were acquired sequentially at the same position by switching the emission filter in front of the camera. The modulation frequency of the laser was fixed to f = 10 MHz, which is optimized for a lifetime of $\tau_{opt} = 1/(2\pi f) = 15.9$ ns52, as PPIX exhibits a native lifetime of around 16 ns37,38. An acquisition including processing took a maximum of 7 s for the longest exposure time setting of 400 ms at an incident laser power of 50 mW/cm². The exposure time was set individually for each sample and fluorescence channel (NADH/PPIX) and was optimized to integrate as long as possible while avoiding saturation of single pixels.
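As a rough numerical sketch of this frequency-domain readout (an illustration only, not the vendor's processing chain; the phase values below are made up), the measured phase delay φ at modulation frequency f maps to an apparent single-exponential lifetime via τ_φ = tan(φ)/(2πf), and the optimum lifetime for a given frequency follows τ_opt = 1/(2πf):

```python
import numpy as np

F_MOD = 10e6  # laser modulation frequency in Hz (10 MHz)

def tau_opt(f: float) -> float:
    """Lifetime (s) for which a given modulation frequency is optimal."""
    return 1.0 / (2.0 * np.pi * f)

def phase_lifetime(phi_rad: float, f: float = F_MOD) -> float:
    """Apparent single-exponential lifetime (s) from a measured phase delay."""
    return np.tan(phi_rad) / (2.0 * np.pi * f)

print(tau_opt(F_MOD) * 1e9)                  # ~15.9 ns at 10 MHz
# Hypothetical phase delays (radians) for a short and a long decay:
for phi in (0.12, 0.80):
    print(phase_lifetime(phi) * 1e9, "ns")   # ~1.9 ns and ~16.4 ns, respectively
```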
To validate the system, a cuvette with PPIX dissolved in DMSO (1 µg/ml) was measured and a fluorescence lifetime of (16.4 ± 1.0) ns was obtained, which was in good agreement with the literature21,38.

### Post processing for statistical analysis

As most samples were smaller than the full field of view, we manually segmented the areas containing tissue in both NADH and PPIX lifetime maps with an image processing program (ImageJ). Values outside the segmented area did not contribute to the further analysis. These segmented maps were then imported into MATLAB, where a mask was applied to remove outliers. This consisted of limiting the lifetime to meaningful values (NADH < 5 ns; PPIX < 17 ns) as well as discarding pixels with intensity values below the average background noise floor. The threshold values of 5 ns and 17 ns for NADH and PPIX, respectively, were chosen based on the highest fluorescence lifetimes expected. For NADH, the average fluorescence lifetime is always determined by the ratio of free and bound NADH; thus, it is always shorter than the fluorescence lifetime of pure enzyme-bound NADH25. The values of bound NADH are mostly in the range of 1–4 ns25,53; thus, a threshold set at around 5 ns seemed reasonable. On the other hand, pure PPIX in solvent has a fluorescence lifetime of around 16 ns21,38, which should be the maximum value observable. As the tissue pieces differed in size, we applied a normalization step to ensure that every sample contributes equally to the average lifetime. This normalization step consisted of automatically choosing 100 random pixels within the segmented lifetime maps for each sample and extracting the lifetime values for NADH and PPIX only from these points. Thereby, each sample contributed 100 representative NADH and PPIX fluorescence lifetime values for further statistical analysis, regardless of the original physical size of the tissue.

### Statistical analysis

For each sample we averaged the 100 randomly selected points and labeled this value with the corresponding tissue type, PPIX fluorescence visibility and tumor entity/grade. We then computed the median as well as the quantiles at 25% and 75% for all unique labels and performed two-tailed Mann–Whitney U-tests (α = 5%) between groups to check for statistical significance. The choice of a non-parametric test was based on the low number of samples in certain groups.

### Classification

We additionally fed our NADH and PPIX fluorescence lifetimes to a machine-learning-based classifier (Classification Learner app, MATLAB) to predict the tumor entity/grade as well as the tissue type. As our data mainly consisted of high-grade gliomas and very few low-grade samples, we opted for random under-sampling (RUS) boosted decision trees54, which are designed to handle class imbalance. Here, we used the full 100 randomly selected fluorescence lifetimes of each sample to increase the number of observations, leading to a total of 4200 lifetime value pairs for NADH and PPIX fluorescence. The classifier was trained using a fivefold cross-validation where the partition was patient-unspecific, meaning that multiple samples from a single patient were treated no differently than samples from a patient with a single sample. Although this approach tends to over-fit due to the presence of correlated data from the same source in both the training and test data sets, we accepted this, as otherwise it would have resulted in only a few training samples for the smallest classes.
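For readers who want to reproduce the general workflow outside MATLAB, here is a minimal Python sketch (a rough analogue under stated assumptions, not the authors' code: it uses randomly generated stand-in lifetime values, scipy's Mann–Whitney U test, and imbalanced-learn's RUSBoostClassifier in place of MATLAB's RUS boosted decision trees):

```python
import numpy as np
from scipy.stats import mannwhitneyu
from imblearn.ensemble import RUSBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: sampled (NADH, PPIX) lifetime pairs in ns for two hypothetical groups.
ala_pos = rng.normal([2.0, 11.0], [0.4, 2.5], size=(2100, 2))   # "ALA+" pixels
ala_neg = rng.normal([1.7, 3.0], [0.4, 1.5], size=(2100, 2))    # "ALA-" pixels

# Two-tailed Mann-Whitney U test on the PPIX lifetimes of both groups.
stat, p = mannwhitneyu(ala_pos[:, 1], ala_neg[:, 1], alternative="two-sided")
print(f"Mann-Whitney U: p = {p:.3g}")

# RUS boosted decision trees with fivefold cross-validation on the pooled data.
X = np.vstack([ala_pos, ala_neg])
y = np.array([1] * len(ala_pos) + [0] * len(ala_neg))
clf = RUSBoostClassifier(random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

Note that, unlike this toy split, the study labels carry multiple classes (tumor entity/grade and tissue type) and the cross-validation partition was patient-unspecific, as described above.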
### Phasor plot analysis For the phasor analysis we analyzed the raw data from our fluorescence lifetime measurements to obtain the fluorescence modulation depth and the phase delay relative to the excitation. The modulation depth is the normalized amplitude ratio between the excitation and fluorescence light. These two values were then combined to compute the phasors36 where the modulation corresponds to the length of the vector and the phase delay to the angle relative to the x-axis. If the fluorescence decay is mono-exponential all phasors lie on a single point on a semi-circle with the same radius or modulation depth. A bi-exponential decay is characterized by a line crossing the semi-circle at the short and long fluorescence lifetime components. Although the phasor analysis is a powerful tool to interpret complex fluorescence decays, it must be noted that a single fluorescence lifetime can only be attributed to phasors which lie on the semi-circle. While quantitative phasor analysis of NADH has recently been shown53, we only performed a qualitative analysis of the fluorescence decay based on the phasor plots. ## Data availability The fluorescence intensity and lifetime maps as well as data analysis scripts generated during the current study are available from the corresponding author on reasonable request. ## References 1. 1. Ray, S., Bonafede, M. M. & Mohile, N. A. Treatment patterns, survival, and healthcare costs of patients with malignant gliomas in a large US commercially insured population. Am. Health Drug Benefits 7, 140–149 (2014). 2. 2. Norden, A. D. et al. A real-world claims analysis of costs and patterns of care in treated patients with glioblastoma multiforme in the united states. J. Manag. Care Spec. Pharm. 25, 428–436 (2019). 3. 3. Sanai, N., Polley, M.-Y., McDermott, M. W., Parsa, A. T. & Berger, M. S. An extent of resection threshold for newly diagnosed glioblastomas: Clinical article. J. Neurosurg. 115, 3–8 (2011). 4. 4. DSouza, A. V., Lin, H., Henderson, E. R., Samkoe, K. S. & Pogue, B. W, ,. Review of fluorescence guided surgery systems: identification of key performance capabilities beyond indocyanine green imaging. J. Biomed. Opt. 21, 080901 (2016). 5. 5. Valdés, P. A., Roberts, D. W., Lu, F.-K. & Golby, A. Optical technologies for intraoperative neurosurgical guidance. Neurosurg. Focus 40, 8 (2016). 6. 6. Stummer, W. et al. Fluorescence-guided surgery with 5-aminolevulinic acid for resection of malignant glioma: a randomised controlled multicentre phase III trial. Lancet Oncol. 7, 392–401 (2006). 7. 7. Sachar, M., Anderson, K. E. & Ma, X. Protoporphyrin IX: the Good, the Bad, and the Ugly. J. Pharmacol. Exp. Ther. 356, 267–275 (2016). 8. 8. Schucht, P. et al. 5-ALA complete resections go beyond MR contrast enhancement: shift corrected volumetric analysis of the extent of resection in surgery for glioblastoma. Acta Neurochir. (Wien) 156, 305–312 (2014). 9. 9. Hadjipanayis, C. G., Widhalm, G. & Stummer, W. What is the surgical benefit of utilizing 5-aminolevulinic acid for fluorescence-guided surgery of malignant gliomas?. Neurosurgery 77, 663–673 (2015). 10. 10. Stepp, H. & Stummer, W. 5-ALA in the management of malignant glioma: 5-ALA IN MALIGNANT GLIOMAS. Lasers Surg. Med. 50, 399–419 (2018). 11. 11. Widhalm, G. et al. 5-Aminolevulinic acid is a promising marker for detection of anaplastic foci in diffusely infiltrating gliomas with nonsignificant contrast enhancement. Cancer 116, 1545–1552 (2010). 12. 12. Kamp, M. A. et al. 
5-Aminolevulinic acid (5-ALA)-induced fluorescence in intracerebral metastases: a retrospective study. Acta Neurochir. (Wien) 154, 223–228 (2012). 13. 13. Utsuki, S. et al. Fluorescence-guided resection of metastatic brain tumors using a 5-aminolevulinic acid-induced protoporphyrin IX: pathological study. Brain Tumor. Pathol. 24, 53–55 (2007). 14. 14. Jaber, M. et al. Is Visible aminolevulinic acid-induced fluorescence an independent biomarker for prognosis in histologically confirmed (World Health Organization 2016) low-grade gliomas?. Neurosurgery 84, 1214–1224 (2018). 15. 15. Montcel, B., Mahieu-Williame, L., Armoiry, X., Meyronet, D. & Guyotat, J. 5- ALA−induced PpIX fluorescence emission spectrum in low grade gliomas and in the infiltrative component of glioblastomas. in Biomedical Optics BS3A–2 (Optical Society of America, 2014). 16. 16. Jermyn, M. et al. Improved sensitivity to fluorescence for cancer detection in wide-field image-guided neurosurgery. Biomed. Opt. Express 6, 5063 (2015). 17. 17. Widhalm, G. et al. The value of visible 5-ALA fluorescence and quantitative protoporphyrin IX analysis for improved surgery of suspected low-grade gliomas. J. Neurosurg. 1, 1–10 (2019). 18. 18. Bravo, J. J. et al. Hyperspectral data processing improves PpIX contrast during fluorescence guided surgery of human brain tumors. Sci. Rep. 7, 1–13 (2017). 19. 19. Alston, L., Rousseau, D., Hebert, M. & Mahieu-Williame, L. Nonlinear relation between concentration and fluorescence emission of protoporphyrin IX in calibrated phantoms. J. Biomed. Opt. 23, 1 (2018). 20. 20. Alston, L. et al. Spectral complexity of 5-ALA induced PpIX fluorescence in guided surgery: a clinical study towards the discrimination of healthy tissue and margin boundaries in high and low grade gliomas. Biomed. Opt. Express 10, 2478–2492 (2019). 21. 21. Erkkilä, M. T. et al. Widefield fluorescence lifetime imaging of protoporphyrin IX for fluorescence-guided neurosurgery: An ex vivo feasibility study. J. Biophotonics 12, e201800378 (2019). 22. 22. Becker, W. Fluorescence lifetime imaging-techniques and applications: fluorescence lifetime imaging. J. Microsc. 247, 119–136 (2012). 23. 23. Berezin, M. Y. & Achilefu, S. Fluorescence lifetime measurements and biological imaging. Chem. Rev. 110, 2641–2684 (2010). 24. 24. Penzkofer, A. & Lu, Y. Fluorescence quenching of rhodamine 6G in methanol at high concentration. Chem. Phys. 103, 399–405 (1986). 25. 25. Schaefer, P. M., Kalinina, S., Rueck, A., von Arnim, C. A. F. & Einem, B. V. NADH autofluorescence: a marker on its way to boost bioenergetic research. Cytometry A 95, 34–46 (2019). 26. 26. Sun, Y. et al. Fluorescence lifetime imaging microscopy for brain tumor image-guided surgery. J. Biomed. Opt. 15, 056022 (2010). 27. 27. Marcu, L. et al. Fluorescence lifetime spectroscopy of glioblastoma multiforme. Photochem. Photobiol. 80, 98–103 (2004). 28. 28. Kantelhardt, S. R. et al. In vivo multiphoton tomography and fluorescence lifetime imaging of human brain tumor tissue. J. Neurooncol. 127, 473–482 (2016). 29. 29. Moreno-Sánchez, R., Rodríguez-Enríquez, S., Marín-Hernández, A. & Saavedra, E. Energy metabolism in tumor cells. FEBS J. 274, 1393–1418 (2007). 30. 30. Sun, S. et al. R406 elicits anti-Warburg effect via Syk-dependent and -independent mechanisms to trigger apoptosis in glioma stem cells. Cell Death Dis. 10, 1–16 (2019). 31. 31. Liberti, M. V. & Locasale, J. W. The Warburg effect: How does it benefit cancer cells?. Trends Biochem. Sci. 41, 211–218 (2016). 32. 32. Shirmanova, M. 
V. et al. Interrogation of glioma metabolism on macroscale by FLIM. in Multiphoton Microscopy in the Biomedical Sciences XIX (eds. Periasamy, A., So, P. T. & König, K.) 8 (SPIE, 2019). doi:https://doi.org/10.1117/12.2511475. 33. 33. Kathagen, A. et al. Hypoxia and oxygenation induce a metabolic switch between pentose phosphate pathway and glycolysis in glioma stem-like cells. Acta Neuropathol. (Berl.) 126, 763–780 (2013). 34. 34. Stadlbauer, A. et al. Intratumoral heterogeneity of oxygen metabolism and neovascularization uncovers 2 survival-relevant subgroups of IDH1 wild-type glioblastoma. Neuro-Oncol. 20, 1536–1546 (2018). 35. 35. Yang, X., Palasuberniam, P., Kraus, D. & Chen, B. Aminolevulinic acid-based tumor detection and therapy: molecular mechanisms and strategies for enhancement. Int. J. Mol. Sci. 16, 25865–25880 (2015). 36. 36. Digman, M. A., Caiolfa, V. R., Zamai, M. & Gratton, E. The phasor approach to fluorescence lifetime imaging analysis. Biophys. J. 94, L14–L16 (2008). 37. 37. Yeh, S.-C., Patterson, M., Hayward, J. & Fang, Q. Time-resolved fluorescence in photodynamic therapy. Photonics 1, 530–564 (2014). 38. 38. Russell, J. A. et al. Characterization of fluorescence lifetime of photofrin and delta-aminolevulinic acid induced protoporphyrin ix in living cells using single- and two-photon excitation. IEEE J. Sel. Top. Quantum Electron. 14, 158–166 (2008). 39. 39. Kantelhardt, S. R. et al. Multiphoton excitation fluorescence microscopy of 5-aminolevulinic acid induced fluorescence in experimental gliomas. Lasers Surg. Med. 40, 273–281 (2008). 40. 40. Trinh, A. et al. Tracking functional tumor cell subpopulations of malignant glioma by phasor fluorescence lifetime imaging microscopy of NADH. Cancers 9, 168 (2017). 41. 41. Rück, A., Dolp, F., Hülshoff, C., Hauser, C. & Scalfi-Happ, C. Fluorescence lifetime imaging in PDT: an overview. Med. Laser Appl. 2, 125–129 (2005). 42. 42. Dysli, C. et al. Fluorescence lifetime imaging ophthalmoscopy. Prog. Retin. Eye Res. 60, 120–143 (2017). 43. 43. Yong, W. et al. Distinction of brain tissue, low grade and high grade glioma with time-resolved fluorescence spectroscopy. Front. Biosci. 11, 1255 (2006). 44. 44. Shcheslavskiy, V. I. et al. Fluorescence time-resolved macroimaging. Opt. Lett. 43, 3152–3155 (2018). 45. 45. Ludtmann, M. H. R., Angelova, P. R., Zhang, Y., Abramov, A. Y. & Dinkova-Kostova, A. T. Nrf2 affects the efficiency of mitochondrial fatty acid oxidation. Biochem. J. 457, 415–424 (2014). 46. 46. Bartolomé, F. & Abramov, A. Y. Measurement of mitochondrial NADH and FAD autofluorescence in live cells. in Mitochondrial Medicine: Volume I, Probing Mitochondrial Function (eds. Weissig, V. & Edeas, M.) 263–270 (Springer New York, 2015). https://doi.org/10.1007/978-1-4939-2257-4_23. 47. 47. Skala, M. C. et al. In vivo multiphoton microscopy of NADH and FAD redox states, fluorescence lifetimes, and cellular morphology in precancerous epithelia. Proc. Natl. Acad. Sci. 104, 19494–19499 (2007). 48. 48. Huang, S., Heikal, A. A. & Webb, W. W. Two-photon fluorescence spectroscopy and microscopy of NAD(P)H and flavoprotein. Biophys. J. 82, 2811–2825 (2002). 49. 49. Hansson, E. & Vällfors, B. A study of irrigation fluids for neurosurgery on brain primary cell cultures. Experientia 36, 64–65 (1980). 50. 50. Walsh, A. J., Poole, K. M., Duvall, C. L. & Skala, M. C. Ex vivo optical metabolic measurements from cultured tissue reflect in vivo tissue status. J. Biomed. Opt. 17, 116015 (2012). 51. 51. Louis, D. N. et al. 
The 2016 World Health Organization Classification of Tumors of the Central Nervous System: a summary. Acta Neuropathol. (Berl.) 131, 803–820 (2016). 52. 52. Franke, R. & Holst, G. A. Frequency-domain fluorescence lifetime imaging system (pco.flim) based on a in-pixel dual tap control CMOS image sensor. in Imaging, Manipulation, and Analysis of Biomolecules, Cells, and Tissues XIII vol. 9328 93281K (International Society for Optics and Photonics, 2015). 53. 53. Ranjit, S., Malacrida, L., Stakic, M. & Gratton, E. Determination of the metabolic index using the fluorescence lifetime of free and bound nicotinamide adenine dinucleotide using the phasor approach. J. Biophotonics 12, e201900156 (2019). 54. 54. Seiffert, C., Khoshgoftaar, T. M., Van Hulse, J. & Napolitano, A. RUSBoost: Improving classification performance when training data is skewed. in 2008 19th International Conference on Pattern Recognition 1–4 (IEEE, 2008).

## Acknowledgements

Part of this work has been presented at a poster session at the 17th IPA World Congress in Boston, USA. We thank Aner Gurvitz, Gerhard Holst, Martin Borkovec and Hannes Stegmann for fruitful discussions and support.

## Funding

This project has received funding from the EU Horizon 2020 research and innovation program (MSCA grant 721766), the European Research Council (ERC StG 640396 OPTIMALZ) and the Austrian Science Fund (FWF KLIF 394). T.R. is recipient of a DOC Fellowship of the Austrian Academy of Sciences at the Institute of Neurology (25262). The financial support by the Austrian Federal Ministry for Digital and Economic Affairs and the National Foundation for Research, Technology and Development is gratefully acknowledged.

## Author information

### Contributions

M.T.E., W.D., R.L. and G.W. initiated the project. G.W. performed tumor resection. M.T.E. and D.R. performed all lifetime measurements under supervision of A.U. and M.A. B.K. and P.A.M. organized and led the biopsy handling as well as transportation. J.G., T.R. and A.W. performed the histopathological analyses. M.T.E. and A.R. analyzed the data. M.T.E. and G.W. wrote the draft. All authors revised and approved the final version of the manuscript.

### Corresponding author

Correspondence to Marco Andreana.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Additional information

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Erkkilä, M.T., Reichert, D., Gesperger, J. et al.
Macroscopic fluorescence-lifetime imaging of NADH and protoporphyrin IX improves the detection and grading of 5-aminolevulinic acid-stained brain tumors. Sci Rep 10, 20492 (2020). https://doi.org/10.1038/s41598-020-77268-8
# Total Caloris MS 23

#### APPLICATIONS

Industrial applications

#### Recommendation

• Lubrication of slow moving mechanisms subjected to very high temperatures and repeated shocks. The presence of molybdenum disulfide will guarantee good lubrication and avoid any jamming or sticking.
• Allows peak temperatures up to 220°C, provided that the period of operation at this temperature is limited and that the equipment is re-greased frequently.
• CALORIS MS 23 grease offers the possibility of working in a corrosive atmosphere due to its resistance to mild alkaline and acidic solutions.
• Always avoid contamination of the grease by dust and/or dirt when applying. Preferably use a pneumatic pump system. It is recommended to clean all components before using CALORIS MS 23 and to avoid any mixture/dilution of the grease with conventional greases.

#### SPECIFICATIONS

• ISO 6743-9: L-XAEEA 2/3
• DIN 51502: MF2/3P – 15
• Shock resistant
• Resistant to acid solutions
• Excellent mechanical stability.
• Excellent resistance to high loads and repeated shock loading due to the presence of molybdenum disulfide.
• Very good anti-wear properties.
• Resistant to mild alkaline and acidic solutions.
### Author Topic: Jump

#### mrmprog
• LV7 Elite (Next: 700)
• Posts: 559
• Rating: +35/-1

##### Jump
« on: July 27, 2011, 02:51:18 am »

This is my first attempt at something big in Axe. You jump around and try not to fall. Yay! I am going to try and make this a full game. It looks better on calc. Thanks to Eeems, Darl, Hayleia, FinaleTI, Builderboy, and p2 for help. I need to somehow make it so impossible places can't happen. If anyone has a solution to this problem, please let me know. I will be fine tuning things, improving code and such meanwhile. Please give me feedback, as this is my first Axe project.

Controls: 2nd to jump, left and right to move, enter to select stuff, clear to exit or skip stuff.

« Last Edit: November 15, 2011, 09:08:25 pm by mrmprog »

#### Sitarknight
• LV2 Member (Next: 40)
• Posts: 21
• Rating: +1/-0
• I am SitarKnight

##### Re: mrmprog's game
« Reply #1 on: July 27, 2011, 04:57:47 am »

Looks pretty good man, I like it

"The promise of a Craftknight is stronger than the toughest steel."

#### mrmprog
• LV7 Elite (Next: 700)
• Posts: 559
• Rating: +35/-1

##### Re: mrmprog's game
« Reply #2 on: July 27, 2011, 09:28:06 am »

Glad you like it

Yay, updates! I made platforms more "regular", a new sprite for falling/jumping, you can jump further, and dying is... different.
Next: Highscores, a menu, and a death animation.
Maybe Next: Title screen.

« Last Edit: July 29, 2011, 05:36:56 am by mrmprog »

#### p2
• Posts: 849
• Rating: +51/-11
• I'm back :)

##### Re: mrmprog's game
« Reply #3 on: July 27, 2011, 10:02:13 am »

looks not bad - it looks good!
You may add that the lines are only able to appear at every third line of the screen, or something like this (then they are not like giant, grey blocks of 4 or 5 lines).
I think a ball or a smiley would be better than a stickfigure (you can't really see what the pic is; it's hard because of the grey lines).

*insert supercool signature*

#### mrmprog
• LV7 Elite (Next: 700)
• Posts: 559
• Rating: +35/-1

##### Re: mrmprog's game
« Reply #4 on: July 27, 2011, 10:03:59 am »

Yeah, I might change that. What would look best in your opinion?
Edit: I made some smiley sprites and they look much better, thanks
Tomorrow, I will post another screenshot with the new sprites.

« Last Edit: July 27, 2011, 10:11:24 am by mrmprog »

#### p2
• Posts: 849
• Rating: +51/-11
• I'm back :)

##### Re: mrmprog's game
« Reply #5 on: July 27, 2011, 10:30:45 am »

I thought of something like this:

Quote
--  -  - -  - --- - -- - ---- - --- -- -- - - - -  --  -- - - -- - ---- -- ----- -- - ---- - - - --  --- -- - --- --- --  -- - --- --- - -- -- -- ---- - - ----

If you make it like this:

Code: [Select]
:rand^16 -> A           //A is the Y-coord of the line
4*(A) -> A

the game would look like that:

Quote
--  -  - -  - --- - -- - ---- - --- -- -- - - - -  --  -- - - -- - ---- -- ----- -- - ---- - - - --  --- -- - - - --  --- -- - --- --- --  -- - --- --- - -- -- -- ---- - - ----

It will be better like that, because else there are giant grey blocks of 3 or 4 lines, each one intersecting each other.

Another idea: Try to use
:pt-off(X,Y,[FFFFFFFFFFFFFFFF])
:pt-on(X,Y,[code of smiley])
It'll make a white space behind the smiley, so that you can see it better.
*insert supercool signature*

#### Hayleia
• Programming Absol
• Coder Of Tomorrow
• LV12 Extreme Poster (Next: 5000)
• Posts: 3367
• Rating: +393/-7

##### Re: mrmprog's game
« Reply #6 on: July 27, 2011, 11:36:56 am »

Quote
Another idea: Try to use
:pt-off(X,Y,[FFFFFFFFFFFFFFFF])
:pt-on(X,Y,[code of smiley])
It'll make a white space behind the smiley, so that you can see it better.

For that idea, a Pt-Off(X,Y,[code of smiley or pointer]) would be enough as "Pt-Off" puts a sprite but erases what there is at the location before.

I own: 83+ ; 84+SE ; 76.fr ; CX CAS ; Prizm ; 84+CSE
Sorry if I answer with something that seems unrelated, English is not my primary language and I might not have understood well. Sorry if I make English mistakes too.

• Guest

##### Re: mrmprog's game
« Reply #7 on: July 27, 2011, 12:32:52 pm »

Quote
Another idea: Try to use
:pt-off(X,Y,[FFFFFFFFFFFFFFFF])
:pt-on(X,Y,[code of smiley])
It'll make a white space behind the smiley, so that you can see it better.

Quote
For that idea, a Pt-Off(X,Y,[code of smiley or pointer]) would be enough as "Pt-Off" puts a sprite but erases what there is at the location before.

No it doesn't.  Pt-Off() simply XORs the sprite in parameter slot 3 against the said place on the buffer, it doesn't clear the space behind it first.

#### Hayleia
• Programming Absol
• Coder Of Tomorrow
• LV12 Extreme Poster (Next: 5000)
• Posts: 3367
• Rating: +393/-7

##### Re: mrmprog's game
« Reply #8 on: July 27, 2011, 02:18:46 pm »

Quote
No it doesn't.  Pt-Off() simply XORs the sprite in parameter slot 3 against the said place on the buffer, it doesn't clear the space behind it first.

Wut ? Weren't you confused with Pt-Change ? Here is a screenie of Commands.htm
Or maybe I'm wrong because of a grayscale story. I don't know anything about grayscale.

« Last Edit: July 27, 2011, 02:20:06 pm by Hayleia »

I own: 83+ ; 84+SE ; 76.fr ; CX CAS ; Prizm ; 84+CSE
Sorry if I answer with something that seems unrelated, English is not my primary language and I might not have understood well. Sorry if I make English mistakes too.

• Guest

##### Re: mrmprog's game
« Reply #9 on: July 27, 2011, 04:01:37 pm »

Oh, my bad, you are correct.  I remember that Pt-Off used to mean it XORed it against the buffer,  I guess that has recently changed.

« Last Edit: July 27, 2011, 04:02:46 pm by Ashbad »

#### calcdude84se
• Needs Motivation
• LV11 Super Veteran (Next: 3000)
• Posts: 2272
• Rating: +78/-13
• Wondering where their free time went...

##### Re: mrmprog's game
« Reply #10 on: July 27, 2011, 04:37:14 pm »

Quote
Oh, my bad, you are correct.  I remember that Pt-Off used to mean it XORed it against the buffer,  I guess that has recently changed.

I don't think it's ever been like that. Pt-Change( has always been used for XOR.

"People think computers will keep them from making mistakes. They're wrong. With computers you make mistakes faster."
I'll put it online when it does something.

#### LincolnB
• Check It Out Now
• LV9 Veteran (Next: 1337)
• Posts: 1115
• Rating: +125/-4
• By Hackers For Hackers

##### Re: mrmprog's game
« Reply #11 on: July 27, 2011, 06:20:43 pm »

Quote
I need to ... make a counter of some sort so that it shifts the screen every few times through the loop, instead of every time.

Try this:

Code: [Select]
.use the variable A for the counter you need
.Main game loop:
1->A
Repeat getkey(15)
A+1->A
If A=4
1->A
End
If A=1
Vertical +
Vertical +{r}
End
.Rest of your code goes here.
.
End

Let me know how that works out.
Also, if you wanted to have multiple difficulty levels, you could have another variable, like B or something, and instead of writing:

Code: [Select]
If A=4
1->A
End

you could write:

Code: [Select]
If A=B
1->A
End

where B is the difficulty level, 4 being easiest / slower because it makes it wait longer to scroll, and 2 being the fastest (where it scrolls every iteration of the main game loop). Let me know if I didn't explain that very well.

« Last Edit: July 27, 2011, 06:27:14 pm by buttsfredkin »

Completed Projects:
>> Spacky Emprise   >> Spacky 2 - Beta   >> Fantastic Sam
>> An Exercise In Futility   >> GeoCore

My Current Projects:

Projects in Development:
In Medias Res - Contest Entry

Talk to me if you need help with Axe coding.

Spoiler For Bragging Rights:
Not much yet, hopefully this section will grow soon with time (and more contests)

#### mrmprog
• LV7 Elite (Next: 700)
• Posts: 559
• Rating: +35/-1

##### Re: mrmprog's game
« Reply #12 on: July 27, 2011, 10:46:55 pm »

@buttsfredkin Thank you for the code, but I already did that using the score counter. You will see it soon. Also, about the pixeloff(, if I do that, won't it erase the lines behind the sprite? The lines are just drawn every few cycles of the loop, so I don't think I can redraw them if they are overwritten.

« Last Edit: July 28, 2011, 03:42:53 am by mrmprog »

#### Hayleia
• Programming Absol
• Coder Of Tomorrow
• LV12 Extreme Poster (Next: 5000)
• Posts: 3367
• Rating: +393/-7

##### Re: mrmprog's game
« Reply #13 on: July 28, 2011, 04:00:13 am »

Yes, the lines would be erased
But what do you use ? Do you use Pt-On or Pt-Change ? Because the guy's pixels seem to be inverted if there is a line behind him, so it looks like a Pt-Change.

I own: 83+ ; 84+SE ; 76.fr ; CX CAS ; Prizm ; 84+CSE
Sorry if I answer with something that seems unrelated, English is not my primary language and I might not have understood well. Sorry if I make English mistakes too.
term-rewriting-0.4.0.1: Term Rewriting Library

Data.Rewriting.Term.Parse

# Documentation

fromString xs s parses a term from the string s, where elements of xs are considered as variables.

parse :: Stream s m Char => ParsecT s u m f -> ParsecT s u m v -> ParsecT s u m (Term f v)

parse fun var is a parser for terms, where fun and var are parsers for function symbols and variables, respectively. The var parser has a higher priority than the fun parser. Hence, whenever var succeeds, the token is treated as a variable. Note that the user has to take care of handling trailing white space in fun and var.

parseIO :: [String] -> String -> IO (Term String String)

Like fromString, but the result is wrapped in the IO monad, making this function useful for interactive testing.

>>> parseIO ["x","y"] "f(x,c)"
Fun "f" [Var "x",Fun "c" []]

parseFun :: Stream s m Char => ParsecT s u m String -> ParsecT s u m String

parseFun ident parses function symbols defined by ident.

parseVar :: Stream s m Char => ParsecT s u m String -> [String] -> ParsecT s u m String

parseVar ident vars parses variables as defined by ident and with the additional requirement that the result is a member of vars.

parseWST :: Stream s m Char => [String] -> ParsecT s u m (Term String String)

parseWST xs is a parser for terms following the conventions of the ancient ASCII input format of the termination competition: every Char that is neither a white space (according to isSpace) nor one of '(', ')', or ',', is considered a letter. An identifier is a non-empty sequence of letters and it is treated as variable iff it is contained in xs.
# tri2nb

##### Neighbours list from tri object

The function uses the deldir package to convert a matrix of two-dimensional coordinates into a neighbours list of class nb with a list of integer vectors containing neighbour region number ids.

Keywords: spatial

##### Usage

tri2nb(coords, row.names = NULL)

##### Arguments

coords: matrix of point coordinates with two columns

row.names: character vector of region ids to be added to the neighbours list as attribute region.id, default seq(1, nrow(x))

##### Details

If coordinates are duplicated, this function cannot be used. If the coordinates are from a grid, then they need to be ordered such that the first three are not collinear, so that the first triangle can be constructed. This can be achieved by randomising the order of the coordinates (possibly several times), and then re-ordering the order of the data to match the new order of the neighbour list - if this fix is used, remember to re-order the row.names argument as well as the coordinates! Please also note that triangulation of grid points will give arbitrary diagonal neighbours, which may not be a sensible outcome, and dnearneigh() may serve better where tri2nb() cannot be used.

##### Value

The function returns an object of class nb with a list of integer vectors containing neighbour region number ids.

##### See Also

knn2nb, dnearneigh, cell2nb

##### Examples

example(columbus)
coords <- coordinates(columbus)
ind <- sapply(slot(columbus, "polygons"), function(x) slot(x, "ID"))
col.tri.nb <- tri2nb(coords, row.names=ind)
W <- as(nb2listw(col.tri.nb, style="B"), "CsparseMatrix")
plot(columbus, border="grey")
x <- seq(0,1,0.1)
y <- seq(0,2,0.2)
xy <- expand.grid(x, y)
try(xy.nb <- tri2nb(xy))
seed <- 1234
xid <- sample(1:nrow(xy))
xy.nb <- tri2nb(xy[xid,])
plot(xy.nb, xy[xid,])

Documentation reproduced from package spdep, version 0.6-9, License: GPL (>= 2)
Since the above requirements are satisfied, we can use the following four-step approach to construct a confidence interval. Use the difference between sample means to estimate the difference between population means. The difference between the two sample means is 2.98 - 2.90 = .08. We use another theoretical sampling distribution—the sampling distribution of the difference between means—to test hypotheses about the difference between two sample means. Inferential statistics used in the analysis of this type of experiment depend on the sampling distribution of the difference between means. The sample from school B has an average score of 950 with a standard deviation of 90. The next section presents sample problems that illustrate how to use z scores and t statistics as critical values. The standard deviation of this set of mean values is the standard error. The subscripts M1 - M2 indicate that it is the standard deviation of the sampling distribution of M1 - M2. It is rare that the true population standard deviation is known. The standard deviation of the age was 9.27 years. From the Normal Distribution Calculator, we find that the critical value is 2.58. If eight boys and eight girls were sampled, what is the probability that the mean height of the sample of girls would be higher than the mean height of the sample of boys? Secondly, the standard error of the mean can refer to an estimate of that standard deviation, computed from the sample of data being analyzed at the time. The distribution of the differences between means is the sampling distribution of the difference between means. The problem states that test scores in each population are normally distributed, so the difference between test scores will also be normally distributed. The data set is ageAtMar, also from the R package openintro from the textbook by Dietz et al.[4] For the purpose of this example, the 5,534 women are the entire population. The mean of these 20,000 samples from the age at first marriage population is 23.44, and the standard deviation of the 20,000 sample means is 1.18. The SE of the difference then equals the length of the hypotenuse (SE of difference = sqrt(SE1² + SE2²)). What is the probability that the mean of the 10 members of Species 1 will exceed the mean of the 14 members of Species 2 by 5 or more? The range of the confidence interval is defined by the sample statistic ± margin of error. The standard error turns out to be an extremely important statistic, because it is used to construct confidence intervals around estimates of population means. Elsewhere on this site, we show how to compute the margin of error when the sampling distribution is approximately normal.
With n = 2 the underestimate is about 25%, but for n = 6 the underestimate is only 5%. In this analysis, the confidence level is defined for us in the problem. SE(x̄1 − x̄2) = sqrt[ s1²/n1 + s2²/n2 ], where SE is the standard error, s1 is the standard deviation of sample 1 and s2 is the standard deviation of sample 2. But what exactly is the probability? However, this method needs additional requirements to be satisfied (at least approximately): Requirement R1: Both samples follow a normal-shaped histogram. Requirement R2: The population SDs σ1 and σ2 are equal. The concept of a sampling distribution is key to understanding the standard error. Identify a sample statistic. They report that, in a sample of 400 patients, the new drug lowers cholesterol by an average of 20 units (mg/dL). The 95% confidence interval for the average effect of the drug is that it lowers cholesterol by 18 to 22 units. This estimate may be compared with the formula for the true standard deviation of the sample mean: SD(x̄) = σ/√n. The 5 cm can be thought of as a measure of the average of each individual plant height from the mean of the plant heights. Therefore, .08 is not the true difference, but simply an estimate of the true difference. As you might expect, the mean of the sampling distribution of the difference between means is equal to the difference between the population means. For women, it was $15, with a standard deviation of $2. SE = sqrt[ s1²/n1 + s2²/n2 ] = sqrt[ 3²/500 + 2²/1000 ] = sqrt(0.018 + 0.004) ≈ 0.148. Notice that it is normally distributed with a mean of 10 and a standard deviation of 3.317. This estimate is derived by dividing the standard deviation by the square root of the sample size. The relative standard error of a sample mean is the standard error divided by the mean and expressed as a percentage. Despite the small difference in equations for the standard deviation and the standard error, this small difference changes the meaning of what is being reported from a description of the variation in measurements to a statement about the uncertainty of the mean. The researchers report that candidate A is expected to receive 52% of the final vote, with a margin of error of 2%. When the sample size is large, you can use a t statistic or a z score for the critical value. Find the margin of error.
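To tie the scattered formulas above together, here is a small worked sketch using the numbers quoted in the text (s1 = 3, n1 = 500, s2 = 2, n2 = 1000, and the critical value 2.58 mentioned earlier); it is added for illustration only.

```r
# Worked example of the standard error of a difference between means,
# using the sample values quoted in the surrounding text.
s1 <- 3;  n1 <- 500     # standard deviation and size of sample 1
s2 <- 2;  n2 <- 1000    # standard deviation and size of sample 2
se_diff <- sqrt(s1^2 / n1 + s2^2 / n2)   # sqrt(0.018 + 0.004) ~= 0.148
z_crit  <- 2.58                          # critical value quoted above
margin  <- z_crit * se_diff              # margin of error for the difference
c(SE_difference = se_diff, margin_of_error = margin)
```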
# ndiffs

##### Number of differences required for a stationary series

Functions to estimate the number of differences required to make a given time series stationary. ndiffs estimates the number of first differences necessary.

Keywords: ts

##### Usage

ndiffs(x, alpha = 0.05, test = c("kpss", "adf", "pp"), type = c("level", "trend"), max.d = 2)

##### Arguments

x: A univariate time series

alpha: Level of the test, possible values range from 0.01 to 0.1.

test: Type of unit root test to use

type: Specification of the deterministic component in the regression

max.d: Maximum number of non-seasonal differences allowed

##### Details

ndiffs uses a unit root test to determine the number of differences required for time series x to be made stationary. If test="kpss", the KPSS test is used with the null hypothesis that x has a stationary root against a unit-root alternative. Then the test returns the least number of differences required to pass the test at the level alpha. If test="adf", the Augmented Dickey-Fuller test is used and if test="pp" the Phillips-Perron test is used. In both of these cases, the null hypothesis is that x has a unit root against a stationary root alternative. Then the test returns the least number of differences required to fail the test at the level alpha.

##### Value

An integer indicating the number of differences required for stationarity.

##### References

Dickey DA and Fuller WA (1979), "Distribution of the Estimators for Autoregressive Time Series with a Unit Root", Journal of the American Statistical Association 74:427-431.

Kwiatkowski D, Phillips PCB, Schmidt P and Shin Y (1992) "Testing the Null Hypothesis of Stationarity against the Alternative of a Unit Root", Journal of Econometrics 54:159-178.

Osborn, D.R. (1990) "A survey of seasonality in UK macroeconomic variables", International Journal of Forecasting, 6:327-336.

Phillips, P.C.B. and Perron, P. (1988) "Testing for a unit root in time series regression", Biometrika, 72(2), 335-346.

Said E and Dickey DA (1984), "Testing for Unit Roots in Autoregressive Moving Average Models of Unknown Order", Biometrika 71:599-607.

##### See Also

auto.arima and ndiffs

##### Examples

# NOT RUN {
ndiffs(WWWusage)
ndiffs(diff(log(AirPassengers),12))
# }

Documentation reproduced from package forecast, version 8.3, License: GPL-3
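A small usage sketch, added here for illustration (it is not part of the reproduced documentation): because the KPSS test and the ADF/PP tests have opposite null hypotheses, it can be worth comparing their answers on the same series.

```r
# Compare the three unit-root tests on a built-in series (illustrative only).
library(forecast)
ndiffs(WWWusage, test = "kpss")  # null: stationary; differences needed to pass
ndiffs(WWWusage, test = "adf")   # null: unit root (Augmented Dickey-Fuller)
ndiffs(WWWusage, test = "pp")    # null: unit root (Phillips-Perron)
```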
motivation

In my last post I discussed one possible study design for a therapeutic trial of anle138b in mouse models of fatal familial insomnia and E200K Creutzfeldt-Jakob Disease.  In the cost structure for such a study, it turned out that > 80% of the costs in the initial cost estimate were directly proportional to the number of times that the mice had to undergo bioluminescence imaging (BLI).  It occurred to me that because the age of onset in these mice is fairly variable, our study is in any case not powered to detect small changes in onset due to therapeutic treatment – for instance, my power curves showed we'd be unlikely to detect a 10% delay in onset.  Given that we can only detect large effect sizes anyway, I wondered whether our power would really be diminished much by only imaging the mice once every two or three weeks instead of every week.  The purpose of this post is to model how the frequency of observation affects statistical power in studies using BLI as a readout.

background

Above: Lampyris noctiluca, one of the many species that use luciferase to create bioluminescence.  Wikimedia Commons image by Wofl.

Luciferase is an enzyme that breaks down the small molecule luciferin in order to release light.  Some wavelengths of that light pass right through bone, such that if a luciferase transgene is expressed in the mouse brain and mice are injected with luciferin, a fancy piece of equipment called the IVIS Spectrum can quantify the amount of light.

Almost a decade ago, the development of mice with a GFAP-luc transgene made it possible to monitor upregulation of GFAP using BLI [Zhu 2004].  GFAP is a gene expressed in glial cells and it is upregulated during neurological stress, so the amount of luciferase produced in these mice is an indication of how advanced a disease is in their brain.  This transgene has since become a tool for studying several neurological diseases [Cordeau 2008 (ft), Keller 2009, Watts 2011, Stohr 2012, Jany 2013].  BLI is an accurate proxy for prion replication in the brain [Tamguney 2009] and more recently it was validated as a proxy for therapeutic efficacy in a study of cpd-B as a therapeutic against RML prions [Lu & Giles 2013 (ft)].

BLI involves imaging the mice in an IVIS Spectrum at regular intervals (say, weekly), a quick (< 5 minutes per mouse) process that involves injecting each mouse with 1.5 mg of luciferin ("50 ul of 30 mg/ml D-luciferin potassium salt solution" [Lu & Giles 2013 (ft)]).  BLI eliminates the need for labor-intensive (read: expensive) behavioral phenotyping of mice.  But it turns out that luciferin is expensive too.  It's not for lack of competition – it's available from several commercial suppliers (Gold BioTech, PerkinElmer, Sigma, and Life at a minimum) – but the best price works out to around $3 per dose.  Multiply that by tens of mice and tens of weeks of imaging, and it can easily become the largest single cost in a study.  Add to that the time that a technician or research assistant spends imaging the mice, which is likely to be the study's largest labor cost, and it's clear that reducing the number of imaging sessions could lead to potentially huge cost savings.

update: we later found a better luciferin price – see our revised budget – which makes this issue less important, though the labor involved is still a good reason not to image more often than you need to.

In practice bioluminescence has been monitored either every 7 days [Tamguney 2009, Lu 2013 (ft)], 14 days [Watts 2011, Stohr 2012 Fig 1] or 21 days [Stohr 2012 Fig 3].
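To make the cost argument concrete, here is a back-of-the-envelope sketch; the ~$3 per dose figure comes from above, while the cohort size and study length are hypothetical round numbers rather than values from any particular study.

```r
# Rough luciferin cost as a function of imaging interval (illustrative numbers).
cost_per_dose <- 3     # ~USD per 1.5 mg luciferin dose, best price cited above
n_mice        <- 40    # assumed cohort size
study_weeks   <- 52    # assumed study length
for (interval_weeks in c(1, 2, 3, 4)) {
  n_sessions <- floor(study_weeks / interval_weeks)
  cat(sprintf("imaging every %d week(s): %d sessions, ~$%d in luciferin\n",
              interval_weeks, n_sessions, n_sessions * n_mice * cost_per_dose))
}
```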
The endpoint in these studies was originally defined as being when a single scan reveals luminescence greater than 2e6 photons [Tamguney 2009] and more recently has been defined as the time when two consecutive scans both reveal luminescence greater than either 1e6 photons [Stohr 2012 supplement] or 7e5 photons [Watts 2011].

model

In order to model the relationship between observation frequency and power, I assume that each mouse will have onset at a particular point in time but that the onset won't be observed until the following BLI session, so that for instance if they are monitored every Friday, then onset on a Monday won't be noticed until four days later, and thus onset on a Monday and on a Wednesday of the same week are indistinguishable.  The less frequent the imaging, the more timepoints become indistinguishable from one another.

code

First, some quick startup code:

library(sqldf)    # SQL's group by is more intuitive to me than R's aggregate!
library(survival) # for log-rank test
options(stringsAsFactors=FALSE)

And then a wrapper for the simplest case of the log-rank test (from this post):

simplesurvtest = function(vec1,vec2) { # vec1, vec2: lists of survival times of mice in control and treatment groups
    days = c(vec1,vec2) # make a single vector with times for both mice
    group = c(rep(0,length(vec1)),rep(1,length(vec2))) # make a matching vector indicating control/treatment status
    status = rep(1,length(vec1)+length(vec2)) # "status" = 1 = "had an event" for all mice
    mice = data.frame(days,status,group) # convert to data frame
    return (survdiff(Surv(days,status==1)~group,data = mice)) # return log-rank test results
}

And wrappers to return p values from the t, log-rank and Kolmogorov-Smirnov tests:

p_t = function(control_onset,treated_onset) {
    t_test_result = t.test(control_onset,treated_onset,alternative="two.sided",paired=FALSE)
    return (t_test_result$p.value)
}

p_lrt = function(control_onset,treated_onset) {
    lrt_test_result = simplesurvtest(control_onset,treated_onset) # call wrapper function for simple survival test
    lrt_p_value = 1 - pchisq(lrt_test_result$chisq, df=1) # obtain p value from chisq statistic by taking 1 - CDF(chi at 1 degree of freedom)
    return (lrt_p_value)
}

p_ks = function(control_onset,treated_onset) {
    ks_test_result = ks.test(control_onset, treated_onset)
    return(ks_test_result$p.value)
}
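As a quick sanity check (this snippet is mine, not from the original post), the three wrappers can be compared on one simulated dataset; with a clear 10% delay in onset the three tests should broadly agree.

```r
# Sanity check of the p-value wrappers defined above (not from the original post).
set.seed(1)
ctrl <- rnorm(15, mean = 365, sd = 30)              # simulated control onsets
trt  <- rnorm(15, mean = 365 * 1.1, sd = 30 * 1.1)  # simulated treated onsets, 10% delay
c(t = p_t(ctrl, trt), lrt = p_lrt(ctrl, trt), ks = p_ks(ctrl, trt))
```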
And a function to handle rounding up the day of onset to the next day of BLI observation:

# rounds a value up to the next time it would be observed, e.g. if monitoring every 7 days, gliosis at day 50 is not noticed until day 56
roundup = function(rawvalue, interval) {
    return ( (rawvalue %/% interval + 1)*interval ) # %/% is integer division, which rounds down; +1 rounds up instead
}

With that, I can write this function to run simulations and determine power empirically:

# function to run n_iter simulations to determine power empirically based on how many meet the alpha threshold of significance
empirical_power = function (n_iter=1000, alpha=.05, mean_onset_control, sd_onset_control, effect_size, n_per_group, obs_freq, statistical_test) {
    empirical_power_result = 0.0
    # choose a statistical test
    if (statistical_test == "t") {
        p_func = p_t
    } else if (statistical_test == "lrt") {
        p_func = p_lrt
    } else if (statistical_test == "ks") {
        p_func = p_ks
    }
    # conduct simulation
    for (iter in 1:n_iter) {
        this_iter_result = 0
        # control onset is normal(mean,sd)
        control_onset = rnorm(n=n_per_group, m=mean_onset_control, s=sd_onset_control)
        # assume that treatment increases both the mean survival and the variance, so normal(mean*(1+effect_size),sd*(1+effect_size))
        treated_onset = rnorm(n=n_per_group, m=mean_onset_control*(1+effect_size), s=sd_onset_control*(1+effect_size))
        # detect cases where roundup() makes data constant, and handle separately lest they throw errors in statistical test functions
        if ( length(unique(roundup(control_onset,obs_freq))) == 1 & length(unique(roundup(treated_onset,obs_freq))) == 1 ) {
            # if control == treated then the correct answer is NO difference
            if (roundup(control_onset,obs_freq)[1] == roundup(treated_onset,obs_freq)[1]) {
                this_iter_result = 0
            } else {
                # if control <> treated then the correct answer is YES there is a difference
                this_iter_result = 1
            }
        } else {
            # normal cases where data are not constant
            this_iter_result = (1 * (p_func(roundup(control_onset,obs_freq),roundup(treated_onset,obs_freq)) < alpha))
        }
        empirical_power_result = empirical_power_result + this_iter_result * 1/n_iter
    }
    # return power
    return (empirical_power_result)
}

Here's one example call which asks the following question.  If control onset is 365 ± 30 days and the effect of a drug is a 10% (so ~36 day) delay in onset, what is the power to detect significance using a log-rank test at the p < .05 threshold with n = 15 mice per group, monitored via BLI every 1 day, simulated over 10,000 trials?

# example to illustrate use of the function
set.seed(1111)
empirical_power(n_iter=10000, alpha=.05, mean_onset_control=365, sd_onset_control=30, effect_size=.10, n_per_group=15, obs_freq=1, statistical_test="lrt")

Result: 85% power.  Note that the log-rank test is particularly slow, and 10,000 iterations takes almost 1 minute.

results with some specific examples

Next we can ask how that answer depends on how often we monitor the mice.  Would checking them every 7 days, 14 days or 21 days give us less power than checking every day?

# power with monitoring every 7, 14 or 21 days
set.seed(12345)
empirical_power(n_iter=10000, alpha=.05, mean_onset_control=365, sd_onset_control=30, effect_size=.10, n_per_group=15, obs_freq=7, statistical_test="lrt")
empirical_power(n_iter=10000, alpha=.05, mean_onset_control=365, sd_onset_control=30, effect_size=.10, n_per_group=15, obs_freq=14, statistical_test="lrt")
empirical_power(n_iter=10000, alpha=.05, mean_onset_control=365, sd_onset_control=30, effect_size=.10, n_per_group=15, obs_freq=21, statistical_test="lrt")

Amazingly, no.
The results for these three are 85.85%, 85.08%, and 84.63% – all well within a standard error of each other (try re-running with a different random seed and you'll see variability from 84 to 86%).  So basically: no power is lost by checking the mice only every third week compared to checking them every single day.

And even more astonishingly, this is true not just within this narrow range of 7 – 21 day intervals – you lose only a bit of power by going up to even 50 day intervals.  Here's a demonstration of that.  I calculate power for every possible n-weekly interval, i.e. doing BLI every week, every other week, every third week … every 57th week.

set.seed(12345)
obs_freqs = c(seq(7,399,7))
pwrs = vector(mode="numeric")
for (obs_freq in obs_freqs) {
    pwrs = c(pwrs, empirical_power(n_iter=10000, alpha=.05, mean_onset_control=365, sd_onset_control=30, effect_size=.10, n_per_group=15, obs_freq=obs_freq, statistical_test="lrt"))
}
conditions = cbind(alpha=.05, mean_onset_control=365, sd_onset_control=30, effect_size=.10, n_per_group=15, statistical_test="lrt")[1,]
plot(obs_freqs,pwrs,type='l',lwd=3,col='red',ylim=c(0,1),xlab='BLI observation frequency',ylab='power',main='Power as a function of BLI observation frequency',sub=print_vars(conditions), cex.sub=.6)

Here's a narrative interpretation of this plot.  You start with ~85% power if you monitor the mice every week, and as you space the intervals out more and more you lose just a bit of power, dropping to 80% if you only check at 50 day intervals.

But it's not monotonic.  There are spikes at 77, 98, 196 and 399 days.  These should be considered as artificial.  In the particular example I've set up, the control mice have onset around day 365 and the treated mice have onset around day 401.  If the observation frequency is a divisor of some number right in the sweet spot where most of the control mice have had onset and most treated mice have not, then you can still get a significant result even though you're doing very few observations (77*5 = 385, 98*4 = 392, 196*2 = 392, 399*1 = 399).

Those spikes, though artificial in a way, do reflect a true fact: if you are confident about the time of onset of control mice, you could design an experiment where you just keep the mice until that timepoint, then check them all histopathologically (no need to even bother with BLI) and do a Fisher's exact test or similar to test whether more of the control mice than treated mice exhibit gliosis.  Monitoring BLI only once, at 400 days, is equivalent to doing that.  Here's an example that illustrates this:

set.seed(12345)
n_per_group = 15
mean_onset_control = 365
sd_onset_control = 30
effect_size = .10
control_onset = rnorm(n=n_per_group, m=mean_onset_control, s=sd_onset_control)
treated_onset = rnorm(n=n_per_group, m=mean_onset_control*(1+effect_size), s=sd_onset_control*(1+effect_size))
hist(rbind(control_onset,treated_onset),beside=TRUE,col=c('red','green'))
control_onset
treated_onset
roundup(control_onset,400)
roundup(treated_onset,400)
t.test(control_onset,treated_onset) # result: 366 vs. 405 days, p = .001
fisher.test(rbind(table(roundup(control_onset,400)),table(roundup(treated_onset,400)))) # result: p = .014

In this particular example, the t test you could do on the exact onset days, if you monitored BLI every day, would give you a 39-day difference in onset (366 vs. 405 days) at p = .001.  If you monitor only at day 400 and day 800, you discover that 14 of 15 control mice have onset by day 400 while only 8 of 15 treated mice have onset by day 400.
A Fisher's exact test on these values gives you p = .014.

So, amazingly, even monitoring for gliosis just once really can give you a significant result, if you know enough to pick the right date to check.  But that's a gamble – in practice, we don't know exactly when onset will occur, and there are big troughs of zero power in between the spikes, so better not to risk it.  Also, part of the point of doing BLI is not just to know if there is a statistically significant difference in survival between treated and control mice, but also to quantify how large that difference is.  A Fisher's exact test can't tell you a percentage delay in onset.

How, then, does observation frequency affect our ability to quantify the delay in onset?

set.seed(12345)
n_per_group = 15
mean_onset_control = 365
sd_onset_control = 30
effect_size = .10
obs_freqs = c(seq(7,399,7))
results=data.frame(obs_freq=numeric(),iteration=integer(),onsetdiff=numeric())
for (obs_freq in obs_freqs) {
    for (i in 1:100) {
        control_onset = rnorm(n=n_per_group, m=mean_onset_control, s=sd_onset_control)
        treated_onset = rnorm(n=n_per_group, m=mean_onset_control*(1+effect_size), s=sd_onset_control*(1+effect_size))
        onsetdiff = mean(roundup(treated_onset,obs_freq)) - mean(roundup(control_onset,obs_freq))
        results = rbind(results,c(obs_freq,i,onsetdiff))
    }
}
colnames(results)=c("obs_freq","iteration","onsetdiff")
plot(results$obs_freq+runif(n=dim(results)[1],min=-3,max=3),results$onsetdiff,pch=16,cex=.6,xlab='observation interval',ylab='Nominal difference in onset', main='Variability in estimated delay in onset vs. observation frequency',cex.main=.8,col='#999999')
abline(h=36.5,col='#000000')
abline(h=0,col='red')

In this simulated example, we know that the true delay in onset is 36.5 days (shown by the black horizontal bar).  The above code considers observation intervals of 7, 14, 21… 399 days and runs 100 simulations for each.  The dispersion of the estimated difference in onset around the true value looks pretty constant for observation intervals up to 50 or 70 and then begins to balloon.

Notice also the shape of the dispersion.  It begins as a cloud at the far left, where you have frequent observations, but turns into discrete streaks starting around 70 days.  That's because once the observation intervals are larger than the dispersion in the true data – the times of onset in the mice – your beautiful ratio-level data is essentially converted to categorical data.  For instance, as shown in the previous example, testing only once at 400 days is equivalent to asking "how many more treated than control mice had onset by 400 days?", and there are only 16 possible answers to that: 0, 1, 2, 3, … 15.

Another interesting point about this plot is just how few points lie below the x axis.  For instance, in the 0-100 day range, there are zero simulations in which a negative delay in onset is observed.  In other words, though in this case we may have limited (60-85%) power to detect a statistically significant delay in onset in the treated group, we're overwhelmingly likely to see a nominal delay in onset.

Finally, one technical aside about this plot: to prevent dots from overlapping one another and obscuring each other's density, I added uniformly distributed noise to each x value with +runif(n=dim(results)[1],min=-3,max=3) – this is one of my favorite R tricks.

Now, just because we can't see an increase in dispersion over the 1-100 day range in the plot above doesn't mean it's not there.
So I plotted the standard deviation in the estimate of delay in onset from 100 simulations at each observation frequency:

# we can't see an increase in dispersion on the plot - now let's be more formal. *does* variance increase with larger obs_freq?
sdvals = sqldf("select obs_freq, variance(onsetdiff) variance from results group by 1 order by 1;")
m = lm(variance ~ obs_freq, data = subset(sdvals, obs_freq < 100))
summary(m)
conditions = cbind(alpha=.05, mean_onset_control=365, sd_onset_control=30, effect_size=.10, n_per_group=15)[1,]
plot(sdvals$obs_freq, sqrt(sdvals$variance),type='l', lwd=3, col='orange', main='Dispersion in estimate of delay in onset vs. observation frequency', xlab='observation interval', ylab='standard deviation in estimate', cex.main=.8, sub=print_vars(conditions), cex.sub=.6)

And the result is that yes, variance does increase a bit – there seems to be a positive slope, even in the 0-100 range on the x axis.  But it's subtle.  And of note: the y-intercept is ~10.  This reflects the fact that with a 36.5 day delay in onset (signal) on a 30 day standard deviation in onset (noise), there is already a 10 day standard deviation in the estimates of delay in onset that you might get experimentally, even if you tested the mice with BLI every single day.  Against this background level of variability, performing less frequent observations doesn't really make the estimate that much worse.  The standard deviation rises from a baseline of 10 to only about 12 by the time you get to 50-day intervals.

So yes, our ability to estimate the delay in onset is impaired a bit by observing the mice less frequently, but within a reasonable range (up to once every month or two) it's really not that much worse than if you performed BLI every day.  Or phrased differently, the vast majority of experimental error is built into the experiment by the variability of the mouse model itself, and only < 20% is due to the infrequency of observations, unless you observe even less frequently than about once in 50 days.

more general results

Everything above was just one example with very specific parameters.  How generalizable is it?  Can we draw any conclusions about when it is okay to save money by performing BLI less frequently versus when it really hurts you?

To answer these questions I wanted to run my simulations over a wide space of possible scenarios, so I submitted this R script to run overnight, and got this output.  Note – the script I ran has a lot of other random stuff in it since I was still figuring this all out, but I left it unedited so that it accurately reflects what created the output file.  The important part of the script is just this nested loop:

# Now vary the obs_freq and see how it matters.
set.seed(12345)
# fixed vars
#statistical_test = "t"
alpha = .05
mean_onset_control = 365
n_per_group = 15
n_iter = 10000
# iterable vars
obs_freqs = c(1,7,14,21,30,60,90)
effect_sizes = c(3.5/365,7/365,14/365,21/365,30/365,.1,.3,.5,.7,1.0,1.5,2.0)
sd_onset_control_vals = c(1,3,7,14,21,30,60,90,150)
statistical_tests = c("t","lrt","ks")
results = data.frame(sd_onset_control = numeric(), effect_size=numeric(), obs_freq=numeric(), statistical_test=character(), pwr=numeric())
for (statistical_test in statistical_tests) {
    for (obs_freq in obs_freqs) {
        for (effect_size in effect_sizes) {
            for (sd_onset_control in sd_onset_control_vals) {
                pwr = empirical_power(n_iter=n_iter, alpha=alpha, mean_onset_control=mean_onset_control, sd_onset_control=sd_onset_control, effect_size=effect_size, n_per_group=n_per_group, obs_freq=obs_freq, statistical_test=statistical_test)
                results = rbind(results, c(sd_onset_control, effect_size, obs_freq, statistical_test, pwr))
            }
        }
    }
}
colnames(results) = c("sd_onset_control","effect_size","obs_freq","statistical_test","pwr")
results
write.table(results,'biolum-power-sim-results-10000.txt',sep='\t',row.names=FALSE,col.names=TRUE,quote=FALSE)
# bsub.py long 160:00 "R CMD BATCH bioluminescence-power-sim.r"
# For 10K iterations each this took ~ 6 hours

I then set up to analyze the resulting data.  First off, I was curious to see how much of the variation in power over the conditions I had set up could be explained by a simple linear model of effect size, sd, observation frequency and which statistical test was used.

simres = read.table('biolum-power-sim-results-10000.txt',sep='\t',header=TRUE)
m = lm(pwr ~ sd_onset_control + effect_size + obs_freq + statistical_test, data = simres)
summary(m) # R^2 = .46
m = lm(pwr ~ sd_onset_control * effect_size * obs_freq * statistical_test, data = simres)
summary(m) # R^2 = .53

Only 46% can be explained in an additive linear model, and 53% with interaction terms.  Power is a funny nonlinear beast.  All the coefficients were significant and had their expected signs: higher SD and less frequent observations are bad for power, larger effect sizes are good for power.  Log-rank and t test are more powered than the KS test.

Call:
lm(formula = pwr ~ sd_onset_control + effect_size + obs_freq + statistical_test, data = simres)

Residuals:
     Min       1Q   Median       3Q      Max 
-0.61323 -0.27961  0.00747  0.29927  0.58923 

Coefficients:
                      Estimate Std. Error t value Pr(>|t|)    
(Intercept)          0.5715293  0.0153931  37.129  < 2e-16 ***
sd_onset_control    -0.0025059  0.0001374 -18.238  < 2e-16 ***
effect_size          0.3993109  0.0102772  38.854  < 2e-16 ***
obs_freq            -0.0022484  0.0002184 -10.294  < 2e-16 ***
statistical_testlrt  0.0837661  0.0159002   5.268 1.51e-07 ***
statistical_testt    0.0795741  0.0159002   5.005 6.03e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.3091 on 2262 degrees of freedom
Multiple R-squared: 0.4672,  Adjusted R-squared: 0.466
F-statistic: 396.7 on 5 and 2262 DF,  p-value: < 2.2e-16

Next I wanted to ask how much power was lost by having observations less frequent than once per week, compared to the power for weekly observations.
I joined the data to itself in SQL and linearly modeled the relationship between the difference in power and the other variables: # join to see how power is lost with less frequent observations sql_query = " select s7.sd_onset_control, s7.effect_size, s7.statistical_test, so.obs_freq, s7.pwr pwr7, so.pwr pwro, s7.pwr - so.pwr pwrdiff from simres s7, simres so where s7.sd_onset_control = so.sd_onset_control and s7.effect_size = so.effect_size and s7.statistical_test = so.statistical_test and s7.obs_freq = 7 and so.obs_freq > 7 ; " compare7 = sqldf(sql_query) m = lm(pwrdiff ~ obs_freq + sd_onset_control + effect_size, data = compare7) # Estimate Std. Error t value Pr(>|t|) # (Intercept) 0.1190553 0.0121145 9.828 <2e-16 *** # obs_freq 0.0020599 0.0001959 10.516 <2e-16 *** # sd_onset_control -0.0013645 0.0001172 -11.647 <2e-16 *** # effect_size -0.1090742 0.0087633 -12.447 <2e-16 *** #Multiple R-squared: 0.1989, Adjusted R-squared: 0.1974 This simple model explained about 20% of variance in power lost.  But I was pretty sure the interaction between these variables would be much more interesting than any of the variables themselves.  First I browsed the list of cases where we went from almost 100% power to almost 0% power: compare7[compare7$pwrdiff > .90,] And every entry on the list was a case with an SD of 1 – 7 days, a small effect size (≤ 36.5 days) and an observation frequency larger than the effect size. No surprises here: if your observations are less frequent than the difference between the treated and control data points, you’ll have no power. But this is only a loss of power when the SD is so small that you would have been able to see the difference if only you’d imaged more frequently. Based on this I hypothesized that what determines loss of power is the relationship between the SD, the observation frequency and the absolute effect size (in days, as opposed to relative effect size in % as I originally had it). I therefore created a column for absolute effect size, and converted the difference in power to a loss in power: compare7$effect_size_abs = compare7$effect_size * 365 compare7$pwrloss = -compare7$pwrdiff # set as a negative number so that the color scheme makes more sense in image() compare7$pwrloss[compare7$pwrloss > 0] = 0 # consider only loss, not gain, of power And I sought to plot power loss (compared to 7 day observations as a benchmark) against absolute effect size and observation frequency, using color as a third dimension. Here’s the plot for power loss with biweekly observations as opposed to weekly. # image() plot of power lost vs. SD vs. absolute effect size, given an observation frequency of 14 days as compared to 7 days benchmark temp1 = compare7[compare7$obs_freq==14 & compare7$statistical_test=="lrt",c("effect_size_abs","sd_onset_control","pwrloss")] temp2 = acast(temp1, formula = effect_size_abs ~ sd_onset_control, value.var="pwrloss") image(1:dim(temp2)[1],1:dim(temp2)[2],temp2,xaxt='n',xlab='Effect size (days)',yaxt='n',ylab='SD of onset in controls (days)',main='Loss in power with 14-day observation frequency') for (i in 1:dim(temp2)[1]) { for (j in 1:dim(temp2)[2]) { text(i,j,labels=formatC(temp2[i,j],digits=2,format="f"),cex=.5) } } axis(side=1,at=seq(1,dim(temp2)[1],1),labels=format(as.numeric(rownames(temp2)),digits=1),cex.axis=.6,las=2) axis(side=2,at=seq(1,dim(temp2)[2],1),labels=format(as.numeric(colnames(temp2)),digits=1),cex.axis=.6,las=2) In most grid cells, no power at all is lost. 
A substantial amount of power is lost only in the lower left corner – for instance, there is a 79% loss in power if the effect size is 7 days and the SD is 1 day (the red square). Here’s what I interpret that lower left corner as representing: Power is lost only when the observation interval is larger than both the effect size and the SD. To rationalize this, consider the following. In the upper left, the SD is larger than the effect size, and so you never had any power to start with; there is no power to lose. In the lower right, the effect size is so large that you’re saturated with 100% power – you won’t miss this effect. As you move towards the upper right, both the effect size and SD are large, so coarsely detecting the difference in distributions is good enough – precision down to the level of a week is not important. When I create the same plot for power lost at 21 days vs. 7 days, the effect is consistent with this interpretation: And same with 30 days: And 60: (If/when I do this over again I’ll choose regular intervals for the effect sizes and SDs that I simulate so that the x and y axes are linear.) discussion All of the evidence presented here suggests that doing BLI imaging less frequently than once per week only leads to a loss of power in very specific circumstances. The only time when more than 1 or 2% power is lost is when both the effect size (in days) and the standard deviation in onset are less than the observation frequency. For instance, if you image only once per month, you only lose power in cases where the effect of a drug treatment is less than a one month delay and the standard deviation in the ages of onset of the mice is less than one month. One interpretation is that there isn’t much point in performing BLI at intervals any smaller than either the standard deviation in the mice, or the effect size (in days) that you estimate your study is powered to detect. As a real application, consider this possible study design for anle138b. We don’t know for certain the standard deviation in age of onset of gliosis in the FFI mice, though I argued it might well be more than 30 days. We also don’t know the effect size we expect, though if anle138b’s therapeutic effectts against RML are any guide (and I wish they were!), it could easily be 30%, which would be 4 months’ delay in onset. If we assume the standard deviation of onset in FFI mice is at least 30 days, then the conclusion from this post is that imaging weekly would provide essentially no marginal statistical power for our study compared to imaging monthly. And, as argued in one example above, imaging only once a month would also not have much impact on our ability to estimate the number of days of delay in onset. Less frequent imaging, then, is a really good deal. In the cost structure we outlined, imaging monthly as opposed to weekly would cut the direct costs by more than half, from$20,000 to \$9,000.  All that for basically no loss in power. outstanding issues Here are a few other considerations this analysis does not address: • Basing the measured “onset” on the first single BLI measurement above threshold vs. requiring two consecutive measurements to be above a threshold.  Intuitively, it seems that imaging every 30 days and requiring 1 measurement above threshold should be equivalent to imaging every 14 days and requiring 2 measurements above threshold.  But I’m not certain that’s correct, and I haven’t modeled it here. • Many of the published BLI studies see a jagged pattern of BLI photons over time [e.g. 
red line in Watts 2011 Fig 2].  In some cases [e.g. orange line in Stohr 2012 Fig 1] the difference between times is far larger than the variance within any one time.  Does anyone know what causes this week-to-week variation? • I haven’t explicitly modeled the measurement error in the BLI process itself, i.e. the noise in the number of photons measured. • It’s possible that in the FFI mice, for instance, the gliosis is more gradual/subtle and there will not be a single obvious inflection point.  If so, it might prove more appropriate to compare the slopes of the BLI curves using an ANCOVA or something like that.  I haven’t modeled how the power of such a test would be affected by less frequent observations. I welcome any suggestions or critiques – including suggestions of how to incorporate the above ideas.
# Control Your Computer's Fan Speed The objective of this question is to create a program that pushes your computer's internal fan speed to its maximum RPM. Upon exiting the program, your fan should return to the normal operating RPM. Rules: 1. You may not perform a CPU intensive task in order to modify the fan's speed. 2. Exiting the program may be via user input, a timeout period, or a response to a SIGTERM or SIGEXIT signal. 3. Avoid the standard loopholes. • How should the program exit? Timeout? User input? Signal? – Greg Hewgill Jul 3 '14 at 0:10 • @GregHewgill No requirement is put in place on that, whatever takes the fewest number of characters would make an optimal solution however. – syb0rg Jul 3 '14 at 0:33 • @user3334871, controlling the fan speed is possible. The bit I'm dubious about is resetting it to normal when the program is killed without any opportunity to react. – Peter Taylor Jul 3 '14 at 9:10 • I suspect very strongly that in order to meet this requirement, the program would need to be closed in a controlled manner. As far as I'm aware, this is no way to capture an unmanaged process termination. – primo Jul 3 '14 at 10:21 • "computer's internal fan speed", which fan? – CousinCocaine Jul 4 '14 at 7:02 # OSX + Bash + smcFanControl, 91 bytes This relies on the third-party smcFanControl suite to do the hard work and is therefore more of a proof-of-concept than a serious answer. Real answers could pick apart the smcFanControl source code and do this without third-party help. smcFanControl.app is assumed to be installed in /Applications. p=/Ap*/smcFan*/C*/R*/smc\ -k # Path to CLI utility f()($p'FS! ' -w000$1) # function to set fan mode f 3 # Set fan speeds to "forced" mode eval \$p\ F{0,1}"Tg -w5DC0;" # Set fan 1 and 2 to 6000 RPM target speed read # wait for keyboard input f 0 # Return fans to "auto" mode • depending on your entry the whole point of this contest is to find an utility that has the shortest name and then control the fan with it. how clever one must have to be... and this doesn't mean i don't like your answer. i belive it's already a winner. but the whole challenge is pointless. – bebe Jul 3 '14 at 17:08 • DigitalTrauma found a program that can be controlled like that. What if i find (or make but okay, its a loophole) an utility that does the same if I type maxfan? – bebe Jul 3 '14 at 17:20 • @bebe All good points. See my edit. – Digital Trauma Jul 3 '14 at 17:31 • basically Outsourcing the answer – Not that Charles Jul 3 '14 at 17:33 # GLXGEARS - 8bytes First thing I thought of was yes, or yes in parallel. As we are not allowed to use the CPU to control the speed, let's use the GPU: glxgears # Reboot, 6 bytes reboot Just after a boot, the fans start spinning at max rpm because the power is turned on to the fan, before the BIOS loads any real time controllers that will base the speed of the fan on the temperature of the processor. This also keeps the processor from getting excessively hot if you were to try the alternative... which would be to keep the fan off until those controllers were loaded and basing the fan speed on processor temp. More of a safeguard than anything. The processor is starting to work the moment you turn the computer on, but the BIOS still needs time to load. https://superuser.com/a/427723/246895 (Does not work on every pc, but is confirm the OP) • The program exits before the reboot actually occurs, so the fan spin-up happens after program exit. This is not at all in line with the spec. 
– Peter Taylor Jul 4 '14 at 15:21 • @PeterTaylor Completely true. – CousinCocaine Jul 4 '14 at 17:07
# torch.Tensor.scatter_reduce_¶ Tensor.scatter_reduce_(dim, index, src, reduce, *, include_self=True) Reduces all values from the src tensor to the indices specified in the index tensor in the self tensor using the applied reduction defined via the reduce argument ("sum", "prod", "mean", "amax", "amin"). For each value in src, it is reduced to an index in self which is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim. If include_self="True", the values in the self tensor are included in the reduction. self, index and src should all have the same number of dimensions. It is also required that index.size(d) <= src.size(d) for all dimensions d, and that index.size(d) <= self.size(d) for all dimensions d != dim. Note that index and src do not broadcast. For a 3-D tensor with reduce="sum" and include_self=True the output is given as: self[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0 self[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1 self[i][j][index[i][j][k]] += src[i][j][k] # if dim == 2 Note This operation may behave nondeterministically when given tensors on a CUDA device. See Reproducibility for more information. Note The backward pass is implemented only for src.shape == index.shape. Warning This function is in beta and may change in the near future. Parameters: • dim (int) – the axis along which to index • index (LongTensor) – the indices of elements to scatter and reduce. • src (Tensor) – the source elements to scatter and reduce • reduce (str) – the reduction operation to apply for non-unique indices ("sum", "prod", "mean", "amax", "amin") • include_self (bool) – whether elements from the self tensor are included in the reduction Example: >>> src = torch.tensor([1., 2., 3., 4., 5., 6.]) >>> index = torch.tensor([0, 1, 0, 1, 2, 1]) >>> input = torch.tensor([1., 2., 3., 4.]) >>> input.scatter_reduce(0, index, src, reduce="sum") tensor([5., 14., 8., 4.]) >>> input.scatter_reduce(0, index, src, reduce="sum", include_self=False) tensor([4., 12., 5., 4.]) >>> input2 = torch.tensor([5., 4., 3., 2.]) >>> input2.scatter_reduce(0, index, src, reduce="amax") tensor([5., 6., 5., 2.]) >>> input2.scatter_reduce(0, index, src, reduce="amax", include_self=False) tensor([3., 6., 5., 2.])
# random variable ## English English Wikipedia has an article on: Wikipedia ### Noun random variable (plural random variables) 1. (statistics, broadly) A quantity whose value is random and to which a probability distribution is assigned, such as the possible outcome of a roll of a dice. 2. (statistics, formally) A measurable function from a sample space to the measurable space of possible values of the variable. • 1996, Ron C. Mittelhammer, Mathematical Statistics for Economics and Business, Volume 78, Springer, page 45, Henceforth the symbol ${\displaystyle X(w)}$ will be used for the random variable ${\displaystyle X:S\rightarrow \mathbb {R} }$. • 2009, Christian Perwass, Geometric Algebra with Applications in Engineering, Springer, page 351, The particular example considered here is the Hilbert space of random variables. • 2012, Scott Miller, Donald Childers, Probability and Random Processes, Elsevier (Academic Press), 2nd Edition, page 177, A two-dimensional random variable is a mapping of the points in the sample space to ordered pairs {x, y}. Usually, when dealing with a pair of random variables, the sample space naturally partitions itself so that it can be viewed as a combination of two simpler sample spaces. #### Usage notes Especially in discrete cases, a random variable is sometimes said to be indexed by the domain of its defining function, leading to notations such as ${\displaystyle X[n]}$ and ${\displaystyle X_{i}}$ to represent particular values of the codomain.
write.m 1.27 KB Houtan Bastani committed Sep 02, 2019 1 2 ``````function write(o, fid, pg, sec, row, col, rep_dir) %function write(o, fid, pg, sec, row, col, rep_dir) `````` 3 ``````% Write a Table object `````` Houtan Bastani committed Feb 18, 2013 4 5 ``````% % INPUTS `````` Houtan Bastani committed Sep 02, 2019 6 7 8 9 10 11 12 ``````% o [table] table object % fid [integer] file id % pg [integer] this page number % sec [integer] this section number % row [integer] this row number % col [integer] this col number % rep_dir [string] directory containing report.tex `````` Houtan Bastani committed Feb 18, 2013 13 14 ``````% % OUTPUTS `````` 15 ``````% o [table] table object `````` Houtan Bastani committed Feb 18, 2013 16 17 18 ``````% % SPECIAL REQUIREMENTS % none `````` Houtan Bastani committed Feb 12, 2013 19 `````` `````` Houtan Bastani committed Aug 29, 2019 20 ``````% Copyright (C) 2013-2019 Dynare Team `````` Houtan Bastani committed Feb 12, 2013 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 ``````% % This file is part of Dynare. % % Dynare is free software: you can redistribute it and/or modify % it under the terms of the GNU General Public License as published by % the Free Software Foundation, either version 3 of the License, or % (at your option) any later version. % % Dynare is distributed in the hope that it will be useful, % but WITHOUT ANY WARRANTY; without even the implied warranty of % MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the % GNU General Public License for more details. % % You should have received a copy of the GNU General Public License % along with Dynare. If not, see . `````` Houtan Bastani committed Sep 02, 2019 37 ``````tableName = writeTableFile(o, pg, sec, row, col, rep_dir); `````` Houtan Bastani committed Aug 29, 2019 38 ``````fprintf(fid, '\\input{%s}', tableName); `````` Houtan Bastani committed Mar 08, 2013 39 ``end``
# The American Practical Navigator/Chapter 26 The American Practical Navigator by the United States government Chapter 26 Chapters: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, Glossary, Acronyms # CHAPTER 26 EMERGENCY NAVIGATION ## BASIC TECHNIQUES OF EMERGENCY NAVIGATION ### 2600. Planning for Emergencies Increasing reliance on electronic navigation and communication systems has dramatically changed the perspective of emergency navigation. While emergency navigation once concentrated on long-distance lifeboat navigation, today it is far more likely that a navigator will suffer failure of his ship’s primary electronic navigation systems than that he will be forced to navigate a lifeboat. In the unlikely event that he must abandon ship, his best course of action is to remain as close to the scene as possible, for this is where rescuers will concentrate their search efforts. Leaving the scene of a disaster radically decreases the chance of rescue, and there is little excuse for failure to notify rescue authorities with worldwide communications and maritime safety systems available at little cost. See Chapter 28 for further discussion of these systems. In the event of failure or destruction of electronic systems when the vessel itself is not in danger, navigational equipment and methods may need to be improvised. This is especially true with ECDIS and electronic charts. The navigator of a paperless ship, whose primary method of navigation is ECDIS, must assemble enough backup paper charts, equipment, and knowledge to complete his voyage in the event of a major computer system failure. A navigator who keeps a couple of dozen paper charts and a spare handheld GPS receiver under his bunk will be a hero in such an event. If he has a sextant and celestial calculator or tables and the knowledge to use them, so much the better. No navigator should ever become completely dependent on electronic methods. The navigator who regularly navigates by blindly pushing buttons and reading the coordinates from “black boxes” will not be prepared to use basic principles to improvise solutions in an emergency. For offshore voyaging, the professional navigator should become thoroughly familiar with the theory of celestial navigation. He should be able to identify the most useful stars and know how to solve various types of sights. He should be able to construct a plotting sheet with a protractor and improvise a sextant. He should know how to solve sights using tables or a navigational calculator. For the navigator prepared with such knowledge the situation is never hopeless. Some method of navigation is always available to one who understands certain basic principles. The modern ship’s regular suite of navigation gear consists of many complex electronic systems. Though they may possess a limited backup power supply, most depend on an uninterrupted supply of ship’s electrical power. The failure of that power due to breakdown, fire, or hostile action can instantly render the unprepared navigator helpless. This discussion is intended to provide the navigator with the information needed to navigate a vessel in the absence of the regular suite of navigational gear. Training and preparation for a navigational emergency are essential. This should consist of regular practice in the techniques discussed herein while the regular navigation routine is in effect in order to establish confidence in emergency procedures. ### 2601. 
Emergency Navigation Kit The navigator should assemble a kit containing equipment for emergency navigation. This kit should contain: 1. At least one proven and personally tested handheld GPS receiver with waypoints and routes entered, and with plenty of spare batteries. 2. A small, magnetic hand-bearing compass such as is used in small craft navigation, to be used if all other compasses fail. 3. A minimal set of paper charts for the voyage at hand, ranging from small-scale to coastal to approach and perhaps harbor, for the most likely scenarios. A pilot chart for the ocean basin in question makes a good small scale chart for offshore use. 4. A notebook or journal suitable for use as a deck log and for computations, plus maneuvering boards, graph paper, and position plotting sheets. 5. Pencils, erasers, a straightedge, protractor or plotter, dividers and compasses, and a knife or pencil sharpener. 6. A timepiece. The optimum timepiece is a quartz crystal chronometer, but any high-quality digital wristwatch will suffice if it is synchronized with the ship’s chronometer. A portable radio capable of receiving time signals, together with a good wristwatch, will also suffice. 7. A marine sextant. (An inexpensive plastic sextant will suffice.) Several types are available commercially. The emergency sextant should be used periodically so its limitations and capabilities are fully understood. 8. A celestial navigation calculator and spare batteries, or a current Nautical Almanac and this book or a similar text. Another year’s almanac can be used for stars and the Sun without serious error by emergency standards. Some form of long-term almanac might be copied or pasted in the notebook. 9. Tables. Some form of table might be needed for reducing celestial observations if the celestial calculator fails. The Nautical Almanac produced by the U.S. Naval Observatory contains detailed procedures for calculator sight reduction and a compact sight reduction table. 10. Flashlight. Check the batteries periodically and include extra batteries and bulbs in the kit. 11. Portable radio. A handheld VHF transceiver approved by the Federal Communications Commission for emergency use can establish communications with rescue authorities. A small portable radio may be used as a radio direction finder or for receiving time signals. 12. An Emergency Position Indicating Radiobeacon (EPIRB) and a Search and Rescue Transponder (SART) are absolutely essential. (See Chapter 28). ### 2602. Most Probable Position In the event of failure of primary electronic navigation systems, the navigator may need to establish the most probable position (MPP) of the vessel. Usually there is little doubt as to the position. The most recent fix updated with a DR position will be adequate. But when conflicting information or information of questionable reliability is received, the navigator must determine the MPP. When complete positional information is lacking, or when the available information is questionable, the most probable position might be determined from the intersection of a single line of position and a DR, from a line of soundings, from lines of position which are somewhat inconsistent, or from a dead reckoning position with a correction for set and drift. Continue a dead reckoning plot from one fix to another because the DR plot often provides the best estimate of the MPP. A series of estimated positions may not be consistent because of the continual revision of the estimate as additional information is received. 
However, it is good practice to plot all MPP’s, and sometimes to maintain a separate EP plot based upon the best estimate of track and speed made good. This could indicate whether the present course is a safe one (See Chapter 23). ### 2603. Plotting Sheets If plotting sheets are not available, a Mercator plotting sheet can be constructed through either of two alternative methods based upon a graphical solution of the secant of the latitude, which approximates the expansion of latitude. First method (Figure 2603a): Figure 2603a. Small area plotting sheet with selected longitude scale. Step one: Draw a series of equally spaced vertical lines at any spacing desired. These are the meridians; label them at any desired interval, such as 1', 2', 5', 10', 30', 1°, etc. Step two: Draw and label a horizontal line through the center of the sheet to represent the parallel of the mid-latitude of the area. Step three: Through any convenient point, such as the intersection of the central meridian and the parallel of the mid-latitude, draw a line making an angle with the horizontal equal to the mid-latitude. In Figure 2603a this angle is 35°. Step four: Draw in and label additional parallels. The length of the oblique line between meridians is the perpendicular distance between parallels, as shown by the broken arc. The number of minutes of arc between parallels is the same as that between the meridians. Step five: Graduate the oblique line into convenient units. If 1' is selected, this scale serves as both a latitude and mile scale. It can also be used as a longitude scale by measuring horizontally from a meridian instead of obliquely along the line. The meridians may be shown at the desired interval and the mid-parallel may be printed and graduated in units of longitude. In using the sheet it is necessary only to label the meridians and draw the oblique line. From it determine the interval used to draw in and label additional parallels. If the central meridian is graduated, the oblique line need not be. Second method (Figure 2603b): Figure 2603b. Small area plotting sheet with selected latitude scale. Step one: At the center of the sheet draw a circle with a radius equal to 1° (or any other convenient unit) of latitude at the desired scale. If a sheet with a compass rose is available, as in Figure 2603b, the compass rose can be used as the circle and will prove useful for measuring directions. It need not limit the scale of the chart, as an additional concentric circle can be drawn, and desired graduations extended to it. Step two: Draw horizontal lines through the center of the circle and tangent at the top and bottom. These are parallels of latitude; label them accordingly, at the selected interval (as every 1°, 30', etc.). Step three: From the center of the circle draw a line making an angle with the horizontal equal to the mid-latitude. In Figure 2603b this angle is 40°. Step four: Draw in and label the meridians. The first is a vertical line through the center of the circle. The second is a vertical line through the intersection of the oblique line and the circle. Additional meridians are drawn the same distance apart as the first two. Step five: Graduate the oblique line into convenient units. If 1' is selected, this scale serves as a latitude and mile scale. It can also be used as a longitude scale by measuring horizontally from a meridian, instead of obliquely along the line. 
In the second method, the parallels may be shown at the desired interval, and the central meridian may be printed and graduated in units of latitude. In using the sheet it is necessary only to label the parallels, draw the oblique line, and from it determine the interval and draw in and label additional meridians. If the central meridian is graduated, as shown in Figure 2603b, the oblique line need not be. The same result is produced by either method. The first method, starting with the selection of the longitude scale, is particularly useful when the longitude limits of the plotting sheet determine the scale. When the latitude coverage is more important, the second method may be preferable. In either method a simple compass rose might be printed. Both methods use a constant relationship of latitude to longitude over the entire sheet and both fail to allow for the ellipticity of the Earth. For practical navigation these are not important considerations. ### 2604. Dead Reckoning Of the various types of navigation, dead reckoning alone is always available in some form. In an emergency it is of more than average importance. With electronic systems out of service, keep a close check on speed, direction, and distance made good. Carefully evaluate the effects of wind and current. Long voyages with accurate landfalls have been successfully completed by this method alone. This is not meant to minimize the importance of other methods of determining position. However, a good dead reckoning position may actually be more accurate than one determined from several inexact LOP’s. If the means of determining direction and distance (the elements of dead reckoning) are accurate, it may be best to adjust the dead reckoning only after a confident fix. Plotting can be done directly on a pilot chart or plotting sheet. If this proves too difficult, or if an independent check is desired, some form of mathematical reckoning may be useful. Table 2604, a simplified traverse table, can be used for this purpose. Angle 0 18 31 41 49 56 63 69 75 81 87 90 Factor 1.0 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0.0 Table 2604. Simplified traverse table. To find the difference or change of latitude in minutes, enter the table with course angle, reckoned from north or south toward the east or west. Multiply the distance run in miles by the factor. To find the departure in miles, enter the table with the complement of the course angle. Multiply the distance run in miles by the factor. To convert departure to difference of longitude in minutes, enter the table with mid-latitude and divide the departure by the factor. Example: A vessel travels 26 miles on course 205°, from Lat. 41°44'N, Long. 56°21'W. Required: Latitude and longitude of the point of arrival. Solution: The course angle is 205°−180° = S25°W, and the complement is 90° − 25° = 65°. The factors corresponding to these angles are 0.9 and 0.4, respectively. The difference of latitude is 26 × 0.9 = 23' (to the nearest minute) and the departure is 26 × 0.4 = 10 NM. Since the course is in the southwestern quadrant in the Northern Hemisphere, the latitude of the point of arrival is 41°44'N − 23' = 41°21'N. The factor corresponding to the mid-latitude 41°32'N is 0.7. The difference of longitude is 10 ÷ 0.7 = 14'. The longitude of the point of arrival is 56°21'W + 14' = 56°35'W. Lat. = 41°21'N Long. = 56°35'W. ### 2605. Deck Log At the onset of a navigational emergency, a navigation log should be started if a deck log is not already being maintained. 
The date and time of the casualty should be the first entry, followed by navigational information such as ship’s position, status of all navigation systems, the decisions made, and the reasons for them. The best determination of the position of the casualty should be recorded, followed by a full account of courses, distances, positions, winds, currents, and leeway. No important navigational information should be left to memory. ### 2606. Direction Direction is one of the elements of dead reckoning. A deviation table for each compass, including any lifeboat compasses, should already have been determined. In the event of destruction or failure of the gyrocompass and bridge magnetic compass, lifeboat compasses can be used. If an almanac, accurate Greenwich time, and the necessary tables are available, the azimuth of any celestial body can be computed and this value compared with an azimuth measured by the compass. If it is difficult to observe the compass azimuth, select a body dead ahead and note the compass heading. The difference between the computed and observed azimuths is compass error on that heading. This is of more immediate value than deviation, but if the latter is desired, it can be determined by applying variation to the compass error. Several unique astronomical situations occur, permitting determination of azimuth without computation: Polaris: Polaris is always within 2° of true north for observers between the equator and about 60° North. When Polaris is directly above or below the celestial pole, its azimuth is true north at any latitude. This occurs when the trailing star of either Cassiopeia or the Big Dipper is directly above or below Polaris. When these two stars form a horizontal line with Polaris, the maximum correction applies. Below about 50° latitude, this correction is 1°, and between 50° and 65°, it is 2°. If Cassiopeia is to the right of Polaris, the azimuth is 001° (002° above 50°N), and if Cassiopeia is to the left of Polaris, the azimuth is 359° (358° above 50°N). The south celestial pole is located approximately at the intersection of a line through the longer axis of the Southern Cross with a line from the northernmost star of Triangulum Australe, perpendicular to the line joining the other two stars of the triangle. No conspicuous star marks this spot. Meridian Transit: Any celestial body bears due north or south at meridian transit, either upper or lower. This is the moment of maximum (or minimum) altitude of the body. However, since the altitude at this time is nearly constant during a considerable change of azimuth, the instant of meridian transit may be difficult to determine. If time and an almanac are available, and the longitude is known, the time of transit can be computed. It can also be graphed as a curve on graph paper and the time of meridian transit determined with sufficient accuracy for emergency purposes. Body on Prime Vertical: If any method is available for determining when a body is on the prime vertical (due east or west), the compass azimuth at this time can be observed. Table 20, Meridian Angle and Altitude of a Body on the Prime Vertical Circle provides this information. Any body on the celestial equator (declination 0°) is on the prime vertical at the time of rising or setting. For the Sun this occurs at the time of the equinoxes. The star Mintaka (δ Orionis), the leading star of Orion’s belt, has a declination of approximately 0.3°S and can be considered on the celestial equator. 
For an observer near the equator, such a body is always nearly east or west. Because of refraction and dip, the azimuth should be noted when the center of the Sun or a star is a little more than one Sun diameter (half a degree) above the horizon. The Moon should be observed when its upper limb is on the horizon. Body at Rising or Setting: Except for the Moon, the azimuth angle of a body is almost the same at rising as at setting, except that the former is toward the east and the latter toward the west. If the azimuth is measured both at rising and setting, true south (or north) is midway between the two observed values, and the difference between this value and 180° (or 000°) is the compass error. Thus, if the compass azimuth of a body is 073° at rising, and 277° at setting, true south (180°) is $\frac{\text{073°+277°}}{\text{2}}$ by compass, and the compass error is 5°E. This method may be in error if the vessel is moving rapidly in a northerly or southerly direction. If the declination and latitude are known, the true azimuth of any body at rising or setting can be determined by means of a diagram on the plane of the celestial meridian or by computation. For this purpose, the body (except the Moon) should be considered as rising or setting when its center is a little more than one Sun diameter (half a degree) above the horizon, because of refraction and dip. Finding direction by the relationship of the Sun to the hands of a watch is sometimes advocated, but the limitations of this method prevent its practical use at sea. A simple technique can be used for determining deviation. Find an object that is easily visible and that floats, but will not drift too fast in the wind. A life preserver, or several tied together, will suffice. Throw this marker overboard, and steer the vessel steadily in the exact opposite direction to the chosen course. At a distance of perhaps half a mile, or more if the marker is still clearly in view, execute a Williamson turn, or turn the vessel 180° in the smallest practical radius, and head back toward the marker. The magnetic course will be midway between the course toward the object and the reciprocal of the course away from the object. Thus, if the boat is on compass course 151° while heading away from the object, and 337° while returning, the magnetic course is midway between 337° and 151° + 180° = 331°, or $\frac{\text{337°+331°}}{\text{2}}$ = 334°. Since 334° magnetic is the same as 337° by compass, the deviation on this heading is 3°W. If a compass is not available, any celestial body can be used to steer by, if its diurnal apparent motion is considered. A reasonably straight course can be steered by noting the direction of the wind, the movement of the clouds, the direction of the waves, or by watching the wake of the vessel. The angle between the centerline and the wake is an indication of the amount of leeway. A body having a declination the same as the latitude of the destination is directly over the destination once each day, when its hour angle equals the longitude, measured westward through 360°. At this time it should be dead ahead if the vessel is following the great circle leading directly to the destination. Inspect the almanac to find a body with a suitable declination. ## EMERGENCY CELESTIAL NAVIGATION ### 2607. Almanacs Almanac information, particularly declination and Greenwich Hour Angle of bodies, is important to celestial navigation. 
If the only copy available is for a previous year, it can be used for the Sun, Aries (♈), and stars without serious error by emergency standards. However, for greater accuracy, proceed as follows: For declination of the Sun, enter the almanac with a time that is earlier than the correct time by 5h 49m multiplied by the number of years between the date of the almanac and the correct date, adding 24 hours for each February 29th that occurs between the dates. If the date is February 29th, use March 1 and reduce by one the number of 24 hour periods added. For GHA of the Sun or Aries, determine the value for the correct time, adjusting the minutes and tenths of arc to agree with that at the time for which the declination is determined. Since the adjustment never exceeds half a degree, care should be used when the value is near a whole degree, to prevent the value from being in error by 1°. If no almanac is available, a rough approximation of the declination of the Sun can be obtained as follows: Count the days from the given date to the nearer solstice (June 21st or December 22nd). Divide this by the number of days from that solstice to the equinox (March 21st or September 23rd), using the equinox that will result in the given date being between it and the solstice. Multiply the result by 90°. Enter Table 2604 with the angle so found and extract the factor. Multiply this by 23.45° to find the declination. Example 1: The date is August 24th. Required: The approximate declination of the Sun. Solution: The number of days from the given date to the nearer solstice (June 21) is 64. There are 94 days between June 21 and September 23. Dividing and multiplying by 90°, $\frac{\textit{64}}{\textit{94}}$ × 90° = 61.3° The factor from Table 2604 is 0.5. The declination is 23.45° × 0.5 = 11.7°. We know it is north because of the date. The accuracy of this solution can be improved by considering the factor of Table 2604 as the value for the mid-angle between the two limiting ones (except that 1.00 is correct for 0° and 0.00 is correct for 90°), and interpolating to one additional decimal. In this instance the interpolation would be between 0.50 at 59.5° and 0.40 at 66°. The interpolated value is 0.47, giving a declination of 11.0°N. Still greater accuracy can be obtained by using a table of natural cosines instead of Table 2604. By natural cosine, the value is 11.3°N. If the latitude is known, the declination of any body can be determined by observing a meridian altitude. It is usually best to make a number of observations shortly before and after transit, plot the values on graph paper, letting the ordinate (vertical scale) represent altitude, and the abscissa (horizontal scale) the time. The altitude is found by fairing a curve or drawing an arc of a circle through the points, and taking the highest value. A meridian altitude problem is then solved in reverse. Example 2: The latitude of a vessel is 40°16'S. The Sun is observed on the meridian, bearing north. The observed altitude is 36°29'. Required: Declination of the Sun. Solution: The zenith distance is 90° - 36°29' = 53°31'. The Sun is 53°31' north of the observer, or 13°15' north of the equator. Hence, the declination is 13°15' N. Answer: Dec. 13°15' N. The GHA of Aries can be determined approximately by considering it equal to GMT (in angular units) on September 23rd. To find GHA Aries on any other date, add 1° for each day following September 23rd. The value is approximately 90° on December 22nd, 180° on March 21st and 270° on June 21st. 
The values found can be in error by as much as several degrees, and so should not be used if better information is available. An approximate check is provided by the great circle through Polaris, Caph (the leading star of Cassiopeia), and the eastern side of the square of Pegasus. When this great circle coincides with the meridian, LHA ♈ is approximately 0°. The hour angle of a body is equal to its SHA plus the hour angle of Aries. If an error of up to 4°, or a little more, is acceptable, the GHA of the Sun can be considered equal to GMT ± 180° (12h). For more accurate results, one can make a table of the equation of time from the Nautical Almanac perhaps at five- or ten-day intervals, and include this in the emergency navigation kit. The equation of time is applied according to its sign to GMT ± 180° to find GHA. ### 2608. Altitude Measurement With a sextant, altitudes are measured in the usual manner. If in a small boat or raft, it is a good idea to make a number of observations and average both the altitudes and times, or plot on graph paper the altitudes versus time. The rougher the sea, the more important this process becomes, which tends to average out errors caused by rough weather observations. The improvisations which may be made in the absence of a sextant are so varied that in virtually any circumstances a little ingenuity will produce a device to measure altitude. The results obtained with any improvised method will be approximate at best, but if a number of observations are averaged, the accuracy can be improved. A measurement, however approximate, is better than an estimate. Two general types of improvisation are available: 1. Circle. Any circular degree scale, such as a maneuvering board, compass rose, protractor, or plotter can be used to measure altitude or zenith distance directly. This is the principle of the ancient astrolabe. A maneuvering board or compass rose can be mounted on a flat board. A protractor or plotter may be used directly. There are a number of variations of the technique of using such a device. Some of them are: A peg or nail is placed at the center of the circle as seen in Figure 2608a. Figure 2608a. Improvised astrolabe; shadow method. A weight is hung from the 90° graduation, and a string for holding the device is attached at the 270° graduation. When it is held with the weight acting as a plumb bob, the 0° - 180° line is horizontal. In this position the board is turned in azimuth until it is in line with the Sun. The intersection of the shadow of the center peg with the arc of the circle indicates the altitude of the center of the Sun. The weight and loop can be omitted and pegs placed at the 0° and 180° points of the circle. While one observer sights along the line of pegs to the horizon, an assistant notes the altitude. The weight can be attached to the center pin, and the three pins (0°, center, 180°) aligned with the celestial body. The reading is made at the point where the string holding the weight crosses the scale. The reading thus obtained is the zenith distance unless the graduations are labeled to indicate altitude. This method, illustrated in Figure 2608b, is used for bodies other than the Sun. Figure 2608b. Improvised astrolabe; direct sighting method. Whatever the technique, reverse the device for half the readings of a series to minimize errors of construction. Generally, the circle method produces more accurate results than the right triangle method, described below. 2. Right triangle. 
A cross-staff can be used to establish one or more right triangles, which can be solved by measuring the angle representing the altitude, either directly or by reconstructing the triangle. Another way of determining the altitude is to measure two sides of the triangle and divide one by the other to determine one of the trigonometric functions. This procedure, of course, requires a source of information on the values of trigonometric functions corresponding to various angles. If the cosine is found, Table 2604 can be used. The tabulated factors can be considered correct to one additional decimal for the value midway between the limited values (except that 1.00 is the correct value for 0° and 0.00 is the correct value for 90°) without serious error by emergency standards. Interpolation can then be made between such values. By either protractor or table, most devices can be graduated in advance so that angles can be read directly. There are many variations of the right triangle method. Some of these are described below. Two straight pieces of wood can be attached to each other in such a way that the shorter one can be moved along the longer, the two always being perpendicular to each other. The shorter piece is attached at its center. One end of the longer arm is held to the eye. The shorter arm is moved until its top edge is in line with the celestial body, and its bottom edge is in line with the horizon. Thus, two right triangles are formed, each representing half the altitude. See Figure 2608c. Figure 2608c. Improvised cross-staff. For low altitudes, only one of the triangles is used, the long arm being held in line with the horizon. The length of half the short arm, divided by the length of that part of the long arm between the eye and the intersection with the short arm, is the tangent of half the altitude (the whole altitude if only one right triangle is used). The cosine can be found by dividing that part of the long arm between the eye and the intersection with the short arm by the slant distance from the eye to one end of the short arm. Graduations consist of a series of marks along the long arm indicating settings for various angles. The device should be inverted for alternate readings of a series. A rule or any stick can be held at arm’s length. The top of the rule is placed in line with the celestial body being observed, and the top of the thumb is placed in line with the horizon. The rule is held vertically. The length of rule above the thumb, divided by the distance from the eye to the top of the thumb, is the tangent of the angle observed. The cosine can be found by dividing the distance from the eye to the top of the thumb by the distance from the eye to the top of the rule. If the rule is tilted toward the eye until the minimum of rule is used, the distance from the eye to the middle of the rule is substituted for the distance from the eye to the top of the thumb, half the length of the rule above the thumb is used, and the angle found is multiplied by 2. Graduations consist of marks on the rule or stick indicating various altitudes. For the average observer each inch of rule will subtend an angle of about 2.3°, assuming an eye-to-ruler distance of 25 inches. This relationship is good to a maximum altitude of about 20°. The accuracy of this relationship can be checked by comparing the measurement against known angles in the sky. Angular distances between stars can be computed by sight reduction methods, including Pub. No. 
229, by using the declination of one star as the latitude of the assumed position, and the difference between the hour angles (or SHA’s) of the two bodies as the local hour angle. The angular distance is the complement of the computed altitude. The angular distances between some well-known star pairs are: end stars of Orion’s belt, 2.7°; pointers of the Big Dipper, 5.4°, Rigel to Orion’s belt, 9.0°; eastern side of the great square of Pegasus, 14.0°; Dubhe (the pointer nearer Polaris) and Mizar (the second star in the Big Dipper, counting from the end of the handle), 19.3°. The angle between the lines of sight from each eye is, at arm’s length, about 6°. By holding a pencil or finger horizontally, and placing the head on its side, one can estimate an angle of about 6° by closing first one eye and then the other, and noting how much the pencil or finger appears to move in the sky. The length of the shadow of a peg or nail mounted perpendicular to a horizontal board can be used as one side of an altitude triangle. The other sides are the height of the peg and the slant distance from the top of the peg to the end of the shadow. The height of the peg, divided by the length of the shadow, is the tangent of the altitude of the center of the Sun. The length of the shadow, divided by the slant distance, is the cosine. Graduations consist of a series of concentric circles indicating various altitudes, the peg being at the common center. The device is kept horizontal by floating it in a bucket of water. Half the readings of a series are taken with the board turned 180° in azimuth. Two pegs or nails can be mounted perpendicular to a board, with a weight hung from the one farther from the eye. The board is held vertically and the two pegs aligned with the body being observed. A finger is then placed over the string holding the weight, to keep it in position as the board is turned on its side. A perpendicular line is dropped from the peg nearer the eye, to the string. The body’s altitude is the acute angle nearer the eye. For alternate readings of a series, the board should be inverted. Graduations consist of a series of marks indicating the position of the string at various altitudes. As the altitude decreases, the triangle becomes smaller. At the celestial horizon it becomes a straight line. No instrument is needed to measure the altitude when either the upper or lower limb is tangent to the horizon, as the sextant altitude is then 0°. ### 2609. Sextant Altitude Corrections If altitudes are measured by a marine sextant, the usual sextant altitude corrections apply. If the center of the Sun or Moon is observed, either by sighting at the center or by shadow, the lower-limb corrections should be applied, as usual, and an additional correction of minus 16' applied. If the upper limb is observed, use minus 32'. If a weight is used as a plumb bob, or if the length of a shadow is measured, omit the dip (height of eye) correction. If an almanac is not available for corrections, each source of error can be corrected separately, as follows: If a sextant is used, the index correction should be determined and applied to all observations, or the sextant adjusted to eliminate index error. Refraction is given to the nearest minute of arc in Table 2609. Altitude 5° 6° 7° 8° 10° 12° 15° 21° 33° 63° 90° Refraction 9' 8' 7' 6' 5' 4' 3' 2' 1' 0 Table 2609. Simplified refraction table. The value for a horizon observation is 34'. 
If the nearest 0.1° is sufficiently accurate, as with an improvised method of observing altitude, a correction of 0.1° should be applied for altitudes between 5° and 18°, and no correction applied for greater altitudes. Refraction applies to all observations, and is always minus. Dip, in minutes of arc, is approximately equal to the square root of the height of eye, in feet. The dip correction applies to all observations in which the horizon is used as the horizontal reference. It is always a minus. If 0.1° accuracy is acceptable, no dip correction is needed for height of eye in a small boat. The semidiameter of the Sun and Moon is approximately 16' of arc. The correction does not apply to other bodies or to observations of the center of the Sun and Moon, by whatever method, including shadow. The correction is positive if the lower limb is observed, and negative if the upper limb is observed. For emergency accuracy, parallax is applied to observations of the Moon only. An approximate value, in minutes of arc, can be found by multiplying 57' by the factor from Table 2604, entering that table with altitude. For more accurate results, the factors can be considered correct to one additional decimal for the altitude midway between the limiting values (except that 1.00 is correct for 0° and 0.00 is correct for 90°), and the values for other altitudes can be found by interpolation. This correction is always positive. For observations of celestial bodies on the horizon, the total correction for zero height of eye is: Sun: upper limb: (−)50' Lower limb: (−)18' Moon: upper limb: (+)7' Lower limb: (+)39' Planet/Star: (−)34° Dip should be added algebraically to these values. Since the sextant altitude is zero, the observed altitude is equal to the total correction. ### 2610. Sight Reduction Sight reduction tables should be used, if available. If not, use the compact sight reduction tables found in the Nautical Almanac. If trigonometric tables and the necessary formulas are available, they will serve the purpose. Speed in solution is seldom a factor in a liferaft, but might be important aboard ship, particularly in hostile areas. If tables but no formulas are available, determine the mathematical knowledge possessed by the crew. Someone may be able to provide the missing information. If the formulas are available, but no tables, approximate natural values of the various trigonometric functions can be obtained graphically. Graphical solution of the navigational triangle can be made by the orthographic method explained in Chapter 15, Navigational Astronomy. A maneuvering board might prove helpful in the graphical solution for either trigonometric functions or altitude and azimuth. Very careful work will be needed for useful results by either method. Unless proper navigational equipment is available, better results might be obtained by making separate determinations of latitude and longitude. ### 2611. Finding Latitude Several methods are available for determining latitude; none requires accurate time. Latitude can be determined using a meridian altitude of any body, if its declination is known. If accurate time, knowledge of the longitude, and an almanac are available, the observation can be made at the correct moment, as determined in advance. However, if any of these are lacking, or if an accurate altitude measuring instrument is unavailable, it is better to make a number of altitude observations before and after meridian transit. 
Then plot altitude versus time on graph paper, and the highest (or lowest, for lower transit) altitude is scaled from a curve faired through the plotted points. At small boat speeds, this procedure is not likely to introduce a significant error. The time used for plotting the observations need not be accurate, as elapsed time between observations is all that is needed, and this is not of critical accuracy. Any altitudes that are not consistent with others of the series should be discarded. Latitude by Polaris is explained in Chapter 20, Sight Reduction. In an emergency, only the first correction is of practical significance. If suitable tables are not available, this correction can be estimated. The trailing star of Cassiopeia (ε Cassiopeiae) and Polaris have almost exactly the same SHA. The trailing star of the Big Dipper (Alkaid) is nearly opposite Polaris and ε Cassiopeiae. These three stars, ε Cassiopeiae, Polaris, and Alkaid, form a line through the N. Celestial Pole (approximately). When this line is horizontal, there is no correction. When it is vertical, the maximum correction of 56' applies. It should be added to the observed altitude if Alkaid is at the top, and subtracted if ε Cassiopeiae is at the top. For any other position, estimate the angle this line makes with the vertical, and multiply the maximum correction (56') by the factor from Table 2604, adding if Alkaid is higher than ε Cassiopeiae, and subtracting if it is lower. See Figure 2611. Figure 2611. Relative positions of ε Cassiopeiae, Polaris, and Alkaid with respect to the north celestial pole. For more accurate results, the factor from Table 2604 can be considered accurate to one additional decimal for the mid-value between those tabulated (except that 1.00 is correct for 0° and 0.00 for 90°). Other values can be found by interpolation. The length of the day varies with latitude. Hence, latitude can be determined if the elapsed time between sunrise and sunset can be accurately observed. Correct the observed length of day by adding 1 minute for each 15' of longitude traveled toward the east and subtracting 1 minute for each 15' of longitude traveled toward the west. The latitude determined by length of day is the value for the time of meridian transit. Since meridian transit occurs approximately midway between sunrise and sunset, half the interval may be observed and doubled. If a sunrise and sunset table is not available, the length of daylight can be determined graphically using a diagram on the plane of the celestial meridian, as explained in Chapter 15. A maneuvering board is useful for this purpose. This method cannot be used near the time of the equinoxes and is of little value near the equator. The Moon can be used if moonrise and moonset tables are available. However, with the Moon, the half-interval method is of insufficient accuracy, and allowance should be made for the longitude correction. The declination of a body in zenith is equal to the latitude of the observer. If no means are available to measure altitude, the position of the zenith can be determined by holding a weighted string overhead. ### 2612. Finding Longitude Unlike latitude, determining longitude requires accurate Greenwich time. All such methods consist of noting the Greenwich time at which a phenomenon occurs locally. In addition, a table indicating the time of occurrence of the same phenomenon at Greenwich, or equivalent information, is needed. Three methods may be used to determine longitude. 
When a body is on the local celestial meridian, its GHA is the same as the longitude of the observer if in west longitude, or 360°−λ in east longitude. Thus, if the GMT of local time of transit is determined and a table of Greenwich Hour Angles (or time of transit of the Greenwich meridian) is available, longitude can be computed. If only the equation of time is available, the method can be used with the Sun. This is the reverse of the problem of finding the time of transit of a body. The time of transit is not always apparent. If a curve is made of altitude versus time, as suggested previously, the time corresponding to the highest altitude is used in finding longitude. Under some conditions, it may be preferable to observe an altitude before meridian transit, and then again after meridian transit when the body has returned to the same altitude as at the first observation. Meridian transit occurs midway between these two times. A body in the zenith is on the celestial meridian. If accurate azimuth measurement is available, note the time when the azimuth is 000° or 180°. The difference between the observed GMT of sunrise or sunset and the LMT tabulated in the almanac is the longitude in time units, which can then be converted to angular measure. If the Nautical Almanac is used, this information is tabulated for each third day only. Greater accuracy can be obtained if interpolation is used for determining intermediate values. Moonrise or moonset can be used if the tabulated LMT is corrected for longitude. Planets and stars can be used if the time of rising or setting can be determined. This can be computed, or approximated using a diagram on the plane of the celestial meridian (See Chapter 15, Navigational Astronomy). Either of these methods can be used in reverse to set a watch that has run down or to check the accuracy of a watch if the longitude is known. In the case of a meridian transit, the time at the instant of transit is not necessary. Simply start the watch and measure the altitude several times before and after transit, or at equal altitudes before and after transit. Note the times of these observations and find the exact watch time of meridian transit. The difference between this time and the correct time of transit is the correction factor by which to reset the watch.