# Brillouin Klein bottle from artificial gauge fields

## Abstract

A Brillouin zone is the unit for the momentum space of a crystal. It is topologically a torus, and distinguishing whether a set of wave functions over the Brillouin torus can be smoothly deformed to another leads to the classification of various topological states of matter. Here, we show that under $\mathbb{Z}_2$ gauge fields, i.e., hopping amplitudes with phases ±1, the fundamental domain of momentum space can assume the topology of a Klein bottle. This drastic change of the Brillouin zone theory is due to the projective symmetry algebra enforced by the gauge field. Remarkably, the non-orientability of the Brillouin Klein bottle corresponds to a topological classification by a $\mathbb{Z}_2$ invariant, in contrast to the Chern number valued in $\mathbb{Z}$ for the usual Brillouin torus. The result is a novel Klein bottle insulator featuring topological modes at two edges related by a nonlocal twist, radically distinct from all previous topological insulators. Our prediction can be readily achieved in various artificial crystals, and the discovery opens a new direction for exploring topological physics through gauge-field-modified fundamental structures of physics.

## Introduction

The Brillouin zone is a fundamental concept in physics. It is essential for the physical description of crystalline solids, metamaterials, and artificial periodic systems. In particular, it sets the stage for classifying topological states, which, in mathematical terms, is the task of studying the topology of Hermitian vector bundles over the Brillouin zone as the base manifold1,2,3. Clearly, the topology of the Brillouin zone itself is a crucial ingredient for the classification. Since Brillouin zones have the topology of a torus, topological states known to date basically correspond to classifications done on the torus. Meanwhile, although initially studied for electronic systems in solids4,5,6, topological states have been successfully extended to artificial crystals, such as acoustic/photonic crystals, electric circuit arrays, and mechanical networks. These systems have the advantage of great tunability. More importantly, gauge fields can be flexibly engineered in artificial crystals. In particular, the $\mathbb{Z}_2$ gauge field, i.e., hopping amplitudes allowed to take phases ±1, can be readily realized in these systems and has already been demonstrated in many experiments7,8,9,10,11,12,13,14,15,16,17. A crucial but so far less appreciated point is that under gauge fields, the symmetries of a system satisfy projective algebras18,19,20,21 beyond the textbook group theory for crystal symmetry22, which has recently been demonstrated experimentally with acoustic crystals23,24. Then, what is the physical consequence of the projective symmetry algebra? Does it generate any new topology that is impossible for systems without gauge fields? These questions have not been answered yet. In this article, we reveal that the projective symmetry algebra can lead to a fundamental change of the Bloch band theory.
We show that it can generate a peculiar "momentum-space nonsymmorphic symmetry", i.e., when represented in momentum space, the projective algebra requires that a certain symmetry must include a fractional translation in the reciprocal lattice. For example, a real-space reflection symmetry can become a glide reflection in momentum space. This unique feature in turn dictates that the fundamental domain of momentum space has the topology of a Klein bottle, and it leads to new topological states.

## Results

### Emerged momentum-space glide reflection

Let us start by considering the reflection symmetry Mx that inverts the x axis, and the translation symmetry Ly along the y direction. In the absence of gauge fields, they commute with each other, [Mx, Ly] = 0. However, under certain gauge flux configurations, the algebraic relation may be projectively modified to

$$\{M_x, L_y\} = 0, \qquad (1)$$

where the altered font (in the original notation) indicates the operator representations under the gauge field. The seemingly peculiar relation in (1) can be intuitively understood by inspecting Fig. 1. Here, we have four lattice sites forming a rectangle invariant under Mx. Assuming there is a $\mathbb{Z}_2$ gauge flux of π through the rectangle, both MxLy and LyMx send a particle from site 1 to site 3, but the two paths enclose a π flux, resulting in the anti-commutation. For a crystalline system, if we choose a unit cell with lattice constant b along the y direction, the operator Ly is diagonalized as $\hat{L}_y = e^{i k_y b}$ in momentum space. Then, the projective algebra in Eq. (1) requires

$$\hat{M}_x\, e^{i k_y b}\, \hat{M}_x = -e^{i k_y b} = e^{i(k_y + G_y/2)b}, \qquad (2)$$

where $G_y$ is the length of the reciprocal lattice vector $\boldsymbol{G}_y$. From Eq. (2), we make the key observation that $\hat{M}_x$ must contain a half translation in the reciprocal lattice along ky when represented in momentum space. Explicitly,

$$\hat{M}_x = U \mathcal{L}_{\boldsymbol{G}_y/2}\, \hat{m}_x, \qquad (3)$$

where U is some unitary matrix, $\hat{m}_x$ is the operator that inverts kx, and $\mathcal{L}_{\boldsymbol{G}_y/2}$ denotes the operator that implements the half translation Gy/2 of the reciprocal lattice. Hence, Mx may be regarded as a momentum-space glide reflection. As an example, consider the simple lattice model in Fig. 2a. Here, the primitive unit cell in real space consists of four sites. The $\mathbb{Z}_2$ gauge flux through each plaquette is specified in the figure, respecting Mx and the translation period b along y. Evidently, the relation of Eq. (1) is fulfilled for this case, and the mirror symmetry operator is represented by

$$\hat{M}_x = \tau_0 \otimes \sigma_1\, \mathcal{L}_{\boldsymbol{G}_y/2}\, \hat{m}_x \qquad (4)$$

in momentum space, where the τ's and σ's are two sets of Pauli matrices that operate on the rows and columns of a unit cell [see Fig. 2a]. The appearance of the fractional reciprocal lattice translation can also be understood from the following analysis. To describe a lattice with gauge flux, we need to choose explicit gauge connections on the lattice bonds. For instance, in Fig. 2a, we show a specific gauge choice, with red and blue colors denoting negative and positive hopping amplitudes, respectively.
Then, for this given gauge choice, a crystal symmetry operator is given by $\mathsf{R} = G_R R$, namely a combination of the manifest spatial operation $R$ and a gauge transformation $G_R$. This is because, although the flux configuration is invariant under $R$, the specific gauge connection configuration may be changed by $R$. To restore the original gauge connection, an additional gauge transformation $G_R$ should be performed. For instance, the gauge transformation required after the reflection Mx is depicted in Fig. 2b. Notably, $G_R$ may not be compatible with the spatial period of the lattice. [Here, it must be incompatible with Ly due to Eq. (1).] Clearly, in Fig. 2b, the period of $G_R$ along y is twice the lattice constant. Then, after a Fourier transform, the incompatibility manifests in Mx as a fractional translation in momentum space. Note that for conventional space groups, nonsymmorphic symmetries such as glide reflections exist only on real-space lattices, i.e., the involved fractional translations act only in real space but not in momentum space25,26. When transformed to momentum space, they invariably become fixed-point operations, namely, there are always momenta (such as the Γ point) that are invariant under the operation. Therefore, ordinary (real-space) nonsymmorphic symmetries are fundamentally distinct from the momentum-space nonsymmorphic symmetry Mx discovered here, for which the fractional translation acts in momentum space. It is also clear that Mx is a free operation, i.e., no momentum is invariant under Mx. The emergence of momentum-space nonsymmorphic symmetry is a unique feature of projective symmetry algebras. The free character of such symmetry operations produces remarkable consequences, as we discuss below.

### Brillouin Klein bottle

We proceed to elucidate the physical consequences of this momentum-space glide reflection symmetry. Let $\mathcal{H}(\boldsymbol{k})$ be the Bloch Hamiltonian in momentum space. Then, the constraint by Mx in Eq. (3) is

$$U \mathcal{H}(k_x, k_y) U^{\dagger} = \mathcal{H}(-k_x, k_y + \pi). \qquad (5)$$

Here, for simplicity we have set b = 1. This means that if $|\psi(\boldsymbol{k})\rangle$ is an eigenstate of $\mathcal{H}(\boldsymbol{k})$ with energy $\mathcal{E}(\boldsymbol{k})$, then $U|\psi(\boldsymbol{k})\rangle$ is an eigenstate of $\mathcal{H}(-k_x, k_y+\pi)$ with the same energy, i.e.,

$$\mathcal{H}(-k_x, k_y+\pi)\, U|\psi(\boldsymbol{k})\rangle = \mathcal{E}(\boldsymbol{k})\, U|\psi(\boldsymbol{k})\rangle. \qquad (6)$$

As a result, the spectrum at (kx, ky) is equivalent to that at (−kx, ky + π). Thus, the Brillouin zone can be partitioned into two parts, τ1/2 and $\bar{\tau}_{1/2}$, as illustrated in Fig. 3a. Only one of them is independent, i.e., the fundamental domain of momentum space is half of the Brillouin zone. This can be explicitly verified for the model in Fig. 2. In Fig. 3d, we plot the spectrum of the lattice model. One can observe that the reflection of the band structure over τ1/2 through the ky axis coincides with that over $\bar{\tau}_{1/2}$ after a half translation $\mathcal{L}_{\boldsymbol{G}_y/2}$. This can be seen even more clearly from the constant energy cut in Fig. 3e.
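The constraint of Eq. (5) can be checked numerically. The following is a minimal MATLAB sketch, not part of the original work: it assumes the nearest-neighbor model $\mathcal{H}_0(\boldsymbol{k})$ of Eq. (10) in the Methods, with U = τ0 ⊗ σ1 from Eq. (4), and verifies the relation at an arbitrary momentum (the hopping values are placeholders for illustration; the second-neighbor term of the Methods obeys the same relation).

```matlab
% Sketch: numerical check of U*H(kx,ky)*U' == H(-kx, ky+pi), Eq. (5),
% for the nearest-neighbor model of Eq. (10); hopping values are illustrative.
t11 = 1; t12 = 3.5; t21 = 3.5; t22 = 1; t1y = 2; t2y = 1.5; eps0 = 1;
q1x = @(kx) t11 + t12*exp(1i*kx);
q2x = @(kx) t21 + t22*exp(1i*kx);
qpy = @(ky) t1y + t2y*exp(1i*ky);
qmy = @(ky) t1y - t2y*exp(1i*ky);
H = @(kx,ky) [ eps0,     conj(q1x(kx)),  conj(qpy(ky)),  0; ...
               q1x(kx),  eps0,           0,              conj(qmy(ky)); ...
               qpy(ky),  0,             -eps0,           conj(q2x(kx)); ...
               0,        qmy(ky),        q2x(kx),       -eps0 ];
U = kron(eye(2), [0 1; 1 0]);    % tau_0 (x) sigma_1, cf. Eq. (4)
kx = 0.7; ky = -1.3;             % arbitrary test momentum
err = norm(U*H(kx,ky)*U' - H(-kx, ky+pi));
fprintf('||U H(k) U^+ - H(-kx, ky+pi)|| = %.2e\n', err);   % ~ 1e-15
```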
We have emphasized that, as a momentum-space nonsymmorphic symmetry, Mx is a free operation with no fixed point, distinct from conventional space group symmetries. Mathematically, it is known that an equivariant bundle with the group G acting freely on the base space X is equivalent to a bundle over the orbit space X/G27. For our case, this simply means all the information, including topology, is fully captured by the fundamental domain τ1/2 = [−π, π) × [−π, 0). Since kx is periodic, we may write τ1/2 = S1 × [−π, 0) as a cylinder. This cylinder has two boundaries $S^1_{\pm}$ at ky = −π and 0, respectively. Importantly, $S^1_{\pm}$ are oppositely oriented and "glued" together, because they are connected by Mx [Fig. 3b]. Thus, the fundamental domain here is topologically a Klein bottle, as illustrated in Fig. 3c. We remark that in solid state physics, conventional space group symmetries are commonly used to reduce Brillouin zones to so-called irreducible Brillouin zones. However, because those symmetries are not free, the irreducible Brillouin zone is not sufficient to capture the topological information (e.g., symmetry information is still required at high-symmetry points or paths of the irreducible Brillouin zone), distinct from the case here. Besides, the Brillouin Klein bottle is a closed manifold, whereas the irreducible Brillouin zones are not. These characteristics are important for the topological classification discussed in the following.

### Topological invariant and edge states

Consider a system in an insulating phase. The task of topological classification is to classify the valence-band wave functions (forming a Hermitian vector bundle) over the Brillouin Klein bottle. This is fundamentally different from the usual cases, where the base manifold is a torus or a sphere. A crucial difference is orientability. A torus (and a sphere) is orientable, whereas a Klein bottle is non-orientable. For orientable closed base manifolds such as the torus, the most elementary topological invariant is the Chern number, which is the integration of the Berry curvature $\mathcal{F}$ for the valence bands over the Brillouin torus. The Chern number is valued in $\mathbb{Z}$, and the sign of the integer is related to the orientation of the torus, since a reflection reverses the sign of the Chern number. In contrast, for the Brillouin Klein bottle, which is non-orientable, any topological invariant can only be valued in $\mathbb{Z}_2$, since the sign of the invariant has no significance and we must have 1 = −1. Now, we formulate an explicit expression for this $\mathbb{Z}_2$ topological invariant. This is based on two key observations. First, the two boundaries $S^1_{\pm}$ of τ1/2 are related by an inversion of kx [Fig. 3a]. The inversion operation reverses the sign of the Berry phase for a 1D system28. Hence, the Berry phases γ(−π) and γ(0) over $S^1_{\pm}$ are opposite up to a multiple of 2π, i.e., $\gamma(0) + \gamma(-\pi) = 0 \bmod 2\pi$. Second, due to Stokes' theorem, $\int_{\tau_{1/2}} d^2k\, \mathcal{F} + \gamma(0) - \gamma(-\pi) = 0 \bmod 2\pi$. Therefore, we can formulate the $\mathbb{Z}_2$ invariant as

$$\nu = \frac{1}{2\pi}\int_{\tau_{1/2}} d^2k\, \mathcal{F} + \frac{1}{\pi}\gamma(0) \bmod 2. \qquad (7)$$

Here, the formula takes integer values because of the two observations above.
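In practice, ν can be evaluated from Berry phases alone, anticipating the pump form derived in the next paragraph. The sketch below is a minimal MATLAB illustration, not the authors' code: it assumes the model of Eq. (10) with the second-neighbor term and the first parameter set from the Methods (expected to give the ν = 1 phase of Fig. 4b, d), computes γ(ky) for the two valence bands as discrete Wilson loops along kx, and reads off ν.

```matlab
% Sketch: Z2 invariant from valence-band Berry phases gamma(ky),
% nu = [gamma(0)+gamma(-pi)]/(2*pi) mod 2, for the Methods model (first set).
t11 = 1; t12 = 3.5; t21 = 3.5; t22 = 1; t1y = 2; t2y = 1.5; eps0 = 1; lam = 1;
tau1 = [0 1; 1 0];  tau2 = [0 -1i; 1i 0];  sig2 = tau2;
H = @(kx,ky) [ eps0,                conj(t11+t12*exp(1i*kx)), conj(t1y+t2y*exp(1i*ky)), 0; ...
               t11+t12*exp(1i*kx),  eps0,                     0,                        conj(t1y-t2y*exp(1i*ky)); ...
               t1y+t2y*exp(1i*ky),  0,                       -eps0,                     conj(t21+t22*exp(1i*kx)); ...
               0,                   t1y-t2y*exp(1i*ky),       t21+t22*exp(1i*kx),      -eps0 ] ...
             + lam*cos(ky)*kron(tau1, sig2) + lam*sin(ky)*kron(tau2, sig2);

Nk = 200; nocc = 2;                 % kx grid points; two valence bands
kys = linspace(-pi, 0, 101);
gam = zeros(size(kys));
for a = 1:numel(kys)
    kxs = 2*pi*(0:Nk-1)/Nk;
    V = zeros(4, nocc, Nk);
    for j = 1:Nk
        [vec, en] = eig(H(kxs(j), kys(a)), 'vector');
        [~, order] = sort(real(en));
        V(:,:,j) = vec(:, order(1:nocc));       % lowest two (valence) bands
    end
    W = eye(nocc);
    for j = 1:Nk
        jn = mod(j, Nk) + 1;                    % periodic wrap kx -> kx + 2*pi
        W = W * (V(:,:,j)' * V(:,:,jn));        % discrete Wilson loop
    end
    gam(a) = -imag(log(det(W)));                % Berry phase, branch (-pi, pi]
end
gam = unwrap(gam);                              % continuous gamma(ky)
nu  = mod(round((gam(1) + gam(end))/(2*pi)), 2);
fprintf('Z2 invariant nu = %d\n', nu);
```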
Since a large gauge transformation of the valence wave functions can change γ(0) by a multiple of 2π, only the parity of the formula is gauge invariant and hence can be defined as a topological invariant. We also comment that the $\mathbb{Z}_2$ topological classification here is based on equivariant K theory, or K_G theory, with $\widetilde{K}(K) = \mathbb{Z}_2$, where K is the Klein bottle. The topological invariant is derived from the fact that line bundles over a 2D manifold M are topologically classified by $H^2(M, \mathbb{Z})$, with $H^2(K, \mathbb{Z}) = \mathbb{Z}_2$. Thus, the resultant classification and the topological invariant are stable under the addition of trivial bands. We can give Eq. (7) a pump interpretation with an intuitive geometric picture. Over τ1/2 = S1 × [−π, 0], we can always choose a complete set of continuous valence states $|\psi_n(\boldsymbol{k})\rangle$ which are periodic along kx. Then, the corresponding Berry connection $\mathcal{A}(\boldsymbol{k})$ is also periodic in kx. For such an $\mathcal{A}(\boldsymbol{k})$, we can compute a γ(ky) that is continuous from ky = −π to 0. Moreover, it is straightforward to derive that $\int_{\tau_{1/2}} d^2k\, \mathcal{F} = -\int_{-\pi}^{0} dk_y\, \partial_{k_y}\gamma(k_y) = \gamma(-\pi) - \gamma(0)$. Hence, from Eq. (7), we find that

$$\nu = \frac{1}{2\pi}[\gamma(0) + \gamma(-\pi)] \bmod 2. \qquad (8)$$

Considering the generic case where γ(−π) ≠ 0 or π, the path of γ(ky) has to cross 0 or π in the course of varying ky from −π to 0 [see Fig. 4a]. Introducing $W_0$ and $W_\pi$ as the number of times that γ(ky) crosses 0 and π, respectively, ν can be given a geometric interpretation:

$$\nu = W_{\pi} \bmod 2, \qquad (9)$$

i.e., ν is nontrivial if and only if γ(ky) crosses π an odd number of times. The insulator with nontrivial ν = 1 may be termed a Klein bottle insulator. It features special topological edge states, whose existence can be understood from Eq. (9). For nontrivial ν, γ(ky) has to cross π at some (odd number of) ky, and the 1D kx-subsystems at these crossing points have Berry phases of π. It is well known that the valence-band Berry phase corresponds to the Wannier center and the 1D charge polarization. In particular, γ = π corresponds to Wannier centers at the midpoints between lattice sites, and therefore leads to an in-gap state at each end. Thus, there must be in-gap boundary states located at each edge parallel to the y direction. Because of the continuity of energy bands, these in-gap states must be connected to form a topological edge band. Consider our model in Fig. 2 with two sets of parameters. The topological invariant of Eq. (9) is computed as shown in Fig. 4b, c, respectively. The corresponding band structures for a ribbon geometry with edges along the y direction are shown in Fig. 4d, e. The topological edge bands are clearly observed for the Klein bottle insulator phase with ν = 1. Similar to the Möbius insulators, these edge bands are detached from the bulk bands19,29,30. Under strong boundary potentials, they could be shifted out of the gap. We note two features of the edge states that distinguish the Klein bottle insulator from conventional crystalline topological insulators.
First, the gapless modes appear on mirror-symmetry-breaking edges, rather than on symmetry-preserving edges as for conventional crystalline topological insulators. It is easy to see that the above argument indicates the existence of topological edge modes on any edge not perpendicular to y, whereas the mirror-symmetric edge perpendicular to y is expected to be gapped, without topological edge modes. Second, the momentum-space glide reflection remarkably leads to a nonlocal relation between edge states. Consider two edges along y connected by the Mx symmetry in real space. Because of the nonsymmorphic character of Mx in momentum space, only the energy bands over ky ∈ [−π, 0) are independent, while those over ky ∈ [0, π) can be deduced from the action of Mx. In particular, Mx nonlocally maps the topological edge band on one edge over ky ∈ [−π, 0) to that on the other edge over ky ∈ [0, π). In Fig. 4d, one can clearly see that the edge band on the left edge, translated by Gy/2, coincides with that on the right edge.

We have demonstrated that the interplay between gauge fields and symmetry can fundamentally modify the Bloch band theory. Under gauge fields, a spatial symmetry can acquire a nonsymmorphic character in momentum space. In particular, the momentum-space glide reflection reduces the Brillouin torus to the Brillouin Klein bottle, and therefore changes the topological classification at the most fundamental level. We formulate a novel kind of topological insulator over the Brillouin Klein bottle, which is as elementary as the Chern insulator over the Brillouin torus. Although we treat a 2D reflection in our analysis, the discussion can be readily generalized to 3D with analogous momentum-space glide reflections and screw rotations (see Supplementary Note 4 for demonstrations). Since glide reflections and screw rotations are the most elementary nonsymmorphic symmetries, all nonsymmorphic space groups may be realized on reciprocal lattices by certain gauge flux configurations, which are mathematically dictated by the second cohomology groups of the space groups. Since gauge fluxes can be engineered in artificial crystals for realizing projective symmetries23,24, our work opens the door toward a fertile ground for exploring novel momentum-space symmetries and topologies of artificial crystals beyond the scope of topological quantum materials.

## Methods

### The simple 2D model

We consider a model defined on the rectangular lattice in Fig. 2. Constrained by Mx and the two translation symmetries, the most general Hamiltonian with only nearest-neighbor hopping terms is given by

$$\mathcal{H}_0(\boldsymbol{k}) = \begin{bmatrix} \varepsilon & [q_1^x(k_x)]^* & [q_+^y(k_y)]^* & 0 \\ q_1^x(k_x) & \varepsilon & 0 & [q_-^y(k_y)]^* \\ q_+^y(k_y) & 0 & -\varepsilon & [q_2^x(k_x)]^* \\ 0 & q_-^y(k_y) & q_2^x(k_x) & -\varepsilon \end{bmatrix}, \qquad (10)$$

where $q_a^x(k_x) = t_{a1}^x + t_{a2}^x e^{i k_x}$ with a = 1, 2, $q_\pm^y(k_y) = t_1^y \pm t_2^y e^{i k_y}$, and ±ε are the on-site energies. To break the time-reversal (T) symmetry, we may include the following second-neighbor hopping terms, $\mathcal{H}^{(1)}(\boldsymbol{k}) = \lambda\cos k_y\, \tau_1\otimes\sigma_2 + \lambda\sin k_y\, \tau_2\otimes\sigma_2$.
For Figs. 3d, e and 4b, d, the parameters are given by $t_{11}^x = t_{22}^x = 1$, $t_{12}^x = t_{21}^x = 3.5$, $t_1^y = 2$, $t_2^y = 1.5$, $\varepsilon = 1$, $\lambda = 1$. For Fig. 4c, e, $t_{11}^x = t_{12}^x = 1$, $t_{21}^x = 3.5$, $t_{22}^x = 1.7$, $t_1^y = 2$, $t_2^y = 1.5$, $\varepsilon = 0.6$, $\lambda = 0$. It is worth pointing out that if the T symmetry is preserved, the two 1D kx-subsystems $\mathcal{H}(k_x, \pm\pi/2)$ are invariant under MxT. This is because T inverts (kx, ±π/2) to (−kx, ∓π/2), but Mx moves (−kx, ∓π/2) back to (kx, ±π/2). Then, MxT is effectively a spacetime inversion symmetry for $\mathcal{H}(k_x, \pm\pi/2)$, and therefore quantizes its Berry phases into integer multiples of π. As a result, the curve in Fig. 4b would always cross π at ky = −π/2.

## Data availability

The data generated and analyzed during this study are available from the corresponding author upon reasonable request.

## References

1. Thouless, D. J., Kohmoto, M., Nightingale, M. P. & den Nijs, M. Quantized Hall conductance in a two-dimensional periodic potential. Phys. Rev. Lett. 49, 405–408 (1982).
2. Simon, B. Holonomy, the quantum adiabatic theorem, and Berry's phase. Phys. Rev. Lett. 51, 2167–2170 (1983).
3. Chiu, C.-K., Teo, J. C. Y., Schnyder, A. P. & Ryu, S. Classification of topological quantum matter with symmetries. Rev. Mod. Phys. 88, 035005 (2016).
4. Volovik, G. E. The Universe in a Helium Droplet, Vol. 117 (Oxford University Press, 2003).
5. Qi, X.-L. & Zhang, S.-C. Topological insulators and superconductors. Rev. Mod. Phys. 83, 1057–1110 (2011).
6. Hasan, M. Z. & Kane, C. L. Colloquium: topological insulators. Rev. Mod. Phys. 82, 3045–3067 (2010).
7. Ozawa, T. et al. Topological photonics. Rev. Mod. Phys. 91, 015006 (2019).
8. Ma, G., Xiao, M. & Chan, C. T. Topological phases in acoustic and mechanical systems. Nat. Rev. Phys. 1, 281–294 (2019).
9. Lu, L., Joannopoulos, J. D. & Soljačić, M. Topological photonics. Nat. Photonics 8, 821–829 (2014).
10. Yang, Z. et al. Topological acoustics. Phys. Rev. Lett. 114, 114301 (2015).
11. Xue, H. et al. Observation of an acoustic octupole topological insulator. Nat. Commun. 11, 2442 (2020).
12. Imhof, S. et al. Topolectrical-circuit realization of topological corner modes. Nat. Phys. 14, 925–929 (2018).
13. Yu, R., Zhao, Y. X. & Schnyder, A. P. 4D spinless topological insulator in a periodic electric circuit. Natl Sci. Rev. 7, 1288–1295 (2020).
14. Prodan, E. & Prodan, C. Topological phonon modes and their role in dynamic instability of microtubules. Phys. Rev. Lett. 103, 248101 (2009).
15. Huber, S. D. Topological mechanics. Nat. Phys. 12, 621–623 (2016).
16. Cooper, N. R., Dalibard, J. & Spielman, I. B. Topological bands for ultracold atoms. Rev. Mod. Phys. 91, 015005 (2019).
17. Dalibard, J., Gerbier, F., Juzeliūnas, G. & Öhberg, P. Colloquium: artificial gauge potentials for neutral atoms. Rev. Mod. Phys. 83, 1523 (2011).
18. Wen, X.-G. Quantum orders and symmetric spin liquids. Phys. Rev. B 65, 165113 (2002).
19. Zhao, Y. X., Huang, Y.-X. & Yang, S. A. $\mathbb{Z}_2$-projective translational symmetry protected topological phases. Phys. Rev. B 102, 161117(R) (2020).
20. Zhao, Y. X., Chen, C., Sheng, X.-L. & Yang, S. A. Switching spinless and spinful topological phases with projective PT symmetry. Phys. Rev. Lett. 126, 196402 (2021).
21. Shao, L. B., Liu, Q., Xiao, R., Yang, S. A. & Zhao, Y. X. Gauge-field extended k·p method and novel topological phases. Phys. Rev. Lett. 127, 076401 (2021).
22. Bradley, C. & Cracknell, A. The Mathematical Theory of Symmetry in Solids: Representation Theory for Point Groups and Space Groups (Oxford University Press, 2009).
23. Xue, H. et al. Projectively enriched symmetry and topology in acoustic crystals. Phys. Rev. Lett. 128, 116802 (2022).
24. Li, T. et al. Acoustic Möbius insulators from projective symmetry. Phys. Rev. Lett. 128, 116803 (2022).
25. Wieder, B. J. & Kane, C. L. Spin-orbit semimetals in the layer groups. Phys. Rev. B 94, 155108 (2016).
26. Watanabe, H., Po, H. C., Vishwanath, A. & Zaletel, M. Filling constraints for spin-orbit coupled insulators in symmorphic and nonsymmorphic crystals. Proc. Natl Acad. Sci. USA 112, 14551–14556 (2015).
27. Segal, G. Equivariant K-theory. Publ. Math. IHÉS 34, 129–151 (1968).
28. Zak, J. Berry's phase for energy bands in solids. Phys. Rev. Lett. 62, 2747–2750 (1989).
29. Shiozaki, K., Sato, M. & Gomi, K. Z2 topology in nonsymmorphic crystalline insulators: Möbius twist in surface states. Phys. Rev. B 91, 155120 (2015).
30. Young, S. M. & Wieder, B. J. Filling-enforced magnetic Dirac semimetals in two dimensions. Phys. Rev. Lett. 118, 186401 (2017).

## Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grants No. 12174181, No. 12161160315, and No. 11874201) and the Singapore MOE AcRF Tier 2 (MOE2019-T2-1-001).

## Author information

### Contributions

Z.Y.C. and Y.X.Z. conceived the idea. S.A.Y. and Y.X.Z. supervised the project. Z.Y.C. and Y.X.Z. did the theoretical analysis. Z.Y.C., S.A.Y. and Y.X.Z. wrote the manuscript.

### Corresponding author

Correspondence to Y. X. Zhao.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Peer review

### Peer review information

Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Chen, Z. Y., Yang, S. A. & Zhao, Y. X. Brillouin Klein bottle from artificial gauge fields. Nat. Commun. 13, 2215 (2022). https://doi.org/10.1038/s41467-022-29953-7
# Rotate, Translate or Scale Crystal Structure

## Toggle Option

Enabling the "Rotate / Translate / Scale" features, accessible via the corresponding buttons within the "Footer" Menu of the main 3D editor interface, displays a set of coordinate axes alongside the structure currently being inspected. Depending on whether the Translate/Scale or the Rotate option is selected, the resulting reference system consists of Cartesian or spherical coordinates, respectively. In this way the component of the crystal structure that has been selected under the "Scene" sidebar list can be shifted or modified in different ways, as described in what follows.

## Translation

After the translational (Cartesian) coordinate axes are activated by pressing the Translate button, the following possibilities are available to the user.

### Axial

The user can achieve an axial translation of the crystal component with respect to this coordinate system by holding the corresponding axis with the left mouse button and then making the desired move.

### Planar

Planar translations can be performed in much the same way, by holding the colored squares between the axes, which indicate the various planes of the Cartesian space.

### Origin

A further colored square is present at the origin of the Cartesian coordinate system. It can be held and moved in much the same fashion as in planar translations to achieve a translation of the origin point itself.

## Rotation

Spherical rotation axes are enabled upon pressing the Rotate option. This allows one to rotate the selected crystal structure component along one of the three azimuthal angles.

## Scaling

Selecting the third, Scale, button option allows the user to deform the selected crystal structure component in various directions, for example by compressing or elongating it. This is done similarly to axial translations, by holding the corresponding axis (or the origin) with the left mouse button and then making the desired move.

## Local Coordinates

There are two coordinate systems, referred to as "local" and "world". The local position is the position of an object relative to its parent: if the user moves the parent object, the local positions of the child objects do not change. The world position, on the other hand, represents the absolute position of the object in space. For example, if a parent sits at world position (2, 0, 0) and a child has local position (1, 1, 0), the child's world position is (3, 1, 0); moving the parent changes the child's world position but not its local position. The user can choose to employ local coordinates by ticking the local checkbox located at the right end of the "Footer" Menu.

## Animation

We demonstrate the execution of example translations, rotations and scaling of a crystal structure in the following animation.
# MATLAB Code for QPSK Constellation

Quadrature phase shift keying (QPSK) is a digital modulation technique that uses four points on the constellation diagram, equispaced around a circle. The four states correspond to the carrier phases π/4, 3π/4, 5π/4 and 7π/4, i.e., to the (unnormalized) constellation points [−1+1j, 1+1j, −1−1j, 1−1j], and each QPSK symbol is one complex sample carrying two bits. QPSK is very robust, but it requires some form of linear amplification. A set of random QPSK symbols can be created by generating an array of random integers in 0–3 (or by mapping pairs of random bits), and the result can be inspected with a scatter plot or constellation diagram; in Simulink, the QPSK Modulator Baseband block can visualize its signal constellation directly from the block mask, and the same constellation is used for each transmit antenna.

There are two ways to map bit pairs to the constellation points: binary code and Gray code. The primary advantage of the Gray code is that only one of the two bits changes when moving between adjacent constellation points, so the most likely symbol errors cause only single bit errors. For demodulation, the 90° spacing of the points means that the received symbols can be demapped simply from the signs of the in-phase and quadrature components, without computing the phase explicitly (for BPSK only the sign of the in-phase component is needed).

Constellation and eye diagrams are the standard tools for visualizing the impact of timing errors, phase/frequency offsets and noise on a modulated signal, and both can be plotted with any offset. Around this core, the usual MATLAB exercises include: BER simulations of BPSK, QPSK and 16-QAM over AWGN, Rayleigh and Rician fading channels, with or without the punctured convolutional code standardized in IEEE 802.11; offset QPSK and π/4-DQPSK variants; spreading with Gold code sequences; decision-directed carrier phase recovery; complete radio chains such as a QPSK transmitter on a USRP received by an RTL-SDR device; HDL/VHDL generation of the QPSK modulator for FPGA targets; and multicarrier systems such as DVB-H or MIMO-OFDM transceivers with Alamouti space-time coding. Ready-made QPSK modulation and demodulation scripts are also available on the MATLAB Central File Exchange; a common request in that direction is to take a binary sequence, perform QPSK modulation with the pskmod function, and then display the constellation of the resulting symbols.
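The following is a minimal, illustrative sketch (assuming the Communications Toolbox is available) that maps random bits onto a Gray-coded QPSK constellation with pskmod, passes the signal through AWGN, and displays the noisy constellation; since the default reference constellation of the comm.ConstellationDiagram System object is already QPSK, no extra properties are needed.

```matlab
% Illustrative sketch: Gray-coded QPSK constellation under AWGN.
% Requires the Communications Toolbox (pskmod, awgn, comm.ConstellationDiagram).
numBits = 2e4;
bits    = randi([0 1], numBits, 1);                   % random payload bits
sym     = bi2de(reshape(bits, 2, []).', 'left-msb');  % 2-bit symbols, values 0..3
txSig   = pskmod(sym, 4, pi/4, 'gray');               % phases pi/4, 3pi/4, 5pi/4, 7pi/4
rxSig   = awgn(txSig, 15, 'measured');                % add noise at 15 dB SNR

cd = comm.ConstellationDiagram;                       % default reference is QPSK
cd(rxSig);                                            % noisy cloud around the 4 points

% Hard decisions (equivalent to taking the signs of I and Q for Gray mapping),
% followed by a quick bit-error-rate sanity check.
rxSym  = pskdemod(rxSig, 4, pi/4, 'gray');
rxBits = reshape(de2bi(rxSym, 2, 'left-msb').', [], 1);
fprintf('BER = %g\n', mean(rxBits ~= bits));
```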
For performance evaluation, the modulated signal is passed through a noisy channel and the received samples are compared with the transmitted ones. Assuming the additive noise follows the Gaussian probability distribution function, the bit error rate of Gray-coded QPSK over AWGN is the same as that of BPSK at a given Eb/N0, and BER-versus-Eb/N0 curves for BPSK, QPSK, 8-PSK and 16-QAM over AWGN (and over Rayleigh or Rician fading) are a standard MATLAB exercise, either by Monte-Carlo simulation or by the semianalytic technique used in BERTool. Because a QPSK constellation centered on the origin has decision boundaries defined simply by the I and Q axes, hard-decision demodulation amounts to taking the signs of the two components; soft-decision (log-likelihood-ratio) demapping, used in front of convolutional or LDPC decoders, additionally requires an estimate of the noise variance. QPSK can also be viewed as the simplest square QAM: the QAM modulator is so named because, in analog applications, the messages vary the amplitudes of the two DSBSC carriers, and with two-level (±V) messages QAM becomes QPSK; more generally, a square M-QAM constellation is constructed from two orthogonal PAM signals with sqrt(M) levels each.

Constellation tools also reveal impairments directly. A constellation diagram that connects successive points displays the signal trajectory; a spiral pattern (as in the "Constellation Diagram Without Sync" view of the QPSK receiver examples) indicates a residual phase and frequency offset, spreading of the points as a burst progresses indicates phase drift, and an I/Q amplitude imbalance distorts the square, which an I/Q imbalance compensator can correct. The eye diagram complements this by showing how the waveforms used to send successive bits can lead to decision errors when the sampling instant or threshold is wrong. Note that when the received waveform is viewed at two samples per symbol, only every other sample lies on the ideal constellation points, so the diagram must be taken at the correct symbol sampling instants. The same constellation-domain analysis carries over to offset QPSK, π/4-DQPSK and 8-PSK, and to system-level studies such as WCDMA/DS-SS spreading, Rake reception over multipath fading, 2×2 MIMO-OFDM, LTE and 5G NR physical channels, and FPGA implementations of log-likelihood-ratio soft-decision demappers.
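As a concrete example of the BER exercise, the sketch below (illustrative only; it assumes the Communications Toolbox) simulates Gray-coded QPSK over AWGN at one sample per symbol and compares the measured bit error rate with the theoretical curve from berawgn.

```matlab
% Illustrative sketch: simulated vs. theoretical BER of Gray-coded QPSK over AWGN.
EbNo_dB = 0:2:10;
nBits   = 2e5;
ber     = zeros(size(EbNo_dB));
for n = 1:numel(EbNo_dB)
    bits   = randi([0 1], nBits, 1);
    sym    = bi2de(reshape(bits, 2, []).', 'left-msb');
    tx     = pskmod(sym, 4, pi/4, 'gray');
    snr    = EbNo_dB(n) + 10*log10(2);        % 2 bits/symbol: SNR = Eb/No + 3 dB
    rx     = awgn(tx, snr, 'measured');
    rxSym  = pskdemod(rx, 4, pi/4, 'gray');
    rxBits = reshape(de2bi(rxSym, 2, 'left-msb').', [], 1);
    ber(n) = mean(rxBits ~= bits);
end
semilogy(EbNo_dB, ber, 'o-', EbNo_dB, berawgn(EbNo_dB, 'psk', 4, 'nondiff'), '--');
grid on; xlabel('E_b/N_0 (dB)'); ylabel('BER');
legend('simulated QPSK', 'theory');
```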
Perform the constellation demapping with an estimated noise variance equal to zero (no added noise), for example after the IQImbalanceCompensator System object. Abstract — A MATLAB and Simulink based simulation of wideband code division multiple access (WCDMA) presents a way of demonstrating the performance of WCDMA in a wireless channel; the modulation schemes studied are 16-ary QAM (Quadrature Amplitude Modulation) and QPSK (Quadrature Phase Shift Keying). Look for spreading of the constellation points as the burst progresses, and evaluate the performance of the system in noise. For constructing an M-QAM constellation, the PAM dimension is set as $$D=\sqrt{M}$$. Generate a random QPSK constellation at a defined EVM level. This page describes QPSK (Quadrature Phase Shift Keying) modulation basics: QPSK uses four points on the constellation diagram, equally spaced around a circle, and needs only two basis functions. Figure 4 shows the BER over an AWGN channel for BPSK, QPSK, 8PSK and 16QAM, and the following MATLAB program illustrates the BER calculation for BPSK over an AWGN channel (MATLAB code for plotting the Eb/No (SNR) vs. BER curve for BPSK). As you know, easy-to-read code is not always efficient for a specific chipset; no effort has been made here to write efficient code — it was kept as simple as possible for the readers. Chapter 1, Fundamentals of Cellular Communication, covers the background knowledge required for this project; the binary spreading code used here is the Gold code sequence. Constellation visualization: "Generate a sequence of information bits with the length of 16,000" and plot the constellation (use a different color for the original constellation symbols and the distorted samples). I mix the received signal down using another cosine and sine wave plus a low-pass filter to recover the I and Q data. The effect of RRC (root raised cosine) pulse-shaping filtering is to give a roundness to the QPSK constellation diagram; the Raised Cosine Transmit Filter block performs root raised cosine pulse shaping with a configurable roll-off factor. Advanced topics in digital communications are briefly discussed if time allows. Below is my MATLAB code for QPSK communication: I am unable to understand which values to use to generate the phase constellation — which bit does each point represent, and what is the difference between binary and Gray mapping? A related blog on digital communication shows how to simulate MATLAB code for BPSK, QPSK and 8-QAM, apply rectangular pulse shaping (RPS), then use a square root raised cosine (SRRC) filter as pulse-shaping and matched filter, and find the minimum number of filter coefficients for which the loss stays within tolerance. Designed and simulated a Rake receiver in MATLAB with 3 Rake fingers for a 2x2 MIMO system using BPSK modulation to tackle multipath fading, and compared the BER with a generalized Rake receiver. QPSK vs. OQPSK vs. pi/4-QPSK: the differences between these modulation techniques are described, with links to QPSK and BPSK MATLAB code. I have a binary file that I want to simulate over an AWGN and a mobile channel. The first 8 bits are used to determine constellation points of the QPSK channels and the rest those of the BPSK channels; the remaining bits are the payload. I want to write MATLAB code that creates a constant-envelope PSK waveform for M = 8, so that the amplitude of the signal can reach sqrt(2). > Do you mean pi/4-DQPSK?
> I think you mean pi/4-DQPSK. In this paper, a QPSK modulation design that uses a Manchester code is presented. In another paper, we present a (7, 4) code which can be applied directly to the phase of the QPSK constellation points. QPSK refers to a type of phase modulation technique in which there are four states. Because the default reference constellation for the comm. The QPSKModulator object modulates using the quadrature phase shift keying method. Typical lab material covers polar plots, constellation and eye diagrams before sampling, and BER versus Eb/N0. QPSK modulation is very robust, but requires some form of linear amplification. The result is shown in figure 7. QPSK modulation and demodulation in MATLAB over an AWGN channel: we will use quantization, QPSK modulation, QPSK demodulation and dequantization. BER analysis of BPSK-, QPSK- and QAM-based OFDM systems using Simulink, with C and C++ code generation from MATLAB. TLT-5400/5406 Digital Transmission, 2nd MATLAB exercise: here we consider complex symbol alphabets, such as QPSK and QAM (Chapter 6). In addition to the well-known ASK format (also known as OOK), instruments are also produced for the generation of PAM, DPSK, DQPSK, DP-QPSK and QAM signals. File 2: qpsk_demod. Scatter plots and constellation diagrams: how can I go about doing this, and is an SRRC filter appropriate? I have some VHDL code written for mapping pairs of bits to a Gray-coded QPSK constellation, but I don't know how to actually program the FPGA with it. 1024QAM, as per IEEE 802. MATLAB code for QPSK modulation and demodulation is available on File Exchange (MATLAB Central), 2nd edition. Each constellation point is located at a given distance; I need MATLAB code for it. We will use MATLAB 7. Each message has only two levels, ±V volt. BER vs. Eb/N0 for QPSK over AWGN and BER vs. Eb/N0 for BPSK over AWGN are shown, and BER vs. Eb/N0 for 8-PSK is discussed separately. Simulation in MATLAB of digital modulations (BPSK, QPSK, 16-QAM) to find the performance and probability of error in Rayleigh and Rician fading. The MATLAB command screen gives a rough measurement of relative data rate. Figure 8: QPSK modulation.
The second stage will be the implementation of QPSK on USRP hardware (you could use these files and change the paths, or use your own QPSK TX). Lab 3: modulation and detection. Plot the QPSK reference constellation: create a constellation diagram object and plot the constellation diagram for QPSK. This constellation visualization feature allows you to visualize a signal constellation for specific block parameters, and the data can also be viewed as a waveform if plotted in MATLAB. Differences between QPSK and BPSK modulation types are explained, and QPSK modulation MATLAB code links are also provided. A communication system simulation with an LDPC encoder and decoder (UFMC_OFDM___TransceiverChain_0). The modulator in DVB-S/S2 systems: while in DVB-S two equal-size data streams go from the convolutional coder to the mapper and the mapping is in fact only a transposition to Gray code, in DVB-S2 the single stream from the LDPC coder is mapped onto two- or three-bit symbols. I wanted to perform QPSK modulation of a binary bit stream with the pskmod function and then display the constellation of the modulated symbols. Monte-Carlo simulation technique: the performance study will be carried out by varying the average Es/N0.
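None of the MATLAB scripts referenced above are reproduced here. As a compact, self-contained illustration of the recurring theme — mapping bits onto a Gray-coded QPSK constellation, adding AWGN, and comparing simulated BER with the theoretical curve — here is a minimal Python/NumPy sketch; the bit count and Eb/N0 value are arbitrary choices, not taken from any of the sources above.

import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
n_bits = 200000                    # arbitrary simulation length (must be even)
ebn0_db = 6.0                      # arbitrary Eb/N0 in dB

bits = rng.integers(0, 2, n_bits)
b = bits.reshape(-1, 2)
# Gray-mapped QPSK with unit symbol energy: the first bit sets the sign of I, the second of Q
tx = ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

ebn0 = 10.0 ** (ebn0_db / 10.0)
n0 = 0.5 / ebn0                    # Es = 1 and 2 bits/symbol, so Eb = 0.5 and N0 = Eb/(Eb/N0)
noise = np.sqrt(n0 / 2) * (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))
rx = tx + noise                    # scatter-plotting rx shows the noisy constellation cloud

# Hard decisions on the I and Q components recover the two bits of each symbol
b_hat = np.column_stack((rx.real < 0, rx.imag < 0)).astype(int)
ber_sim = np.mean(b_hat != b)
ber_theory = 0.5 * erfc(sqrt(ebn0))   # Gray-coded QPSK over AWGN
print(ber_sim, ber_theory)

At 6 dB both numbers should come out near 2.4e-3; lowering ebn0_db spreads the constellation points further from their ideal positions, which is the spreading effect described above.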
# What is a good reference for (this way of) generating a logarithmic scale?

I am interested in answers to the title question without the parentheses, but I found the method below rather interesting, and I am hoping to find it published somewhere, along with a teacher's guide with suggestions on how to present it. (My current target audience consists of U.S. students aged 10 and younger.)

If one wants just powers of 2 (or of some number b), one just takes some ruled filler paper (I like college rule, with the lines spaced something like 1/3 inch apart), places 1, 2, 4, etc. on successive rules along (say) the left edge of the paper, and copies this labeling to the right edge of a different sheet. One gets a primitive slide rule to multiply powers of 2 (or of b). A logarithmic scale is just several of these power scales aligned and intertwined, and one needs a good value of (say) (log b / log 2) to get the intertwining right.

Here is the idea. Take a second sheet of ruled paper and (using b=3 as an example) label the rules somewhere 3^0(=1), 3, 9, 27, ... successively. Now tilt the paper so that the rule labeled 3^0 intersects the rule on the first sheet labeled 1, with the point of intersection on the edge of the first paper. Tilt the paper and adjust so that the first rule stays fixed at the intersection, but now the second rule, labeled 3, intersects the edge between the rules labeled 2 and 4. Continue tweaking to get the intersection points with the edge of the rules labeled 9, 27, 81, and so on properly positioned (between the rules labeled with powers of 2). When the last tweak is done, mark the edge of the first paper with ticks at the points where the rules labeled 3^i intersect the edge. Now you have the start of a log scale that includes powers of 2 and 3. One can also add powers of 5 and 7 in an analogous fashion. When one starts with about 28 powers of 2 and fills in powers of 3, 5, and 7 in this fashion, one can then transfer marks to the edge of the second ruled paper and start using this to compute marks for non-prime powers, most notably 6 and 10. Compute and add marks as desired.

My search so far reveals some material talking about the basic properties of the log scale and its use as a slide rule. There is likely material on rational approximation of ratios of logs (this is what is going on when tilting the second sheet: one is coming up with a physical measurement of such a ratio). However, I have not seen anything which starts with two sheets of nicely ruled paper and constructs a log scale or a slide rule. Further, the above method does not require exact computation of powers: knowing that 2^19 is near 520,000 and that 3^12 is near 530,000 suffices for the precision expected with this method and tools.

I think this could be useful in multiplication practice as well as an introduction to exponentiation and its inverse. Alternately, it could serve as an example for those wanting to learn more about nomographs and other physical calculating devices. However, it may be that I can't find it because it may be considered too challenging for my intended audience. So in addition to references, I also ask others here what background/age range/maturity level would be wanted in order to show this.
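As a quick sanity check before doing the paper construction, one can compute where each tick should land. The following short Python sketch is my own illustration (not from a published source); it treats one ruled line as a factor of 2 and prints how many lines above the 1-mark each power of 3, 5, and 7 belongs.

import math

# One ruled line = one factor of 2, so a number n belongs log2(n) lines above the 1-mark.
for b in (3, 5, 7):
    ticks = [(b**i, round(i * math.log2(b), 2)) for i in range(1, 9)]
    print(b, ticks)

# The near-coincidence used in the question: 3^12 lands almost exactly on the 2^19 line.
print(2**19, 3**12, round(12 * math.log2(3), 3))   # 524288 531441 19.02

So the tick for 3 goes about 1.58 lines above 1, the tick for 9 about 3.17 lines up, and so on — which is exactly the ratio the tilting procedure measures physically.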
# Geometric and Fractal Properties of Brownian Motion and Random Walk Paths in Two and Three Dimensions

```@inproceedings{Lawler1998GeometricAF, title={Geometric and Fractal Properties of Brownian Motion and Random Walk Paths in Two and Three Dimensions}, author={G. F. Lawler}, year={1998} }```

There is a close relationship between critical exponents for probabilities of events and fractal properties of paths of Brownian motion and random walk in two and three dimensions. Cone points, cut points, frontier points, and pioneer points for Brownian motion are examples of sets whose Hausdorff dimension can be given in terms of corresponding exponents. In the latter three cases, the exponents are examples of intersection exponents for Brownian motion. The "non-mean field" or "multifractal"…
# AcuGetCpCf

Extract the coefficient of pressure (Cp) and the coefficient of friction (Cf) from an AcuSolve solution database.

## Syntax

acuGetCpCf [options]

## Type

AcuSolve Post-Processing Program

## Description

AcuGetCpCf is a post-processing utility that uses the AcuSolve solution database of nodal pressure and shear stress to compute the coefficient of pressure (Cp) and the coefficient of friction (Cf) on a specified set of points. The Cp and Cf values are computed on a curve that is projected onto one or more user-requested surfaces or explicitly defined by you in three-dimensional space. The selected surfaces, determined by the osis flag, are queried for and obtained during the execution of the application. The nodal values of pressure (or shear stress) from those surfaces are extracted according to the user-specified point_type attribute, which is auto by default. When point_type=auto, you must set the radial_locations and cutplane_normal_direction attributes to define the location and cut plane direction corresponding to the requested surface definition. The quantities obtained at the requested locations are then normalized by the calculated dynamic pressure, as shown in the following relationships.

(1) $$C_p = \frac{p - p_\infty}{\tfrac{1}{2}\rho_\infty U_\infty^2}$$

(2) $$C_f = \frac{|\tau_w|}{\tfrac{1}{2}\rho_\infty U_\infty^2}$$

Here $C_p$ is the coefficient of pressure, $p$ is the local pressure, $p_\infty$ is the freestream pressure, $\rho_\infty$ is the freestream density and $U_\infty$ is the freestream velocity, defined by the nodal pressure, reference_pressure, reference_density and reference_velocity, respectively. $C_f$ is the coefficient of friction, computed from the magnitude of the wall shear stress $\tau_w$ and the same normalization parameters.

In the following, the full name of each option is followed by its abbreviated name and its type. For a general description of option specifications, see Command Line Options and Configuration Files. See below for more individual option details:

help or h (boolean)
If set, the program prints a usage message and exits. The usage message includes all available options, their current values, and the place where each option is set.

problem or pb (string)
The name of the problem is specified via this option. This name is used to generate input file names and extracted surface file names.

working_directory or dir (string)
All internal files are stored in this directory. This directory does not need to be on the same file system as the input files of the problem.

run_id or run (integer)
Number of the run in which the extraction is requested. If run_id is set to 0, the last run in the working directory is assumed.

time_step or ts (integer)
Time step from which to extract selected data.

surface_integral_output_sets or osis (string)
Comma-separated list of surface_output sets. These are the user-given names of the SURFACE_OUTPUT commands in the input file. If surface_integral_output_sets is set to _all, all output sets are included in the coefficient computation.

curve_type or type (enumerated)
Specifies the type of data to generate during the normalization computation. You can specify either cp or cf to compute the coefficient of pressure or the coefficient of friction, respectively.

point_type or pttype (enumerated)
Specifies the type of data extraction method used to obtain nodal data. You can specify either auto or file. Selecting auto will automatically project the data from the requested surfaces onto a three-dimensional curve. When file is selected, the application requires an explicitly defined set of x, y, z point locations to project the data to.
points_file or pts (string)
Specifies the file used when point_type=file; it defines the set of x, y, z point locations to extract data from. The points_file contains rows of point locations with the format: row index, x, y, z (using any delimiter).

radial_locations or rad_locs (string)
Specifies the location(s) of the cut plane along the cutplane_normal_direction when point_type=auto. A comma-separated string is accepted to run the extraction at multiple cut planes.

cutplane_normal_direction or cut_dir (enumerated)
Specifies the cut plane normal direction when point_type=auto. Used in conjunction with the radial_locations to define the projection direction and cut location.

gauge_pressure or gauge_pres (real)
Specifies the gauge pressure conversion from relative to absolute pressure if required. Used when absolute_pressure_offset was not equal to 0.0 when the AcuSolve solution was run.

reference_density or ref_rho (real)
Specifies the reference density used to normalize the pressure (friction) values into coefficients. It should be specified as the freestream density.

reference_velocity or ref_vel (real)
Specifies the reference velocity used to normalize the pressure (friction) values into coefficients. It should be specified as the magnitude of the freestream velocity.

reference_pressure or ref_pres (real)
Specifies the reference pressure used to offset the pressure values for the coefficient computation. It should be specified as the freestream pressure.

reference_viscosity or ref_vis (real)
Specifies the reference dynamic viscosity for the Reynolds number calculation.

specific_heat_ratio or gamma (real)
Specifies the ratio of specific heats for the Mach number calculation.

normalize_chord or nc (boolean)
If this option is set to TRUE, the output location will be normalized by the local chord length.

chord_scale_fac or csf (real)
Specifies the scaling factor to apply to the coordinate locations. Applied prior to scaling by the local chord length when normalize_chord=true.

cp_scale_fac or cpsf (real)
Specifies the scaling factor to apply to the coefficients after they are calculated according to the provided reference quantities.

inviscid_flow or inviscid (boolean)
If this option is set to TRUE, the Cp calculation will be normalized by the reference_velocity alone and will not include effects of surface velocity if non-zero.

query or q (boolean)
If this option is set to TRUE, the application will run in query mode. Query mode will simply print a list of surfaces that are available for coefficient computation. The list of surfaces is queried from the list of surface_output sets available in the solution database.

If this option is set to TRUE, the application will write the variable names for each data column in the output file.

output_file_format or ofmt (enumerated)
Specifies the output format of the files written to disk: ascii for a text-readable, space-delimited file; binary for a compressed binary format.

output_coordinate or out_crd (boolean)
If this option is set to TRUE, the application will also output the x-, y-, z-coordinates in addition to x/c and Cp.

verbose or v (integer)
Sets the verbose level for printing information to the screen. Each higher verbose level prints more information. If verbose is set to 0 (or less), only warning and error messages are printed. If verbose is set to 1, basic processing information is printed in addition to warning and error messages. This level is recommended. verbose levels greater than 1 provide information useful only for debugging.
## Examples Consider the computation of the pressure coefficient of a three-dimensional wing at multiple spanwise locations: acuGetCpCf -pb wing -osis 'Wall - Output' -reference_pressure 94760.7 - reference_velocity 291.7 -reference_density 1.1 -rad_locs 0.23926,0.526372,0.777595 -cut_dir y -type cp or alternatively the options may be put into the configuration file Acusim.cnf as follows: acuGetCpCf.pb=wing acuGetCpCf.osis=”Wall – Output” acuGetCpCf.reference_pressure=94760.7 acuGetCpCf.reference_velocity=291.7 acuGetCpCf.reference_density=1.1 acuGetCpCf.cut_dir=y acuGetCpCf.type=cp The following will be printed to the standard output of the terminal: acuGetCpCf: acuGetCpCf: Opening the AcuSolve solution data base acuGetCpCf: Problem <onera> directory <ACUSIM.DIR> runId <0> acuGetCpCf: acuGetCpCf: Opened run id <1> acuGetCpCf: acuGetCpCf: Extracting <pressure> field from step <1500> acuGetCpCf: acuGetCpCf: Projecting a circle of radius: 1.34e+00 acuGetCpCf: acuGetCpCf: Cut plane: x-z acuGetCpCf: acuGetCpCf: Building projection object acuGetCpCf: acuGetCpCf: Processing location: 1 acuGetCpCf: Cut coordinate: y = 0.23926 acuGetCpCf: Local chord: 0.740 acuGetCpCf: Rotational velocity: 0.0 acuGetCpCf: Total velocity: 291.668 acuGetCpCf: Reference dynamic pressure: 4.68e+04 acuGetCpCf: Cp min/max: -0.70275/0.90013 acuGetCpCf: Chord Reynolds number: 1.33e+07 acuGetCpCf: Writing file: onera.cp.1.dat acuGetCpCf: acuGetCpCf: Processing location: 2 acuGetCpCf: Cut coordinate: y = 0.526372 acuGetCpCf: Local chord: 0.654 acuGetCpCf: Rotational velocity: 0.0 acuGetCpCf: Total velocity: 291.668 acuGetCpCf: Reference dynamic pressure: 4.68e+04 acuGetCpCf: Cp min/max: -0.8154/0.8739 acuGetCpCf: Chord Reynolds number: 1.18e+07 acuGetCpCf: Writing file: onera.cp.2.dat acuGetCpCf: acuGetCpCf: Processing location: 3 acuGetCpCf: Cut coordinate: y = 0.777595 acuGetCpCf: Local chord: 0.580 acuGetCpCf: Rotational velocity: 0.0 acuGetCpCf: Total velocity: 291.668 acuGetCpCf: Reference dynamic pressure: 4.68e+04 acuGetCpCf: Cp min/max: -0.8926/0.85577 acuGetCpCf: Chord Reynolds number: 1.04e+07 acuGetCpCf: Writing file: onera.cp.3.dat acuGetCpCf: acuGetCpCf: Freestream Mach number: 0.84 acuGetCpCf: Theoretical compressible Cp: acuGetCpCf: At stagnation point (max): 1.18899 acuGetCpCf: At sonic point (Mach = 1): -0.32728 The above example produces three output files containing the last time-step from the solution with normalized x/C and pressure coefficient called onera.cp.[1-3].dat. The output file contains the following columns: x/C location Cp 1.0000000000000000e+00 1.4607121703357254e-01 For a steady state simulation, only the last value is likely of interest to you, as it would be the converged solution for a Reynolds Averaged Navier-Stokes simulation. This time-step is output by default. Additional time-steps, if specified in the NODAL_OUTPUT command, can be requested with the -ts option as described above. 
To simply query the solution database, use the following command: acuGetCpCf -q The following will be printed to the standard output of the terminal: acuGetCpCf: acuGetCpCf: Opening the AcuSolve solution data base acuGetCpCf: Problem <onera> directory <ACUSIM.DIR> runId <0> acuGetCpCf: acuGetCpCf: Opened run id <1> acuGetCpCf: acuGetCpCf: Surface output name: Far field - Output acuGetCpCf: Number of surface nodes: 4713 acuGetCpCf: Number of integrated output steps: 1500 acuGetCpCf: Surface output name: Symmetry - Output acuGetCpCf: Number of surface nodes: 33631 acuGetCpCf: Number of integrated output steps: 1500 acuGetCpCf: Surface output name: Wall - Output acuGetCpCf: Number of surface nodes: 121282 acuGetCpCf: Number of integrated output steps: 1500 Consider the computation of the pressure coefficient of a three-dimensional wing acuGetCpCf -reference_pressure 94760.7 -reference_velocity 291.7 - reference_density 1.1 -type cp -output_coordinate -normalize_chord -pttype file -pts point_locations.txt acuGetCpCf: acuGetCpCf: Opening the AcuSolve solution data base acuGetCpCf: Problem <onera> directory <ACUSIM.DIR> runId <0> acuGetCpCf: acuGetCpCf: Opened run id <1> acuGetCpCf: acuGetCpCf: Extracting <wall_shear_stress> field from step <1500> acuGetCpCf: acuGetCpCf: Projecting a circle of radius: 1.34e+00 acuGetCpCf: acuGetCpCf: Building projection object acuGetCpCf: acuGetCpCf: Processing location: 1 acuGetCpCf: Local chord: 0.833 acuGetCpCf: Rotational velocity: 0.0 acuGetCpCf: Total velocity: 291.667 acuGetCpCf: Reference dynamic pressure: 4.68e+04 acuGetCpCf: Cf-x min/max: 2.96307690806e-06/0.00304619492264 acuGetCpCf: Cf-y min/max: -0.000470032309125/0.00153358625221 acuGetCpCf: Cf-z min/max: -0.00158707581029/0.0021541080237 acuGetCpCf: Cf-mag min/max: 7.05953432918e-06/0.0033957476328 acuGetCpCf: Chord Reynolds number: 1.33e+07 acuGetCpCf: Writing file: onera.cf.dat acuGetCpCf: acuGetCpCf: Freestream Mach number: 0.84 acuGetCpCf: Theoretical compressible Cp: acuGetCpCf: At stagnation point (max): 1.18899 acuGetCpCf: At sonic point (Mach = 1): -0.32725 The output file contains the following columns: x/C Cf(x) Cf(y) Cf(z) Cf(mag) X Y Z 1.00e+00 2.85e-04 1.12e-04 2.33e-06 3.07e-04 8.77e-01 2.39e-01 -4.84e-10
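For a quick offline check of the normalization described above, the same Cp/Cf computation can be reproduced directly from extracted nodal values. The following is a small Python sketch using the reference quantities from the examples; the nodal arrays are placeholder values for illustration, not actual AcuSolve output.

import numpy as np

# Freestream reference values, as used in the examples above
p_ref   = 94760.7       # reference (freestream) pressure
rho_ref = 1.1           # reference (freestream) density
v_ref   = 291.7         # reference (freestream) velocity magnitude

q_ref = 0.5 * rho_ref * v_ref**2    # reference dynamic pressure (~4.68e+04, matching the log above)

# Hypothetical nodal values extracted along a cut (placeholders, not solver output)
p_nodal   = np.array([96500.0, 55200.0, 80100.0])          # static pressure at three points
tau_nodal = np.array([[120.5,  43.0,  -7.9],               # wall shear stress vectors
                      [310.2, 101.7, -22.4],
                      [ 88.8,  35.1,  -4.6]])

cp = (p_nodal - p_ref) / q_ref                  # coefficient of pressure, Eq. (1)
cf = np.linalg.norm(tau_nodal, axis=1) / q_ref  # coefficient of friction (magnitude), Eq. (2)
print(cp)
print(cf)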
## What is the expectation value of momentum for $\Psi(x,t) = A\,e^{-(x/a)^2}\,e^{-i\omega t}\,\sin(kx)$?

What is the expectation value of momentum for the wavefunction $\Psi(x,t) = A\,e^{-(x/a)^2}\,e^{-i\omega t}\,\sin(kx)$?
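A worked sketch of the standard argument (assuming $A$ is simply the normalization constant; the integrals converge because of the Gaussian factor):

$$\langle p\rangle=\int_{-\infty}^{\infty}\Psi^*\left(-i\hbar\frac{\partial}{\partial x}\right)\Psi\,dx
=-i\hbar\int_{-\infty}^{\infty}\varphi(x)\,\varphi'(x)\,dx
=-\frac{i\hbar}{2}\Big[\varphi(x)^2\Big]_{-\infty}^{\infty}=0,$$

where $\varphi(x)=A\,e^{-(x/a)^2}\sin(kx)$ is the real spatial part and the global phase $e^{-i\omega t}$ cancels between $\Psi^*$ and $\Psi$. So the expectation value of momentum is zero for this state.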
### Home > A2C > Chapter 1 > Lesson 1.2.2 > Problem 1-85

1-85. Solve each equation for $x$. Homework Help ✎

1. $- 2 ( x + 4 ) = 35 - ( 7 - 4 x )$

Pay careful attention to the placement of the parentheses: $-2x-8 = 28+4x$, so $-36 = 6x$.

$x = −6$

2. $\frac { x - 4 } { 7 } = \frac { 8 - 3 x } { 5 }$

Use multiplication to eliminate the denominators: $5(x-4)=7(8-3x)$, so $5x-20=56-21x$ and $26x=76$.

$x=\frac{38}{13}\approx2.92$
J. Korean Ceram. Soc. > Volume 34(5); 1997 > Article
Journal of the Korean Ceramic Society 1997;34(5): 473.
Synthesis and Sintering of Titanium Carbide (TiC) by the High Pressure Combustion Sintering (HPCS) Method
김지헌, 최상욱, 조원승, 조동수, 오장환 (Department of Inorganic Materials Engineering, Inha University)
Simultaneous Synthesis and Sintering of Titanium Carbide by HPCS (High Pressure-Self Combustion Sintering)
ABSTRACT
Titanium carbide (TiC) has poor sinterability due to its strong covalent bonding. Thus, it is generally fabricated by either hot pressing or pressureless sintering at elevated temperature with the addition of sintering aids such as nickel (Ni), molybdenum (Mo) and cobalt (Co). However, these sintering methods have the following disadvantages: (1) a complicated process, (2) high energy consumption, and (3) the possibility of leaving inevitable impurities in the product. In order to reduce these disadvantages, we investigated the optimum conditions under which dense titanium carbide bodies could be synthesized and sintered simultaneously by the high pressure self-combustion sintering (HPCS) method. This method makes good use of the explosive high energy from the spontaneous exothermic reaction between titanium and carbon. The optimum conditions for nearly full densification were as follows: (1) the densification of the sintered body increases as the pressing pressure is raised from 400 kgf/cm² up to 1200 kgf/cm²; (2) instead of adding coarse graphite or activated carbon, fine particles of carbon black should be added as the carbon source; (3) the optimum molar ratio of carbon to titanium (C/Ti) is unity. Titanium carbide bodies prepared under the optimum conditions had relatively dense textures, with an apparent porosity of 0.5% and a relative density of 98%.
Key words: High Pressure self-Combustion Sintering, TiC, SHS, Relative density, Porosity
A short note about using parallel-io to run shell commands in parallel from Haskell. If you want to try out this blog post's Literate Haskell source then your best bet is to compile in a sandbox which has various package versions fixed using the cabal.config file (via the cabal freeze command). This is how to build the sandbox:

git clone https://github.com/carlohamalainen/playground.git
rm -fr .cabal-sandbox cabal.sandbox.config dist   # start fresh
cabal sandbox init
cabal install
cabal repl

Also, note the line ghc-options: -threaded -rtsopts -with-rtsopts=-N in parallel.cabal. Without those rtsopts options you would have to execute the binary using ./P +RTS -N. Now, onto the actual blog post. First, a few imports to get us going. In one of my work projects I often need to call legacy command line tools to process various imaging formats (DICOM, MINC, Nifti, etc). I used to use a plain call to createProcess and then readRestOfHandle to read the stdout and stderr, but I discovered that it can deadlock and a better approach is to use process-streaming. This is the current snippet that I use: Suppose we have a shell command that takes a while, in this case because it's sleeping. Pretend that it's IO bound. We could run them in order: In Haskell we can think of IO as a data type that describes an IO action, so we can build it up using 'pure' code and then execute it later. To make it a bit more explicit, here is a function for running an IO action: We can use it like this:

*Main> let action = print 3 -- pure code, nothing happens yet
*Main> runIO action -- runs the action
3

And we can rewrite main1 like this: As an aside, runIO is equivalent to liftM id (see Control.Monad for info about liftM). Now, imagine that you had a lot of these shell commands to execute and wanted a pool of, say, 4 workers. The parallel-io package provides withPool which can be used like this: Note that the IO actions (the putStrLn fragments) are provided in a list. A list of IO actions. So we can run our shell commands in parallel like so: If we did this a lot we might define our own version of forM_ that uses withPool: Here is another example of building up some IO actions in pure form and then executing them later. Imagine that instead of a list of Ints for the sleep times, we have some actual sleep times and others that represent an error case. An easy way to model this is using Either, which by convention has the erroneous values in the Left and correct values in the Right. In main5 we define actions by mapping a function over the sleep times, which are now of type Either String Int. We can't apply longShellCommand directly because it expects an Int, so we use traverse longShellCommand instead (see Data.Traversable for the definition of traverse). Next, the Either-of-Either is a bit clunky but we can mash them together using join. Here we have to use fmap because we have list elements of type IO (Either [Char] String), not Either [Char] String as join might expect. One topic that I haven't touched on is dealing with asynchronous exceptions. For this, have a read of Catching all exceptions from Snoyman and also enclosed-exceptions. Also, Chapter 13 of Parallel and Concurrent Programming in Haskell shows how to use the handy async package.
# Banach conjecture There was a paper today in the arXiv mailing list (arXiv:2006.00336) proving yet another case of the Banach conjecture. I never heard of this conjecture before, but it is easy to state and seems to me to be a foundational recognition principle for those Banach spaces that are actually Hilbert spaces. The conjecture was stated by Stefan Banach in 1932 and reads as follows: if $$V$$ is a Banach space (it may be real or complex, and of finite or infinite dimension) such that for some natural number $$n$$ with $$2 \le n < \mathrm{dim}(V)$$ all $$n$$-dimensional subspaces of $$V$$ are pairwise isometrically isomorphic to each other, then $$V$$ is a Hilbert space. Many cases of the Banach conjecture are already known (for example, the $$\infty$$-dimensional case in both the real and the complex version). The above cited arXiv preprint summarizes all the single results in its introduction – so have a look if you are interested.
0 Research Papers: Flows in Complex Systems # Numerical and Experimental Investigations of the Three-Dimensional-Flow Structure of Tandem Cascades in the Sidewall Region [+] Author and Article Information Martin Böhle Professor Chair of Fluid Mechanics and Fluid Machinery, Department of Mechanical and Process Engineering, Technical University Kaiserslautern, Gottlieb Daimler Strasse, Kaiserslautern 67663, Germany e-mail: [email protected] Thomas Frey Chair of Fluid Mechanics and Fluid Machinery, Department of Mechanical and Process Engineering, Technical University Kaiserslautern, Gottlieb Daimler Strasse, Kaiserslautern 67663, Germany e-mail: [email protected] 1Corresponding author. Contributed by the Fluids Engineering Division of ASME for publication in the JOURNAL OF FLUIDS ENGINEERING. Manuscript received April 4, 2013; final manuscript received February 13, 2014; published online May 6, 2014. Assoc. Editor: Zvi Rusak. J. Fluids Eng 136(7), 071102 (May 06, 2014) (13 pages) Paper No: FE-13-1220; doi: 10.1115/1.4026880 History: Received April 04, 2013; Revised February 13, 2014 ## Abstract Tandem blades can be superior to single blades, particularly when large turning angles are required. This is well documented in the open literature and many investigations have been performed on the 2D-flow of tandem cascades to date. However, much less information on the flow near the sidewalls is available. Thus, the question arises as to how the geometry of a tandem cascade should be chosen near the sidewall in order to minimize the flow losses for large turning angles. The present work examines the 3D-flow field in the region of the sidewall of two high turning tandem cascades. A large spacing ratio was chosen for the forward blade of the first tandem cascade ($(t/l)1=1.92$). The second tandem cascade possessed a smaller spacing ratio for the forward blades ($(t/l)1=1.0$). Both cascades had the same total spacing ratio of $t/l=0.6$. Flow phenomena, such as the corner stall of the 3D boundary layer near the sidewall, are examined using both numerical and experimental methods. The empirical correlations of Lieblein and Lei are applied. The flow topology of both tandem cascades is explained and the locations of loss onset are identified. In addition, oil pictures from experiments and streamline pictures of the numerical simulations are shown and discussed for the flow close to the sidewalls. Finally, design rules such as the aerodynamic load splitting and the spacing ratio of the forward- and aft-blades, etc. are taken into account. The examinations are performed for tandem cascades designed for flow turning of approximately $50 deg$ at a Reynolds number of $8×105$. The tandem cascades consist of NACA65 blades with circular camber lines and an aspect ratio of $b/l=1.0$. <> ## References Ohashi, H., 1959, “Theoretical and Experimental Investigations on Tandem Pump Cascades With High Deflection,” Ing. Archiv., 27, pp. 201–226. Raily, J. W. and El-Sara, M. E., 1965, “An Investigation of the Flow Through Tandem Cascades,” Proc. Inst. Mech. Eng., Part 3F, 180. pp. 66–73. Wu, G., Zhuang, B., and Guo, B., 1985, “Experimental Investigation of Tandem Blade Cascades With Double Circular Arc Profiles,” ASME Paper No. 85-IGT-94. Bammert, K. and Staude, R., 1979, “Optimization for Rotor Blades of Tandem Design for Axial Flow Compressors,” ASME Paper No. 79-GT-125. 
McGlumphy, J., Ng, W., Wellborn, S., and Kempf, S., 2007, “Numerical Investigation of Tandem Airfoils for Subsonic Axial-Flow Compressor Blades,” ASME Paper No. IMECE2007-43929. Lieblein, S., 1965, “Aerodynamical Design of Axial-Flow Compressors,” Technical Report No. NASA S-36. McGlumphy, J., Ng, W., Wellborn, S., and S. K., 2008, “3D Numerical Investigations of Tandem Airfoils for a Core Compressor Rotor,” ASME Paper No. GT2008-50427. Schluer, C., Boehle, M., and Cagna, M., 2009, “Numerical Investigation of the Secondary Flows and Losses in a High Turning Tandem Compressor Cascade,” Proceedings of the 8th European Turbomachinery Conference, Graz, Austria. Lakshminaryana, B., 1996, Fluid Dynamics and Heat Transfer of Turbomachinery, John Wiley and Sons, New York. Greitzer, E., Tan, C. S., and Graf, M., 2004, Internal Flow—Concepts and Applications, Cambridge University Press, Cambridge, UK. Cumpsty, N., 2004, Compressor Aerodynamics, Krieger, Malabar, FL. Gbadebo, S., Cumpsty, N., and Hynes, T., 2005, “Three-Dimensional Separations in Axial Compressors,” ASME J. Turbomach., 127(2), p. 331. Lei, V.-M., Spakovoszky, Z., and Greizer, E., 2008, “A Criterion for Axial Compressor Hub-Corner Stall,” ASME J. Turbomach., 130(3), p. 031006. FLUENT, 2009, “Software Package,”Lebanon, NH. Spalart, P. and Allmaras, S., 1992, “A One-Equation Turbulence Model for Aerodynamic Flows,” AIAA Technical Report No. 92-0439. Launder, B. and Spalding, D., 1974, “The Numerical Computation of Turbulent Flows,” Comput. Methods Appl. Mech. Eng., 3, pp. 269–289. Wilcox, D., 1986, “Multiscale Model for Turbulent Flows,” AIAA 24th Aerospace Sciences Meeting. Menter, F., 1994, “Two Equation Eddy-Viscosity Turbulence Models for Engineering applications,” AIAA J., 32(8), pp. 1598–1605. Schobeiri, M., Abdelfattah, S., and Chibli, C., 2012, “Investigating the Cause of Computational Fluid Dynamics Deficiencies in Accurately Predicting the Efficiency and Performance of High Pressure Turbines: A Combined Experimental and Numerical Study,” ASME J. Fluids Eng., 134(10), p. 101104. ## Figures Fig. 2 Fig. 1 Fig. 4 Calculation domain Fig. 5 Velocity profile on inlet Fig. 6 2D-numerical result: ζ¯ and Δβ in dependence of β11 Fig. 3 Diffusion numbers, load split, and (t/l)2 in dependence of the spacing ratio Fig. 7 3D-Numerical result: ζ¯ and Δβ in dependence of β11; losses were mass averaged over the whole exit plane (see the light-gray diamond shaped plane of Fig. 4) Fig. 8 Numerical result: streamlines close to the sidewall of cascade A, β11 = 50 deg Fig. 9 Numerical result: streamlines close to the sidewall of cascade B, β11= 50 deg Fig. 10 Numerical result: streamlines close to the sidewall of cascade A, β11 = 54 deg Fig. 11 Numerical result: streamlines close to the sidewall of cascade B, β11 = 54 deg Fig. 12 Numerical result: ζ-distribution of cascade A, β11 = 50 deg Fig. 14 Numerical result: ζ-distribution of cascade A, β11 = 54 deg Fig. 15 Numerical result: ζ-distribution of cascade B, β11 = 54 deg Fig. 16 Numerical result (qualitative): ζ-distribution in tandem cascade B, β11 = 54 deg Fig. 13 Numerical result: ζ-distribution of cascade B, β11 = 50 deg Fig. 17 Fig. 18 Five hole probe, angles, and velocity components Fig. 19 Measured flow losses at midspan in comparison with the numerical results for cascades A and B Fig. 20 Cascade A: measured and calculated flow losses in the wake at midspan for β11 = 50 deg and β11 = 54 deg Fig. 
21 Cascade B: measured and calculated flow losses in the wake at midspan for β11 = 50 deg and β11 = 54 deg Fig. 22 Cascade A: local measured flow losses for β11 = 50 deg and β11 = 54 deg Fig. 23 Cascade B: local measured flow losses for β11 = 50 deg and β11 = 54 deg Fig. 24 Cascade A: oil picture for β11 = 50 deg Fig. 25 Cascade A: oil picture for β11 = 54 deg Fig. 26 Cascade B: oil picture for β11 = 50 deg Fig. 27 Cascade B: oil picture for β11 = 54 deg
# Modular curves over finite fields I'm looking for a detailed reference for modular curves over finite fields, such as $X(N)$, $X_1(N)$, and $X_0(N)$. There seems to be a lot of literature dealing with them over $\mathbb{C}$, but I'm specifically interested in them from the perspective of algebraic geometry. Also, do there exist tables of their point counts over $\mathbb{F}_q$ for some small (but hopefully at least up to $N=15$ or so) values of $N$, or algorithms (e.g. in Sage) for calculating such information? For genus 0 the Riemann hypothesis for curves over finite fields gives an exact formula, of course. In the classical theory, you see that the set-theoretic points of the complex manifolds $Y(N)$, $Y_1(N)$, etc., are in bijection with certain complex tori plus some additional data (level structure) (up to isomorphism). But they are in fact moduli spaces in the more precise sense (involving Yoneda's lemma) for complex analytic elliptic curves over more general complex analytic spaces (think of them as families of complex analytic elliptic curves, or just complex tori). This involves their functors of points, as opposed to just their set-theoretic ones, and is a hint of the existence of more general moduli spaces $Y(N)$, etc., which live over (localizations of) the ring of integers and, upon base changing to $\mathbf{C}$ and taking $\mathbf{C}$-rational points (the $\mathbf{C}$-analytification), yield the familiar complex manifolds. Exceedingly thorough treatments of these integral models of modular curves (which can be reduced modulo various primes $p$ to get modular curves over finite fields, which are not always nonsingular) can be found in Deligne-Rapoport (giant paper) or Katz-Mazur (giant book). Both also consider the "compactifications" $X(N)$, $X_1(N)$, etc., but their points of view are somewhat different. Also, while I have not thoroughly read either (more of KM), they are both challenging, requiring a fairly good knowledge of algebraic geometry from the modern point of view (although KM essentially avoids the explicit use of algebraic stacks, using lots of descent theory, they don't get the moduli interpretation of the set of cusps in terms of "generalized elliptic curves" that DR do). I would say KM is somewhat easier (relatively speaking), and, if you're a native English-speaker, it has the minor advantage of being in English. Basically, there exist integral versions of (compactified) modular curves (under assumptions on $N$, depending on the moduli problem under consideration) whose functor of points are related to (generalized) elliptic curves over very general bases (one inverts the level to get regular schemes when the moduli problems are representable, but their bad reduction, i.e. primes dividing the level, is of great interest as well, and well studied). These objects can be reduced modulo various primes, and the algebro-geometric nature of the resulting curves depends on whether or not $p$ divides the level. I guess this material is considered "classical" at this point. The compatibility with the analytic theory is not entirely trivial, as again, one has to formulate the correct moduli problem for elliptic curves over complex analytic spaces properly (at which point the compatibility is a consequence of functoriality of analytification and Yoneda's lemma). I'm not sure about point counts in finite fields. Certainly there are descriptions of the curves over finite fields one gets by reducing modulo "bad primes" in DR and KM (involving "supersingular points"). 
I'm not sure how much more detailed I can\should be. There are definitely people active on this site (or if not here, then on MO) who are absolute experts on this subject, and could likely give much more enlightening information. EDIT: I should note that there are probably less technically demanding approaches to reduction of modular curves, at least in special cases, but I haven't learned them. At least the two references I cite are (among) the definitive sources for the thoroughly "modern" algebra-geometric approach. • Thanks a lot for your elaborate response! Just one question: do you know of any references that thoroughly elaborate how one can tell how many cusps of a modular curve are actually defined over $\mathbb{F}_q$? If not, do you know how this works? – mlbaker Jun 30 '15 at 3:36 The short answer is given by the Grothendieck-Lefschetz trace formula $$\#X(\mathbf F_p) = \sum (-1)^i \mathrm{Tr}(\mathrm{Frob}_p \mid H^i_c(X;\mathbf Q_\ell)).$$ For $X$ a compact modular curve associated with a congruence subgroup $\Gamma$, this means that you need to understand the Galois representation $H^1(X,\mathbf Q_\ell)$, as the trace on $H^0$ and $H^2$ just gives you $p+1$. For $p$ not dividing the level, the trace on $H^1$ is given by the Eichler-Shimura theorem. Specifically, $H^1(X)$ is (possibly after extending scalars) the direct sum of the $2$-dimensional Galois representations attached to Hecke eigenforms of weight 2 and level $\Gamma$, and the trace of $\mathrm{Frob}_p$ on $H^1(X)$ is the same as the trace of the Hecke operator $T_p$ acting on this space of cusp forms. This is the same as the sum of the $p$th Fourier coefficient of all normalized Hecke eigenforms. So you can indeed compute the number of $\mathbf F_p$-points in SAGE: this is equivalent to finding a list of all Hecke eigenforms of weight 2 for this level, and telling SAGE to spit out their Fourier coefficients.
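To make the last paragraph concrete, here is a small sketch in Sage (my own illustration, not from the cited sources; the level and prime are arbitrary choices). For a prime $p$ not dividing $N$, the point count of the reduction of $X_0(N)$ follows from the trace of $T_p$ on the weight-2 cusp forms, exactly as described above:

N = 37   # X_0(37) has genus 2
p = 5    # good reduction, since 5 does not divide 37
S = CuspForms(Gamma0(N), 2)       # weight-2 cusp forms for Gamma_0(N)
ap = S.hecke_matrix(p).trace()    # = Tr(Frob_p | H^1) by Eichler-Shimura
print(p + 1 - ap)                 # #X_0(N)(F_p)

The same trace can equivalently be obtained by summing the $p$-th Fourier coefficients of the normalized Hecke eigenforms, as in the answer above.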
def latex

cauldron.display.latex()

Add a mathematical equation in latex math-mode syntax to the display. Instead of the traditional backslash escape character, the @ character is used to prevent backslash conflicts with Python strings. For example, \delta would be @delta.

Arguments

source (str, required): The string representing the latex equation to be rendered.
## Algebra 2 (1st Edition) $k=1/2$ Adding $-4-6k$ to both sides, $7-4=12k-6k$ Simplifying, $3=6k$ or $k=0.5$ Plugging it back in, $3+7=6+4$ which is true.
# The metafor Package A Meta-Analysis Package for R ### Site Tools tips:different_tau2_across_subgroups ## Allowing $\tau^2$ to Differ Across Subgroups In a meta-analysis, we often want to examine if the size of a particular effect differs across different groups of studies. While the focus in such a 'subgroup analysis' tends to be on the size of the effect across the different groups, we might also be interested in examining whether the amount of heterogeneity differs across groups (i.e., whether the effect sizes are more/less consistent in some of the groups). Below, I illustrate different methods for conducting such a subgroup analysis that not only allows the size of the effect but also the amount of heterogeneity to differ across groups. ### Data Preparation For this illustration, we will use the data from the meta-analysis by Bangert-Drowns et al. (2004) on the effectiveness of writing-to-learn interventions on academic achievement. library(metafor) dat <- dat.bangertdrowns2004 dat id author year grade . ni yi vi 1 Ashworth 1992 4 . 60 0.650 0.070 2 Ayers 1993 2 . 34 -0.750 0.126 3 Baisch 1990 2 . 95 -0.210 0.042 4 Baker 1994 4 . 209 -0.040 0.019 5 Bauman 1992 1 . 182 0.230 0.022 . . . . . . . . 44 Weiss & Walters 1980 4 . 25 0.630 0.168 45 Wells 1986 1 . 250 0.040 0.016 46 Willey 1988 3 . 51 1.460 0.099 47 Willey 1988 2 . 46 0.040 0.087 48 Youngberg 1989 4 . 56 0.250 0.072 Variable yi contains the standardized mean differences (with positive values indicating a higher mean level of academic achievement in the intervention group) and variable vi contains the corresponding sampling variances. Variable grade indicates the grade level of the participants (1 = elementary school; 2 = middle school; 3 = high-school; 4 = college). The variable is coded numerically, but we want to treat it as a categorical grouping variable in the analyses below, so we will turn it into a factor. dat$grade <- factor(dat$grade) ### Random-Effects Model To start, we fit a standard random-effects model to the data. res <- rma(yi, vi, data=dat) res Random-Effects Model (k = 48; tau^2 estimator: REML) tau^2 (estimated amount of total heterogeneity): 0.0499 (SE = 0.0197) tau (square root of estimated tau^2 value): 0.2235 I^2 (total heterogeneity / total variability): 58.37% H^2 (total variability / sampling variability): 2.40 Test for Heterogeneity: Q(df = 47) = 107.1061, p-val < .0001 Model Results: estimate se zval pval ci.lb ci.ub 0.2219 0.0460 4.8209 <.0001 0.1317 0.3122 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 The results indicate that the estimated (average) effect size is positive (0.2219) and significantly different from 0 (p < .0001). For standardized mean differences, the size of the effect could be considered relatively small (although such labels are somewhat arbitrary). The Q-test suggests that the true effects are heterogeneous (i.e., that the effectiveness of writing-to-learn interventions differs across studies), with an estimate of $\tau^2$ (i.e., the amount of variance in the true effects) approximately equal to 0.05. ### Subgroup Analysis via Meta-Regression To examine whether the size of the effect differs across grade levels, we can fit a meta-regression model that uses the grade factor as a moderator. 
res <- rma(yi, vi, mods = ~ grade, data=dat)
res

Mixed-Effects Model (k = 48; tau^2 estimator: REML)

tau^2 (estimated amount of residual heterogeneity): 0.0539 (SE = 0.0216)
tau (square root of estimated tau^2 value): 0.2322
I^2 (residual heterogeneity / unaccounted variability): 59.15%
H^2 (unaccounted variability / sampling variability): 2.45
R^2 (amount of heterogeneity accounted for): 0.00%

Test for Residual Heterogeneity: QE(df = 44) = 102.0036, p-val < .0001

Test of Moderators (coefficients 2:4): QM(df = 3) = 5.9748, p-val = 0.1128

Model Results:

estimate se zval pval ci.lb ci.ub
intrcpt 0.2639 0.0898 2.9393 0.0033 0.0879 0.4399 **
grade2 -0.3727 0.1705 -2.1856 0.0288 -0.7069 -0.0385 *
grade3 0.0248 0.1364 0.1821 0.8555 -0.2425 0.2922
grade4 -0.0155 0.1160 -0.1338 0.8935 -0.2429 0.2118
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The omnibus test of the factor is not significant (p = .1128) and hence we do not have statistically significant evidence to reject the null hypothesis that the average effect is the same for all grade levels, although the individual contrasts suggest that the average effect for grade level 2 (middle school) may be quite a bit lower (and even significantly so, with p = .0288) than the average effect for grade level 1 (elementary school).1) We can use the predict() function to obtain the estimated average effect for each grade level.

predict(res, newmods=rbind(c(0,0,0),diag(3)))

pred se ci.lb ci.ub pi.lb pi.ub
1 0.2639 0.0898 0.0879 0.4399 -0.2241 0.7519
2 -0.1088 0.1450 -0.3929 0.1753 -0.6454 0.4278
3 0.2888 0.1027 0.0874 0.4901 -0.2090 0.7865
4 0.2484 0.0735 0.1044 0.3924 -0.2290 0.7258

Hence, we see that the estimated average effect size is quite similar in grade levels 1, 3, and 4, but a bit lower (and in fact negative) in grade level 2 (middle school). However, as noted above, since the test of the factor as a whole is not actually significant, we should not overinterpret this finding.

### Separate Meta-Analyses

A subgroup analysis, comparing the size of the (average) effect across the four grade levels, could also be conducted by simply fitting separate random-effects models in the four groups. We can do this by using the subset argument and collecting the four models in a list.2)

res1 <- list(rma(yi, vi, data=dat, subset=grade==1),
             rma(yi, vi, data=dat, subset=grade==2),
             rma(yi, vi, data=dat, subset=grade==3),
             rma(yi, vi, data=dat, subset=grade==4))

For easier inspection of the results, we then extract the estimated average effect in each group, the corresponding standard error, the estimate of $\tau^2$, its standard error, and the number of studies included in each model, and put everything into a data frame.

comp1 <- data.frame(estimate = sapply(res1, coef),
                    stderror = sqrt(sapply(res1, vcov)),
                    tau2 = sapply(res1, \(x) x$tau2),
                    se.tau2 = sapply(res1, \(x) x$se.tau2),
                    k = sapply(res1, \(x) x$k))
rownames(comp1) <- paste0("grade", 1:4)
round(comp1, digits=4)

estimate stderror tau2 se.tau2 k
grade1 0.2571 0.0823 0.0412 0.0319 11
grade2 -0.0998 0.1367 0.0421 0.0688 6
grade3 0.3111 0.1300 0.1150 0.0790 10
grade4 0.2409 0.0694 0.0434 0.0297 21

These results are similar, but not quite identical to the ones we obtained above using the meta-regression approach. The reason for this is that the meta-regression approach above assumes that the amount of heterogeneity is the same within all four grade levels (i.e., the amount of 'residual heterogeneity' is assumed to be homoscedastic), while we obtain different estimates of $\tau^2$ if we fit a separate random-effects model within each of the levels of the grouping factor.
Also, it is worth noting that some of the groups are quite small (especially the one for grade level 2), which should lead to further caution not to overinterpret the lower estimated effect for grade level 2. We can now conduct a Wald-type test to test the null hypothesis that the average effect is identical across the different subgroups (analogous to the test of the factor as a whole) as follows. wld <- rma(estimate, sei=stderror, mods = ~ rownames(comp1), data=comp1, method="FE") anova(wld) Test of Moderators (coefficients 2:4): QM(df = 3) = 6.2496, p-val = 0.1001 The test is not significant (p = .1001), so we again do not obtain statistically significant evidence that the average effect differs across the groups. We can also compare the various estimates of$\tau^2$. The values indicate quite similar amounts of heterogeneity in grade levels 1, 2, and 4, but a higher amount of heterogeneity in grade level 3 (i.e., this suggests that the effectiveness of writing-to-learn interventions might be more variable in high-school students compared to the other grade levels). Again, we should be cautious not to overinterpret this finding. In principle, we could also do a Wald-type test using the$\tau^2$estimates (and their corresponding standard errors) as input in the same manner as was done above. wld <- rma(tau2, sei=se.tau2, mods = ~ rownames(comp1), data=comp1, method="FE") anova(wld) Test of Moderators (coefficients 2:4): QM(df = 3) = 0.7939, p-val = 0.8509 Based on this, we cannot actually reject the null hypothesis$H_0: \tau^2_1 = \tau^2_2 = \tau^2_3 = \tau^2_4$(p = .8510). However, I would generally not recommend this approach, because the test assumes that the sampling distributions of the$\tau^2$estimates are approximately normal, which is unlikely to be the case (variance components tend to have skewed sampling distributions). Hence, Wald-type tests of variance components are unlikely to perform well. We will examine other ways of testing whether the amount of heterogeneity differs across groups further below. ### Model with Different Amounts of Heterogeneity Instead of the rma() function, we can use the rma.mv() function to fit a meta-regression model that not only allows the average effect to differ across the subgroups, but also the amount of heterogeneity. Note that the data do not have a multilevel/multivariate structure – we are simply using the rma.mv() function to fit a model that allows$\tau^2$to differ across the subgroups. The appropriate syntax for this is as follows. res2 <- rma.mv(yi, vi, mods = ~ 0 + grade, random = ~ grade | id, struct="DIAG", data=dat, cvvc=TRUE) Instead of examining the full output, let us again just pull out the relevant pieces of information as we did above. comp2 <- data.frame(estimate = coef(res2), stderror = sqrt(diag(vcov(res2))), tau2 = res2$tau2, se.tau2 = sqrt(diag(res2$vvc)), k = c(res2$g.levels.k)) round(comp2, digits=4) estimate stderror tau2 se.tau2 k grade1 0.2571 0.0823 0.0412 0.0304 11 grade2 -0.0998 0.1367 0.0421 0.0647 6 grade3 0.3111 0.1300 0.1150 0.0869 10 grade4 0.2409 0.0694 0.0434 0.0336 21 These results are identical to the ones we obtained when fitting a separate random-effects model within each grade level (comp1), except for the standard errors of the $\tau^2$ estimates. The reason for this will be discussed later on. 
### Location-Scale Model with a Scale Factor

Another possibility to allow $\tau^2$ to differ across the subgroups is to make use of a location-scale model, where we use the grouping variable not only as a moderator of the average effect, but also as a predictor for the (log transformed) amount of heterogeneity in the effects.

res3 <- rma(yi, vi, mods = ~ 0 + grade, scale = ~ 0 + grade, data=dat)

Again, let's collect the relevant pieces into a data frame.

comp3 <- data.frame(estimate = coef(res3)$beta,
                    stderror = sqrt(diag(vcov(res3)$beta)),
                    tau2 = predict(res3, newscale=diag(4), transf=exp)$pred)
comp3$se.tau2 <- predict(res3, newscale=diag(4))$se * comp3$tau2
comp3$k <- c(table(dat$grade))
round(comp3, digits=4)

estimate stderror tau2 se.tau2 k
grade1 0.2571 0.0823 0.0412 0.0304 11
grade2 -0.0998 0.1367 0.0421 0.0647 6
grade3 0.3111 0.1300 0.1150 0.0869 10
grade4 0.2409 0.0694 0.0434 0.0336 21

Note that the default 'link function' for the scale part of the model is the log link. Therefore, when using predict() to compute the estimated $\tau^2$ value of each grade level, we use transf=exp to transform the estimated log transformed $\tau^2$ values back to the original scale. We can see that this yields the same estimates for the amount of heterogeneity within the subgroups as we obtained above using the other two approaches. The estimated average effects for the different grade levels are also the same as the ones we obtained earlier.

Sidenote: The standard errors of the back-transformed $\tau^2$ estimates were obtained by taking the standard errors of the log transformed estimates and using the delta method (for an exponential transformation). Note that this yields the same values as obtained above using the rma.mv() function.

Instead of using a log link for the scale part of the model, it is possible to also use an identity link. In this case, the scale part directly predicts the value of $\tau^2$ based on the scale variable(s) in the model. This can be done with:

res4 <- rma(yi, vi, mods = ~ 0 + grade, scale = ~ 0 + grade, data=dat, link="identity")

comp4 <- data.frame(estimate = coef(res4)$beta,
                    stderror = sqrt(diag(vcov(res4)$beta)),
                    tau2 = predict(res4, newscale=diag(4))$pred,
                    se.tau2 = predict(res4, newscale=diag(4))$se,
                    k = c(table(dat$grade)))
round(comp4, digits=4)

estimate stderror tau2 se.tau2 k
grade1 0.2571 0.0823 0.0412 0.0304 11
grade2 -0.0998 0.1367 0.0421 0.0647 6
grade3 0.3111 0.1300 0.1150 0.0869 10
grade4 0.2409 0.0694 0.0434 0.0336 21

These results are again identical to the ones we obtained above using the previous two approaches.

### Comparing the Fit of the Models

We can compare the log likelihoods of the different models with:

c(res1 = sum(sapply(res1, logLik)), res2 = logLik(res2), res3 = logLik(res3), res4 = logLik(res4))

res1 res2 res3 res4
-15.5456 -15.5456 -15.5456 -15.5456

Note that for the first approach (where we fitted separate random-effects models within the various subgroups), we can simply add up the log likelihoods. As we can see above, the values are identical, indicating that we are essentially fitting the same model, just using different approaches/parameterizations thereof.

### Testing for Differences in $\tau^2$ Across Subgroups

Earlier, we used a Wald-type test to test the null hypothesis of no differences in $\tau^2$ values across subgroups. This is not a recommended approach for the reasons outlined above.
As a better alternative, we can conduct a likelihood ratio test to compare a model that allows $\tau^2$ to differ across subgroups (as we have done above) with a model that assumes a common (homoscedastic) value of $\tau^2$. For the approach using rma.mv(), we can do this as follows:

res2 <- rma.mv(yi, vi, mods = ~ 0 + grade, random = ~ grade | id, struct="DIAG", data=dat)
res0 <- update(res2, struct="ID")
anova(res0, res2)

df AIC BIC AICc logLik LRT pval QE
Full 8 47.0912 61.3647 51.2055 -15.5456 102.0036
Reduced 5 42.2069 51.1278 43.7858 -16.1034 1.1157 0.7733 102.0036

For the location-scale models, we can do this with:

res3 <- rma(yi, vi, mods = ~ 0 + grade, scale = ~ 0 + grade, data=dat)
res0 <- update(res3, scale = ~ 1)
anova(res0, res3)

and

res4 <- rma(yi, vi, mods = ~ 0 + grade, scale = ~ 0 + grade, data=dat, link="identity")
res0 <- update(res4, scale = ~ 1)
anova(res0, res4)

The output (not shown) is identical to the one above. The test is not significant (p = .7733) and hence we do not have statistically significant evidence for any differences in $\tau^2$ values across the different grade levels. As we can see with this example, a nice property of the likelihood ratio test is that it is invariant under different parameterizations of the model. Hence, we obtain the same result with all three approaches. Moreover, the likelihood ratio test is likely (no pun intended) to have better statistical properties than the Wald-type test in general (although in this example, the conclusion happens to be the same).

An interesting (and generally even better) approach for testing for differences in $\tau^2$ values across subgroups is to use a permutation test. Such a test can be carried out with the permutest() function. Note that the following code takes a few minutes to run.3)

res3 <- rma(yi, vi, mods = ~ grade, scale = ~ grade, data=dat)
permutest(res3, seed=1234)

Test of Scale Coefficients (coefficients 2:4):
QS(df = 3) = 1.2195, p-val = 0.5406

Here, we are interested in the test of the scale coefficients (and only the output for this test is shown). The permutation test is not significant (p = .5406), again indicating lack of evidence for differences in the amount of heterogeneity across the different grade levels. While computationally intensive, the permutation test is in a certain sense 'exact' and (assuming that it is run with a sufficiently large number of iterations) should provide the best statistical properties. We can also run the permutation test based on the location-scale model that uses an identity link.

res4 <- rma(yi, vi, mods = ~ grade, scale = ~ grade, data=dat, link="identity")
permutest(res4, seed=1234)

Test of Scale Coefficients (coefficients 2:4):
QS(df = 3) = 0.6652, p-val = 0.7982

Again, the test is not significant (p = .7982). This example demonstrates that the permutation test is not invariant under different parameterizations of the same model (note that the p-value differs from the one we obtained above using the default log link), which could be argued to be a disadvantage compared to the likelihood ratio test. However, generally, I would expect the conclusions from the permutation test to coincide irrespective of the parameterization in most cases.

### Standard Errors of $\tau^2$ Estimates

As pointed out earlier, the standard errors of the $\tau^2$ estimates obtained by fitting separate random-effects models within the subgroups (comp1) with the rma() function are slightly different than the ones we obtained from the other approaches.
The reason for this is somewhat technical. The rma() function computes the standard error of a $\tau^2$ estimate based on the Fisher information matrix, while the other approaches compute the standard errors based on the Hessian matrix (i.e., based on the observed Fisher information matrix). Both approaches provide estimates of the standard errors, but they do not yield identical values. Which values are to be preferred is debatable. However, note that we do not actually need the standard errors when conducting a likelihood ratio test, and while the permutation test does make use of the standard errors, it only does so for constructing the permutation distribution of the test statistic under the permuted data, not for conducting the test itself.

### Conclusion

As we can see above, models can be fitted that relax the assumption that $\tau^2$ is identical within subgroups when conducting a subgroup analysis. A question that often comes up in this context is whether one should do so (as opposed to sticking to the more standard meta-regression model that assumes a single homoscedastic value for $\tau^2$ within subgroups). Two papers that address this issue to some extent are:

• Rubio-Aparicio, M., Sánchez-Meca, J., López-López, J. A., Botella, J., & Marín-Martínez, F. (2017). Analysis of categorical moderators in mixed-effects meta-analysis: Consequences of using pooled versus separate estimates of the residual between-studies variances. British Journal of Mathematical and Statistical Psychology, 70(3), 439-456. https://doi.org/10.1111/bmsp.12092

• Rubio-Aparicio, M., López-López, J. A., Viechtbauer, W., Marín-Martínez, F., Botella, J., & Sánchez-Meca, J. (2020). Testing categorical moderators in mixed-effects meta-analysis in the presence of heteroscedasticity. Journal of Experimental Education, 88(2), 288-310. https://doi.org/10.1080/00220973.2018.1561404

Generally, if the $\tau^2$ values are heteroscedastic but we ignore this (and hence assume a common $\tau^2$ across subgroups), this tends to have a relatively minor impact on the statistical properties of the test of the group factor (i.e., whether the size of the effect differs across the groups).4) However, if we are specifically interested in examining whether the amount of heterogeneity differs across the subgroups (e.g., whether the effectiveness of writing-to-learn interventions tends to be more/less consistent within a particular grade level), then of course we need to allow $\tau^2$ to differ across the grouping variable.

As a final note, the example above also provides evidence for the correctness of the underlying code for fitting the various models. The algorithm underlying the rma() function for fitting a standard random-effects model is entirely different from what is used in the rma.mv() function and also from what is used within the rma() function for fitting location-scale models, yet all three approaches yielded identical results, at least when rounded to four decimal places (with less rounding, one will notice small discrepancies between the results, which are a consequence of using different model fitting algorithms).

1) The possibility that the omnibus test can be non-significant when one or more of the individual model coefficients differ significantly from zero is discussed here. Also, the somewhat peculiar fact that the estimate of $\tau^2$ can be larger in a meta-regression model compared to the random-effects model is discussed here.
2) We could also use the more concise syntax

res1 <- lapply(1:4, \(g) rma(yi, vi, data=dat, subset=grade==g))

to accomplish the same thing. Even more general would be

res1 <- lapply(sort(unique(dat$grade)), \(g) rma(yi, vi, data=dat, subset=grade==g))

which doesn't require us to pre-specify the various values of the grouping variable.

4) The full story is a bit more complex and depends on whether the subgroups are of roughly equal size or not and, when they are unbalanced in size, how differences in the amount of heterogeneity are paired with the subgroup sizes. Also, in general, the standard Wald-type test of the fixed effects in a meta-regression model tends to have a slightly inflated Type I error rate, especially when $k$ is small, but this can be counteracted by using the 'Knapp-Hartung method' (i.e., by setting test="knha" when fitting the meta-regression model with the rma() function).
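To illustrate, a minimal sketch of fitting the meta-regression model with this adjustment (using the dat object from above; the object name res_kh is just for illustration):

res_kh <- rma(yi, vi, mods = ~ grade, data=dat, test="knha")
res_kh # the individual coefficient tests now use t-distributions and the omnibus test an F-distribution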
# What digital camera do you like? ### Help Support The Rocketry Forum: #### MaxQ ##### Tripoli 2747 What digital camera do you like? With Circuit City selling off inventory...I was thinking of getting a new one that takes good launch photos, maybe close ups as well... #### Gillard ##### Well-Known Member i'm a complete idiot when it comes to digital camera, i can't do anything that involves setting etc. but for a good idiot proof point and shoot, then any of the Fuji Finpix cameras are okay, and they are cheapish. #### propbeany ##### Well-Known Member I'm looking as well -- I've used both olympus and fuji point-and-shoot models, but the first shot delay were so atrocious, I have more pics of smoke trails than launches #### rstaff3 ##### Oddroc-eteer I like the uber expensive digital SLR's, but can not afford one. (Well, I technically could, but it would eat into my rocket bidget BIG TIME ) I have a Panasonic Lumix. What sold me was it has 10x optical zoom and a nice big 3" display. There was only one other reasonably priced and small camera with these features. Plus, the Panasonic was on sale. It works fine and has a good lens. There are some puts and takes on the other features. #### Gillard ##### Well-Known Member I'm looking as well -- I've used both olympus and fuji point-and-shoot models, but the first shot delay were so atrocious, I have more pics of smoke trails than launches i hear you on that one. i've now got pretty good at pressing the button a second before i want the photo, but i've had my share of smoke columns #### Kaycee ##### Well-Known Member I've had a few Canon Power Shot cameras and they do a nice job for rocket action with rapid fire of ~5fps. I don't think you can go wrong with any of them. The only downer is there wasn't a battery indicator on mine so when the batteries are used up the camera turns off with no warning. A minor annoyance. The better camera is a 8 Megapixel Canon SX100 IS I got a gift card from dell when I bought a computer it was on sale to so I snagged a $250-$300 MSRP camera for $50. It is in between the point and click disposable digital and the SLR's. It's got 10x optical zoom and another x4 digital so I can zero in on things from a pretty good distance. And the picture quality is great! Plenty of settings to play with, plus it takes AA and not a special proprietary battery that will cost an arm and a leg once you can't recharge it anymore. #### Hogan3276 ##### Well-Known Member I am not sure if there is a Point and Shoot that doesn't have a delay...they are really all to slow to capture crisp pictures of action photos - whether it is rocketry or sports...having said that, if you can get the timing down, you can still get some great shots with some of the Point & Shoots... The better option is to step up to a DSLR Camera...for$500 you can get a decent Nikon (D40) or one of the competitors - Canon, Sony, etc...this price typically includes 2 lenses - one for close up (18 - 55MM) and one telephoto (55 - 200MM). I bought the D40 last year and it has made all the difference. Now if you truly have the photo bug and like to take liftoff shots, the D40 takes 3 frames per second (FPS). This is plenty to get get a good lift off shot... More serious/expensive cameras take 4, 5, 6, or 8 FPS...but they do come at a greater cost. I now have a D300 - which will take up to 8 FPS - and can gobble data rapidly and can store it so that it doesn't bog down the processor. The D40 tended to bog down after 3 - 5 pics (when shooting continuous). 
The D300 can go 20 pics or more in continuous before it slows to process the data. Canon makes great cameras as well. Several of my buddies have them. As for the Point & Shoots as well as some good overall info - check out this website... http://www.kenrockwell.com/tech/recommended-cameras.htm I hope this helps... Here is a liftoff shot taken with my D300 as well as one with the D40... Todd #### georgegassaway For years, I had a 35mm camera with a nice zoom lens. I could take good liftoff shots thanks to the zoom lens and the lack of lag. But with the zoom lens, the camera was pretty big to have around and it got to the point where often the camera was nearby, but not close enough to take spur-of-the-moment photos (it was in the car, in the prep area, or so forth). It was just too annoying to use the neckstrap and wear it when I might go hours without taking a photo. In 1998 I took it to three major events and did not get one photo, as I did not get around to just doing nothing but take photos. And once I started having a website, I had poor results scanning pics (though it was more a scanner issue at the time than anything else). So in 1999 I got a nice little point and shoot camera, an Olympus D-450, my first digital camera. It was small enough to put into a case attached to my belt, so I could just wear it. So, I got a lot of spur-of-the-moment shots. Of course, I knew when I got that one it would not be good for liftoff shots, due to the lag. I did luck out a few times though. In 2004 I got a Canon Powershot S3-1S. It was bigger, but not too big to still be able to use a case on my belt. It was a more advanced camera, with a 10:1 zoom lens (optical, ignore digital zoom), a shutter priority option to 1/2000, and so forth, and very little shutter lag. I specifically shopped around (hands-on) and compared reviews to see which cameras had very little shutter lag, and settled on the S3. So THEN I could take good liftoff photos again, and with the 10X zoom, get closer-in views than I had with the D-450's 3X zoom. I loved it. Last summer, shortly before NARAM, I got an upgrade, the Canon Powershot S5-1S. Larger photo image, better quality, 12X zoom, a 2.5" LCD instead of the 1.8" LCD screen of the S3, and so forth. I was going to sell the S3-1S to help defray the cost of the new one, but before I got around to selling it I lost it to an apartment break-in a couple of months later (would have lost the S5 also, but I had it with me at a launch when it happened). Last August, Canon came out with a replacement for the S5-1S. Apparently, rather than S6, they jumped to calling it the SX10-1S. I would suggest looking at the Best Buy website to see the kind of cameras I am talking about, though this is a list of some that ARE similar to the S3, S5, and now SX-10, but some definitely are not: The SX10-1S (regularly $399) definitely is an upgrade to the S3-1S and S5-1S series: http://www.bestbuy.com/site/olspage.jsp?skuId=9051045&type=product&id=1218012527719 The SX110IS ($250) is sort of similar, though not the same class as the S3-S5-SX10. But the shutter lag time does seem to be as good, at under 1/100 sec (prefocused). Whatever cameras you are considering, check out the reviews on this website: http://www.imaging-resource.com/ I found it to be extremely useful when I was considering many cameras to buy in 2004, and again last summer to confirm upgrading to the S5-1S. For the S5-1S, they have 8 pages worth of review info, exhaustive use and testing.
Here is the page that addressed the lag time: http://www.imaging-resource.com/PRODS/S5IS/S5ISA6.HTM A key to solving lag time for liftoff shots is to pre-focus, holding the shutter button down half-way. For the S5, the lag time is .074 sec, so less than 1/10 second. That is VERY GOOD for a digital camera. Anything above .2 sec starts to be a problem. Use the above website to check out the test specs for whatever camera you are considering buying. FWIW - Examples of photos I shot with the S5-1S are here: Chan Stevens' Soyuz at NARAM-50 (Chan on left) http://homepage.mac.com/georgegassaway/GRP/CONTEST/NARAM/N-50/IMG_1012.JPG Alyssa Stenberg's D Boost Glide R/C model at NARAM-50 (Alyssa and her father in background at right) http://homepage.mac.com/georgegassaway/GRP/CONTEST/NARAM/N-50/IMG_0434.JPG James Duffy's Little Joe-I at the 2008 WSMC (Bill Stine in background) http://homepage.mac.com/georgegassaway/GRP/FAI/2008WSMC-team/IMG_2192.JPG First 2.5 pages of August BRB launch, ending at IMG_1198: http://www.birminghamrocketboys.com/BRBGallery/main.php?g2_itemId=33622 First page plus first 5 pics of page 2 of July BRB launch: http://www.birminghamrocketboys.com/BRBGallery/main.php?g2_itemId=33373 Take note that in many pics, the Zoom lens is used a lot. Especially for the first three individual photo links listed above (and attached below), the zoom allowed the people in the background to look a lot larger than they would have been without a zoom, or with a wimpy 3:1 zoom. It takes a bit of learning to get the most out of cameras like these, I am still learning. The beauty is that unlike my old 35mm film camera, it costs very little to take hundreds of shot (cost of batteries), and you can see the results very quickly to learn a lot. The night shots defintely were a whole new leanring game, fortunately I learned a lot from the NARAM-48 night launch (lots of bad pics till near the end of the night), so I had a lot of good pics at the BRB night launch last August. Edit - added night pic below - Steward Jones' "chandelier" model. For more, see the BRB August Launch page, linked above. - George Gassaway Last edited: #### RocketT.Coyote I was using a $20 Vivitar for a while, but it wouldn't photograph indoors unless the area was well-lit. It was good enough for ebay pics or a "Throwaway" when on trips but it ate up AAA batteries quickly. Upgraded to a discontinued Argus QuikClix 5150 and was getting a lot of use with it, even film clips. The battery cover wouldn't hold with SD card inserted, but that was remedied with a silicone rubber band. Then it wouldn't take outdoor shots without overexposure. The help desk couldn't be of assistance there, yet reconditioned models are being sold on their site. I used NiMh AAs with it and they seem to recharge when connected to my PC. The viewscreen was rather tiny but the controls were easy to master. Now using an Argus DC-5185 which has easier SD card access, but uses AAA cells and is more difficult to set up for timed shots. Trying to review photos already shot is also a bother. #### bsexton ##### Well-Known Member Some good advice so far. I for one am partial to Canon and Nikon cameras in general. For pocket-type cameras I prefer the Canon and for SLR the Nikon (D40 model and up). I think Ken Rockwell's site is one of the best in terms of practical advice and easy to understand. #### Fred22 ##### Well-Known Member Excellent advice folks - don't know what best buy sells but I use a Canon 40D with a 70 to 200 L series lenses. 
The results can be nice with rapid advance and dialing in 400 ISO. I am using my blackberry right now so I can't supply samples. One thing to keep in mind is lots and lots of lovely practice with whatever you buy. Cheers Fred #### Mr Peabody ##### Member I've had a few Canon Power Shot cameras and they do a nice job for rocket action with rapid fire of ~5fps. I don't think you can go wrong with any of them. The only downer is there wasn't a battery indicator on mine so when the batteries are used up the camera turns off with no warning. A minor annoyance. The ones I've seen do have a low battery indicator. The problem is that the indicator doesn't come on until you hit "one shot left". #### o1d_dude ##### 'I battle gravity' TRF Supporter I have a Canon Powershot A630 that has done everything I've asked of it. I no longer carry a camcorder when I travel as a result. Bought it at Office Depot at a ridiculously low closeout price. SD memory cards are now dirt cheap, too. #### rocketace ##### Well-Known Member I'm in love with my Olympus Stylus 840. 1. Very slim design 2. 5x Optical Zoom 3. 8 MP 4. 7 frames per second...I can take 200 photos sequentially 5. Optical and Digital Image Stabilization 6. Takes XD or Micro SD up to 8GB...ya that's a lot of photos/video 7. I got mine for $199.99 If I were to get a new camera it would be the Olympus Stylus 1010, it's the same except 1. 7x Optical zoom 2. 10MP 3. 11 frames per second ##### Well-Known Member What digital camera do you like? With Circuit City selling off inventory...I was thinking of getting a new one that takes good launch photos, maybe close ups as well... Research the prices before you go to Circuit City. The liquidators reprice all the merchandise to the highest price it's been sold for and then start marking it down. By the time the discounts get good, the people who didn't do the research will have cleaned off the shelves. ##### Roger Smith As others have said, an SLR camera is certainly the best choice for capturing lift-off photos. But the newer P&S cameras take very good photographs and some have modes where they can capture multiple frames per second. You won't get those super-sharp, close-up lift-off shots that you see in the magazines, but you should be able to get decent wider-angle photos. My recommendation is to pick just one or two features that are important to you, then compare the cameras with those features when you get into the store. For example, for rocketry you might want a camera with both an LCD and an optical viewfinder (the LCDs are usually hard to see in the sun) and you want one that can take at least two or three frames per second. Go into the store and compare just the cameras that meet your criteria in your price range. That should narrow it down to five or six models at the most. They all take very good photos, so pick the one you like best based mainly on how comfortable you are holding and using it. Especially don't worry about which camera has the most "megapixels." Each new model of camera seems to have more pixels, but a small sensor. A smaller sensor with more pixels results in a much noisier image. So, generally more megapixels doesn't mean a better picture. It's just something they advertise to try to make the newer models look better than the older ones. You might save money buying an older model and actually get a better picture. -- Roger #### Reed Goodwin ##### Well-Known Member I currently use a Canon 20D. It's not the best nor the newest, but it gets the job done.
When it dies, I'll get something newer, but until then, I'm happy with what I have. Reed #### Hogan3276 ##### Well-Known Member I am with Roger - Megapixels is a gimmick to separate you from the money in your wallet...unless you regularly make your shots into posters, you'll never notice the difference in MP. Reed, the 20D is a great camera... #### troj ##### Wielder Of the Skillet Of Harsh Discipline, Potent I currently use a Canon 20D. It's not the best nor the newest, but it gets the job done. When it dies, I'll get something newer, but until then, I'm happy with what I have. Yep. Once you have a decent sensor and processor, the glass you put in front of it makes a much bigger difference! If I had $3000 to invest in camera gear, my 20D would just get some really nice new lenses, instead of me replacing the body. -Kevin #### troj ##### Wielder Of the Skillet Of Harsh Discipline, Potent The Canon DSLRs are nice - the EOS 50D looks excellent. For a bit less money, the Rebel series are nice - the Rebel XSI and XTI are excellent cameras and are fairly cheap as DSLRs go. Agreed. The entire Rebel line is awesome; I know a number of people who have them, and they've had nothing but great results. For folks who aren't Canon fans, the Nikon line is very good, as well. The difference between a point & shoot and a DSLR is significant. That said, everyone has to buy within their budget. -Kevin #### als57 ##### Well-Known Member really be it for any kind of pics with rapid motion. Hope to find out myself this year. Just snagged a Sony A300. Probably biased by having a few KM AF lenses. The $150 instant rebate they had earlier this month didn't hurt. Made it way less painful. I've gotten decent pics using my Canon 570IS and Panasonic TZ5. Just have to understand how much shutter lag there is and compensate. Al BTW it's nice to be back. #### BHP ##### Well-Known Member DO NOT buy any camera unless it is supported by CHDK scripts! http://chdk.wikia.com/wiki/CHDK These software-only camera mods are just too cool not to have available. (This is for many of the CANON line of cameras only.)
# Hi guys... Can you please help me out?

Two circles have the same center O. Point X is the midpoint of segment OP. What is the ratio of the area of the circle with radius OX to the area of the circle with radius OP? Express your answer as a common fraction.

I got so confused so I drew a pic.... IDK what to do from there... Aug 30, 2019

#1 Two circles have the same center O. Point X is the midpoint of segment OP. What is the ratio of the area of the circle with radius OX to the area of the circle with radius OP?

$$\begin{array}{|rcll|} \hline A_1 &=& \pi \times OX^2 \\ A_2 &=& \pi \times OP^2 \\ \hline \dfrac{A_1}{A_2} &=& \dfrac{\pi\times OX^2} {\pi\times OP^2} \\\\ \dfrac{A_1}{A_2} &=& \dfrac{ OX^2} { OP^2} \quad | \quad OP = 2\times OX \\\\ \dfrac{A_1}{A_2} &=& \dfrac{ OX^2} { \left(2\times OX \right)^2} \\\\ \dfrac{A_1}{A_2} &=& \dfrac{ OX^2} { 4\times OX^2} \\\\ \mathbf{\dfrac{A_1}{A_2}} &=& \mathbf{ \dfrac{1} { 4 } } \quad \text{ or } \quad \mathbf{\dfrac{A_1}{A_2}} = \mathbf{ \left(\dfrac{1}{2}\right)^2 }\\ \hline \end{array}$$

heureka  Aug 30, 2019 (edited by heureka, Aug 30, 2019)

#2 Thank you- can you just do (1/2)^2?

tommarvoloriddle  Aug 30, 2019
## Controlled Lagrangians and stabilization of Euler-Poincaré mechanical systems with broken symmetry. II: Potential shaping.(English)Zbl 1494.93080 Summary: We apply the method of controlled Lagrangians by potential shaping to Euler-Poincaré mechanical systems with broken symmetry. We assume that the configuration space is a general semidirect product Lie group $$\mathsf{G}\ltimes V$$ with a particular interest in those systems whose configuration space is the special Euclidean group $$\mathsf{SE}(3) = \mathsf{SO}(3)\ltimes\mathbb{R}^3$$. The key idea behind the work is the use of representations of $$\mathsf{G}\ltimes V$$ and their associated advected parameters. Specifically, we derive matching conditions for the modified potential exploiting the representations and advected parameters. Our motivating examples are a heavy top spinning on a movable base and an underwater vehicle with non-coincident centers of gravity and buoyancy. We consider a few different control problems for these systems, and show that our results give a general framework that reproduces our previous work on the former example and also those of Leonard on the latter. Also, in one of the latter cases, we demonstrate the advantage of our representation-based approach by giving a simpler and more succinct formulation of the problem. For Part I, see [the authors, “Controlled Lagrangians and stabilization of Euler-Poincaré mechanical systems with broken symmetry”, Preprint, arXiv:2003.10584]. ### MSC: 93D05 Lyapunov and other classical stabilities (Lagrange, Poisson, $$L^p, l^p$$, etc.) in control theory 70Q05 Control of mechanical systems 34H15 Stabilization of solutions to ordinary differential equations 70H33 Symmetries and conservation laws, reverse symmetries, invariant manifolds and their bifurcations, reduction for problems in Hamiltonian and Lagrangian mechanics GEOMetrics Full Text: ### References: [1] Blankenstein, G.; Ortega, R.; van der Schaft, AJ, The matching conditions of controlled Lagrangians and IDA-passivity based control, Int J Control, 75, 9, 645-665 (2002) · Zbl 1018.93006 [2] Bloch AM, Leonard NE, Marsden JE (1999) Potential shaping and the method of controlled lagrangians. In: Proceedings of the 38th IEEE conference on decision and control (Cat. No.99CH36304), vol. 2, pp 1652-1657 [3] Bloch, AM; Leonard, NE; Marsden, JE, Controlled Lagrangians and the stabilization of Euler-Poincaré mechanical systems, Int J Robust Nonlinear Control, 11, 3, 191-214 (2001) · Zbl 0980.93065 [4] Bloch, AM; Leonard, NE; Marsden, JE, Controlled Lagrangians and the stabilization of mechanical systems. I. The first matching theorem, IEEE Trans Autom Control, 45, 12, 2253-2270 (2000) · Zbl 1056.93604 [5] Bloch, AM; Chang, DE; Leonard, NE; Marsden, JE, Controlled Lagrangians and the stabilization of mechanical systems. II. Potential shaping, IEEE Trans Autom Control, 46, 10, 1556-1571 (2001) · Zbl 1057.93520 [6] Borum AD, Bretl T (2014) Geometric optimal control for symmetry breaking cost functions. In: 53rd IEEE conference on decision and control, pp 5855-5861 [7] Borum, AD; Bretl, T., Reduction of sufficient conditions for optimal control problems with subgroup symmetry, IEEE Trans Autom Control, 99, 3209-3224 (2016) · Zbl 1370.49010 [8] Bullo F, Lewis AD (2004) Geometric control of mechanical systems, volume 49 of texts in applied mathematics. 
Springer · Zbl 1066.70002 [9] Cendra, H.; Holm, DD; Marsden, JE; Ratiu, TS, Lagrangian reduction, the Euler-Poincaré equations, and semidirect products, Amer Math Soc Trans, 186, 1-25 (1998) · Zbl 0989.37052 [10] Chang, DE; Marsden, JE, Reduction of controlled Lagrangian and Hamiltonian systems with symmetry, SIAM J Control Optim, 43, 1, 277-300 (2004) · Zbl 1076.70020 [11] Chang, DE; Bloch, AM; Leonard, NE; Marsden, JE; Woolsey, CA, The equivalence of controlled Lagrangian and controlled Hamiltonian systems, ESAIM: COCV, 8, 393-422 (2002) · Zbl 1070.70013 [12] Chyba M, Haberkorn T, Smith RN, Wilkens GR (2007) Controlling a submerged rigid body: a geometric analysis. In: Bullo F, Fujimoto K (eds), Lagrangian and Hamiltonian methods for nonlinear control 2006. Springer, Berlin, pp 375-385 · Zbl 1140.70479 [13] Contreras C, Ohsawa T (2020) Controlled Lagrangians and stabilization of Euler-Poincaré mechanical systems with broken symmetry I: Kinetic shaping [14] Hamberg J (1999) General matching conditions in the theory of controlled Lagrangians. In: Proceedings of the 38th IEEE conference on decision and control, 1999, vol 3, pp 2519-2523 [15] Hamberg J (2000) Controlled Lagrangians, symmetries and conditions for strong matching. In IFAC Lagrangian and Hamiltonian methods for nonlinear control [16] Holm, DD; Marsden, JE; Ratiu, TS, The Euler-Poincaré equations and semidirect products with applications to continuum theories, Adv Math, 137, 1, 1-81 (1998) · Zbl 0951.37020 [17] Holm DD, Schmah T, Stoica C (2009) Geometric mechanics and symmetry: from finite to infinite dimensions. Oxford texts in applied and engineering mathematics. Oxford University Press · Zbl 1175.70001 [18] Leonard, NE, Stability of a bottom-heavy underwater vehicle, Automatica, 33, 3, 331-346 (1997) · Zbl 0872.93061 [19] Leonard, NE, Stabilization of underwater vehicle dynamics with symmetry-breaking potentials, Syst Control Lett, 32, 1, 35-42 (1997) · Zbl 0901.93057 [20] Leonard, NE; Marsden, JE, Stability and drift of underwater vehicle dynamics: mechanical systems with rigid motion symmetry, Physica D, 105, 1-3, 130-162 (1997) · Zbl 0963.70528 [21] Marsden, JE; Ratiu, TS, Introduction to mechanics and symmetry (1999), New York: Springer, New York [22] Marsden, JE; Ratiu, TS; Weinstein, A., Semidirect products and reduction in mechanics, Trans Am Math Soc, 281, 1, 147-177 (1984) · Zbl 0529.58011 [23] Marsden JE, Ratiu TS, Weinstein A (1984) Reduction and Hamiltonian structures on duals of semidirect product Lie algebras. In Fluids and plasmas: geometry and dynamics, volume 28 of Contemporary Mathematics. American Mathematical Society · Zbl 0546.58025 [24] Nijmeijer, H.; van der Schaft, A., Nonlinear dynamical control systems (1990), New York: Springer, New York · Zbl 0701.93001 [25] Ortega R, Perez J, Nicklasson P, Sira-Ramirez H (1998) Passivity-based Control of Euler-Lagrange systems: mechanical. Electrical and electromechanical applications. Communications and control engineering. 
Springer, London [26] Ortega, R.; van der Schaft, AJ; Mareels, I.; Maschke, B., Putting energy back in control, IEEE Control Syst, 21, 2, 18-33 (2001) [27] Ortega, R.; Spong, MW; Gomez-Estern, F.; Blankenstein, G., Stabilization of a class of underactuated mechanical systems via interconnection and damping assignment, IEEE Trans Autom Control, 47, 8, 1218-1233 (2002) · Zbl 1364.93662 [28] Smith, RN; Chyba, M.; Wilkens, GR; Catone, CJ, A geometrical approach to the motion planning problem for a submerged rigid body, Int J Control, 82, 9, 1641-1656 (2009) · Zbl 1190.93021 [29] Spong, MW; Bullo, F., Controlled symmetries and passive walking, IEEE Trans Autom Control, 50, 7, 1025-1031 (2005) · Zbl 1365.93329 [30] van der Schaft, AJ, Stabilization of Hamiltonian systems, Nonlinear Anal Theory Methods Appl, 10, 10, 1021-1035 (1986) · Zbl 0613.93049 [31] Woolsey, CA; Leonard, NE, Stabilizing underwater vehicle motion using internal rotors, Automatica, 38, 12, 2053-2062 (2002) · Zbl 1015.93044 [32] Woolsey, CA; Techy, L., Cross-track control of a slender, underactuated AUV using potential shaping, Ocean Eng, 36, 1, 82-91 (2009) This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
Lemma 52.8.5. Let $I, J$ be ideals of a Noetherian ring $A$. Let $M$ be a finite $A$-module. Let $s$ and $d$ be integers. With $T$ as in (52.8.0.1) assume 1. $A$ is $I$-adically complete and has a dualizing complex, 2. if $\mathfrak p \in V(I)$ no condition, 3. $\text{cd}(A, I) \leq d$, 4. if $\mathfrak p \not\in V(I)$, $\mathfrak p \not\in T$ then $\text{depth}_{A_\mathfrak p}(M_\mathfrak p) \geq s \quad \text{or}\quad \text{depth}_{A_\mathfrak p}(M_\mathfrak p) + \dim ((A/\mathfrak p)_\mathfrak q) > d + s$ for all $\mathfrak q \in V(\mathfrak p) \cap V(J) \cap V(I)$, 5. if $\mathfrak p \not\in V(I)$, $\mathfrak p \not\in T$, $V(\mathfrak p) \cap V(J) \cap V(I) \not= \emptyset$, and $\text{depth}(M_\mathfrak p) < s$, then one of the following holds1: 1. $\dim (\text{Supp}(M_\mathfrak p)) < s + 2$2, or 2. $\delta (\mathfrak p) > d + \delta _{max} - 1$ where $\delta$ is a dimension function and $\delta _{max}$ is the maximum of $\delta$ on $V(J) \cap V(I)$, or 3. $\text{depth}_{A_\mathfrak p}(M_\mathfrak p) + \dim ((A/\mathfrak p)_\mathfrak q) > d + s + \delta _{max} - \delta _{min} - 2$ for all $\mathfrak q \in V(\mathfrak p) \cap V(J) \cap V(I)$. Then there exists an ideal $J_0 \subset J$ with $V(J_0) \cap V(I) = V(J) \cap V(I)$ such that for any $J' \subset J_0$ with $V(J') \cap V(I) = V(J) \cap V(I)$ the map $R\Gamma _{J'}(M) \longrightarrow R\Gamma _ J(M)^\wedge$ induces an isomorphism on cohomology in degrees $\leq s$. Here ${}^\wedge$ denotes derived $I$-adic completion. Proof. For an ideal $\mathfrak a \subset A$ we have $R\Gamma _\mathfrak a = R\Gamma _{V(\mathfrak a)}$, see Dualizing Complexes, Lemma 47.10.1. Next, we observe that $R\Gamma _ J(M)^\wedge = R\Gamma _ I(R\Gamma _ J(M))^\wedge = R\Gamma _{I + J}(M)^\wedge = R\Gamma _{I + J'}(M)^\wedge = R\Gamma _ I(R\Gamma _{J'}(M))^\wedge = R\Gamma _{J'}(M)^\wedge$ by Dualizing Complexes, Lemmas 47.9.6 and 47.12.1. This explains how we define the arrow in the statement of the lemma. We claim that the hypotheses of Lemma 52.8.2 are implied by our current hypotheses on $M$. The only thing to verify is hypothesis (3). Thus let $\mathfrak p \not\in V(I)$, $\mathfrak p \in T$. Then $V(\mathfrak p) \cap V(I)$ is nonempty as $I$ is contained in the Jacobson radical of $A$ (Algebra, Lemma 10.96.6). Since $\mathfrak p \in T$ we have $V(\mathfrak p) \cap V(I) = V(\mathfrak p) \cap V(J) \cap V(I)$. Let $\mathfrak q \in V(\mathfrak p) \cap V(I)$ be the generic point of an irreducible component. We have $\text{cd}(A_\mathfrak q, I_\mathfrak q) \leq d$ by Local Cohomology, Lemma 51.4.6. We have $V(\mathfrak pA_\mathfrak q) \cap V(I_\mathfrak q) = \{ \mathfrak qA_\mathfrak q\}$ by our choice of $\mathfrak q$ and we conclude $\dim ((A/\mathfrak p)_\mathfrak q) \leq d$ by Local Cohomology, Lemma 51.4.10. Observe that the lemma holds for $s < 0$. This is not a trivial case because it is not a priori clear that $H^ i(R\Gamma _ J(M)^\wedge )$ is zero for $i < 0$. However, this vanishing was established in Dualizing Complexes, Lemma 47.12.4. We will prove the lemma by induction for $s \geq 0$. The lemma for $s = 0$ follows immediately from the conclusion of Lemma 52.8.2 and Dualizing Complexes, Lemma 47.12.5. Assume $s > 0$ and the lemma has been shown for smaller values of $s$. Let $M' \subset M$ be the maximal submodule whose support is contained in $V(I) \cup T$. Then $M'$ is a finite $A$-module whose support is contained in $V(J') \cup V(I)$ for some ideal $J' \subset J$ with $V(J') \cap V(I) = V(J) \cap V(I)$. 
We claim that $R\Gamma _{J'}(M') \to R\Gamma _ J(M')^\wedge$ is an isomorphism for any choice of $J'$. Namely, we can choose a short exact sequence $0 \to M_1 \oplus M_2 \to M' \to N \to 0$ with $M_1$ annihilated by a power of $J'$, with $M_2$ annihilated by a power of $I$, and with $N$ annihilated by a power of $I + J'$. Thus it suffices to show that the claim holds for $M_1$, $M_2$, and $N$. In the case of $M_1$ we see that $R\Gamma _{J'}(M_1) = M_1$ and since $M_1$ is a finite $A$-module and $I$-adically complete we have $M_1^\wedge = M_1$. This proves the claim for $M_1$ by the initial remarks of the proof. In the case of $M_2$ we see that $H^ i_ J(M_2) = H^ i_{I + J}(M) = H^ i_{I + J'}(M) = H^ i_{J'}(M_2)$ are annihilated by a power of $I$ and hence derived complete. Thus the claim in this case also. For $N$ we can use either of the arguments just given. Considering the short exact sequence $0 \to M' \to M \to M/M' \to 0$ we see that it suffices to prove the lemma for $M/M'$. Thus we may assume $\text{Ass}(M) \cap (V(I) \cup T) = \emptyset$. Let $\mathfrak p \in \text{Ass}(M)$ be such that $V(\mathfrak p) \cap V(J) \cap V(I) = \emptyset$. Since $I$ is contained in the Jacobson radical of $A$ this implies that $V(\mathfrak p) \cap V(J') = \emptyset$ for any $J' \subset J$ with $V(J') \cap V(I) = V(J) \cap V(I)$. Thus setting $N = H^0_\mathfrak p(M)$ we see that $R\Gamma _ J(N) = R\Gamma _{J'}(N) = 0$ for all $J' \subset J$ with $V(J') \cap V(I) = V(J) \cap V(I)$. In particular $R\Gamma _ J(N)^\wedge = 0$. Thus we may replace $M$ by $M/N$ as this changes the structure of $M$ only in primes which do not play a role in conditions (4) or (5). Repeating we may assume that $V(\mathfrak p) \cap V(J) \cap V(I) \not= \emptyset$ for all $\mathfrak p \in \text{Ass}(M)$. Assume $\text{Ass}(M) \cap (V(I) \cup T) = \emptyset$ and that $V(\mathfrak p) \cap V(J) \cap V(I) \not= \emptyset$ for all $\mathfrak p \in \text{Ass}(M)$. Let $\mathfrak p \in \text{Ass}(M)$. We want to show that we may apply Lemma 52.8.1. It is in the verification of this that we will use the supplemental condition (5). Choose $\mathfrak p' \subset \mathfrak p$ and $\mathfrak q' \subset V(\mathfrak p) \cap V(J) \cap V(I)$. 1. If $M_{\mathfrak p'} = 0$, then $\text{depth}(M_{\mathfrak p'}) = \infty$ and $\text{depth}(M_{\mathfrak p'}) + \dim ((A/\mathfrak p')_{\mathfrak q'}) > d + s$. 2. If $\text{depth}(M_{\mathfrak p'}) < s$, then $\text{depth}(M_{\mathfrak p'}) + \dim ((A/\mathfrak p')_{\mathfrak q'}) > d + s$ by (4). In the remaining cases we have $M_{\mathfrak p'} \not= 0$ and $\text{depth}(M_{\mathfrak p'}) \geq s$. In particular, we see that $\mathfrak p'$ is in the support of $M$ and we can choose $\mathfrak p'' \subset \mathfrak p'$ with $\mathfrak p'' \in \text{Ass}(M)$. 1. Observe that $\dim ((A/\mathfrak p'')_{\mathfrak p'}) \geq \text{depth}(M_{\mathfrak p'})$ by Algebra, Lemma 10.72.9. If equality holds, then we have $\text{depth}(M_{\mathfrak p'}) + \dim ((A/\mathfrak p')_{\mathfrak q'}) = \text{depth}(M_{\mathfrak p''}) + \dim ((A/\mathfrak p'')_{\mathfrak q'}) > s + d$ by (4) applied to $\mathfrak p''$ and we are done. This means we are only in trouble if $\dim ((A/\mathfrak p'')_{\mathfrak p'}) > \text{depth}(M_{\mathfrak p'})$. This implies that $\dim (M_\mathfrak p) \geq s + 2$. Thus if (5)(a) holds, then this does not occur. 2. 
If (5)(b) holds, then we get $\text{depth}(M_{\mathfrak p'}) + \dim ((A/\mathfrak p')_{\mathfrak q'}) \geq s + \delta (\mathfrak p') - \delta (\mathfrak q') \geq s + 1 + \delta (\mathfrak p) - \delta _{max} > s + d$ as desired. 3. If (5)(c) holds, then we get \begin{align*} \text{depth}(M_{\mathfrak p'}) + \dim ((A/\mathfrak p')_{\mathfrak q'}) & \geq s + \delta (\mathfrak p') - \delta (\mathfrak q') \\ & \geq s + 1 + \delta (\mathfrak p) - \delta (\mathfrak q') \\ & = s + 1 + \delta (\mathfrak p) - \delta (\mathfrak q) + \delta (\mathfrak q) - \delta (\mathfrak q') \\ & > s + 1 + (s + d + \delta _{max} - \delta _{min} - 2) + \delta (\mathfrak q) - \delta (\mathfrak q') \\ & \geq 2s + d - 1 \geq s + d \end{align*} as desired. Observe that this argument works because we know that a prime $\mathfrak q \in V(\mathfrak p) \cap V(J) \cap V(I)$ exists. Now we are ready to do the induction step. Choose an ideal $J_0$ as in Lemma 52.8.2 and an integer $t > 0$ such that $(J_0I)^ t$ annihilates $H^ s_ J(M)$. The assumptions of Lemma 52.8.1 are satisfied for every $\mathfrak p \in \text{Ass}(M)$ (see previous paragraph). Thus the annihilator $\mathfrak a \subset A$ of $H^ s(R\Gamma _ J(M)^\wedge )$ is not contained in $\mathfrak p$ for $\mathfrak p \in \text{Ass}(M)$. Thus we can find an $f \in \mathfrak a(J_0I)^ t$ not in any associated prime of $M$ which is an annihilator of both $H^ s(R\Gamma _ J(M)^\wedge )$ and $H^ s_ J(M)$. Then $f$ is a nonzerodivisor on $M$ and we can consider the short exact sequence $0 \to M \xrightarrow {f} M \to M/fM \to 0$ Our choice of $f$ shows that we obtain $\xymatrix{ H^{s - 1}_{J'}(M) \ar[d] \ar[r] & H^{s - 1}_{J'}(M/fM) \ar[d] \ar[r] & H^ s_{J'}(M) \ar[d] \ar[r] & 0 \\ H^{s - 1}(R\Gamma _ J(M)^\wedge ) \ar[r] & H^{s - 1}(R\Gamma _ J(M/fM)^\wedge ) \ar[r] & H^ s(R\Gamma _ J(M)^\wedge ) \ar[r] & 0 }$ for any $J' \subset J_0$ with $V(J') \cap V(I) = V(J) \cap V(I)$. Thus if we choose $J'$ such that it works for $M$ and $M/fM$ and $s - 1$ (possible by induction hypothesis – see next paragraph), then we conclude that the lemma is true. To finish the proof we have to show that the module $M/fM$ satisfies the hypotheses (4) and (5) for $s - 1$. Thus we let $\mathfrak p$ be a prime in the support of $M/fM$ with $\text{depth}((M/fM)_\mathfrak p) < s - 1$ and with $V(\mathfrak p) \cap V(J) \cap V(I)$ nonempty. Then $\dim (M_\mathfrak p) = \dim ((M/fM)_\mathfrak p) + 1$ and $\text{depth}(M_\mathfrak p) = \text{depth}((M/fM)_\mathfrak p) + 1$. In particular, we know (4) and (5) hold for $\mathfrak p$ and $M$ with the original value $s$. The desired inequalities then follow by inspection. $\square$ [2] For example if $M$ satisfies Serre's condition $(S_ s)$ on the complement of $V(I) \cup T$.
top of page • BGC Metro Baltimore You Woke Updated: Mar 23, 2021 Hear he Hear he! The earth has spoke We thought it was a mispoke There were clear signs Like Prince "The sign of the times" We were procrastinating When we should have been perservitating We were hesitating That's why we are now meditating Now we get a reprieve Do we repeat the same scene Now we should know how much we need each other So let's understand how to treat one another She has spoke Are you woke
# A post-processing for BFI inferenceClosedPublicActions Authored by spupyrev on May 27 2021, 3:36 PM. # Details Reviewers hoy wenlei wmi davidxl Commits rG0a0800c4d10c: A post-processing for BFI inference Summary The current implementation for computing relative block frequencies does not handle correctly control-flow graphs containing irreducible loops. This results in suboptimally generated binaries, whose perf can be up to 5% worse than optimal. To resolve the problem, we apply a post-processing step, which iteratively updates block frequencies based on the frequencies of their predesessors. This corresponds to finding the stationary point of the Markov chain by an iterative method aka "PageRank computation". The algorithm takes at most O(|E| * IterativeBFIMaxIterations) steps but typically converges faster. It is turned on by passing option use-iterative-bfi-inference and applied only for functions containing profile data and irreducible loops. Tested on SPEC06/17, where it is helping to get correct profile counts for one of the binaries (403.gcc). In prod binaries, we've seen a speedup of up to 2%-5% for binaries containing functions with hot irreducible loops. # Diff Detail ### Event Timeline spupyrev created this revision.May 27 2021, 3:36 PM spupyrev requested review of this revision.May 27 2021, 3:36 PM Herald added a project: Restricted Project. May 27 2021, 3:36 PM spupyrev edited the summary of this revision. (Show Details)May 27 2021, 3:41 PM spupyrev edited the summary of this revision. (Show Details)May 27 2021, 3:44 PM hoy added reviewers: .May 27 2021, 3:46 PM wlei added a subscriber: wlei.May 27 2021, 3:51 PM spupyrev edited the summary of this revision. (Show Details)May 27 2021, 3:52 PM thanks for working on this issue. A high level question -- is it possible to do the fix up on a per (irreducible) loop basis? llvm/test/Transforms/SampleProfile/profile-correlation-irreducible-loops.ll 2 why -enable-new-pm = 0? 11 It will be helpful to draw a simple text art CFG to demonstrate the expected bb counts. spupyrev updated this revision to Diff 349009.Jun 1 2021, 10:15 AM Adding asci representation of the test CFGS spupyrev marked an inline comment as done.Jun 1 2021, 10:30 AM thanks for working on this issue. A high level question -- is it possible to do the fix up on a per (irreducible) loop basis? Would you mind expanding on why you'd prefer a per-loop solution? In general, we found that processing the entire control-flow graph (in opposite to identifying some "problematic" subgraphs first) is much easier from the implementation point of view, while it still keeps the alg fairly efficient. We have a notion of "active" blocks that are being updated, and the algorithm processes only such active vertices. Thus if the input counts are incorrect in a single loop, the algorithm will quickly learn that and will not touch the rest of the graph. llvm/test/Transforms/SampleProfile/profile-correlation-irreducible-loops.ll 2 Without the option, I get Cannot specify -analyze under new pass manager, either specify '-enable-new-pm=0', or use the corresponding new pass manager pass, e.g. '-passes=print<scalar-evolution>'. For a full list of passes, see the '--print-passes' flag. thanks for working on this issue. A high level question -- is it possible to do the fix up on a per (irreducible) loop basis? Would you mind expanding on why you'd prefer a per-loop solution? Mainly to reduce compile time overhead, but you have explained that it is not an issue. 
In general, we found that processing the entire control-flow graph (in opposite to identifying some "problematic" subgraphs first) is much easier from the implementation point of view, while it still keeps the alg fairly efficient. We have a notion of "active" blocks that are being updated, and the algorithm processes only such active vertices. Thus if the input counts are incorrect in a single loop, the algorithm will quickly learn that and will not touch the rest of the graph. llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1388 why is this map needed (which adds a layer of indirection)? 1397 is it possible, given the blocks are hot? llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1388 The map is used to index successors/predecessors of "hot" blocks, see line 1603. As an optimization, we don't process all the blocks in a function but only those that can be reached from the entry via branches with a positive probability. These are HotBlocks in the code. Typically, the number of HotBlocks is 2x-5x smaller than the total number of blocks in the function. In order to find an index of a block within the list, we either need to do a linear scan over HotBlocks, or have such an extra map. 1397 In theory, there is no guarantee that at least one of getFloatingBlockFreq is non-zero. (Notice that our "definition" of hot blocks does not rely on the result of the method). In practice, I've never seen this condition satisfied in our extensive evaluation. So let me change it to an assertion. spupyrev updated this revision to Diff 349340.Jun 2 2021, 11:51 AM llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1439 can this overflow? 1440 why not using ScaledNumber::get(uint64_t) interface? 1443 why multiplying by Freq.size()? Should the option description reflect this? 1451 Can this loop be moved into computation of probMatrix and pass the succ vector in to avoid redundant computation. 1489 Does it apply to other backedges too? spupyrev marked an inline comment as done.Jun 3 2021, 5:58 PM llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1440 here we convert double EPS = 1e-12 to Scaled64, so need some magic. ScaledNumber::get(uint64_t) won't work for values < 1 1443 good point! I renamed the option and adjusted the description 1451 Here Successors represent successors of each vertex in the (auxiliary) graph. It is different from Succs object in the original CFG. (In particular the auxiliary graph contains jumps from all exit block to the entry) Also I find the current interface a bit cleaner: the main inference method, iterativeInference, takes the probability matrix as input and returns computed frequencies. Successors is an internal variable needed for computation. 1489 not sure I fully understand the question, but we need an adjustment only for self-edges; blocks without self-edges don't need any post-processing I added a short comment before the loop llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1489 NewFreq /= OneMinusSelfProb looks like multiply the block freq (one iteration loop) with the average trip count -- that is why I asked if this applies to other backedges. spupyrev marked an inline comment as done.Jun 4 2021, 1:46 PM llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1489 Here is the relevant math: we want to find a new frequency for block I, Freq[I], such that it is equal to \sum Freq[J] * Prob[J][I], where the sum is taken over all (incoming) jumps (J -> I). These are "ideal" frequencies that BFI is trying to compute. 
Clearly if I-th block has no self-edges, then we simply assign Freq[I]:=\sum Freq[J] * Prob[J][I] (that is, no adjustment). However, if there are self_edges, we need to assign Freq[I]:=(\sum Freq[J] * Prob[J][I]) / (1 - Prob[I][I]) (the adjustment in the code) llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1489 I wonder why the special treatment is needed in the first place. Suppose we have BB1 (init freq = 50) | V <----------------- BB2 (int freq = 0) | / \ 90% | / 10%\____________| < With iterative fixup, BB2's frequency will converge to 500, which is the right value without any special handling. llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1489 Excellent example! The correct inference here is Freq[BB1] = 50, Freq[BB2] = 500, which is found after 5 iterations using the diff. If we remove the self-edge adjustment, we don't get the right result: it converges to Freq[BB1] = 50, Freq[BB2] = 50 after ~100 iterations. (Observe that we do modify the frequency of the entry block, it is not fixed) In general, I do not have a proof that the Markov chain always converges to the desired stationary point, if we incorrectly update frequencies (e.g., w/o the self-edge adjustment) -- I suspect it does not. llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1489 By entry frequency, do you mean BB1's frequency? BB1 won't be active after the first iteration right? llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1489 Yes I meant BB1's frequency. Notice that in order to create a valid Markov chain, we need to add jumps from all exists to the entry. In this case, from BB2 to BB1. So BB1 will be active on later iterations llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1489 Can you verify if it still works without the adjustment: in the small example, split BB2 into two BBs. llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1489 I've commented above: If we remove the self-edge adjustment, we don't get the right result: it converges to Freq[BB1] = 50, Freq[BB2] = 50 after ~100 iterations. In general, I do not have a proof that the Markov chain always converges to the desired stationary point, if we incorrectly update frequencies (e.g., w/o the self-edge adjustment) -- I suspect it does not. What is the concern/question here? In my mind, this is not a "fix/hack" but the correct way of applying iterative inference. llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1489 There is not much concerns and the patch is almost good to go in. Just want to make sure the algo works for all cases. Also thanks for the patience! llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1385 nit: it looks like this is just finding reachable/live blocks instead of hot blocks, hence the naming could be misleading. 1388 I think we could avoid having the index and extra map if the ProbMatrix (and other data structure) use block pointer instead of index as block identifier, and still remove cold blocks in the processing - i.e. replacing various vectors with map<BasicBlock*, ..>. I think that may be slightly more readable, but using index as identifier is closer to the underlying math.. Either way is fine to me. 1489 Does self probability map to damping factor in original page rank? llvm/lib/Analysis/BlockFrequencyInfoImpl.cpp 60 perhaps iterative-bfi-precision or something alike is more reflective of what it does? It'd be helpful to mention somewhere in the comment or description the trade off between precision and run time (iterations needed to converge). 
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 993 Nit: how about giving the type a name, like using ProbMatrixType = std::vector<std::vector<std::pair<size_t, Scaled64>>>; ? 1495 Wondering if it makes sense to not set I active. When I gets an noticeable update on its counts, its successors should be reprocessed thus they should be set active. But not sure I itself should be reprocessed. 1595 Should the probability of parallel edges be accumulated? llvm/test/Transforms/SampleProfile/profile-correlation-irreducible-loops.ll 3 The pseudo-probe pass is probably not needed since the test IR comes with pseudo probes. spupyrev marked 3 inline comments as done.Jun 9 2021, 10:17 AM llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1388 Sure that's an option but map access is costlier than using raw indices. Since the hottest loop of the implementation needs such an access, (i bet) the change will yield perf loss 1489 No I don't think damping factor is the same. Self-edges are regular jumps in CFG where the source and the destination blocks coincide. (they are not super frequent e.g., in SPEC, but do appear sometimes). You can always replace a self-edge from block B1->B1 with two jumps B1->BX and BX->B1, where BX is a "dummy" block containing exactly one incoming and one outgoing jump. Then the inference problem on the new modified CFG (that contains no self-edges) is equivalent to the original problem. This transformation also shows that we cannot simply ignore self-edges, as the inference result might change. 1495 This is a very good question, thanks! I had exactly the same feeling and tried to modify this part as suggested. Unfortunately, it does result in (significantly) slower convergence in some instances, while not providing noticeable benefits. I don't have a rigorous explanation (the alg is a heuristic anyway), but here is my intuition: We update frequencies of blocks in some order, which is dictated by ActiveSet (currently that's simply a queue). This order does affect the speed of convergence: For example, we want to prioritize updates of frequencies of blocks that are a part of a hot loop. If at an iteration we modify frequency of I, then there is a higher chance that block I will need to be updated later. Thus, we explicitly add it to the queue so that it's updated again as soon as possible. There are likely alternative strategies here, e.g., having some priority-based queues and/or smarter strategy for deciding when I needs to be updated. I played quite a bit with various versions but couldn't get significant wins over the default (simplest) strategy. So let's keep this question as a future work. 1595 In my tests, I see parallel edges are always coming with exactly the same probability, and their sum might exceed 1.0. I guess that's an assumption/invariant used in BPI. spupyrev updated this revision to Diff 350941.Jun 9 2021, 10:22 AM ProbMatrixType hoy accepted this revision.Jun 10 2021, 9:47 AM llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h 1495 Thanks for the explanation. Looks like the processing order matters here but hard to track the exact order. Sounds good to keep the current implementation. 1595 You're right. getEdgeProbability returns the sum of all raw edge probabilities from Src to Dst. /// Get the raw edge probability calculated for the block pair. 
**This returns the /// sum of all raw edge probabilities from Src to Dst.** BranchProbability BranchProbabilityInfo::getEdgeProbability(const BasicBlock *Src, const BasicBlock *Dst) const { if (!Probs.count(std::make_pair(Src, 0))) return BranchProbability(llvm::count(successors(Src), Dst), succ_size(Src)); auto Prob = BranchProbability::getZero(); for (const_succ_iterator I = succ_begin(Src), E = succ_end(Src); I != E; ++I) if (*I == Dst) Prob += Probs.find(std::make_pair(Src, I.getSuccessorIndex()))->second; return Prob; } This revision is now accepted and ready to land.Jun 10 2021, 9:47 AM davidxl accepted this revision.Jun 10 2021, 4:29 PM lgtm wenlei accepted this revision.Jun 11 2021, 9:30 PM lgtm, thanks for working on this Sergey! @davidxl @wmi We found this iterative bfi to work better comparing to irreducible loop header metadata approach. Curious to know if it would produce better results for your workload too. This revision was landed with ongoing or failed builds.Jun 11 2021, 9:52 PM Closed by commit rG0a0800c4d10c: A post-processing for BFI inference (authored by spupyrev, committed by wenlei). This revision was automatically updated to reflect the committed changes. Will evaluate it. If the results are good, we can flip it on by default.
# Calculate the $\int_0^{2\pi}\cos(mx)\cos(nx)dx$ I'm having trouble with this problem: Consider the integral: $$\tag 1\int_0^{2\pi}\cos(mx)\cos(nx)dx$$ a. Write $\cos(mx)$ and $\cos(nx)$ in terms of complex exponentials and compute $\cos(mx)\cos(nx)$ b. Show that, for integer $L$: $$\int \exp(iLx)dx = \begin{cases} 2\pi, & \text{ if } L=0 \\ 0, & \text{otherwise}, \end{cases}$$ (where i is a complex number) c. Compute the integral in $(1)$ by using the above. - That's a pretty good program you have here. Have you tried to apply it? – 1015 Feb 19 '13 at 3:02 "where $i$ is a complex number" -- I'm sure that in this context $i$ is a square root of $-1$, not just any complex number. – Jonas Meyer Feb 19 '13 at 3:10 @julian: why should Leslie try, if people here will solve it for him/her, within 10 minutes? – GEdgar Feb 19 '13 at 3:26 Using complex exponentials, or more elementarily the cosine addition laws $\cos(A+B)=\cos A\cos B-\sin A\sin B$ and its twin $\cos(A-B)=\cos A\cos B+\sin A\sin B$, we find that $$\cos(mx)\cos(nx)=\frac{1}{2}\left(\cos((m+n)x)+\cos((m-n)x)\right).$$ Integrate. Be careful about the case $m=n$. - and also about $m+n=0?$ – lab bhattacharjee Feb 19 '13 at 4:40 That too, if we are interested in negative integers. In the trigonometric series context, which this question usually belongs to, the integers are non-negative. – André Nicolas Feb 19 '13 at 4:44 Hint: $$\cos kx=\frac{e^{ikx}+e^{-ikx}}{2}\Longrightarrow \cos mx\cos nx=\frac{1}{4}\left(e^{i(m+n)x}+e^{-i(m+n)x}+e^{i(m-n)x}+e^{-i(m-n)x}\right)$$ - Recall that $$\cos x = \frac{e^{ix}+e^{-ix}}{2}$$ Then expand $$\cos (mx)\cos (nx) = \left(\frac{e^{imx}+e^{-imx}}{2}\right) \left(\frac{e^{inx}+e^{-inx}}{2}\right)$$ so it becomes a sum of complex exponentials. For the second part, merely find the antiderivative of $\exp(i L x), \, \forall L \neq 0$ and then find that the definite integral over $[0, 2\pi]$. Then do the same for when $L = 0$. Combine the two above results to conclude what the integral of the cosines is. -
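Putting the two hints together (assuming, as noted in the comments above, that $m$ and $n$ are non-negative integers), the computation finishes as follows:

$$\int_0^{2\pi}\cos(mx)\cos(nx)\,dx = \frac{1}{2}\int_0^{2\pi}\left[\cos\big((m+n)x\big)+\cos\big((m-n)x\big)\right]dx = \begin{cases} 2\pi, & \text{if } m=n=0, \\ \pi, & \text{if } m=n\neq 0, \\ 0, & \text{if } m\neq n, \end{cases}$$

since $\int_0^{2\pi}e^{iLx}\,dx$ (and hence $\int_0^{2\pi}\cos(Lx)\,dx$) equals $2\pi$ for $L=0$ and $0$ for any nonzero integer $L$.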
# Photographing the State of the Union

Photographer John Harrington (http://photobusinessforum.blogspot.com) covered how (and where) photographers shot last night's State of the Union address. Check out his video below, it's an interesting behind-the-scenes look.
# Linear systems separating points Is it easy to find an example of a complete linear system on a smooth projective curve (say over $$\mathbb C$$) which separates points but which is not an embedding? (for just a linear system, one can take the linear system induced by a linear projection (in an embedding) from a point which is not on any secant line of the curve but lies on a tangent line of the curve). Take for $$X$$ a trigonal curve of genus $$\geq 5$$; that is, $$X$$ carries a unique degree 3 pencil $$P$$. Suppose some divisor of $$P$$ is of the form $$p+2q$$. Consider the linear system $$|K-p|$$. If $$r,s$$ are two distinct points of $$X$$, we have $$h^0(p+r+s)=1$$ by unicity of the $$g^1_3$$, hence $$h^0(K-p-r-s)=h^0(K-p)-2$$ by Riemann-Roch; thus $$|K-p|$$ separates $$r$$ and $$s$$. But $$h^0(K-p-2q)=h^0(K-p)-1$$, which means that the associated map is not an embedding at $$q$$. I believe that on any non-rational and non-hyperelliptic curve , the complete linear system $$L = K_X(2p)$$ will work. In such a case $$2p - p_1 -p_2$$ will always be a non-trivial divisor of degree zero unless $$p = p_1 = p_2$$ and hence $$h^0(L) = g+1$$ and $$h^0(L(-p_1 -p_2) ) = g-1$$ unless as stated $$p = p_1 = p_2$$.
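To spell out the Riemann–Roch step used in the first answer above (a routine expansion, with $h^0(p+r+s)=1$ as argued there and $h^0(p)=1$ since $X$ is not rational): Riemann–Roch together with Serre duality gives
$$h^0(p+r+s)-h^0(K-p-r-s)=\deg(p+r+s)-g+1=4-g,$$
so $h^0(K-p-r-s)=g-3$, while
$$h^0(p)-h^0(K-p)=1-g+1 \quad\Longrightarrow\quad h^0(K-p)=g-1.$$
Hence $h^0(K-p-r-s)=h^0(K-p)-2$, which is exactly the condition for $|K-p|$ to separate $r$ and $s$.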
# From options A and D, which one is right? Both answers are the same, but in the solution option D is given as correct, so which is the right option, A or D?

DR STRANGE 212 Points
5 years ago

Hi Vedant, the A and D answers are not the same! D is in the indeterminate form, while a, b and c are all of the similar form $\lim_{n \to \infty } (\text{real number})^{x}$.

HOPE IT CLEARS
### Sixational The nth term of a sequence is given by the formula n^3 + 11n . Find the first four terms of the sequence given by this formula and the first term of the sequence which is bigger than one million. Prove that all terms of the sequence are divisible by 6. ### Squaresearch Consider numbers of the form un = 1! + 2! + 3! +...+n!. How many such numbers are perfect squares? ### Loopy Investigate sequences given by $a_n = \frac{1+a_{n-1}}{a_{n-2}}$ for different choices of the first two terms. Make a conjecture about the behaviour of these sequences. Can you prove your conjecture? # Summing Squares ### Why do this problem? This problem offers an opportunity to visualise in three dimensions and gives practice in working with sequences that do not grow linearly. It provides a possible introduction to the formula for the sum of the first $n$ square numbers. By considering how the cuboids grow from the $n^{th}$ to the $(n+1)^{th}$ the foundations are laid for learning about proof by induction. ### Possible approach Set the scene for the problem - we are building up cuboids from a sequence of square prisms, adding on six square prisms each time. Show an image of the first 3 by 2 by 1 cuboid and the second 5 by 3 by 2 cuboid. Give the students time to consider the first problem - is it possible to make the second cuboid by adding the six blue square prisms to it, without splitting any of them? Different people visualise this in different ways so if isometric paper and cubes are available, some students may wish to use them to share their ideas. Allow plenty of time for students to share in pairs and then with the rest of the class their justifications for why at least one of the square prisms needs to be split. Encourage those who can do it by only splitting one to share their methods. Next, set the problem of continuing the sequence by adding on the six pink square prisms. There is an image in the hint showing the new cuboid formed. Again, the question is whether any square prisms need to be split, and if so, how many. Explain that they are trying to find a way to describe how to assemble the next cuboid when six new pieces are added on, and give the students time to investigate this. Encourage them to record their thinking in a way that captures the way they "see" it. Pause to reflect on what has been discovered so far: $$6 \times 1^2 = 3 \times2 \times1$$ $$6\times(1^2+2^2) = 5\times3\times2$$ $$6\times(1^2+2^2+3^2) = 7\times4\times3$$ Ask them to predict how to continue this sequence, making reference to the cuboids involved. Once students have a consistent way of making the next cuboid by adding on six square prisms, work can be done to express the dimensions and the volume of the $n^{th}$ cuboid. Small groups could produce a diagram and explanation to show how they would add pieces on to make the $(n+1)^{th}$ cuboid. Finally, once a formula for the volume of the $n^{th}$ cuboid has been expressed, students can consider the relationship between what they have found and the sum of the first $n$ square numbers. ### Key questions Can you make the second cuboid by adding the six blue square prisms? Will you need to split any? Can you visualise a way of making the third cuboid from the second, by adding the six pink prisms? Can you visualise a way of making any cuboid from the previous one in this way? Are you sure you will always be able to make the next cuboid? ### Possible extension Sums of Powers shows another way of considering the sum of the first $n$ square numbers.
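If students want to check the emerging pattern numerically, a few lines of code confirm that six copies of $1^2+2^2+\cdots+n^2$ always fill an $n\times(n+1)\times(2n+1)$ cuboid (a quick sketch, not part of the original activity):

```python
# check 6 * (1^2 + 2^2 + ... + n^2) = n * (n + 1) * (2n + 1) for small n
for n in range(1, 11):
    six_times_sum = 6 * sum(k * k for k in range(1, n + 1))
    cuboid_volume = n * (n + 1) * (2 * n + 1)
    print(n, six_times_sum, cuboid_volume, six_times_sum == cuboid_volume)
```

From this the usual formula for the sum of the first $n$ squares, $\tfrac{1}{6}n(n+1)(2n+1)$, can be read off.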
# Creating a Curriculum Vitæ with LaTeX & the CurVe Document Class

Whilst benefitting from different LaTeX strengths, such as professional typography and beautiful documents, the CurVe class also handles different CVs for different occasions particularly well. The CurVe class has a variety of options which allows customisation of your résumé. The documentation for this document class is in-depth and may be overwhelming for LaTeX learners with little experience. However, Didier Verna, the author of the document class, published an article in PracTeX Journal (2006) which demonstrates the basic functionality of the CurVe class in a more accessible fashion: LaTeX curricula vitae with the CurVe class. The article is more readable and entry-level user-friendly than the documentation with its abundance of options. If you don't want to go through a whole tutorial, the cheat sheet «From blank page to CV with CurVe & \LaTeX» may be the faster route to get it done.

A sidenote & disclaimer: I am a German, writing this article in the UK, where most refer to the curriculum vitæ as a CV, curriculum vita or curriculum vitae. Visitors from North America know this document as a resumé or résumé. I use the different terms interchangeably, even though Wikipedia tells me that there is an apparent difference in what constitutes a résumé and a CV in different cultural contexts. And the last bit of fineprint, before we're getting started: this is an article about how to produce a LaTeX CV – I don't wish or aim to offer advice on the content a CV should present. I'm no expert in that department and all the example items are, well, exemplary to show off the CurVe class. Finally, let's get started!

## Getting started: document structure

Using the CurVe class requires you to use a highly structured approach: you have a main document (CV.ltx) and, depending on the number of sections in your CV, various files that contain the different sections, such as education.ltx, work.ltx, skills.ltx etc. These document parts are stored in separate files and are called from within the main CV.ltx document. This is similar to working on a larger project where you save parts of a document in separate files and load them into the master document via the commands \include, \includeonly or \excludeonly without having to include all parts at once.

### The main document: preamble

The main or master document CV.ltx includes the preamble and the integration of the separate CV sections. The following is a working example for the preamble of a document using the CurVe class:

\NeedsTeXFormat{LaTeX2e}
\documentclass{curve}        % the CurVe document class (curve.cls)
\usepackage[T1]{fontenc}     % special characters, etc.
\usepackage[english]{babel}  % the language of your CV
\usepackage{textcomp}        % Trademarks and other symbols
\usepackage{hyperref}        % needed for \hypersetup below
%%% setup the options for the hyperref package
\hypersetup{%
  pdftex=true,%
  urlcolor=darkblue,%  'darkblue' must be defined elsewhere, e.g. via the xcolor package
}
% Who are you? e.g. Assistant Professor, Chief Engineer, Senior Executive:
\subtitle{at the Peace Institute, Z\"urich, Switzerland}
% vertical space after subrubric
\setlength\subrubricspace{3ex}% CurVe default: 5pt
% special characters as bullet point symbols
\usepackage{pifont}          % only if you need them
\prefix{\ding{43}}           % pointing finger as a bullet list item symbol
% if a rubric is spread onto multiple pages, the rubric name is repeated and followed by:
\continuedname{~~(continued)} % for a CV in English

### The main document: the CV sections

The main document CV.ltx refers to the different sections of the CV in rubric documents (either as *.tex or *.ltx files).
Let’s have a look at some common rubrics we may see in almost every résumé or curriculum vitæ: • experience.ltx – includes prior jobs, internships, volunteer gigs etc. • education.ltx – information about your eduction such as apprenticeships, degree programmes etc. • skills.ltx – What is your particular skillset? software, languages, etc. Other potential sections, especially if you are applying for academic positions, scholarships and so forth: • references.ltx – if you are already at the stage where you want to include contact details for referees • publications.ltx – if you have a list of publications Note that the rubric names are not defined by the file names you choose. However, as we will see later, the amount of files in your working directory can become confusing quite quickly. To avoid confusion you’d want to make navigating the different sections as easy as possible. One way to facilitate clarity is to use meaningful file names for your rubric files. The file names and the rubric names are yours to choose. However, make sure that the file names do not contain spaces, i.e. avoid file name.ltx use file_name.ltx, or even better: rubric_name.ltx, instead. The main document follows a very simple structure: \begin{document} \maketitle % call the rubric files in preferred order % make sure your file names are correct \makerubric{experience} % calls experience.ltx \makerubric{education} % calls education.ltx \makerubric{skills} % calls skills.ltx \makerubric{references} % calls references.ltx \makerubric{publications} % calls publications.ltx \end{document} % end of main document CV.ltx This is a minimalistic version that demonstrates how the CurVe class works. There are several different ways to adapt the style and form of the output beyond the supplied defaults. I urge anybody who wants to start tweaking the results to have a look at the documentation of the CurVe class. Make sure to refer to the correct version of the documentation which should match your installed version of the CurVe class. ### The sections of your CV: example of a rubric file A rubric file – in this example experience.ltx – is basically a single LaTeX file which only contains a list that represents the stages of your CV under a thematic headline, such as «Professional experience»: experience.ltx \begin{rubric}{Professional Experience} % rubric title, e.g. Work Experience, Education, Interests… \subrubric{Research} % subrubric title for the first subrubric / subsection of this rubric “Professional Experience”, e.g. research, volunteer work, internships… \entry*[Month/Year — Month/Year] % 1st step Graduate Assistant, Department for Science Fiction, Hogwarts School of Witchcraft and Wizardry, UK (xy hours/week) % description of your 1st step \entry*[MM/YY — MM/YY] % 2nd step Research Internship with the “Non-Violent Com-mu-ni-ca-tion”-Project at the Peace Institute, Z"urich, Switzerland % description of your 2nd step \subrubric{Honorary Office} % subrubric title for the second subrubric / subsection of this rubric “Professional Experience”, e.g. research, volunteer work, internships… \entry*[Month/Year — Month/Year] % 1st step in the second subrubric Charter member and organizer of the Association for Young Wizards, London, UK \entry*[MM/YY — MM/YY] % 2nd step in the second subrubric and even more volunteer work \end{rubric} The generic framework of a rubric file is very simple: \begin{rubric}{Rubric Title} % open the rubric, choose a title, e.g. 
Education, Work Experience, Interests, Skills, … \subrubric{Subrubric Title} % bspw. Research, Internships, Volunteering, … \entry*[Month/Year — Month/Year] % if you do not need a time period here, you can use keywords, umbrella terms etc. % A short description of this stage in your CV follows – don’t forget the action words! \end{rubric} % close the rubric A résumé, of course, includes several different rubric documents, but all follow the same basic structure. You have to decide for yourself how many rubrics, subrubrics and therefore, sections you will include in your CV. However, I think it advisable to avoid subrubrics with only one entry. If you run into the dilemma of having a rubric or subrubric with a singular entry, you may be able to rephrase another subrubric title or rubric title to accommodate this particular item. ### A special case: a list of publications If you have a list of publications you want to reference in your CurVe class CV you can include them via a BibTeX file. The output of bibliographic reference may be adapted using the BibLaTeX package. However, this is only viable if you already use BibLaTeX and your publications are already stored in the BibTeX format. If you don’t have any experience with the former, I think a complete introduction into the BibTeX universe is a bit too much only to be able to refer to two or three of your articles within your CV. Should you decide to use BibLaTeX be sure to load the package before you load the hyperref package! For a simple bibliography with only a few entries, thebibliography and \bibitem are your best options to produce something quickly. The thebibliography environment is supported by CurVe as it treats the environment as a special variety of the rubric structure. ## One CV – several variants CurVe allows «flavors» and enables the production of different outputs for one CV. The different sections of one CV are stored in separate «rubric» documents. If you need different variants of that CV – e.g. to apply for a academic type job and for an industry job – you can highlight different areas of your experience by applying these «flavors» to your CV. The section or rubric of your CV which is available in different forms are stored in rubric documents with prefixes to their file extension. Instead of one experience.ltx file you will need the (two) different versions of the file in the same folder as your main document. In this example there would be two files named according to the pattern rubricname.flavorname.ltx, as in experience.academic.ltx and experience.corp.ltx. By calling on \flavor{corp} in the preamble of your main document, CV.ltx, you include the experience.corp.ltx version of your job experience section. Respectively, you call on \flavor{academic} to include the experience.academic.ltx version of your job experience section. Note that each «flavored» rubric file contains a complete rubric with all the information you want to include with this flavor or for this purpose. The source code for the rubric file does not change – the information within the rubric is merely geared toward a specific purpose, specific types of applications, jobs etc. There are several ways to switch to a flavor: You may either chose the macro \flavor{flavorname} to call on a specific flavor in the main document, CV.ltx. In our example you would write \flavor{academic} or \flavor{corp}. Or you point the macro to «ask» you every time you typeset the document: \flavor{ask} will then prompt you to specify a flavor. 
## Preparing different output styles In addition to having different types of content for your CV, the CurVe class also accommodates different styles for non-rubric-elements. This means that you could, for instance, have different styles for your title and subtitle elements. A typical example would be: you want a clickable element that links your name (in the title) to your professional homepage. You want to style this linked title for readers of your online CV. However, for people who will read the hardcopy print-out, you’d want a different, printer-friendly version of your CV without the link. The input-element enables you to use non-rubric elements accordingly. You will have two different versions of the title element in your working directory, saved in two separate files: title.print.ltx and title.online.ltx. title.print.ltx contains: \title{Print Version of Your Title} title.online.ltx contains: \title{Online Version of \href{http://link-to-my-homepage.com}{Your Title with Link}} You will call upon those different types in the preamble of your main document via the input command: \input{elementname.flavorname.ltx}. In this case via \input{title.online.ltx} or \input{title.print.ltx}. As this option presents you with new flexibility for putting together a CV that answers all your needs, I want to emphasise again the importance of meaningful file names and comments in your source code. Going back into the specifics of an unusual document class – six months after you put together your first documents – will lead you to start from square 1 again. So make sure you’ll have an easy time understanding what you did and re-acquainting yourself with the specifics rather quickly. ## Nota bene, tips & tricks The CurVe class accomodates various use cases and allows for great flexibility in putting together a résumé in LaTeX. However, as with all things that require a fair amount of attention to detail, the process can be frustrating. When you attempt to will the code into producing a certain output using tried and tested methods that work in other document classes, CurVe may surprise you with error messages or «breaking» your layout. Here are some of the tricks & discoveries I gathered using CurVe over the past five years: • Potential for mix-ups: The title commands \title, \subtitle, \maketitle and the macros and arguments for \leftheader, \rightheader and \makeheader are not the information that goes into a page header. To adapt page header and footer use the fancyhdr package. • Every rubric environment needs explicit opening and closing: (\begin{rubric} … \end{rubric}). • Subrubrics (\subrubric{title}) don’t even require a title entry – the \subrubric{} macro can be empty. • \entry*[]: Whatever you have in square brackets – the longest string within the brackets defines the width of your \entry*[]-column for the whole rubric. I.e., if you have only one \entry*[with an exceptionally long text] in one of your rubric files, the \entry*[]-column in this rubric will be exceptionally wide (and produces whitespace, thus putting your document out of balanced proportions). To avoid this, you’d want to keep entry elements as short as possible. • If you want to produce a consistent \entry*[]-column across all rubrics in your CV, you can include the \noentry{} element which does not produce a printed entry but defines a consistent textwidth for the key column in the rubric: \noentry{example string to produce consistent column width}. Note that the \noentry{} element has curly brackets, not square brackets. 
In order to be consistent across all the rubrics, you'll have to include the same \noentry{} element in each rubric.
• The document class option skipsamekey avoids the duplication of identical \entry*[elements]. If the same \entry*[element] repeats itself without a break, the second output of the \entry*[element] will be suppressed. The item description belonging to this \entry*[element] will be aligned with the other rubric entries as expected.
• Indented line breaks within an \entry*[]-item are produced via the \newline command, not via \\.
# Synopsis: Magnetic order is no match for the lattice

Density-functional calculations provide a comprehensive picture of how magnetic order evolves with doping in two iron pnictide compounds.

Understanding magnetic order in the two major families of iron-based superconductors with $\text{FeAs}$ layers, namely electron-doped ${\text{LaFeAsO}}_{1-x}{\text{F}}_{x}$ and hole-doped ${\text{Ba}}_{1-2y}{\text{K}}_{2y}{\text{Fe}}_{2}{\text{As}}_{2}$, is important because magnetism seems to be intimately tied with superconductivity. At about $150\ \text{K}$, the undoped compounds ($x$ and $y$ = 0) acquire a ferromagnetic stripe ordering along the shorter axis of the square $\text{Fe}$ sublattice, while displaying antiferromagnetic ordering along the longer axis and between the $\text{Fe}$ layers. Doping with enough carriers suppresses the magnetic order and induces superconductivity in both compounds, though in ${\text{Ba}}_{1-2y}{\text{K}}_{2y}{\text{Fe}}_{2}{\text{As}}_{2}$ magnetism and superconductivity coexist for $0.10$. Although density-functional calculations overestimate the value of the $\text{Fe}$ moment, they generally reproduce the observed magnetic structure for undoped pnictides.

In an article appearing in Physical Review B, Alexander Yaresko, Guo-Qiang Liu, Viktor Antonov, and Ole Krogh Andersen at the Max-Planck Institute in Stuttgart, Germany, present comprehensive calculations on experimentally observed crystal structures of ${\text{LaFeAsO}}_{1-x}{\text{F}}_{x}$ and ${\text{Ba}}_{1-2y}{\text{K}}_{2y}{\text{Fe}}_{2}{\text{As}}_{2}$ to determine the magnetic behavior as a function of doping. They find that electron doping above $x>0.1$ in ${\text{LaFeAsO}}_{1-x}{\text{F}}_{x}$ destabilizes the stripe magnetic order and leads to an incommensurate spin spiral order, whereas hole-doping in ${\text{Ba}}_{1-2y}{\text{K}}_{2y}{\text{Fe}}_{2}{\text{As}}_{2}$ leaves the magnetic stripe order intact up to $y=0.25$. The authors find that in both compounds, the classical Heisenberg model with nearest and next-nearest neighbor spin interactions is inadequate to describe the magnetic order and may require additional terms to accurately compute the energy. – Sarma Kancharla
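For reference, the spin model referred to in the closing sentence is the classical $J_1$-$J_2$ Heisenberg Hamiltonian on the square Fe sublattice (written here in standard notation; the synopsis itself does not spell it out):
$$H=J_1\sum_{\langle ij\rangle}\mathbf{S}_i\cdot\mathbf{S}_j+J_2\sum_{\langle\langle ij\rangle\rangle}\mathbf{S}_i\cdot\mathbf{S}_j,$$
with the first sum over nearest-neighbor and the second over next-nearest-neighbor Fe pairs. The authors' conclusion is that no choice of $J_1$ and $J_2$ alone reproduces the doping dependence of the magnetic order they compute.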
# classification of quadrics Consider the projective plane $\mathbb{R}P^2$ and a symmetric matrix $B \neq 0$ of a bilinear form that defines a quadric $Q := \{ [v] \in \mathbb{R}P^2 : v^tBv = 0\}$. Is the following ok? And for the affine part I would need help/tips. I am very new to this stuff and do not really know how to handle these "equivalences" and keep projective, affine, euclidian geometry/maps apart. First classify equivalent quadrics under projective transformations: Sylvester's law of inertia allows to diagonalize $B$ using $S^{t}BS$ where $S \in Gl(\mathbb{R},3)$ so that $B$ has only diagonal elements in $\{0,1,-1\}$. Since $S \in Gl(\mathbb{R},3)$ it is a projective map, and hence the resulting quadrics are equivalent under projective transformations. We have the following 5 equivalence classes of quadrics represented by there diagonal elements: $(1,1,1),\ (1,1,-1),\ (1,1,0),\ (1,-1,0),\ (1,0,0)$. Second assume now the projective plane $\mathbb{C}P^2$. Then in analogy we represent the 3 equivalence classes by there occuring diagonal elements $(1,1,1),\ (1,1,0),\ (1,0,0)$ after transformation. So here Sylvester provides us only with diagonal elements $\{0,1\}$ left and hence the number of classes of quadrics reduces. Third assume again the projective plane $\mathbb{R}P^2$ but now consider only equivalence using affine transformations. Here there should now be more than 5 cases, since less matrices for diagonalzation of $B$ are allowed, namely only those of affine transformations of the form $\begin{pmatrix}A & a \\ 0\ \ 0 & 1\end{pmatrix}$ where $A \in Gl(\mathbb{R},2), a \in \mathbb{R^2}$. But I do not know how to work this out now..help? For me this looks weird.. ## 1 Answer An affine transformation is a projective transformation which fixes the line at infinity. So start with the five classes from $\mathbb R\mathrm P^2$, and examine each further. 1. $(1,0,0)$ is a double line. This splits into two cases: either the line in question is the line at infinity, or it is a finite line. 2. $(1,-1,0)$ consists of two different lines. Here you have three cases to consider: either one of the lines is the line at infinity, or neither is but they still intersect at infinity (i.e. are parallel), or they intersect in a finite point. 3. $(1,1,0)$ are two complex conjugate lines intersecting in a real point. Neither complex line can be the line at infinity, since that is a real line. So you have two cases: the point of intersection is either finite or infinite. 4. $(1,1,-1)$ is the classical non-degenerate real conic. It intersects the ine at infinity in either zero or one or two real points, which corresponds to the classification in ellipse, parabola and hyperbola. 5. $(1,1,1)$ has no real points. So in particular, it has no real points in common with the line at infinity. For the real cases (1, 2, 4) I'm fairly convinced of my result: given two quadrics from the same equivalence class, I could always find an affine transformation to transform one into the other. For the complex cases (3, 5) I was not as sure, since intuition fails me. So in those cases you might want a more reliable proof. • $(1,1,1)$: Complex non-degenerate quadric. Any real line intersects the quadric in two complex conjugate points. So intersect your quadric with $z=0$ (the line at infinity) and with $y=0$ to obtain two pairs of points. 
Doing so for two quadrics of the same kind, you obtain two pairs of points, and there exists a unique projective transformation between these two which maps corresponding pairs to one another. This projective transformation will be affine, since the line at infinity will be fixed due to the intersection points with $z=0$. The matrix will also be real, since the conjugate of the matrix can be determined using the conjugates of the defining pairs, but each pair is its own conjugate, since it consists of conjugate points. • $(1,1,0)$: Two complex lines with real finite point of intersection. The same argument as above still holds, except you should not use the line $y=0$ but instead a line which avoids the real point of intersection so you get two distinct complex conjugate intersections. • $(0,1,1)$: Two complex lines with real infinite point of intersection. In this case you only intersect with a single finite line, one which avoids the real point of intersection and yields a single pair of complex conjugate points. The third defining point for the transformation will be the point of intersection at infinity, and the fourth point can be an arbitrary real point at infinity. The matrix mapping corresponding points will again fix the line at infinity, since two points at infinity are mapped to two points at infinity. It will also be real, since the one pair is its own complex conjugate, and the other points are real already, so each of them is its own conjugate. Furthermore, the quadric is really mapped to its image since each quadric is already defined by its class, the point of intersection at infinity and the two complex points of intersection. So now that the more doubtful cases have been verified, I'm convinced of my classification. You have $2+3+2+3+1=11$ classes to consider in total.
# Differential equation modeling glucose in a patient's body #### ForceBoy Problem Statement "A hospital patient is fed glucose intravenously (directly to the bloodstream) at a rate of r units per minute. The body removes glucose from the bloodstream at a rate proportional to the amount Q(t) present in the bloodstream at time t. " (Finney; Weir; Giordano, 452) A) write the differential eq. B) Solve the diff. eq $Q(0) = Q_0$ C) find the limit as t goes to infinity Relevant Equations The chapter this problem is found in is one on separable differential equations The rate at which glucose enters the bloodstream is $r$ units per minute so: $\frac{dI}{dt} = r$ The rate at which it leaves the body is: $\frac {dE}{dt} = -k Q(t)$ Then the rate at which the glucose in the body changes is: A) $Q'(t) = \frac{dI}{dt} + \frac {dE}{dt} = r - k Q(t)$ I don't see how this is a separable differential equation. I still try to solve it. $\frac{dQ}{dt} + k Q = r$ $Q e^{kt} = \int r e^{kt} dt$ $Q = \frac{r}{e^{kt}}\frac{e^{kx}}{k}$ B) $Q(t) = \frac{r}{k}$ This tells me that the glucose in the bloodstream at any point in time will be a constant. I know this is wrong. It would be appreciated if someone could point me onto the right path to solve this diff. eq. Thank you. Related Calculus and Beyond Homework News on Phys.org #### Orodruin Staff Emeritus Homework Helper Gold Member 2018 Award You missed the integration constant which you will need to adapt the solution to the initial condition. I don't see how this is a separable differential equation. What do you know about separable ODEs? The point of a separable ODE is that you should be able to write it on the form $$y'(t) f(y(t)) = g(t).$$ This is possible in this situation. #### ForceBoy What do you know about separable ODEs? The point of a separable ODE is that you should be able to write it on the form y′(t)f(y(t))=g(t).​ Thank you. I put my equation in the form you gave and solved just fine: $\frac{dQ}{dt} = Q'(t)$ $r - kQ(t) = g(Q(t))$ ________________________ $Q'(t) = g(Q(t))$ $\frac{Q'(t)}{g(Q(t))} =1$ $Q'(t) f(Q(t)) = 1$ $\frac{dQ}{dt} \frac{1}{r-kQ} = 1$ $\frac{dQ}{r-kQ} = dt$ $\int\frac{dQ}{r-kQ} = \int dt$ $\ln|r-kQ | = t +C$ (Can't forget the C now, thanks) $r-kQ = Ae^{t}$ $Q = \frac{r - A e^{t}}{k}$ If the above is correct, then I can solve the rest of the problem. #### Mark44 Mentor Thank you. I put my equation in the form you gave and solved just fine: $\frac{dQ}{dt} = Q'(t)$ $r - kQ(t) = g(Q(t))$ ________________________ $Q'(t) = g(Q(t))$ $\frac{Q'(t)}{g(Q(t))} =1$ $Q'(t) f(Q(t)) = 1$ $\frac{dQ}{dt} \frac{1}{r-kQ} = 1$ $\frac{dQ}{r-kQ} = dt$ $\int\frac{dQ}{r-kQ} = \int dt$ Things are OK but a bit verbose up to the line above. For example, you can go directly from $\frac{dQ}{dt} = r - kQ$ and separate the equation to $\frac{dQ}{r - kQ} = dt$, and skip several of the lines you wrote. ForceBoy said: $\ln|r-kQ | = t +C$ (Can't forget the C now, thanks) Mistake in the line above. What is your substitution when you do the integration? ForceBoy said: $r-kQ = Ae^{t}$ $Q = \frac{r - A e^{t}}{k}$ If the above is correct, then I can solve the rest of the problem. #### hilbert2 Gold Member $\displaystyle\int_{0}^{t}dt' = \int_{Q(0)}^{Q(t)}\frac{dQ}{r-kQ} = -\frac{1}{k}\int_{Q(0)}^{Q(t)}\frac{-kdQ}{r-kQ}$ Then you already have the integration constant in terms of the initial condition $Q(0)$. #### ForceBoy Mistake in the line above. What is your substitution when you do the integration? Oh, I hadn't caught that! Thanks alot. 
Here is the correct version: $Q(t) = \frac{r-Ae^{-kt}}{k}$

$\displaystyle\int_{0}^{t}dt' = \int_{Q(0)}^{Q(t)}\frac{dQ}{r-kQ} = -\frac{1}{k}\int_{Q(0)}^{Q(t)}\frac{-k\,dQ}{r-kQ}$

Then you already have the integration constant in terms of the initial condition $Q(0)$.

This is a great tip. Thanks a lot, this will save me work.

$\displaystyle\frac{-1}{k}\int_{Q(0)}^{Q(t)} \frac{-k\,dQ}{r- kQ} = \int_{0}^{t} dt'$

$\displaystyle\ln(r-kQ)\Big|_{Q(0)}^{Q(t)} = -kt$

$\displaystyle\frac{r-kQ(t)}{r-kQ(0)} = e^{-kt}$

$\displaystyle r- kQ(t) = (r-kQ(0))e^{-kt}$

$\displaystyle Q(t) = \frac{r-(r-kQ(0))e^{-kt}}{k}$

$\displaystyle Q(t) = \frac{r-re^{-kt}+kQ(0)e^{-kt}}{k}$

$\displaystyle Q(t) = \frac{r-(r-kQ_{0})e^{-kt}}{k}$

So this last equation must be the answer. Thank you all for your time.

#### Ray Vickson Homework Helper Dearly Missed

You can render the equation separable by changing to $P = Q- r/k$, so that $dP/dt +kP = 0$ (because, of course, $dP/dt = dQ/dt$).

#### Orodruin Staff Emeritus Homework Helper Gold Member 2018 Award

The equation was already separable, as demonstrated in #3 by the OP.
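As a quick independent check of the closed-form solution reached above, one can hand the problem to a computer algebra system (a sketch using SymPy; not from the original thread):

```python
import sympy as sp

t, r, k, Q0 = sp.symbols('t r k Q_0', positive=True)
Q = sp.Function('Q')

# dQ/dt = r - k*Q with Q(0) = Q_0
sol = sp.dsolve(sp.Eq(Q(t).diff(t), r - k*Q(t)), Q(t), ics={Q(0): Q0})
print(sp.simplify(sol.rhs))          # equivalent to (r - (r - k*Q_0)*exp(-k*t))/k
print(sp.limit(sol.rhs, t, sp.oo))   # r/k, the steady-state glucose level
```

The limit $r/k$ answers part C: the glucose level approaches the balance point where removal ($kQ$) matches the infusion rate ($r$).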
### ChitreshApte's blog By ChitreshApte, history, 11 days ago, We start with 0 and we have to reach n We can perform 3 operations: 1. Multiply the current number by 2 (cost p) 2. Multiply the current number by 3 (cost q) 3. Increase/Decrease the current number by 1 (cost r) Find the min-cost to reach from 0 to n Constraints: 10 testcases 1 <= n <= 10^18 1 <= p,q,r <= 10^9 Example: n,p,q,r = 11,1,2,8 Order: +1, *3, *2, *2, -1 = 8 + 2 + 1 + 1 + 8 = 20 • +17 » 11 days ago, # | ← Rev. 2 →   0 Maybe generate all natural x and y such that $2^x 3^y \leq n$. There exist only $O(log^2(n))$ such pairs. Get the minimum cost seeing the solution generated by each pair.UPD: Not always the optimal strategy. By seeing the editorial provided in the comments below, it seems we should start backwards and either decrease until zero or decrease until find a multiple of 2 or 3 of the form $k \ floor(\frac{n}{k})$ or $k \ ceil(\frac{n}{k})$. So the solution is always of the form $\sum c_i + \sum 2^{a_j} 3^{b_j}$. • » » 11 days ago, # ^ |   0 The first step makes sense. How to implement the second as we can have many possibilities. • » » » 11 days ago, # ^ | ← Rev. 3 →   0 For each pair x, y you can write $n = 2^x 3^y + z$. The associated cost is $px + qy + rz$. » 11 days ago, # | ← Rev. 3 →   -9 I was mistaken I didn't see the contraints » 11 days ago, # |   -26 I guess you can simply use recursion. Just keeping check of the usage of +1/-1 will not give TLE. Something like this: def cost(curr): if curr == 0: return 0 if curr%2!=0 and curr%3!=0: return cost(curr - 1) + r ans1,ans2 = 10**18,10**18 if curr%2==0: ans1 = cost(curr//2) + p if curr%3==0: ans2 = cost(curr//3) + q return min(ans1,ans2) n = 10**18 p = 1 q = 2 r = 8 print(min(cost(n),cost(n+1)+r)) • » » 11 days ago, # ^ | ← Rev. 4 →   0 Intput -2 100 100 1Correct output - 2Your output — 101I can think of one improvement in your code $cost(n/2)+min(p,(n/2)*r)$ similarly for 3 case. • » » » 11 days ago, # ^ |   -19 def cost(curr): if curr == 0: return 0 if curr%2!=0 and curr%3!=0: return cost(curr - 1) + r ans1,ans2 = 10**18,10**18 if curr%2==0: ans1 = min( cost(curr//2) + p , cost(curr//2) + r*curr//2 ) if curr%3==0: ans2 = min( cost(curr//3) + q , cost(curr//3) + 2*r*curr//3 ) return min(ans1,ans2) n = 102 p = 100 q = 100 r = 1 print(min(cost(n),cost(n+1)+r)) Yeah I think this will work. Didn't consider the case when r < p,q. Also I suppose memoization can help improve the efficiency. • » » » » 11 days ago, # ^ |   +8 Now your code reports 20 for this testcase: n = 61 p = 2 q = 1000000 r = 1 While a lower cost 15 is possible by running 61 to 0 via doing +1 +1 +1 /2 /2 /2 /2 -1 -1 -1 -1 operations. • » » » » » 11 days ago, # ^ |   +19 In fact it can be done with cost 14: $0 \to 1 \to 2 \to 3 \to 4 \to 8 \to 16 \to 32 \to 31 \to 62 \to 61$. • » » » » » » 11 days ago, # ^ |   0 You are right. But being pedantic, I didn't claim that 15 was the minimal possible cost. Just that the last rakeshpanda's solution wasn't good enough and I successfully "hacked" it. • » » » » » 11 days ago, # ^ |   -10 Yeah this solution is horribly wrong. Don't think we can use recursion to solve this problem. • » » 11 days ago, # ^ |   +21 Your code never uses the "decrease by 1" operation? • » » » 11 days ago, # ^ | ← Rev. 3 →   -10 Yeah you are correct. I didn't think of many test cases and also used a wrong assumption. » 11 days ago, # |   +9 This problem was asked in previous Atcoder round. 
i am sharing the link below. Problem Link: Pay to Win. Editorial: this • » » 11 days ago, # ^ | 0 Thanks a lot
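Following the backward approach described in that editorial (work down from $n$; either pay $r$ per unit all the way to zero, or adjust with $\pm 1$ steps to the nearest multiple of 2 or 3 and divide), a memoized recursion along these lines should be fast enough for $n \le 10^{18}$. This is my own sketch, not the editorial's code:

```python
from functools import lru_cache

def min_cost(n, p, q, r):
    """Minimum cost to build n from 0 using *2 (cost p), *3 (cost q), +-1 (cost r),
    computed backwards from n; only about O(log^2 n) distinct states are visited."""
    @lru_cache(maxsize=None)
    def solve(x):
        if x <= 1:
            return x * r                      # from 0, just do x steps of +1
        best = x * r                          # option: +-1 steps all the way down
        for k, mul_cost in ((2, p), (3, q)):
            lo, hi = x // k, -(-x // k)       # floor and ceil of x / k
            best = min(best, solve(lo) + mul_cost + (x - lo * k) * r)
            best = min(best, solve(hi) + mul_cost + (hi * k - x) * r)
        return best
    return solve(n)

print(min_cost(11, 1, 2, 8))        # 20, matching the example in the post
print(min_cost(61, 2, 10**6, 1))    # 14, the value discussed in the comments
```

The reason it suffices to try only the floor and ceiling multiples is that moving one further multiple away costs an extra $k\cdot r$ in adjustment steps while the cost of the smaller subproblem can drop by at most $r$.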
# Linear Regression - Conditions for unbiased estimate

When is the linear regression estimate of $\beta_1$ in the model $$Y= X_1\beta_1 + \delta$$ unbiased, given that the $(x,y)$ pairs are generated with the following model? $$Y= X_1\beta_1 + X_2\beta_2 + \delta$$

We have that the expected value of $\hat{\beta}_1$ is \begin{align*} E[\hat{\beta}_1|X_1,X_2] &= E[(X_1^TX_1)^{-1}X_1^T(X_1\beta_1+X_2\beta_2+\delta)|X_1,X_2]\\ &=\beta_1 + E[(X_1^TX_1)^{-1}X_1^TX_2\beta_2+(X_1^TX_1)^{-1}X_1^T\delta|X_1,X_2]\\ &= \beta_1+E[(X_1^TX_1)^{-1}X_1^TX_2\beta_2 | X_1,X_2] + 0\\ \end{align*}

Now, when is the second term 0 (i.e., $\hat{\beta}_1$ is an unbiased estimator)? I have read that it is 0 if $X_1$ and $X_2$ are independent. But which property allows me to conclude that?

• I don't see any justification for the isolated appearance of $\beta_1$ in the second line of the calculation. In fact, the first line does not appear to be the correct formula for the least squares estimate, because it ignores the presence of $\delta$ in the model. – whuber Feb 21 '16 at 20:04
• Isn't delta the error? – JohnK Feb 21 '16 at 20:05
• Yes, delta is the error – JC1 Feb 21 '16 at 20:10
• @whuber I believe the idea is that while it is the second model that is the correct one, we carelessly estimate the first one, the reduced model. Hence the LS estimate based on the $\mathbf{X}_1$ matrix. The point is to demonstrate the bias of the estimator in that case. Of course, it's better if the OP confirms that. – JohnK Feb 21 '16 at 20:13
• Yes, you're right John – JC1 Feb 21 '16 at 20:19

It is zero when the columns of $\mathbf{X}_1$ are perpendicular to the columns of $\mathbf{X}_2$ so that the column spaces are orthogonal to one another. This means that the variables need to be uncorrelated to one another, which is not quite the same thing as independence. It is in fact a weaker condition, as independence implies zero correlation. If the variables are not uncorrelated, however, and you proceed to estimate $\boldsymbol{\beta}_1$ only, you will end up with a biased estimator. In fact, this is called omitted variable bias.
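A small simulation makes the answer concrete (my own illustration, not part of the question or answer; it fits the misspecified no-intercept model $Y = X_1\beta_1 + \delta$ by least squares while the data come from the two-regressor model):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta1, beta2 = 10_000, 2.0, 3.0

def fit_beta1_only(rho):
    # draw x1, x2 with correlation rho, then generate y from the full model
    x1, x2 = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T
    y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)
    # least squares on x1 alone: (X1'X1)^{-1} X1'y
    return (x1 @ y) / (x1 @ x1)

print(fit_beta1_only(0.0))   # close to beta1 = 2: omitted regressor uncorrelated
print(fit_beta1_only(0.5))   # close to beta1 + beta2*rho = 3.5: omitted variable bias
```

With correlated regressors the estimate concentrates around $\beta_1 + \beta_2\,\mathrm{Cov}(X_1,X_2)/\mathrm{Var}(X_1)$ rather than $\beta_1$, which is the omitted variable bias named at the end of the answer.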
# Integrate limit 0 to two pi (1 divided by 1 plus e to the power sin x) dx

Evaluate: $\displaystyle\int_0^{2\pi}\frac{dx}{1+e^{\sin x}}$
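One standard way to evaluate it (a sketch of the usual $x \mapsto 2\pi - x$ symmetry argument): substituting $x \mapsto 2\pi - x$, under which $\sin x \mapsto -\sin x$, gives
$$I=\int_0^{2\pi}\frac{dx}{1+e^{\sin x}}=\int_0^{2\pi}\frac{dx}{1+e^{-\sin x}}=\int_0^{2\pi}\frac{e^{\sin x}\,dx}{1+e^{\sin x}}.$$
Adding the first and last expressions for $I$ yields $2I=\int_0^{2\pi}dx=2\pi$, so
$$\int_0^{2\pi}\frac{dx}{1+e^{\sin x}}=\pi.$$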
## Complete proof of PAC learning of axis-aligned rectangles I have already read PAC learning of axis-aligned rectangles and understand every other part of the example. From Foundations of Machine Learning by Mohri, 2nd ed., p. 13 (book) or p. 30 (PDF), I am struggling to understand the following sentence of Example 2.4, which is apparently the result of a contrapositive argument: … if $$R(\text{R}_S) > \epsilon$$, then $$\text{R}_S$$ must miss at least one of the regions $$r_i$$, $$i \in [4]$$. i.e., $$i = 1, 2, 3, 4$$. Could someone please explain why this is the case? The way I see it is this: given $$\epsilon > 0$$, if $$R(\text{R}_S) > \epsilon$$, then $$\mathbb{P}_{x \sim D}(\text{R}\setminus \text{R}_S) > \epsilon$$. We also know from this stage of the proof that $$\mathbb{P}_{x \sim D}(\text{R}) > \epsilon$$ as well. Beyond this, I’m not sure how the sentence above is reached.
## Is there a term for the belief that "if it's legal, it's moral"? 44 7 Sometimes I hear arguments that seem to appeal to the fact that something is morally permissible because it is legally permitted. For example: • Abortion is moral because it's legally permitted. Killing two year olds is immoral because we have laws against it. • Putting your money into offshore bank accounts to avoid taxes is perfectly legal, thus I see nothing morally wrong with it. • The law allows me to keep a slave, therefore it is my moral right to do so. This appears to be fallacious, since there are cases where modern people assert that something is immoral that was historically legal (e.g. chattel slavery), thus something can still be legal yet immoral. Nonetheless, this belief seems to persist. Is there a term for this belief that "if it's legal, it's moral", either as a name for a fallacious argument or as a name for a philosophical belief? Let's assume we are talking about laws that are made by people, rather than by nature (i.e. natural law) or by God (i.e. divine law). 4Is it the idea that "if it is legal, then it must be moral in some way", or "there is only one moral duty: follow the laws of society, whatever it may be" ? While leading to the same result, both have very different premises. – armand – 2019-06-17T00:51:23.973 1 Some shorten it to "legal makes moral". But the motto leads to so much nonsense so quickly (especially if no basis is given to how the laws come to be laid down) that no serious thinker defended it. The closest I can think of is legal moralism, but it still bases legal on "society's collective moral judgments". – Conifold – 2019-06-17T05:55:11.220 2I understand the point behind "This appears to be fallacious, since there are cases where modern people assert that something is immoral that was historically legal (e.g. chattel slavery), thus something can still be legal yet immoral.", but that argument also highlights the unstated premise that what is moral and immoral does not change over time or in situations, with which some ethical theories would disagree. – Joshua Taylor – 2019-06-17T11:54:51.733 4Your examples are odd. Personally I’ve never heard your first or third example, although I accept that the third might have existed in the past. Similarly, offshore banking itself is obviously not a problem, but the morally objectionable form of it — hidden offshore bank accounts that you don’t pay taxes on — is also usually illegal. Do you have a better example, or can you justify why you picked these (to my mind) odd examples? – Konrad Rudolph – 2019-06-17T12:35:24.497 1I think you mean "we *are* talking about laws that are made by people". The question does also lead to the problematic question of "what is moral?" God is by His own admission not moral, as amply demonstrated by the Old Testament. "Natural law" tends to rely largely on the "ick" response which for most people has large inconsistencies when you start digging. Humanism is the best we can do, as demonstrated by wholesale rejection of Old Testament rules (when did you last kill someone for wearing mixed fibres, for example?), but tends not to give absolute answers. – Graham – 2019-06-17T13:50:56.630 @KonradRudolph I understood that as a (misinformed) reference to the practice of large companies using what amounts to legal loopholes to technically generate all their income in a country without income taxes, despite doing most of their work in, say, the US. 
Not quite offshore banking, but it hits enough common notes, is generally considered immoral, and is legal to the best of my knowledge. – Fund Monica's Lawsuit – 2019-06-17T14:39:05.647 @Graham Thanks, I've fixed the typo that you pointed out. – Thunderforge – 2019-06-17T14:54:17.380 @KonradRudolph Believe it or not, I've personally heard the first example by people who are in favor of abortion. The third one was made up to more obviously create the point; admittedly I lost the the point about it being examples I've heard. I can remove it if it's distracting. – Thunderforge – 2019-06-17T14:55:53.530 The fallacy is in treating moral as invariant against time and space. In ancient Sparta, it was immoral to let a malformed infant live. In modern USA it is immoral to let it die. Morals and laws of the past does not invalidate ours. – Agent_L – 2019-06-18T12:38:49.960 1Agent_L, I'd suggest replacing "it was immoral" with "it was considered immoral" and similarly for the USA case. – Josiah – 2019-06-18T22:46:35.733 "Putting your money into offshore bank accounts to avoid taxes is perfectly legal" - uh, not in any country which taxes on worldwide income (which is most in the West). – JBentley – 2019-06-19T20:09:42.127 @JBentley The only Western country that taxes worldwide income is the United States. – user76284 – 2019-06-19T23:53:05.817 @user76284 No, you are thinking of countries that tax its citizens who live abroad. I am referring to countries whose resident taxpayers pay tax on income wherever that income is earned (e.g. you live/pay tax in the UK but you receive income from France). Most Western / European countries fit into that category. – JBentley – 2019-06-21T09:30:42.080 50 We are talking about "Appeal to law" fallacy. When following the law is assumed to be the morally correct thing to do, without justification, or when breaking the law is assumed to be the morally wrong thing to do, without justification. It could also be taken as special form of appeal to authority or argumentum ad verecundiam fallacy, in which the authority that which is appealed to is laws of the land. Basically, the argument goes that laws are minimum agreed upon by society moral rules in the land. Therefore, by claiming that behavior is moral thanks to the laws, the person is making an argument in which the ultimate authority of moral behavior is group making and enforcing the laws. For the person who unflinchingly obeys laws despite that behavior not making sense in the situation or not being "good", the term used is Lawful Stupid. Edit: Warning, Lawful Stupid is tvtropes link! It can be an appeal to the law in case it is used in a context where two parties are arguing about a legal and ethical topic. But, in general if you adopt the view that something is legal because there is some moral reason for that, then that's legal interpretivism – SmootQ – 2019-06-17T18:19:05.510 And if we were to argue about something it would be fallacious to say that it is moral since it is legal. Otherwise it is still possible to agree that everything that is legal is based on morality and call ourselves legal interpretivists. So it is okay to think that legal is base on moral, but it is not okay to think that you can use that claim in arguments if the other party does not hold such claim. – SmootQ – 2019-06-17T18:21:38.533 @jo1storm "... appeal to authority ..." -> but, beware the "appeal to authority fallacy fallacy" :-). Which is somewhat covered in the authority you cite :-). 
– Russell McMahon – 2019-06-18T11:07:36.700 yeah, fallacy fallacy is a big problem :) – jo1storm – 2019-06-18T14:08:57.470 @DonBranson - in that moral framework, actions of Nazis would be moral, just like how drawing all the children in the world by god is moral in Christian moral framework. All morality is subjective, your own judgement on what is good or not is of no significance. – Davor – 2019-06-18T15:16:42.840 1@Davor - Interesting, thanks, that helps to understand where you're coming from. In the framework you accept, what would be the process of deciding whether a particular thing would become law or not? I guess, can you pick an example, like slavery, and explore what the process of deciding whether to make a law prohibiting it or not? That is, what factors would be considered and why. It seems we should be moving this to chat now, actually, based on the SO guidance. – Don Branson – 2019-06-18T15:39:07.723 @don-branson might be worth to mention Belgian king Leopold II or English/USA rulers wasting literally millions of lives for their own greed they cover up with some "democracy" that Aristoteles actually warned against (contrary to politia); a reality check moment from Russia with love :) – Michael Shigorin – 2019-06-18T16:53:53.410 @MichaelShigorin It's really so easy to broadly categorize people. For example, I could lump together all those who generally believe that morality is an emergent property of law. Rather, I'm interested to understand Davor as an individual, and see what his ideas are. – Don Branson – 2019-06-18T17:02:27.027 @MichaelShigorin So in those cases you mentioned, "Belgian king Leopold II or English/USA rulers wasting literally millions of lives for their own greed they..." Would you categorize those as immoral? Would you describe the foundation that you would use to declare their behavior as moral or immoral? – Don Branson – 2019-06-18T17:08:14.203 @DonBranson - I'm not personally espousing such a moral framework. Personally, I'm mostly utilitarian leaning. But I'm explaining how such a framework would work for people who do believe in it. In their opinion, morality derives from law, and law is derived either by a democratic process, or by a decree of a ruler (who in monarchy usually claims to have divine right to rule etc.). So, morality is really either following the will of majority, or the will of gods, if you believe in such. (Continued) – Davor – 2019-06-19T09:20:05.303 1@DonBranson - Really, this is how lawful good work in DnD. Paladins are religious and lawful zealots, if law says "gas the jews", than that must be the right thing to do in their view. Plenty of atrocities (in my and your opinion) were committed by people believing themselves right. Even frickin Hitler believed he was removing the dangers to his people. – Davor – 2019-06-19T09:21:24.610 @Davor "I'm not personally espousing such a moral framework. ... But I'm explaining how such a framework would work for people who do believe in it." Okay, thanks, a misunderstanding on my part - I thought you were explaining your views. In any given category, details differ, that's why I was asking. – Don Branson – 2019-06-19T12:54:55.870 @Davor - "...and your opinion..." how do you know? I haven't claimed anything in this discussion, only asked questions to understand your views. – Don Branson – 2019-06-19T12:58:15.003 Comments are not for extended discussion; this conversation has been moved to chat. 
– Geoffrey Thomas – 2019-06-19T13:08:06.250 The website you used CLEARLY STATES the fallacy named was made up by the AUTHOR. This alleged fallacy is not widely used or taught. You should have specified that FACT. – Logikal – 2019-06-19T17:22:11.990 @DonBranson hope you could see the perils of overgeneralizing when faced with a counter-example at least; I don't know what to tell to a person who wonders whether killing millions of people out of greed is moral or immoral, maybe your own life will do you a favour and an exam on that so that you understand with no further explanations. I hope to be an Orthodox Christian thanks to lessons learned from my own life, thanking it long afterwards... and I do consider genocide -- and usually a single murder as well -- immoral, if you ask. – Michael Shigorin – 2019-06-19T19:11:32.280 Friend @MichaelShigorin - there seems to be some confusion, so hopefully I can clarify. I mentioned before that my only interest here was to ask Davor questions to understand his views. That is, to explore the idea of legal=moral, which he later asserted that he doesn't hold. I'm not overgeneralizing in any way, and was very careful to explain that my interest was highly focused and not even remotely generalized. – Don Branson – 2019-06-19T20:43:36.157 @MichaelShigorin You also said, " "I don't know what to tell to a person who wonders whether killing millions of people..." I'm on the same page as you - I have no idea how I would respond to this claim. But, I'm not sure there are people that would assert this. However, people who hold the view that legal=moral need to be able to respond to this, so it's fair to ask. That's what I did with Davor, before he clarified his views. – Don Branson – 2019-06-19T20:46:08.810 @MichaelShigorin The examples you provided of other atrocities also serve just as well as mine to present to those holding the legal=moral view. – Don Branson – 2019-06-19T20:50:51.510 22 I think what you are looking for is called Legal Interpretivism, which, unlike Legal Positivism (which asserts that laws are distinct from morality), asserts that laws are based on morality, and that there is no separation between law and morality, so there must be an interpretation for why such and such is legal or illegal. In which case, the statement if it is legal then there must be moral reason for it to be legal would hold true, only if you consider an interpretivist point of view. Interpretation is a kind of moral processing of these norms. To interpret is to assess the norms constituted by institutional communication and adjust the set in order to make it more attractive in some way That is, to tweak and play with one's understanding of the laws, then interpret those laws in order for them to match some moral preferences, for example : Abortion is legal, and it is totally moral because women are free and have the right to their bodies, and you cannot kill a kid who was never born, so that must be the reason why it is legal. (and if it is illegal, a legal interpretivist would give a moral reason why it is illegal). Third, for interpretivism, the justifying role of principles is fundamental: for any legal right or obligation, some moral principles ultimately explain how it is that institutional and other nonmoral considerations have roles as determinants of the right or obligation. In the order of explanation, morality comes first. 
https://plato.stanford.edu/entries/law-interpretivist/ Of course there are other points which set Legal Interpretivism apart from Legal Positivism and Natural Law Theory...etc. Caveat : Is it an appeal to authority (law) fallacy? It is important to know when we say that such and such is a fallacy. A statement or assertion cannot be a fallacy, if I say : If x is legal then x is Moral , this is a claim, not a fallacy. That is, I claim that such and such is the case, that it matches some state of affairs in the actual world. And when arguing with others, I can use that statement as a premise. And the other party can check whether my argument is valid or not. • Premise 1: For all x, If x is legal then x must be moral • Premise 2:Abortion is legal. • Conclusion : therefore, Abortion must be moral. The other party would not argue about the validity of the argument, that argument is certainly valid and not fallacious. What remains is whether the other party accepts the premises as true. Whether they do or do not accept the first premise is not a fallacy, you believe that the conditional is true and they believe it is false. In either case, the one who accepts the first conditional as true is a Legal Interpretivist. But suppose two people are involved in a serious discussion about the subject, not just formal deductive arguments: • A : Abortion is moral. • B : Can you give me a good inductive or deductive argument as to why you think so? • A : Because the law says so, and you have to accept it. • B : mmm... okay ! Here, this is an informal fallacy (Appeal to the authority of the law), simply because A did not state the reason why they think so. It is a fallacy A did not take much time to formulate an argument and ask whether B agrees with the premises or not, they just presuppose that the fact that Law says so then B must also agree with it, which is an appeal to authority. So is legal interpretivism a form of apologetics, where the goal is to justify laws (whatever they might happen to be) by reference to moral principles? – ruakh – 2019-06-17T18:21:49.647 2In general, philosophical positions are claims, assertions or sets of claims and assertions. That can be used as premises if the two arguing parties agree with those premises. Otherwise it would be useless to argue with a legal positivist that abortion is legal because it is moral since a legal positivist would not accept that premise. So it is not for apologetics to use against those who do not agree. It is to use in the philosophy of law and not in general arguments. I would be a fool if I were to use an interpretivist argument against someone who is not an interpretivist. – SmootQ – 2019-06-17T18:30:53.773 I added a caveat to the answer : is it a fallacy of appeal to authority or not? – SmootQ – 2019-06-20T09:40:20.697 3 In a democracy, I don't think this is an appeal to authority anymore, nor is it entirely circular. It is clear that laws are based upon the shared moral sentiments of the population, if we are electing our legislators and even our judges. They have authority, but it is our authority. We may not feel represented by the composite, here, in the same way that the current form of the English language may not be the language we would choose to have, and we might disown, disparage, or ignore parts. 
But despite the unequal representation, various power relations and affordances to our own weaknesses and external pressures that any social compositing process involves, the rules our society in general enforces are our composite moral sentiments. We control them and they are built out of our coordinated decisions. We change them by winning or losing arguments about what is right, and they have no other source material, absent some outside antidemocratic force. (Capitalism does not really qualify, it is hardly 'outside'). The dark forces that shape our politics and form the worst aspects of our laws are just parts of our morality that we would rather not discuss. The analogy with language as a similar form of social construct holds: For instance, we have a language that chooses given forms of sex which are presumed to be unwanted as the primary metaphor for advantage-taking, lack of consideration, and bad luck. And that does represent a dominant culturally shared opinion of the people who take the roles alluded to in those metaphors. I may find sucking and being screwed to be glorious opportunities worth seeking after, and so might a whole lot of women, but we in the composite, as a culture do not wholly approve. We can then claim the culture really agrees with us, because we are on the more 'speakable' side of the issue right now. But when we do, we know we are lying. Otherwise, our shared aversion would change the language over time. Then isn't this sentiment just the fallacy of affirming the consequent? Laws represent our shared sense of order, including large parts of our composite morality. So to take them as determining morality is following the implication of the definitions involved in the wrong direction. However, one can follow any induction backward correctly in the negative, and there is neither a philosophical position, nor a fallacy involved. So it really depends upon whether you are arguing a necessary or a sufficient position, and where the negations of principles fall. You cannot determine that an act is moral from the law, but you can deduce that many are morally compromised. You can deduce, for instance, that unexpectedly violating the laws protecting other people, based on your own personal decisions, is at least partly immoral. People have entered into the institution of citizenship (voluntarily or otherwise) primarily for the purpose of stability, and you are depriving them of that stability. Unless you forfeit all the associated privileges, you are acting destructively and in bad faith. (Exceptions apply, but the argument, as far as it goes without falling prey to other issues, has some real moral force.) Appeals to authority and other circular arguments do not have this feature of having real applicability in one direction, but not in the other. "[In a democracy] it is clear that laws are based upon the shared moral sentiments of the population, if we are electing our legislators" - I don't agree that this is clear. In a democracy, laws are based on the shared desires of the population, not the shared moral sentiments. Perhaps some people desire what they think is moral, and perhaps others desire what they don't think is moral. The point is that morality is not a prerequisite to democratic participation, and a democratic outcome says nothing about the morality of that outcome. – JBentley – 2019-06-19T20:15:08.977 1That seems a bit idealistic, to be charitable. Democracies are not perfect (and are not actually democracies, most of the time) and laws are not morals. 
Some laws have moral underpinnings, but it's not necessarily the case. And if you've followed US politics for any length of time you'd be aware of many laws that the majority hate, or that the majority supported but were removed. – Harabeck – 2019-06-19T20:17:29.487 This is surely not idealism. It presents a very dark view of most people's real morality, built mostly of lies. An interactive process just does not let us lie as readily. I have edited this argument in, to avoid having this discussion a third time. – None – 2019-06-21T00:45:42.673 As many drug-dealers are now discovering, you may disapprove of a law in principle, but in fact rely upon it in every way, and when it really comes time to change things, you may find that you really approved all along. – None – 2019-06-21T01:42:30.560 2 As earlier posters have said, it can be interpreted as an appeal to authority (the law). Fundamentally, it is a form of circular reasoning based on the premise that all laws are moral: 1. The law is moral 2. The law allows for X 3. X is moral I take your point, but your current argument is just a syllogism and not actually circular reasoning. You're assuming that your own axiom is false, but that's not necessarily so. – lly – 2019-06-19T18:58:56.087 2 It seems that this is too varied a position to pin a single label on. One label would be "conventional" or "Law and order" morality, as used in Kohlberg's stages of moral development. It's worth noting that this is not a prescriptive theory by an ethicist about how people should think or behave, but a descriptivist theory by a psychologist about how they do. In summary, the claim is that most people defer moral reasoning to some sort of outside societal consensus, one example of which could be a codified law. Again reaching away from philosophy and toward sociologists, Haidt et al's Moral foundations theory suggests that most people's moral reasoning rests on some subset of six abstract principles: Care, Fairness, Loyalty, Authority, Sanctity and Liberty. Without looking into why these abstract principles are considered foundations for morality, some of these lend some support to a "legal implies moral" claim: liberty, authority, and fairness most obviously. A "Liberty" foundation echoes strongly with the legal principle "Everything which is not forbidden is allowed." That is, human freedom is respected by default, and there is a deep suspicion of any claims that would curtail it. "Authority" does not have quite the same sense as in "appeal to authority", in which the "authority" is assumed to know better. It is more a morality of deference. The authority defines better. This does not actually need the source of law to be unchanging or necessarily even "right". If your law-giver declares a hose-pipe ban, it would be subversive to water your garden with a hose pipe. If they then lift that ban, it would not. "Fairness" perhaps wants the most exposition: the argument would be that the law provides fairness by defining the constraints for everyone. Interestingly, this also allows for a model that there could be a changeable or bad law, which individual morality is still bound or released by. For example, I might believe in the abstract that a society which curtailed advertising would be better off; I might even want to push for laws restricting advertising. At the same time as a business owner in a society that does not curtail advertising, I might feel released to advertise as hard as I can, so as to compete on a level field with the rest of my industry. 
For a second example, consider the many pro-gun arguments (mostly in the USA) that have the basic form "If the bad guys have guns, the good guys should too." Moving away from Haidt, perhaps a rule consequentialist could decide that "Just follow the law of the land" is a good rule for the typical non-expert to practically maximise utility. This almost swings back to Kohlberg's conventionalism, but it's actually a higher level. Such a person is not just deferring to society because it's never occurred to them to think for themselves. Instead they have explored broader principles, recognised their own human fallibility, and chosen to defer where that is likely better than trying to figure things out for themselves. If anything, such a person is more likely to see codified law as something thought through by relative experts, and less likely to see accidental consensus about etiquette or such as binding. There are other ways that one could arrive at "legal implies morally permissible" from other moral frameworks. My purpose here was just to illustrate some of the diversity. The question that remains is "is it a fallacy?" That is going to depend on what is actually being argued. Most of my suggested mechanisms would make it understandable that someone would go to it as a useful heuristic for what they should do in a moment. As with all heuristics, it would still remain defeasible. To reiterate, Kohlberg's theory is descriptive rather than normative, but it does leave room for progressing to the next stage and reasoning more abstractly. Haidt's foundations are mutually intertwined, so a liberty foundation might nevertheless be trumped by a care foundation. Rule consequentialism could maintain deference to society's experts in general, while following one's own rules on some issues which one has taken the time to evaluate. It would probably be suboptimal to take the principle for individual decisions in which there is sufficient time to work through other morally relevant considerations. It is almost† certainly fallacious to take the principle not for individual decisions but for guiding what society's laws should be. That would indeed be circular reasoning. † almost, because time delays can break the circle. Precedent-based legal systems work with this and are not circular: instead of "it's legal because it's legal" they say "it's (il)legal today because it was (il)legal yesterday". They tend to lean heavily on notions of "fairness" too: it would be unfair if the same action was punished in one case and not the other. But sane legal systems will have some form of mind-change release valve to avoid the obvious sunk cost fallacy, and that release valve will require moral reasoning other than "it was legal yesterday." 2 From our western perspective, it certainly is an "Appeal to Law" fallacy and @jo1storm's answer deserves all the upvotes. In the West, with the notable partial exceptions of Machiavelli and Hobbes, the thoughtful kids have pretty much assumed―at least since Euthyphro came out―that true morality must be prior and superior to any lawgiver up to and including the gods. Anyone attempting to end a moral argument (in good faith) on an appeal to divine authority can be run through Socrates's questions until they realize their mistake; anyone attempting the same tack with an appeal to human lawgivers is going to hit the rocks even sooner.
Since no one else has mentioned it yet, though, yes, there is a philosophical system that upholds the will of rulers as the actual criterion of morality. It's Legalism (法家, Fǎjiā), the philosophical school most associated with Han Fei, his eponymous text, and the First Emperor. It's more nuanced and particularly Chinese in the details, but the short version is that the emperor gets what he wants and the proper thing for any subject to do in any situation is to obey. The Han subsequently overlaid this with a return to dynastic feudalism and an official endorsement of Confucianism, which came with a whole host of obligations and a higher morality that establishes who's ruling justly and who's a tyrant. In practice, any time the scholars tried to uphold those ideals on a serious point (e.g. Zhu Di's usurpation of his young nephew during the early Ming Dynasty), the scholars and everyone they knew were tortured and/or executed until everyone fell in line behind the emperor's will. What you described is still an appeal to authority. It is not a philosophical system. It expresses that whoever is in charge must be correct or else . . . – Logikal – 2019-06-19T20:36:40.267 It's not an appeal to authority in the conventional sense. As described, it's perhaps more Argumentum ad baculum (https://en.wikipedia.org/wiki/Argumentum_ad_baculum) – Josiah – 2019-06-19T20:55:43.803 @Logikal It certainly is a philosophical system. You may very much dislike its premises and results, but that doesn't make Chinese philosophy "failed" Western thought... and you should be aware of the very unpleasant history associated with people whose thought did tend that way... – lly – 2019-06-21T14:21:38.127 @Logikal Comparing others to animals and claiming that they are the ones misunderstanding and misusing terms in a "common" way is preternaturally offensive, regardless of the preface you attach to your words. Beyond which, you remain both mistaken and highly blinkered: legalism remains an actual ethical system, not a logical or conceptual error of Western philosophy. – lly – 2019-06-21T16:04:32.313
# Algebra, Topology and the Grothendieck-Teichmüller group
August 28, 2022 to September 2, 2022
Europe/Zurich timezone
## Graph complexes, operadic mapping spaces and embedding calculus - a survey
Sep 1, 2022, 5:00 PM
50m
### Speaker
Benoit Fresse (Lille)
### Description
I propose to give an account of results of a collaboration with Victor Turchin and Thomas Willwacher about rational models of operads and their applications to the study of the rational homotopy type of embedding spaces. In a preliminary part, I will give a brief review of the rational homotopy theory of operads. Then I will explain a graph complex description of the rational homotopy of mapping spaces of $E_n$-operads, and applications of results of the Goodwillie-Weiss calculus of embeddings to check that this computation gives a description of a delooping of embedding spaces of Euclidean spaces. If time permits, I will also explain a generalization of our constructions for the computation of the rational homotopy of the embedding spaces of manifolds into Euclidean spaces. The homotopy automorphism spaces of $E_n$-operads represent generalizations of the Grothendieck-Teichmüller group, which corresponds to the case $n=2$. These spaces have a description in terms of graph complexes too, and another option (depending on the interests of the audience) is to explain this result in detail, giving in particular some precision on the computation of the monoid structure associated to these spaces.
### Presentation materials
There are no materials yet.
We have collected for you the most relevant information on Quark Error 10057, as well as possible solutions to this problem. Take a look at the links provided and find the solution that works. Other people have encountered Quark Error 10057 before you, so use the ready-made solutions.
### How to Find the Meaning of QuarkXPress Server Error Codes ...
https://support.quark.com/en/support/solutions/articles/19000057772-how-to-find-the-meaning-of-quarkxpress-server-error-codes
Yes, there. You can find QXPS Error code lists and meanings in the Web Integration Guide available in the Documents folder of the QuarkXPress Server folder. Each ...
### Error 10003: ''This server does not ... - Quark Software Inc.
https://support.quark.com/en/support/solutions/articles/19000057673-error-10003-this-server-does-not-allow-remote-document-changes-
Error: Invalid Username or Password when the user tries to login with a domain account in Quark Publishing Platform. Error: ''This document cannot be opened by this version of QuarkXPress [16]'' in QuarkXPress 10.
### Vulnerability Summary for the Week of August 26, 2019 CISA
https://us-cert.cisa.gov/ncas/bulletins/sb19-245
comelz -- quark: comelz Quark before 2019-03-26 allows directory traversal to locations outside of the project directory. 2019-08-23: 5.0: CVE-2019-15520 MISC: cookie_project -- cookie: An issue was discovered in the cookie crate before 0.7.6 for Rust. Large integers in the Max-Age of a cookie cause a panic. 2019-08-26: 5.0: CVE-2017-18589 MISC ...
### Solved: Quark file locking Error Code -54 unable to save ...
https://www.experts-exchange.com/questions/22908483/Quark-file-locking-Error-Code-54-unable-to-save-files.html
When he tries to save again, Quark has deleted the file and replaced it with one with a file name like QXP11557875, so he has to save the file again under its original name. This has been happening for the past couple of weeks but it is now happening almost ... Reviews: 10
### Affinity Publisher - Page 4 - Affinity on Desktop ...
https://forum.affinity.serif.com/index.php?/topic/10057-affinity-publisher/page/4/
### (PDF) Observability of Light Charged Higgs Decay to Muon ...
https://www.researchgate.net/publication/51940146_Observability_of_Light_Charged_Higgs_Decay_to_Muon_in_Top_Quark_PairEvents_at_LHC
Sep 25, 2011 · The signal process is the top quark pair production with one of the top quarks decaying to a charged Higgs (non-SM anomalous top quark decay) and the other decaying to a W boson which is assumed ...
### (PDF) A like-sign dimuon charge asymmetry at Tevatron ...
https://www.researchgate.net/publication/47626989_A_like-sign_dimuon_charge_asymmetry_at_Tevatron_induced_by_the_anomaloustop_quark_couplings
Oct 29, 2010 · The cross section $$\sigma _{t\bar t}$$ for the production of $$t\bar t$$ quark pairs at the Tevatron and the forward-backward asymmetry $$A_{FB}^{p\bar p}$$ in this process were calculated and ...
### (PDF) Quark-antiquark states and their radiative ...
... the description of the composite quark–antiquark systems means the reconstruction of quark interaction. To this aim, one needs the information on the wave function of a level, and the ...
### Framework OWASP Testing Guide / Code / Diff of /OWASP-SM ...
https://sourceforge.net/p/frameworkowasptestingguide/code-0/1/tree/OWASP-SM/ZAP_2.2.2/dirbuster/directory-list-2.3-small.txt?diff=
Framework OWASP Testing Guide Framework with tools for OWASP Testing Guide v3 Brought to you by: wushubr
### (PDF) The semileptonic form factors of B and D mesons in ...
https://www.researchgate.net/publication/225566107_The_semileptonic_form_factors_of_B_and_D_mesons_in_the_Quark_Confinement_Model
... quark mass; in fact, the quark mass and spin decouple from the dynamics of the decay ...
## Quark Error 10057 Fixes & Solutions
We are confident that the above descriptions of Quark Error 10057 and how to fix it will be useful to you. If you have another solution to Quark Error 10057 or some notes on the existing ways to solve it, then please drop us an email.
# When is the image of a non Lebesgue-measurable set measurable?
Hi MathOverflow, I'm not sure if it makes sense to ask this question in the general setting, but: Are there any necessary conditions on a function $f$ such that if $N$ is not Lebesgue measurable, $f(N)$ is Lebesgue measurable? I am working on a problem, which seems to suggest that there are no 'trivial' conditions on the function (in particular, $f$ can be injective, which is a surprise to me). The problem is as follows: Pick a non Lebesgue measurable set $N \subset (0,1) \subset \mathbb{R}$ and write $x \in (0,1)$ in an infinite binary expansion, i.e. $x = 0.x_1x_2...$ with $x_i = 0$ or $1$ and infinitely many $x_i$'s equal to $1$ (this is ok, since $0.1 = 0.0111...$). Now, take $f(x) = 2 \sum_{i=1}^{\infty} x_i 3^{-i}$. Then $f(N)$ is Lebesgue measurable, since it maps any set to a Cantor-like set (of measure zero) (thanks to Tapio Rajala for the easy solution). $f$ just takes $x$ to a base $3$ representation with no $1$'s in the expansion, thus is clearly injective. It sort of "spreads out" the elements of set $N$. Also, clearly $f(N) \subset (0,1)$. The thing that bothers me is that this seems to suggest that this $f$ is able to transform any non-measurable set into a measurable one, without really "losing information" about it (because it is injective), which just sounds too good to be true. I tried to look for sources on functions applied to non-Lebesgue measurable sets, but failed to find anything, so if anyone could guide me to some I would highly appreciate it too. Thanks.
- Image under what kind of map? Why can't I just take my favourite non-measurable set, find a measurable set of the same cardinality, and map one to the other? – Yemon Choi Dec 13 '11 at 8:36
In your definition of $f$ the $r$ is $i$?. This function maps everything to a set of measure zero (a Cantor set) and therefore it has the desired property. – Tapio Rajala Dec 13 '11 at 8:57
@Tapio Rajala: yes, sorry. edited. And thanks for the solution. @Yemon Choi: uhm, I guess you can, but I don't see how that answers the question since you're not saying anything about the actual map? – Ignas Dec 13 '11 at 9:13
@Ignas: I was trying to suggest that you make the question more precise, and in particular specify: what is the domain and range of your function $f$? Is it supposed to send every non-measurable set to a measurable one? Is it supposed to admit some kind of explicit description? As it stands, I find it hard to work out what the precise question actually is – Yemon Choi Dec 13 '11 at 9:19
@Yemon: Well, my question is precisely what are the minimal restrictions that we need to impose on $f$ such that it maps an arbitrary non-measurable set to a measurable set (and is not something stupid like a constant function). It doesn't have to map every non-measurable set, but if one can find necessary conditions for that, it would be interesting too. So yes, my question is quite open, but I intended it to be so. See Tapio's answer for example. – Ignas Dec 13 '11 at 9:29
My guess is that the characterization is the following: A function $f$ maps every non-measurable set into a measurable set if and only if the domain or the image of $f$ has measure zero. One direction is trivial. For the other direction assume that the image of $f$ has positive measure. Take a non-measurable subset $N$ of the image and a measurable subset $M$ of the image so that
1. $N$ and $M$ are well separated.
2. $f^{-1}(N)$ and $f^{-1}(M)$ are well separated.
3. $f^{-1}(M)$ has positive measure.
Take a non-measurable subset $K$ of $f^{-1}(M)$ and consider $K \cup f^{-1}(N)$. This set is non-measurable and so is its image under $f$. Are there more mistakes hidden somewhere?
- It seems you need additional hypotheses on the domain of $f$. After all, we can map a measure zero set continuously to a set with positive measure, and such a function will vacuously have the property that it maps every non-measurable set to a measurable set, but fail your criterion. – Joel David Hamkins Dec 13 '11 at 10:23
Thank you Joel, I modified the condition to take this into account. – Tapio Rajala Dec 13 '11 at 10:40
I have deleted my second objection, because it was incorrect. – Joel David Hamkins Dec 13 '11 at 15:27
Following Joel's example: I have deleted my comments on the second objection, because they were correct. :) – Tapio Rajala Dec 13 '11 at 15:30
Suppose $A \subset I = [0,1]$ is Lebesgue non-measurable, $B \subseteq I$ Lebesgue measurable, and $f: I \to I$ is a measurable function with $A = f^{-1}(B)$. By inner regularity, $B$ is the disjoint union of sets $C$ and $D$ where $C$ is an $F_\sigma$ and $D$ has measure 0. Then $A$ is the disjoint union of $f^{-1}(C)$, which is Lebesgue measurable, and $f^{-1}(D)$. Thus the only way an injective measurable function can map a nonmeasurable set onto a measurable one is that it maps some nonmeasurable subset to a set of measure 0.
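As a side note for readers, here is a brief sketch (not from the original thread) of why the particular $f$ above sends every subset of $(0,1)$, measurable or not, to a measurable set; it uses only the middle-thirds Cantor set $C$ and the completeness of Lebesgue measure:
$$f(x)=2\sum_{i=1}^{\infty}x_i\,3^{-i}\in C:=\Big\{\sum_{i\ge 1}a_i\,3^{-i}\,:\,a_i\in\{0,2\}\Big\},\qquad \lambda(C)=\lim_{n\to\infty}\left(\tfrac{2}{3}\right)^{n}=0.$$
Hence $f(N)\subset C$ for every $N\subset(0,1)$, so $\lambda^{*}(f(N))=0$, and by completeness of Lebesgue measure $f(N)$ is measurable with measure zero.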
Powerful Order
Stage: 3 and 4 Short Challenge Level:
List the following three numbers in increasing order: $$2^{25}, 8^8, 3^{11}$$
If you liked this problem, here is an NRICH task which challenges you to use similar mathematical ideas. This problem is taken from the UKMT Mathematical Challenges.
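One possible way to order them, sketched here for illustration rather than as the official UKMT solution, is to rewrite the powers so they can be compared directly:
$$8^{8}=\left(2^{3}\right)^{8}=2^{24}<2^{25},\qquad 3^{11}=177\,147<16\,777\,216=2^{24},$$
so the increasing order is $3^{11}<8^{8}<2^{25}$.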
Chemistry and Chemical Reactivity (9th Edition)
a) Both these carbons are $sp^2$ hybridized because each of them forms a double bond.
b) Since all carbons are $sp^2$ hybridized, all these angles are approximately 120°.
c) Yes, the -COH group can be either above or below the axis of the C=C bond.
# Amenable Banach algebras — homological characterization
Recall from Theorem VII.2.19 of Helemskii's monograph "The homology of Banach and Topological Algebras" that amenability of a Banach algebra $A$ is equivalent to any of the conditions below: (i) derivations into dual modules are inner, (ii) $\mathcal{H}_1(A,X)=0$ and $\mathcal{H}_0(A,X)$ is Hausdorff for all $A$-bimodules $X$. The condition that $\mathcal{H}_0(A,X)$ is Hausdorff means that the image of the map $d_0\colon A\widehat{\otimes}X\to X,\,\,\,\,d_0(a\otimes x):=a\cdot x-x\cdot a$ is closed. Now, when proving $(i)\Rightarrow(ii)$ the closedness of $\operatorname{im}d_0$ follows from a general fact (see Lemma 0.5.1 in the same monograph of Helemskii). My question is: is it possible to show closedness of $\operatorname{im}d_0$ directly? The reason for wanting a direct proof is that I am working beyond the Banach algebra category, where -- in particular -- the Open Mapping Theorem is not available.
• What category are you using? If I recall correctly, going from "$H^1(A,X^*)=0$" to "$H_0(A,X)$ is Hausdorff" uses duality theory of Banach spaces, but not the open mapping theorem. However, it has been a long time since I worked through the details from first principles – Yemon Choi Jul 12 '17 at 3:41
• If I guess correctly, you're thinking about the relation $X$ - flat $\Leftrightarrow$ $X^*$ - injective. In the category I am working in (DF-spaces) duals of DF-spaces are not DF. Therefore I turned my attention to Lemma 0.5.1. But this Lemma needs the OMT. Therefore my "last chance" is a direct proof. That's at least all I can figure out. – Krzysztof Jul 13 '17 at 8:33
• Does projecteuclid.org/euclid.hha/1251832561 help at all? – Yemon Choi Jul 13 '17 at 18:07
• I know this paper by Pirkovskii - it is mainly devoted to Fréchet algebras. The crucial fact (from the viewpoint of my category) he is using is the duality between exact sequences, i.e. a short sequence of Fréchet spaces is exact iff its dual sequence is exact. Although DF-spaces are dual to Fréchet ones, the above fact is not true in this category. Of course I can take the bidual sequence of the initial one, but a DF-space is not in general a subspace of its second dual. – Krzysztof Jul 18 '17 at 12:03
# Most Efficient way to read a Settings Configuration File
I have been working on a game for quite a while, and I am using Ogre3D for the rendering engine. It is getting to the point where I need to move adjustable settings to a configuration file, such as video settings/options, player keybindings, etc. I am using RapidXML for parsing and loading my scenes, but I am not sure this is the best way to go about doing configurations. As a long time fan of Valve games, I know there's just a long list of settings, basically no grouping, just a list. Whereas UT games do something like
[VideoOptions]
...
[GameSettings]
...
• XML is not the answer to everything. Actually, it's rarely the correct answer whatever the question is. – o0'. Sep 27 '10 at 13:39
• XML is like violence: if it doesn't work, use more. :) – dash-tom-bang Sep 27 '10 at 19:27
This probably isn't the optimal solution, but I'm currently using Ogre's ConfigFile class (here is the API ref for it) for simple gfx/control/whatever config settings. Not a terribly robust solution, but it's worked fine for simple stuff. Its format is similar to the UT example you gave, for instance:
[GraphicsSettings]
Resolution=1024 x 768
Antialiasing=2
[ControlSettings]
InvertLook=No
Sensitivity=2.0
...
• The thing with this is that it's super easy to parse, so it's easy to write code to read it. It's also easy to change settings, and you don't need to match tags or anything like that. – dash-tom-bang Sep 27 '10 at 19:29
You may also be interested in Boost.PropertyTree. It lets you access properties in a hierarchical manner, e.g. settings.get<int>("Graphics.Resolution.Width"). There are parsers/loaders available for XML, JSON, and more. Another option is to embed a simple language like Lua and expose the configuration values you want to Lua. Then your configuration file becomes a series of assignments, and can be as complex or as simple as you like. I probably wouldn't do this just for loading configuration files, but scripting languages do come in very handy for many aspects of game development so it's worth considering.
• This is how the excellent SciTE editor does its configuration, and while documentation on the settings could be better it's pretty convenient to be able to edit in such a "computer friendly" format. – dash-tom-bang Sep 29 '10 at 0:38
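To make the Boost.PropertyTree suggestion above concrete, here is a minimal C++ sketch. The file name settings.ini is made up for illustration, and the section/key names simply mirror the example config quoted in the first answer; this is not code from any of the posters.

```cpp
#include <iostream>
#include <string>

#include <boost/property_tree/ptree.hpp>
#include <boost/property_tree/ini_parser.hpp>

int main() {
    namespace pt = boost::property_tree;
    pt::ptree settings;

    try {
        // Parse an INI-style file with [Section] headers and Key=Value lines.
        pt::read_ini("settings.ini", settings);
    } catch (const pt::ini_parser_error& e) {
        std::cerr << "Could not read settings: " << e.what() << '\n';
        return 1;
    }

    // Hierarchical access uses "Section.Key" paths; the second argument is a
    // default returned when the key is missing, so partial files still load.
    const std::string resolution = settings.get<std::string>("GraphicsSettings.Resolution", "1024 x 768");
    const int antialiasing      = settings.get<int>("GraphicsSettings.Antialiasing", 0);
    const std::string invert    = settings.get<std::string>("ControlSettings.InvertLook", "No");
    const double sensitivity    = settings.get<double>("ControlSettings.Sensitivity", 1.0);

    std::cout << "Resolution: "   << resolution
              << ", AA: "          << antialiasing
              << ", InvertLook: "  << invert
              << ", Sensitivity: " << sensitivity << '\n';

    // Modified values can be written back out in the same INI format.
    settings.put("GraphicsSettings.Antialiasing", 4);
    pt::write_ini("settings.ini", settings);
    return 0;
}
```

One appeal of this approach is that the same property tree can also be loaded from or saved to XML or JSON via the corresponding read_xml/read_json parsers, so the on-disk format can change later without touching the access code.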
## Current
Part A - Compute the equivalent resistance of the network in the figure (Part A figure). The battery has negligible internal resistance.
Part B - Find the current in the 1.00 Ω resistor.
Part C - Find the current in the 3.00 Ω resistor.
Part D - Find the current in the 7.00 Ω resistor.
Part E - Find the current in the 5.00 Ω resistor.
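The referenced figure is not reproduced here, so the specific reduction cannot be carried out, but as a general sketch the standard method is to collapse series and parallel groups into a single equivalent resistance and then work back outward with Ohm's law to find the individual branch currents:
$$R_{\mathrm{series}}=R_{1}+R_{2}+\cdots,\qquad \frac{1}{R_{\mathrm{parallel}}}=\frac{1}{R_{1}}+\frac{1}{R_{2}}+\cdots,\qquad I=\frac{V}{R}.$$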
# Chirality of buta-1,2-diene
My teacher says buta-1,2-diene is symmetric and hence optically inactive, but I think it's the opposite, because allenes with an even number of double bonds have no plane of symmetry and are therefore optically active. Which answer is correct?
Let's see if a picture can help. Newman projections can be handy in analyzing the stereochemistry of allenes. On the left is a Newman projection of methyl allene (buta-1,2-diene). There is a sigma plane of symmetry that is perpendicular to the screen and contains the 3 allenic carbons plus the methyl group. Any molecule with a plane of symmetry is achiral. On the other hand, look at the Newman projection to the right, 1,3-dimethylallene (penta-2,3-diene). There is no plane of symmetry in this case, only a $\ce{C_2}$ axis. Draw the mirror image of the pictured allene and you will find that it is not superimposable on its mirror image. 1,3-dimethylallene is a chiral molecule.
2,3-butadiene is not an IUPAC name. It should be 1,2-butadiene. I think you are talking about 1,3-butadiene (natural rubber). In any case, neither of the molecules has a chiral center; therefore, no optical activity.
• Buta-1,3-diene is not an allene, so I think the OP is actually talking about buta-1,2-diene. Apr 28 '14 at 7:47
• Yes, exactly martin Apr 28 '14 at 10:01
• OK, my mistake. But there is no chiral center since C1 has 2 hydrogens. 2,3-hexadiene would be chiral. – LDC3 Apr 28 '14 at 13:37
• As is penta-2,3-diene (1,3-dimethylallene). – ron Apr 28 '14 at 14:40
• Also, natural rubber is comprised of isoprenes (i.e. isopentadienes), not butadienes. Apr 21 '15 at 15:21
# Magnetic Resonance Spectroscopy-based Metabolomic Biomarkers for Typing, Staging, and Survival Estimation of Early-Stage Human Lung Cancer
## Abstract
Low-dose CT has shown promise in detecting early stage lung cancer. However, concerns about the adverse health effects of radiation and high cost prevent its use as a population-wide screening tool. Effective and feasible screening methods to triage suspicious patients to CT are needed. We investigated human lung cancer metabolomics from 93 paired tissue-serum samples with magnetic resonance spectroscopy and identified tissue and serum metabolomic markers that can differentiate cancer types and stages. Most interestingly, we identified serum metabolomic profiles that can predict patient overall survival for all cases (p = 0.0076), and more importantly for Stage I cases alone (n = 58, p = 0.0100), a prediction which is significant for treatment strategies but currently cannot be achieved by any clinical method. Prolonged survival is associated with relative overexpression of glutamine, valine, and glycine, and relative suppression of glutamate and lipids in serum.
## Introduction
Despite extensive research over the past decade to improve lung cancer (LuCa) detection and treatment, LuCa presents persistent clinical challenges. The leading cause (>26%) of cancer death in the United States for both men and women, LuCa results in the number of deaths equivalent to the combination of the next four highest causes of cancer death: breast, prostate, colon, and pancreatic. LuCa is usually diagnosed at late stages, with >70% of patients dying from the disease; the ratios for breast and prostate cancer are about 16% and 17%, respectively1. This reality is largely attributable to the lack of a widespread, early screening test for LuCa. In its absence, the vast majority of patients seeking medical advice for symptoms of LuCa harbour locally advanced or metastatic disease. At present, advanced radiological examinations, especially low-dose spiral CT (LDCT), can detect small LuCa lesions2,3,4,5. The many reports published by the National Lung Screening Trial (NLST) evaluating LDCT efficacy nonetheless question its cost-effectiveness as a screening tool6 and raise the issue of potential over-diagnosis7 and the impact of screening on quality of life8. Recently, the American Thoracic Society and American College of Chest Physicians published a joint official policy statement to guide the safe, effective development of LDCT screening programs9. Nevertheless, implementation of LDCT as a LuCa screening tool would entail considerable logistic and scientific concerns, ranging from high cost10,11,12,13 to, most importantly, possible radiation hazard for screened populations14,15,16,17. Thus, while LDCT enables detection of small lung nodules, its implementation as a screening tool for the general population is not feasible. Therefore, novel, low-cost, and safe LuCa tests that can prompt patients with suspicious screening results to seek further radiological evaluation are needed.
Current investigations of circulating blood biomarkers to develop LuCa screening methods are based on the fundamental physiology fact that all cardiac output passes through the lungs, with 20% of blood in the lungs at any given time. Thus, LuCa-associated molecules would likely be carried from lung lesions into the circulating blood. Alternatively, since metabolites in the circulating blood provide necessary nutrients for all biological and pathological processes throughout the entire body, consumption of specific metabolites by LuCa to sustain and enhance malignant processes may be measurable in blood. As a result, metabolites produced by or consumed by lung cancer lesions could serve as biomarkers. Previously, studies of blood LuCa biomarkers have reported LuCa-associated microRNA18,19,20 and RNA fragments21 detected by quantitative real-time-PCR and mass spectrometry as promising markers. Inspired by the achievements in genomics, proteomics, and transcriptomics, cancer metabolomics, which reflect the functional read-outs of these upstream biological processes, can yield measures of global metabolite profiles associated with various metabolic pathways influenced by oncological developments. To evaluate LuCa, particularly low grade LuCa and identify potential LuCa biomarkers, we investigated paired tissue and serum samples obtained from the same patients. These analyses were carried out with the special technique of high resolution magic angle spinning magnetic resonance spectroscopy (HRMAS MRS), which we developed for metabolomic analysis of intact biological tissue22,23 and complex fluids. This technique allows subsequent histopathology analyses of the same tissue samples, enabling spectroscopic data to be interpreted according to tissue pathologies. Since HRMAS MRS can also measure the complex biofluid of serum and obtain spectra of high resolution, tissue and serum metabolomic measures can be correlated in order to investigate the associations between metabolites of potential LuCa biomarkers measured from paired tissue and serum samples. ## Results This study included tissue and serum pairs from 93 patients of the two major types of non-small cell LuCa (NSCLC): squamous cell carcinoma (SCC, n = 42, F = 15, M = 27, age = 67.7 ± 8.3) and adenocarcinoma (Adeno, n = 51, F = 26, M = 25, age = 63.7 ± 8.7), as well as 29 serum samples from healthy controls (Ctrl, F = 10, M = 19, Age = 66.8 ± 12.6). The patients were recruited as previously described from an ongoing study of lung cancer survival24. We selected tumour samples that had at least 70 percent tumour cellularity, with histology of tissue samples confirmed by a pathologist after MRS analyses. With a specific emphasis on studying early stage LuCa, the studied patient population included more low grade Stage I (n = 58, SCC = 27, Adeno = 31, F = 24, M = 34) cases than the more advanced stages (II, III, and IV, n = 35, F = 17, M = 18) combined (Supplementary Table S1 lists patient clinical and demographic information). We further randomly sub-divided Stage I (n = 58) and control (n = 29) cases into Training (SCC = 14, Adeno = 19, Ctrl = 14) and Testing (SCC = 13, Adeno = 12, Ctrl = 15) cohorts and tested when needed. From HRMAS MRS measurements of these samples, we identified 32 spectral regions (width: 0.026 ± 0.010 ppm) that presented measurable spectral intensities in more than 90% of spectra for both tissue and serum samples. 
We present results using spectral regions, rather than individual metabolites, since each region may include contributions from multiple metabolites, and conversely, a single metabolite can contribute to multiple spectral regions. However, we discuss major possible contributing metabolites when relevant. Principal component analyses (PCA) can be used to reduce data dimensions by identifying PCs that have eigenvalues greater than 1.0 and can be further analysed. PCA performed on these 32 regions for tissue and serum MRS data sets independently produced eight PCs, all with eigenvalues greater than 1.0, for both tissue and serum MRS data sets, respectively. The eight PCs accumulatively represent 79.2% and 77.2% of variance for tissue and serum, respectively. All the statistically significant results presented below were verified by co-variance analyses of age and smoking status (packyear). Results presented here will include the following four aspects: (1) serum MRS data that differentiate LuCa from healthy controls and among LuCa types and stages; (2) tissue MRS data that differentiate LuCa types and stages; (3) correlations between serum and tissue MRS measurements; and (4) predictions of LuCa overall survival with metabolomics. ### Serum MRS – identifying LuCa from controls and differentiating LuCa types and stages We defined the serum relative spectral intensity, RelInt(Ser), for spectral region m (m = 1, 2, … 32) and samples = 1, 2, … 93 as: $${{\rm{RelInt}}}_{{\rm{m}},{\rm{s}}}({\rm{Ser}})( \% )=({\mathrm{Exp}}_{{\rm{m}},{\rm{s}}})\times 100/\sum _{i=1}^{32}(Ex{p}_{i,s})$$ (1) where, (Expm,s) represents the experimental value for spectral region m and $$\sum _{i=1}^{32}(Ex{p}_{i,s})$$ represents the sum of measured values for all 32 spectral regions, for each of the 93 samples. Results from serum MRS showed significant differences in spectral relative intensities between groups of interest. Figure 1 summarizes the observed statistical significances of relative spectral intensities for 19 among 32 analysed spectral regions and 5 out of 8 PCs measured from serum samples that differentiate healthy controls from different LuCa groups (central column), as well as 8 spectral regions and 4 PCs to differentiate among different groups of LuCa types and stages (right column), according to Student’s t-test (for normal distributions with or without equal variance) or Mann-Whitney-Wilcoxon test (for non-normal distributions). The notations of statistical significance levels in Fig. 1, and for the rest of the report, are as follows: “*”p < 0.05; “**”p < 0.005; and “***”, Bonferroni-corrected thresholds of statistical significance of p < 0.0016 or p < 0.0063 for 32 individual regions or 8 principal components (PC), respectively. The star symbols in this figure, and figures hereafter, denote statistical significance values after calibration with false discovery rate (FDR) analyses. Multiple spectral regions showed statistical significance in differentiating LuCa from controls in Fig. 1. In Fig. 2a, Stage I are compared with control cases with three panels presenting examples of three significant regions: lactate (4.10–4.11 ppm), glutamate (2.05–2.07 ppm), and GPC & PC (3.21–3.23 ppm). 
While these metabolic regions can significantly differentiate Stage I LuCa cases from controls, both for the entire tested population and for the Training and Testing cohorts, respectively, overlap between control and LuCa samples is also obvious, represented by modest receiver operating characteristic (ROC) curves (area-under-curve, AUC = 71~83%), as well as by the closeness of the two 3D ellipsoids presented in Fig. 2b. Invoking the metabolomic concept of multi-dimensional comparisons (in contrast to a single metabolite evaluation), we performed leave-one-case-out (LOCO) cross-validated linear discriminator (LD) analyses involving all 19 spectral regions presented in Fig. 1. As shown in Fig. 2c, drastically improved differentiation between LuCa and control (vertical panel), presented as well-separated 3D ellipsoids (ROC AUC = 98.9%), as well as among all three groups (horizontal panel), was observed. Figure 2d,e further detail metabolomic differentiation among all three groups and between LuCa and controls, respectively. The indication of the existence of metabolomic differentiations between LuCa and controls led us to further test the validity of the observation with the above-defined Training and Testing cohorts. Figure 2f demonstrates the significant LuCa and control differentiation results measured with LD canonical correlation analysis for the Testing cohort by using analytic parameters obtained from the Training cohort.
### Tissue MRS – differentiating LuCa types and stages
Unlike serum samples, which are homogeneous fluids, tissues are heterogeneous mixtures composed of both diseased and healthy pathological components. Metabolite levels vary in different pathological features, so tissue MRS results must be interpreted in the context of tissue pathologies. The most significant advantage of HRMAS MRS – its ability to preserve tissue architecture for subsequent pathological evaluations – enables us to conduct pathological analyses after MRS measurement to calibrate contributions of tissue pathologies, and their inherent metabolic differences, towards the observed MRS values. For the studied LuCa tissues, four major pathology features were quantified for each specimen after HRMAS MRS: vol% of LuCa (in 58/93 measured samples, with Max: 94.9%, Median: 26.1%), Fibrosis/Inflammation (FI, 89/93, Max: 100%, Median: 74.4%), Necrosis (Nec, 31/93, Max: 100%, Median: 25%), and Cartilage/Normal (CN, 8/93, Max: 89.7%, Median: 47.5%). We determine the relationship between tissue MRS and pathologies using a least-square regression of an over-determined linear model (LSR-ODLM), which includes 93 linear equations comprising four pathology features and the experimentally measured value (Expm,s) for all 32 spectral regions m according to the following equation, for samples s = 1, 2, … 93: $$\begin{array}{c}[{{\rm{C}}}_{{\rm{LuCa}},{\rm{m}}}\times {\rm{LuCa}}{ \% }_{{\rm{s}}}]+[{{\rm{C}}}_{{\rm{FI}},{\rm{m}}}\times {\rm{FI}}{ \% }_{{\rm{s}}}]+[{{\rm{C}}}_{{\rm{Nec}},{\rm{m}}}\times {\rm{Nec}}{ \% }_{{\rm{s}}}]\\ \,+[{{\rm{C}}}_{{\rm{CN}},{\rm{m}}}\times {\rm{CN}}{ \% }_{{\rm{s}}}]+{{\rm{a}}}_{{\rm{m}}}={\mathrm{Exp}}_{{\rm{m}},{\rm{s}}},\end{array}$$ (2) where the contribution coefficients (Cx,m) of pathology feature (x = LuCa, FI, Nec, and CN) percentage towards the experimental value in region m are determined solely from the spectral data without any additional assumptions or weighting for the quantified pathological features; am is a spectral region-specific constant.
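As a sketch of how such coefficients are typically obtained (the matrix notation below is ours, not the authors'), Eq. (2) for a fixed spectral region m is an over-determined linear system in five unknowns, solved in the least-squares sense:
$$\mathbf{P}\,\mathbf{c}_{m}\approx \mathbf{y}_{m},\qquad \mathbf{P}\in{\mathbb{R}}^{93\times 5},\quad \mathbf{c}_{m}=\left({C}_{\mathrm{LuCa},m},\,{C}_{\mathrm{FI},m},\,{C}_{\mathrm{Nec},m},\,{C}_{\mathrm{CN},m},\,{a}_{m}\right)^{\top},\quad {(\mathbf{y}_{m})}_{s}={\mathrm{Exp}}_{m,s},$$
where each row of $\mathbf{P}$ contains the four pathology percentages of one sample together with a constant 1 for the intercept $a_m$, and, provided $\mathbf{P}$ has full column rank, the least-squares solution is $\hat{\mathbf{c}}_{m}={({\mathbf{P}}^{\top }\mathbf{P})}^{-1}{\mathbf{P}}^{\top }{\mathbf{y}}_{m}$.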
The contribution coefficients from each of these four pathological features for the 32 analysed regions can be found in Supplementary Fig. S1. To evaluate these coefficients, we calculated the estimated spectral intensity (Estm,s) for each spectral region, m, and each sample, s, based on the pathological compositions of the sample and the contribution coefficients: $$\begin{array}{c}[{{\rm{C}}}_{{\rm{LuCa}},{\rm{m}}}\times {\rm{LuCa}}{ \% }_{{\rm{s}}}]+[{{\rm{C}}}_{{\rm{FI}},{\rm{m}}}\times {\rm{FI}}{ \% }_{{\rm{s}}}]+[{{\rm{C}}}_{{\rm{Nec}},{\rm{m}}}\times {\rm{Nec}}{ \% }_{{\rm{s}}}]\\ \,+[{{\rm{C}}}_{{\rm{CN}},{\rm{m}}}\times {\rm{CN}}{ \% }_{{\rm{s}}}]+{{\rm{a}}}_{{\rm{m}}}={{\rm{Est}}}_{{\rm{m}},{\rm{s}}}\end{array}$$ (3) After calibration for pathology contributions, the difference between Expm and Estm, Expm − Estm, can be considered to be independent from tissue pathological compositions. This is supported by the comparisons of the linear regression analyses conducted between tissue pathological compositions (vol%) and Expm, as well as (Expm − Estm) values. For instance, the results of linear regression analyses evaluated between LuCa vol% and Expm values for tissue samples presented statistically significant (p < 0.050) correlations for 22 out of the 32 spectral regions, with p values ranging from <0.001 to 0.045 (mean = 0.010 ± 0.003). However, when linear regression analyses were evaluated between LuCa vol% and (Expm − Estm) values, no significant linear correlation was seen. The p values for the same 22 regions were determined to be between 0.110 and 1.00 (mean = 0.710 ± 0.074). Therefore, the values of Expm − Estm, or the values of Expm/Estm (to avoid negative values), after calibration of the tissue pathological contributions, could be attributed to tissue MRS results that are largely reflecting patient disease status rather than pathology features. The total calibrated spectral intensity from 32 regions for sample s is: $${\rm{Total}}\,{{\rm{Int}}}_{{\rm{s}}}({\rm{Tis}})=\sum _{i=1}^{32}(Ex{p}_{i,s}/Es{t}_{i,s}).$$ (4) Then, the calibrated tissue relative spectral intensity, RelIntm,s(Tis), for spectral region m and samples s is: $${{\rm{RelInt}}}_{{\rm{m}},{\rm{s}}}({\rm{Tis}})( \% )=({\mathrm{Exp}}_{{\rm{m}},{\rm{s}}}{/\mathrm{Est}}_{{\rm{m}},{\rm{s}}})\times 100/{{\rm{TotalInt}}}_{{\rm{s}}}({\rm{Tis}}).$$ (5) Using the same conventions established in Fig. 1 for serum results, Fig. 3 presents differentiation of LuCa groups according to tissue pathological feature-calibrated MRS results as defined in Eq. 5. Here, 9 of 32 spectral regions present significant differentiation among various LuCa groups. The effects of pathology calibrations on tissue metabolites can be appreciated by an example of alanine (Ala) differentiating Stage I SCC from Adeno groups, as shown in Fig. 4, where significant differentiation was only observed after the applied pathological feature calibration. ### Correlating serum and tissue metabolomic profiles for LuCa differentiation Studying tissue-serum pairs from the same patient enabled us to use the tissue data set as a training cohort to investigate correlations between serum and tissue MRS data. The successes demonstrated in Fig. 5a,b, further guided us to test with randomly determined Training and Testing cohorts, including Stage I LuCa and control cases, previously presented with Fig. 2a,f. In Fig. 5c,d, linear discriminant canonical correlation analyses were conducted with the 19 spectral regions (Fig. 
1), for tissue and serum MRS data of the Training cohort, respectively. The capabilities of the resulting canonical scores in differentiating SCC from Adeno groups were tested with the Testing cohort presenting indications of differentiations, but the serum result (p = 0.0502) was just above the level of significance. However, the study design of paired tissue and serum samples obtained from the same patients permitted us to conduct a further canonical analysis including both tissue and serum canonical scores for the Training cohort, and we tested the resulting canonical score on the Testing cohort with improved statistical significance (p = 0.009) when compared with the score obtained from serum data alone. ### Predictions of LuCa overall survival with MRS metabolomics Clinical records (1997–2012) for the studied LuCa patients indicate the average survival time after surgery to be 41.3 ± 4.6 months (nAdeno = 27, Mean: 43.4 ± 6.4 months, and nSCC = 27, Mean: 39.2 ± 6.7 months). Using 41.3 mo as a threshold to define short vs. prolonged surviving, we observed a number of tissue and serum spectral regions that can differentiate between the two groups. Most importantly, some of these spectral regions from both tissue and serum can further provide statistically significant Kaplan-Meier estimates of 10-year overall patient survival for the entire population, as well as for certain subgroups (see Fig. S4). To systematically evaluate the prognostic potential of serum metabolomics, we randomly divided 93 cases into eight groups (maximum number of cases per group: 14; minimum: 9). We combined seven groups to form the training cohort, and used the one group left out as the corresponding testing cohort. We iterated the leave-one-group-out process eight times to cover all eight groups (i.e. the eight corresponding testing cohorts). For each training cohort, we identified regions among the 32 spectral regions that can differentiate short vs. prolonged surviving groups with statistical significance (p < 0.05). The numbers of spectral regions thus identified ranged from one to seven (average: 3.75 ± 0.88) for the eight training cohorts. For each training cohort, a canonical correlation analysis including the identified spectral regions was first conducted to discover CCA loadings to discriminate short from prolonged surviving groups within the cohort. The loadings obtained from the training cohort were applied to the cases in the corresponding testing cohort to obtain the CCA scores for cases in the testing cohort. Upon the completion of all eight iterations, the CCA scores for all cases obtained when they were considered as testing cohort cases were combined into a single ensemble. The median value for the ensemble was determined and used as the threshold to evaluate the 10-year overall survival based on the Kaplan-Meier curves, which displayed statistical significance between short (red) and prolonged living groups (green) (p = 0.0325), as shown in Fig. 6a. The effects of tissue pathology calibration can also be seen when the measured tissue metabolic intensities are used to predict patient overall survival. For instance, pathology-calibrated spectral intensities of 3.91–3.90 ppm region are sensitive to predicting patient overall survival for the entire tested population, for SCC cases, and particularly for Stage 1 cases of SCC. 
However, this significant separation for a single disease stage, which cannot be differentiated by currently known clinical parameters, would be invisible without the pathology calibrations (Fig. S4). While the leave-one-group-out analyses indicated the potential existence of metabolomic discriminators between short and prolonged LuCa patient survival, the method cannot provide a single set of parameters able to evaluate the status of a future case. Nevertheless, this proof of the potential existence of the survival-related intrinsic LuCa metabolomics encouraged us to further analyse all cases in a single data set. With all 93 cases, we identified nine serum spectral regions (Fig. 6d) that show significant differentiation between short and prolonged survival groups. By including these nine spectral regions in a canonical analysis to discriminate (p < 0.0001) short (<41.3 months, nSCC = 15, nAdeno = 14, CCA score = −0.652 ± 0.186; nSCC,St=I = 9, nAdeno,St=I = 5, CCA score = −0.837 ± 0.254) from prolonged (>41.3 months, nSCC = 12, nAdeno = 13, CCA score = 0.756 ± 0.200; nSCC,St=I = 8, nAdeno,St=I = 10, CCA score = 0.766 ± 0.224) survival (Supplementary Table S1), we were able to predict 10-year Kaplan-Meier overall survival estimates for both the entire LuCa population (SCC = 42, Adeno = 51) (Fig. 6b) and the Stage I cases alone (SCC = 27, Adeno = 31) (Fig. 6c) by using their respective median of the canonical scores as the discriminators. Prolonged survival is associated with relative overexpression of Gln, Val, and Gly, and relative suppression of Glu and lipids in serum. This last conclusion obtained from sera of stage I LuCa patients is of critical importance for its potential utility in clinic. At present, the criteria used in the LuCa clinicians for patient assessments are mostly based on clinical experiences accumulated from symptomatic and late-stage patients that cannot be applied to the assessment of asymptomatic patients with early stage disease which is now detected through advanced radiological tests. Thus, new prediction parameters for survival of Stage I LuCa will assist the advancement of the LuCa clinic. The heatmap in Fig. 6d illustrates that metabolite intensities for the prolonged living group (P) are either closer to the control group (C) or the short living group (S), whereas the intensities for C and S groups are noticeably different. ## Discussion The aim of the current study is to evaluate potential human blood serum LuCa metabolomic markers that may be used to screen high-risk individuals for advanced imaging for detection of LuCa at early and asymptomatic stages. However, while identification of cancer blood screening biomarkers is extremely attractive due to the less invasive nature of specimen collection, research in this area is often challenged by low specificity, since blood circulates throughout the entire body. To associate serum metabolites with LuCa, we designed the study to include paired LuCa tissue and serum samples from the same patients. With such an experimental set-up, metabolites quantified in the serum samples could be investigated in conjunction with those measured from tissue samples. Each analysed sample, either a tissue specimen of ~10 mg or a drop of serum of ~10 µl, produces a single MRS spectrum. However, a tissue sample, even as small as on a mg scale, represents a mixture of various pathological components, such as cancer, inflammation, fibrosis, and necrosis. 
Metabolite values are affected by not only the stage of disease from which the tissue is acquired but also the amount of cancerous and other cells present in the tissue. Thus, understanding metabolite concentrations in tissue requires consideration of pathology variations, and HRMAS MRS technology allowed us to quantify these amounts. We then calibrated MRS-measured metabolite values according to varying amounts of pathological components in each sample, rather than merely make qualitative comparisons between pathology percentages and metabolites25,26,27,28. This calibration adjustment generated stronger results for tissue and serum analyses, but the interpretation of the observed reversed relationship when comparing SCC with Adeno for tissue and serum data sets (Fig. 5) requires caution. The apparent inverted slopes shown in Supplementary Fig. S3 were presented to explain the reversed relationship seen in Fig. 5a, but are not to be interpreted as the presentation of reversed metabolite concentrations between SCC and Adeno cases when comparing their metabolite concentrations in tissues with those in sera. To compare the selected 32 spectral regions for tissue and serum at a similar intensity level, we elected to analyse relative spectral intensities. With tissue, further calibrations of the relative spectral intensities according to tissue pathological compositions were implemented. Therefore, the above-mentioned apparent inverted slopes represent only the relationship seen with these calibrated relative spectral intensities, and cannot be simply extended to indicate metabolic concentrations. Additional analyses that will allow for quantification of metabolites from different sources will be necessary to understand these observations in detail. Furthermore, since blood is the main nutrient source for all physiological and pathological processes, it cannot simply be viewed as the “dumping ground” of cancer metabolisms active in tissue lesions. Therefore, the metabolomic profiles presented by blood serum cannot be expected to mimic those measured from cancerous tissues. Nevertheless, analyses of the similarities and differences of cancer metabolomic profiles measured from paired tissue and serum samples will improve understanding of cancer metabolism both for patient prognostication and design of treatment strategies. MRS measurements of tissue and serum samples present snapshots of cellular metabolites. In the case of blood, metabolite levels measured in a LuCa patient may reflect altered output or uptake by cancer cells. For instance, when comparing serum profiles between prolonged and short survival groups, the prolonged survival cases favoured elevated expressions of glutamine, valine, and glycine (positive CCA loadings), and suppressed expressions in glutamate and lipid droplets (negative CCA loadings). The alterations of glutamine and glutamate may be interpreted through their metabolic mechanism. Glutamine is an essential metabolite to support anabolic metabolism in tumour cells29, and high consumption of glutamine has been reported for cancer cells30. Cancer cells use the enzyme glutaminase to convert glutamine to glutamate, and to form precursors for the processes of anaplerosis, glutathione synthesis, and fatty acid production, which allow for tumorigenesis31. Glutamine itself is also important energy source for cancer cells when glucose availability is limited32. These biological realities of cancer support the finding of higher glutamine in prolonged cases. 
Furthermore, blood maintains high levels of glutamine as a ready source of carbon and nitrogen to support biosynthesis, energetics, and cellular homeostasis, and cancer cells may hijack this supply for tumour growth33, which may be particularly true for the fast-growing LuCa of the short survival cases. On the other hand, in prolonged survival cases (which closely match the healthy controls for glutamine levels in Fig. 6d), the consumption of glutamine for conversion to glutamate is lower than in the short survival cases. Less glutamine consumption results in less production of lipid droplets and glutamate for these longer-surviving cancer cases. Similarly, for the essential amino acid valine, studies have shown that non-small cell LuCa tumours display a significant increase in valine uptake34. Thus, the elevated blood valine levels seen in the prolonged survival cases may represent less uptake of valine compared with short survival cases. The same reasoning can be extended to the observed elevated serum glycine levels when comparing prolonged with short survival cases, where glycine provides the carbon units that fuel one-carbon metabolism for the synthesis of proteins, lipids, nucleic acids, etc.30. The reported association between higher levels of glycine and poorer prognoses in human breast cancer agrees well with our measurements described here35. Our current results are limited by the scale of this exploratory study. First, we only analysed tissue and serum samples from a LuCa tumour bank enrolling cancer-positive patients who presented with symptoms or whose cancer was found incidentally. The biomarkers thus obtained may only apply to the studied patient populations, and may not be extrapolated to other patient populations, such as asymptomatic patients. Second, this study only investigated the two major types of non-small-cell LuCa, so again the conclusions are not applicable to other types of LuCa without further validation on enlarged patient populations. Nevertheless, our proof-of-concept MRS exploratory study of paired human LuCa tissue and serum samples demonstrates the potential of a physical chemistry approach for the discovery of human serum LuCa metabolomic markers. While the reported LuCa markers have been tested under a training and testing cohort design, the limitations of small case numbers and the need to analyse more diverse patient populations argue for more comprehensive studies. Success in these investigations can propel biomarkers towards clinical trials and towards the ultimate goal: to indicate cancer and direct patients to advanced radiological imaging when warranted.
## Materials and Methods
### Study design
#### Experimental design
This study was approved by the Partners Human Research IRB (Protocol 2009P000982), and all research was performed in accordance with relevant guidelines and regulations. Serum and tissue samples were obtained from the Harvard/MGH Lung Cancer Susceptibility Study Repository. Informed consent was obtained from LuCa patients and healthy controls prior to banking samples and after the nature and possible consequences of the study were explained. The objective of this retrospective, paired tissue-serum investigation was to discover biomarkers in early-stage LuCa tissue which can also be measured in serum.
Based on our initial, preliminary evaluation of lung cancer biomarkers published previously36, we designed this exploratory study to analyse ~100 samples. After evaluation of 101 samples, it was determined that the spectral resolution for 8 samples was not sufficient for further analysis, and only 93 were included in the current study.
#### Study population
Patient information: Detailed information on the studied patient population can be found in Supplementary Table S1. Researchers were blinded to the status of the samples during all measurement and experimental steps.
#### Intact tissue MRS
Samples were stored at −80 °C until analysis. High resolution magic angle spinning magnetic resonance spectroscopy (HRMAS MRS) measurements were performed using our previously developed method on a Bruker Avance (Billerica, MA) 600 MHz spectrometer. Measurements were conducted at 4 °C with a spin rate of 3600 ± 2 Hz and a Carr-Purcell-Meiboom-Gill (CPMG) sequence with and without continuous-wave water suppression. Ten µL of serum or 10 mg of tissue were placed in a 4 mm Kel-F zirconia rotor with 10 µL of D2O added for field locking. HRMAS MRS spectra were processed using a laboratory-developed MATLAB-based program, and peak intensities from 4.5–0.5 ppm were curve fit. Relative intensity values were obtained by normalizing peak intensities by the total spectral intensity between 4.5–0.5 ppm. Resulting values that were less than 1% of the median of all curve-fit values were considered noise and eliminated. Spectral regions were defined as regions where 90% or more of samples had a detectable value, resulting in 32 regions.
#### Quantitative histopathology
Following MRS measurement, tissues were formalin-fixed and paraffin-embedded. Serial sectioning was performed by cutting 5 µm-thick slices at 100 µm intervals throughout the tissue, resulting in 10–15 slides per piece. After hematoxylin and eosin (H&E) staining, a pathologist with >25 years of experience read the slides to the closest 10% for percentages of the following pathological features: cancer, inflammation/fibrosis, necrosis, and cartilage/normal.
### Statistical analysis
Statistical analyses were performed using JMP Pro 13 and MATLAB 2017a. Univariate statistical tests included Student’s t-test (for spectral regions with normal distributions according to the Shapiro-Wilk W test) or the Mann-Whitney-Wilcoxon test (MWW, for spectral regions with non-normal distributions) for binary comparisons, and analysis of variance (ANOVA, for normal distributions) or Kruskal-Wallis-Wilcoxon (KWW, for non-normal distributions) for comparisons of three or more groups. Multivariate analyses included principal component analysis, linear discriminant analysis, and canonical correlation analysis. Associations between canonical correlation scores and survival were assessed using Kaplan-Meier survival curves and log-rank tests. Additionally, MRS spectral measurements from tissue were calibrated to account for the contributions from varying amounts of pathological components in each sample, using a least-squares solution of an over-determined linear regression model. In addition to reporting comparisons at an alpha level of 0.05, false discovery rate (FDR) analysis and Bonferroni corrections to account for multiple testing of 32 spectral regions and 8 principal components were applied. Except where noted and explained, two-sided testing was used. All statistically significant results presented were verified by covariance analyses of age and smoking status (pack-years).
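The pathology-calibration step described above lends itself to a brief illustration. The following Python/NumPy sketch is not the authors' code; the pathology-fraction matrix, intensity values, and component ordering are hypothetical. It shows how an over-determined linear model relating per-sample pathology fractions to a measured relative spectral intensity can be solved by least squares to estimate per-component contributions.

```python
# Minimal sketch (not the authors' code) of the pathology-calibration idea:
# each sample's measured relative spectral intensity is modelled as a linear
# combination of contributions from its pathology components, and the
# over-determined system is solved by least squares. All values below are
# hypothetical and purely illustrative.
import numpy as np

# F: n_samples x n_components matrix of pathology fractions read to the
# nearest 10% (cancer, inflammation/fibrosis, necrosis, cartilage/normal)
F = np.array([
    [0.6, 0.3, 0.0, 0.1],
    [0.2, 0.5, 0.1, 0.2],
    [0.8, 0.1, 0.0, 0.1],
    [0.1, 0.2, 0.0, 0.7],
    [0.5, 0.4, 0.1, 0.0],
])

# y: measured relative intensity of one spectral region for each sample
y = np.array([0.042, 0.031, 0.050, 0.018, 0.039])

# Solve the over-determined system F @ c ~= y for the per-component
# contributions c (one value per pathology component).
c, residuals, rank, _ = np.linalg.lstsq(F, y, rcond=None)
print("estimated per-component contributions:", c)

# A "calibrated" cancer-associated value for each sample could then be the
# part of the measured intensity attributed to its cancer fraction.
cancer_component = F[:, 0] * c[0]
print("cancer-attributed intensities:", cancer_component)
```

In such a scheme each spectral region would be calibrated independently; the actual model used in the study may differ in detail.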
## Data Availability
Data reported in this paper are available at our repository through the Martinos Center (http://www.nmr.mgh.harvard.edu/~cheng/MRSbenign/).
## References
1. Siegel, R. L., Miller, K. D. & Jemal, A. Cancer Statistics, 2017. CA: Cancer J. Clin. 67, 7–30, https://doi.org/10.3322/caac.21387 (2017).
2. National Lung Screening Trial Research Team et al. Results of initial low-dose computed tomographic screening for lung cancer. N. Engl. J. Med. 368, 1980–1991, https://doi.org/10.1056/NEJMoa1209120 (2013).
3. Kovalchik, S. A. et al. Targeting of low-dose CT screening according to the risk of lung-cancer death. N. Engl. J. Med. 369, 245–254, https://doi.org/10.1056/NEJMoa1301851 (2013).
4. Tammemagi, M. C. et al. Selection criteria for lung-cancer screening. N. Engl. J. Med. 368, 728–736, https://doi.org/10.1056/NEJMoa1211776 (2013).
5. Garcia-Velloso, M. J. et al. Assessment of indeterminate pulmonary nodules detected in lung cancer screening: Diagnostic accuracy of FDG PET/CT. Lung Cancer 97, 81–86, https://doi.org/10.1016/j.lungcan.2016.04.025 (2016).
6. Curl, P. K., Kahn, J. G., Ordovas, K. G., Elicker, B. M. & Naeger, D. M. Understanding Cost-Effectiveness Analyses: An Explanation Using Three Different Analyses of Lung Cancer Screening. Am. J. Roentgenol. 205, 344–347, https://doi.org/10.2214/AJR.14.14038 (2015).
7. Patz, E. F. Jr. et al. Overdiagnosis in low-dose computed tomography screening for lung cancer. JAMA Intern. Med. 174, 269–274, https://doi.org/10.1001/jamainternmed.2013.12738 (2014).
8. Gareen, I. F. et al. Impact of lung cancer screening results on participant health-related quality of life and state anxiety in the National Lung Screening Trial. Cancer 120, 3401–3409, https://doi.org/10.1002/cncr.28833 (2014).
9. Wiener, R. S. et al. An official American Thoracic Society/American College of Chest Physicians policy statement: implementation of low-dose computed tomography lung cancer screening programs in clinical practice. Am. J. Respir. Crit. Care Med. 192, 881–891, https://doi.org/10.1164/rccm.201508-1671ST (2015).
10. Cressman, S. et al. Resource utilization and costs during the initial years of lung cancer screening with computed tomography in Canada. J. Thorac. Oncol. 9, 1449–1458, https://doi.org/10.1097/JTO.0000000000000283 (2014).
11. Goulart, B. H., Bensink, M. E., Mummy, D. G. & Ramsey, S. D. Lung cancer screening with low-dose computed tomography: costs, national expenditures, and cost-effectiveness. J. Natl. Compr. Cancer Netw. 10, 267–275, https://doi.org/10.6004/jnccn.2012.0023 (2012).
12. Mauchley, D. C. & Mitchell, J. D. Current estimate of costs of lung cancer screening in the United States. Thorac. Surg. Clin. 25, 205–215, https://doi.org/10.1016/j.thorsurg.2014.12.005 (2015).
13. Rasmussen, J. F. et al. Healthcare costs in the Danish randomised controlled lung cancer CT-screening trial: a registry study. Lung Cancer 83, 347–355, https://doi.org/10.1016/j.lungcan.2013.12.005 (2014).
14. Huber, A. et al. Performance of ultralow-dose CT with iterative reconstruction in lung cancer screening: limiting radiation exposure to the equivalent of conventional chest X-ray imaging. Eur. Radiol. 26, 3643–3652, https://doi.org/10.1007/s00330-015-4192-3 (2016).
15. McCunney, R. J. & Li, J. Radiation risks in lung cancer screening programs: a comparison with nuclear industry workers and atomic bomb survivors. Chest 145, 618–624, https://doi.org/10.1378/chest.13-1420 (2014).
16. Murugan, V. A., Kalra, M. K., Rehani, M. & Digumarthy, S. R. Lung Cancer Screening: Computed Tomography Radiation and Protocols. J. Thorac. Imaging 30, 283–289, https://doi.org/10.1097/RTI.0000000000000150 (2015).
17. Christiani, D. C. Radiation risk from lung cancer screening: glowing in the dark? Chest 145, 439–440, https://doi.org/10.1378/chest.13-2588 (2014).
18. Hennessey, P. T. et al. Serum microRNA biomarkers for detection of non-small cell lung cancer. PLOS ONE 7, e32307, https://doi.org/10.1371/journal.pone.0032307 (2012).
19. Leidinger, P. et al. High-throughput qRT-PCR validation of blood microRNAs in non-small cell lung cancer. Oncotarget 7, 4611–4623, https://doi.org/10.18632/oncotarget.6566 (2016).
20. Montani, F. et al. miR-Test: a blood test for lung cancer early detection. J. Natl. Cancer Inst. 107, djv063, https://doi.org/10.1093/jnci/djv063 (2015).
21. Kohler, J. et al. Circulating U2 small nuclear RNA fragments as a diagnostic and prognostic biomarker in lung cancer patients. J. Cancer Res. Clin. Oncol. 142, 795–805, https://doi.org/10.1007/s00432-015-2095-y (2016).
22. Cheng, L. L. et al. Enhanced resolution of proton NMR spectra of malignant lymph nodes using magic-angle spinning. Magn. Reson. Med. 36, 653–658, https://doi.org/10.1002/mrm.1910360502 (1996).
23. Cheng, L. L. et al. Quantitative neuropathology by high resolution magic angle spinning proton magnetic resonance spectroscopy. Proc. Natl. Acad. Sci. U.S.A. 94, 6408–6413, https://doi.org/10.1073/pnas.94.12.6408 (1997).
24. Zhai, R., Yu, X., Shafer, A., Wain, J. C. & Christiani, D. C. The impact of coexisting COPD on survival of patients with early-stage non-small cell lung cancer undergoing surgical resection. Chest 145, 346–353, https://doi.org/10.1378/chest.13-1176 (2014).
25. Cheng, L. L. et al. Quantification of microheterogeneity in glioblastoma multiforme with ex vivo high-resolution magic-angle spinning (HRMAS) proton magnetic resonance spectroscopy. Neuro-Oncol. 2, 87–95, https://doi.org/10.1093/neuonc/2.2.87 (2000).
26. Cheng, L. L., Wu, C., Smith, M. R. & Gonzalez, R. G. Non-destructive quantitation of spermine in human prostate tissue samples using HRMAS 1H NMR spectroscopy at 9.4 T. FEBS Lett. 494, 112–116, https://doi.org/10.1016/s0014-5793(01)02329-8 (2001).
27. Esteve, V., Celda, B. & Martinez-Bisbal, M. C. Use of 1H and 31P HRMAS to evaluate the relationship between quantitative alterations in metabolite concentrations and tissue features in human brain tumour biopsies. Anal. Bioanal. Chem. 403, 2611–2625, https://doi.org/10.1007/s00216-012-6001-z (2012).
28. Tzika, A. A. et al. Biochemical characterization of pediatric brain tumors by using in vivo and ex vivo magnetic resonance spectroscopy. J. Neurosurg. 96, 1023–1031, https://doi.org/10.3171/jns.2002.96.6.1023 (2002).
29. De Vitto, H., Perez-Valencia, J. & Radosevich, J. A. Glutamine at focus: versatile roles in cancer. Tumor Biol. 37, 1541–1558, https://doi.org/10.1007/s13277-015-4671-9 (2016).
30. Antonov, A. et al. Bioinformatics analysis of the serine and glycine pathway in cancer cells. Oncotarget 5, 11004–11013, https://doi.org/10.18632/oncotarget.2668 (2014).
31. Martinez-Outschoorn, U. E., Peiris-Pages, M., Pestell, R. G., Sotgia, F. & Lisanti, M. P. Cancer metabolism: a therapeutic perspective. Nat. Rev. Clin. Oncol. 14, 11–31, https://doi.org/10.1038/nrclinonc.2016.60 (2017).
32. Koizume, S. & Miyagi, Y. Lipid Droplets: A Key Cellular Organelle Associated with Cancer Cell Survival under Normoxia and Hypoxia. Int. J. Mol. Sci. 17, e1430, https://doi.org/10.3390/ijms17091430 (2016).
33. Altman, B. J., Stine, Z. E. & Dang, C. V. From Krebs to clinic: glutamine metabolism to cancer therapy. Nat. Rev. Cancer 16, 749, https://doi.org/10.1038/nrc.2016.114 (2016).
34. Mayers, J. R. et al. Tissue of origin dictates branched-chain amino acid metabolism in mutant Kras-driven cancers. Science 353, 1161–1165, https://doi.org/10.1126/science.aaf5171 (2016).
35. Giskeodegard, G. F. et al. Lactate and glycine-potential MR biomarkers of prognosis in estrogen receptor-positive breast cancers. NMR Biomed. 25, 1271–1279, https://doi.org/10.1002/nbm.2798 (2012).
36. Jordan, K. W. et al. Comparison of squamous cell carcinoma and adenocarcinoma of the lung by metabolomic analysis of tissue-serum pairs. Lung Cancer 68, 44–50, https://doi.org/10.1016/j.lungcan.2009.05.012 (2010).
## Acknowledgements
We kindly thank J.A. Fordham for editorial assistance. Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under award Numbers R01CA115746 and R21CA162959 (Cheng) and U01CA209414 (Christiani). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. We gratefully acknowledge the support of the Massachusetts General Hospital Athinoula A. Martinos Center for Biomedical Imaging.
## Author information
### Contributions
Y.B. developed the spectroscopy quantification tool and the pathology calibration model, guided statistical analysis, wrote the manuscript, and edited and approved the final manuscript. L.A.V. interpreted the data, wrote the manuscript, and edited and approved the final manuscript. I.W. performed the spectroscopy measurements and histopathological preparations, and edited and approved the final manuscript. L.S. collected and provided the samples, performed pathology work and obtained clinical data, and edited and approved the final manuscript. J.K. performed the spectroscopy measurements and histopathological preparations, and edited and approved the final manuscript. A.S. performed the spectroscopy measurements and histopathological preparations, and edited and approved the final manuscript. S.S.D. interpreted the data, conceived of and created figures, and edited and approved the final manuscript. P.H. interpreted the data, and edited and approved the final manuscript. J.N. interpreted the data, and edited and approved the final manuscript. E.M. interpreted tissue pathology, and edited and approved the final manuscript. M.J.A. guided statistical analysis, and edited and approved the final manuscript. D.C.C. designed the experiment, provided funds, supervised collection of the samples and clinical data, interpreted the data, wrote the manuscript, and edited and approved the final manuscript. L.L.C. designed the experiment, provided funds, supervised spectroscopy measurements and histopathological preparations, conducted statistical analysis, interpreted the data, wrote the manuscript, and edited and approved the final manuscript.
### Corresponding authors
Correspondence to David C. Christiani or Leo L. Cheng.
## Ethics declarations
### Competing Interests
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Supplementary information
41598_2019_46643_MOESM1_ESM.pdf: Magnetic Resonance Spectroscopy-based Metabolomic Biomarkers for Typing, Staging, and Survival Estimation of Early-Stage Human Lung Cancer
## Rights and permissions
Berker, Y., Vandergrift, L.A., Wagner, I. et al. Magnetic Resonance Spectroscopy-based Metabolomic Biomarkers for Typing, Staging, and Survival Estimation of Early-Stage Human Lung Cancer. Sci Rep 9, 10319 (2019). https://doi.org/10.1038/s41598-019-46643-5
14211   Sun Sep 23 17:38:48 2018   yuki   Update   ASC   Alignment of AUX Y end green beam was recovered
[ Yuki, Koji, Gautam ] The alignment of the AUX Y end green beam was bad. With Koji and Gautam's advice, it was recovered on Friday. The maximum value of TRY was about 0.5.

14215   Mon Sep 24 15:06:10 2018   gautam   Update   VAC   c1vac1 reboot + TP1 controller replacement
[steve, gautam] Following the procedure in this elog, we effected a reset of the vacuum slow machines. Usually, I just turn the key on these crates to do a power cycle, but Steve pointed out that for the vacuum machines, we should only push the "reset" button. While TP1 was spun down, we took the opportunity to replace the TP1 controller with a spare unit the company has sent us for use while our unit is sent to them for maintenance. The procedure was in principle simple (I only list the additional steps; for the various valve closures, see the slow machine reset procedure elog):
• Turn power off using switch on rear.
• Remove 4 connecting cables on the back.
• Switch controllers.
• Reconnect 4 cables on the back panel.
• Turn power back on using switch on rear.
However, we were foiled by a Phillips screw on the DB37 connector labelled "MAG BRG", which had all its head worn out. We had to make a cut in this screw using a saw blade, and use a "-" screwdriver to get this troublesome screw out. Steve suspects this is a metric gauge screw, and will request the company to send us a new one; we will replace it when re-installing the maintained controller. Attachments #1 and #2 show the Vacuum MEDM screen before and after the reboot respectively - evidently, the fields that were reading "NO COMM" now read numbers. Attachment #3 shows the main volume pressure during this work.
Quote: The problem will be revisited on Monday.
Attachment 1: beforeReboot.png
Attachment 2: afterReboot.png
Attachment 3: CC1.png

14217   Wed Sep 26 10:07:16 2018   Steve   Update   VAC   why reboot c1vac1
Precondition: c1vac1 & c1vac2 all LED warning lights green [ atm3 ], the only error message is in the gauge readings NO COMM, dataviewer will plot zero [ atm1 ], valves are operational.
When our vacuum gauges read "NO COMM" then our INTERLOCKS do NOT communicate either. So the V1 gate valve and PSL output shutter cannot be triggered to close if the IFO pressure goes up. [ Only the CC1_HORNET_PRESSURE reading works in this condition because it goes to a different computer. ]
Quote: [steve, gautam] Following the procedure in this elog, we effected a reset of the vacuum slow machines. Usually, I just turn the key on these crates to do a power cycle, but Steve pointed out that for the vacuum machines, we should only push the "reset" button. While TP1 was spun down, we took the opportunity to replace the TP1 controller with a spare unit the company has sent us for use while our unit is sent to them for maintenance. The procedure was in principle simple (I only list the additional steps; for the various valve closures, see the slow machine reset procedure elog):
• Turn power off using switch on rear.
• Remove 4 connecting cables on the back.
• Switch controllers.
• Reconnect 4 cables on the back panel.
• Turn power back on using switch on rear.
However, we were foiled by a Phillips screw on the DB37 connector labelled "MAG BRG", which had all its head worn out.
We had to make a cut in this screw using a saw blade, and use a "-" screwdriver to get this troublesome screw out. Steve suspects this is a metric gauge screw, and will request the company to send us a new one; we will replace it when re-installing the maintained controller. Attachments #1 and #2 show the Vacuum MEDM screen before and after the reboot respectively - evidently, the fields that were reading "NO COMM" now read numbers. Attachment #3 shows the main volume pressure during this work.
Quote: The problem will be revisited on Monday.
Attachment 1: NOcomm.png
Attachment 2: Reboot_&_sawp.png
Attachment 3: c1vac1&2_.jpg

14223   Mon Oct 1 22:20:42 2018   gautam   Update   SUS   Prototyping HV Bias Circuit
Summary: I've been plugging away at Altium prototyping the high-voltage bias idea; this is meant to be a progress update.
Details: I need to get footprints for some of the more uncommon parts (e.g. PA95) from Rich before actually laying this out on a PCB, but in the meantime, I'd like feedback on (but not restricted to) the following:
1. The top-level diagram: this is meant to show how all this fits into the coil driver electronics chain.
• The way I'm imagining it now, this (2U) chassis will perform the summing of the fast coil driver output to the slow bias signal using some Dsub connectors (the existing slow path series resistance would simply be removed).
• The overall output connector (DB15) will go to the breakout board which sums in the bias voltage for the OSEM PDs and then to the satellite box.
• The obvious flaw in summing in the two paths using a piece of conducting PCB track is that if the coil itself gets disconnected (e.g. we disconnect the cable at the vacuum flange), then the full HV appears at TP3 (see pg 2 of schematic). This gets divided down by the ratio of the series resistance in the fast path to the slow path, but there is still the possibility of damaging the fast-path electronics. I don't know of an elegant design to protect against this.
2. Ground loops: I asked Johannes about the Acromag DACs, and apparently they are single ended. Hopefully, because the Sorensens power the Acromags, and also the eurocrates, we won't have any problems with ground loops between this unit and the fast path.
3. High-voltage precautions: I think I've taken the necessary precautions in protecting against HV damage to the components / interfaced electronics using dual-diodes and TVSs, but someone more knowledgeable should check this. Furthermore, I wonder if a Molex connector is the best way to bring in the +/- HV supply onto the board. I'd have liked to use an SHV connector but can't find a compatible board-mountable connector.
4. Choice of HV OpAmp: I've chosen to stick with the PA95, but I think the PA91 has the same footprint so this shouldn't be a big deal.
5. Power regulation: I've adapted the power regulation scheme Rich used in D1600122 - note that the HV supply voltage doesn't undergo any regulation on the board, though there are decoupling caps close to the power pins of the PA95. Since the PA95 is inside a feedback loop, the PSRR should not be an issue, but I'll confirm with an LTspice model anyway just in case.
6. Cost:
• Each of the metal film resistors that Rich recommended costs ~$15.
• The voltage rating on these demands that we have 6 per channel, and if this works well, we need to make this board for 4 optics.
• The PA95 is ~$150 each, and presumably the high voltage handling resistors and capacitors won't be cheap.
• Steve will update about his HV supply investigations (on a secure platform, NOT the elog), but it looks like even switching supplies cost north of $1200.
• However, as I will detail in a separate elog, my modeling suggests that among the various technical noises I've modeled so far, coil driver noise is still the largest contribution, which actually seems to exceed the unsqueezed shot noise of ~8e-19 m/rtHz for 1 W input power and PRG 40 with 20 ppm RT arm losses, by a smidge (~9e-19 m/rtHz, once we take into account the fast and slow path noises, and the fact that we are not exactly Johnson noise limited).
I also don't have a good idea of what the PCB layer structure (2 layers? 3 layers? or more?) should be for this kind of circuit; I'll try and get some input from Rich.
*Updated with current noise (Attachment #2) at the output for this topology, with a series resistance of 25 kohm in this path. Modeling was done (in LTspice) with a noiseless 25 kohm resistor, and then I included the Johnson noise contribution of the 25k in quadrature. For this choice, we are below 1 pA/rtHz from this path in the band we care about. I've also tried to estimate (Attachment #3) the contribution due to (assumed flat in ASD) ripple in the HV power supply (i.e. voltage rails of the PA95) to the output current noise; it seems totally negligible for any reasonable power supply spec I've seen, switching or linear.
Attachment 1: CoilDriverBias.pdf
Attachment 2: currentNoise.pdf
Attachment 3: PSRR.pdf

14225   Tue Oct 2 23:57:16 2018   gautam   Update   PonderSqueeze   Squeezing scenarios
[kevin, gautam] We have been working on double checking the noise budget calculations. We wanted to evaluate the amount of squeezing for a few different scenarios that vary in cost and time. Here are the findings:
## Squeezing scenarios
Sqz [dBvac]   fmin [Hz]   P_PRM [W]   P_BS [W]   T_PRM [%]   T_SRM [%]
-0.41         215         0.8         40         5.637       9.903
-0.58         230         1.7         80         5.637       9.903
-1.05         250         1.7         150        1           17
-2.26         340         10          900        1           17
All calculations done with:
• 4.5 kohm series resistance on ETMs, 15 kohm on ITMs, 25 kohm on slow path on all four TMs.
• Detuning of SRC = -0.01 deg.
• Homodyne angle = 89.5 deg.
• Homodyne QE = 0.9.
• Arm losses of 20 ppm RT.
• LO beam assumed to be extracted from PR2 transmission, and is ~20 ppm of circulating power in PRC.
Scenarios:
1. Existing setup, new RC folding mirrors for PRG of ~45.
2. Existing setup, send Innolight (Edwin) for repair (= diode replacement?) and hope we get 1.7 W on back of PRM.
3. Repair Innolight, new PRM and SRM, former for higher PRG, latter for higher DARM pole.
4. Same as #3, but with 10 W input power on back of PRM (i.e. assuming we get a fiber amp).
Remarks:
• The errors on the small dB numbers are large - a 1% change in model parameters (e.g. arm losses, PRG, coil driver noise etc) can mean no observable squeezing.
• Actually, this entire discussion is moot unless we can get the RIN of the light incident on the PRM lower than the current level (estimated from MC2 transmission, filtered by CARM pole and ARM zero) by a factor of 60 dB.
• This is because even if we have 1 mW contrast defect light leaking through the OMC, the beating of this field (in the amplitude quadrature) with the 20 mW LO RIN (also almost entirely in the amplitude quad) yields a significant noise contribution at 100 Hz (see Attachment #1).
• Actually, we could have much more contrast defect leakage, as we have not accounted for asymmetries like arm loss imbalance.
• So we need an ISS that has 60 dB of gain at 100 Hz.
• The requirement on LO RIN is consistent with Eq. 12 of this paper.
• There is probably room to optimize the SRC detuning and homodyne angle for each of these scenarios - for now, we just took the optimized combo for scenario #1 for evaluating all four scenarios.
• OMC displacement noise seems to only be at the level of 1e-22 m/rtHz, assuming that the detuning for s-pol and p-pol is ~30 kHz if we were to lock at the middle of the two resonances.
• This assumes 0.02 deg difference in amplitude reflectivity b/w polarizations per optic; other parameters taken from aLIGO OMC design numbers.
• We took OMC displacement noise from here.
Main unbudgeted noises:
• Scattered light.
• Angular control noise reinjection (not sure about the RP angular dynamics for the higher power yet).
• Shot noise due to vacuum leaking from the sym port (= DC contrast defect), but we expect this to not be significant at the level of the other noises in Atm #1.
• Osc amp / phase.
• AUX DoF cross coupling into DARM readout.
• Laser frequency noise (although we should be immune to this because of our homodyne angle choice).
Threat matrix has been updated.
Attachment 1: PonderSqueeze_NB_LORIN.pdf

14229   Thu Oct 4 08:25:50 2018   Steve   Update   VAC   rga scan pd81 at day 78
Attachment 1: pd81d78.png

14243   Thu Oct 11 13:40:51 2018   yuki   Update   Computer Scripts / Programs   loss measurements
Quote: This is the procedure I follow when I take these measurements for the XARM (symmetric under XARM <-> YARM):
Dither-align the interferometer with both arms locked. Freeze outputs when done.
Misalign ETMY + ITMY. ITMY needs to be misaligned further. Moving the slider by at least +0.2 is plentiful to not have the other beam interfere with the measurement.
Start the script, which does the following:
Resume dithering of the XARM.
Check XARM dither error signal rms with CDS. If they're calm enough, proceed.
Freeze dithering.
Start a new set of averages on the scope, wait T_WAIT (5 seconds).
Read data (= ASDC power and MC2 trans) from scope and save.
Misalign ETMX and wait 5 s.
Read data from scope and save.
Repeat desired amount of times.
Close the PSL shutter and measure the PD dark levels.
Information for the armloss measurement:
• Script which gets the data: /users/johannes/40m/armloss/scripts/armloss_scope/armloss_dcrefl_asdcpd_scope.py
• Script which calculates the loss: /users/johannes/40m/armloss/scripts/misc/armloss_AS_calc.py
• Before doing the procedure Johannes wrote, you have to prepare as follows:
• Put a PD in the anti-symmetric beam path to get the ASDC signal.
• Put a PD in the MC2 box to get the transmitted light of the IMC. It is used to normalize the beam power.
• Connect those 2 PDs to the oscilloscope and connect an ethernet cable to it.
• Usage: python2 armloss_dcrefl_asdcpd_scope.py [IP address of Scope] [ScopeCH for AS] [ScopeCH for MC] [Num of iteration] [ArmMode]
Note: The script uses the httplib2 module. You have to install it if you don't have it.
The locked arms are needed to calculate the arm loss, but the alignment of the PMC is deadly bad now. So at first I will make it aligned. (Gautam aligned it and the PMC is locked now.)
gautam: The PMC alignment was fine; the problem was that the c1psl slow machine had become unresponsive, which prevented the PMC length servo from functioning correctly. I rebooted the machine and undid the alignment changes Yuki had made on the PSL table.

14244   Fri Oct 12 08:27:05 2018   Steve   Update   VAC   drypump
Gautam and Steve,
Our TP3 drypump seal is at 360 mT [0.25 A load on small turbo] after one year. We tried to swap in the old spare drypump with a new tip seal.
It was blowing its fuse, so we could not do it. The noisy aux drypump was turned on and opened to the TP3 foreline [ two drypumps are in the foreline now ]. The pressure is 48 mT with 0.17 A load on the small turbo.
Attachment 1: forepump.png

14245   Fri Oct 12 12:29:34 2018   yuki   Update   Computer Scripts / Programs   loss measurements
With Gautam's help, the Y-arm was locked. Then I ran the script "armloss_dcrefl_asdcpd_scope.py" which gets the signals from the oscilloscope. It ran and got data, but I found some problems.
1. It seemed that the process in the script which misaligns the arm cavity didn't work.
2. The script "armloss_dcrefl_asdcpd_scope.py" gets the signal and the other script "armloss_AS_calc.py" calculates the arm loss. But the output file the former makes doesn't match the format the latter requires. A script that converts the format is needed.
Anyway, I got the data needed, so I will calculate the loss after converting the format.

14247   Fri Oct 12 17:37:03 2018   Steve   Update   VAC   pressure gauge choices
We want to measure the pressure gradient in the 40m IFO. Our old MKS cold cathodes are out of order. The existing working gauge at the pumpspool is an InstruTech CCM501. The plan is to purchase 3 new gauges for the ETMY, BS and MC2 locations.

14248   Fri Oct 12 20:20:29 2018   yuki   Update   Computer Scripts / Programs   loss measurements
I ran the script for measuring the arm loss and calculated a rough, preliminary Y-arm round-trip loss. The result was 89.6 ppm. (The error should be considered later.) The measurement was done as follows:
1. Install hardware
1. Put a PD (PDA520) in the anti-symmetric beam path to get the ASDC signal.
2. Use a PD (PDA255) in the MC2 box to get the transmitted light of the IMC. It is used to normalize the beam power.
3. Connect those 2 PDs to the oscilloscope (IP: 192.168.113.25) and connect an ethernet cable to it.
2. Measure DARK noise
1. Block the beams going into the PDs with dampers and turn off the room light.
2. Run the script "armloss_dcrefl_acdcpd_scope.py" using "DARK" mode.
3. Measure the ASDC power when the Y-arm is locked and misaligned
1. Remove the dampers and turn off the room light.
2. Dither-align the interferometer with both arms locked. Freeze outputs when done. (Click C1ASS.adl>!MoreScripts>ON and click C1ASS.adl>!MoreScripts>FreezeOutputs.)
3. Misalign ETMX + ITMX. (Just click the "Misalign" button.)
4. Further misalign ITMX with the slider. (See previous study: ITMX needs to be misaligned further. Moving the slider by at least +0.2 is plentiful to not have the other beam interfere with the measurement.)
5. Start the script "armloss_dcrefl_acdcpd_scope.py" using "ETMY" mode, which does the following:
1. Resume dithering of the YARM.
2. Check the YARM dither error signal rms with CDS. If they're calm enough, proceed. (In the previous study the rms threshold was 0.7. Now the "ETM_YAW_L_DEMOD_I" signal was 15 (noisy), so the threshold was set to 17.)
3. Freeze dithering.
4. Start a new set of averages on the scope, wait T_WAIT (5 seconds).
5. Read data (= ASDC power and MC2 trans) from the scope and save.
6. Misalign ETMY and wait 5 s. (I added code which switches LSC mode ON and OFF.)
7. Read data from the scope and save.
8. Repeat the desired number of times.
4. Calculate the arm loss
1. Start the script "armloss_AS_calc.py", whose content is as follows:
• requires given parameters: mode-matching efficiency, modulation depth, transmissivity. I used the same values as Johannes did last year, which are (huga)
• reads the data file of beam power at ASDC and MC2 trans, which is created by "armloss_dcrefl_acdcpd_scope.py".
• calculates the arm loss from the equation (see 12528 and 12854).
Result: YARM
('AS_DARK =', '0.0019517200000000003') # dark noise at ASDC
('MC_DARK =', '0.02792') # dark noise at MC2 trans
('AS_LOCKED =', '2.04293') # beam power at ASDC when the cavity was locked
('MC_LOCKED =', '2.6951620000000003')
('AS_MISALIGNED =', '2.0445439999999997') # beam power at ASDC when the cavity was misaligned
('MC_MISALIGNED =', '2.665312')
$\hat{P} = \frac{P_{AS}-P_{AS}^{DARK}}{P_{MC}-P_{MC}^{DARK}}$ # normalized beam power
$\hat{P}^{LOCKED}=0.765,\ \hat{P}^{MISALIGNED}=0.775,\ \mathcal{L}=89.6\ \mathrm{ppm}$
Comments:
• "ETM_YAW_L_DEMOD_I_OUTPUT" was a little noisy even when the arm was locked.
• The reflected beam power when locked was higher than when misaligned. It seemed strange to me at first. Johannes suggested that it is caused by an over-coupled cavity. It is possible when r_{ETMY} >> r_{ITMY}.
• My first (wrong) measurement said the arm loss was negative(!). That was caused by insufficient misalignment of the other arm's mirrors. If you don't misalign ITMX enough, the beam or scattered light from the X-arm will contaminate the measurement. The calculated negative loss appears only when $\frac{\hat{P}^{LOCKED}}{\hat{P}^{MISALIGNED}} > 1 + T_{ITM}$
• The error should be considered.
• The parameters given this time should be measured again.

14251   Sat Oct 13 20:11:10 2018   yuki   Update   Computer Scripts / Programs   loss measurements
Quote: In the script "armloss_AS_calc.py", "ETM_YAW_L_DEMOD_I_OUTPUT" was a little noisy even when the arm was locked. The reflected beam power when locked was higher than when misaligned. It seemed strange to me at first. Johannes suggested that it is caused by an over-coupled cavity. It is possible when r_{ETMY} >> r_{ITMY}.
Some changes were made in the script for getting the signals of beam power:
• The script watches "C1:ASS-X(Y)ARM_ETM_PIT/YAW_L_DEMOD_I_OUTPUT" and waits until the signals become small; however some offset could be on the signal. So I changed it to wait until (DEMOD - OFFSET) becomes small. (Yesterday I wrote that ETM_YAW_L_DEMOD_I_OUTPUT was about 15 and a little noisy. I was wrong. That was just an offset value.)
• I added code which stops the script when the power of the transmitted IR beam is low. You can set this threshold. The nominal value of "C1:LSC-TRX(Y)_OUT16" is 1.2 (1.0), so the threshold is set to 0.8 now.
In yesterday's measurement the beam power at ASDC was higher when locked than when misaligned, and I wrote that it may be caused by an over-coupled cavity. Then I did the following calculation to explain this:
• Assume the power transmissivities of the ITM and ETM are 1.4e-2 and 1.4e-5.
• Assume loss-less mirrors; then you can calculate the amplitude reflectivities of the ITM and ETM.
• Consider a cavity which consists of two mirrors and is loss-less; then $\frac{E_{r}}{E_{in}} = \frac{-r_1+r_2e^{i\phi}}{1-r_1r_2e^{i\phi}}$ holds. r1 and r2 are the amplitude reflectivities of the ITM and ETM, and E is the electric field.
• Then you can calculate the power of the reflected beam when resonant and when anti-resonant. The ratio of these values is $\frac{P_{RESONANT}}{P_{ANTI-RESO}} = 0.996$, which is smaller than 1.
• I found this calculation was wrong! The above calculation only holds when the cavity is aligned, not when misaligned. 99.04% of the incident beam power reflects when locked, and (100-1.4)% reflects when misaligned. The ratio is P(locked)/P(misaligned) = 1.004, higher than 1.

14253   Sun Oct 14 16:55:15 2018   not gautam   Update   CDS   pianosa upgrade
DASWG is not what we want to use for config; we should use the K.
Thorne LLO instructions, like I did for ROSSA.
Quote: pianosa has been upgraded to SL7. I've made a controls user account, added it to sudoers, did the network config, and mounted /cvs/cds using /etc/fstab. Other capabilities are being slowly added, but it may be a while before this workstation has all the kinks ironed out. For now, I'm going to follow the instructions on this wiki to try and get the usual LSC stuff working.

14254   Mon Oct 15 10:32:13 2018   yuki   Update   Computer Scripts / Programs   loss measurements
I used these values for measuring the arm loss:
• Transmissivity of ITM = 1.384e-2 * (1 +/- 1e-2)
• Transmissivity of ETM = 13.7e-6 * (1 +/- 5e-2)
• Mode-matching efficiency of XARM = 0.912 * (1 +/- 2e-2)
• Mode-matching efficiency of YARM = 0.867 * (1 +/- 2e-2)
• modulation depth m1 (11 MHz) = 0.179 * (1 +/- 2e-2)
• modulation depth m2 = 0.226 * (1 +/- 2e-2),
then the uncertainties reported by the individual measurements are on the order of 6 ppm (~6.2 for the XARM, ~6.3 for the YARM). This accounts for fluctuations of the data read from the scope and uncertainties in mode-matching and modulation depths in the EOM. I made histograms for the 20 datapoints taken for each arm: the standard deviation of the spread is over 6 ppm. We end up with something like:
XARM: 123 +/- 50 ppm
YARM: 152 +/- 50 ppm
This result has about 40% uncertainty in the XARM and 33% in the YARM (so big... ). In the previous measurement, the fluctuation of each power was 0.1% and the fluctuation of P(Locked)/P(misaligned) was also 0.1%; then the uncertainty was small. On the other hand, in my measurement the fluctuation of the power is about 2% and the fluctuation of P(Locked)/P(misaligned) is 2%. That's why the uncertainty became big. We want to measure a tiny loss (~100 ppm), so the fluctuation of P(Locked)/P(misaligned) must be smaller than 1.6%. (Edit on 10/23) I think the error is dominated by systematic error in the scope. The beam power data had only 3 digits. If P(Locked) and P(misaligned) have 2% error, then $\frac{P_L}{P_M}\frac{1}{1+T_{\mathrm{ITM}}} = 0.99(3)$. You have to check the configuration of the scope.
Attachment 1: XARM_20181015_1500.pdf
Attachment 2: YARM_20181015_1500.pdf

14255   Mon Oct 15 12:52:54 2018   yuki   Update   Computer Scripts / Programs   additional comments
Quote: but there's one weirdness: it gets the channel offset wrong. However this doesn't matter in our measurement because we're subtracting the dark level, which sees the same (wrong) offset.
When you do this measurement with the oscilloscope, take care of two things:
1. Set the y-range of the scope so that every signal fits in the display; otherwise the data sent from the scope will be saturated.
2. Set the y-position of the scope to the center and don't change it; otherwise some offset will be on the data.

14256   Mon Oct 15 13:59:42 2018   Steve   Update   VAC   drypump replaced
Steve & Bob,
Bob removed the head cover from the housing to inspect the condition of the tip seal. The tip seal was fine but the viton cover seal had a bad hump. This misaligned the tip seal and did not allow it to rotate. It was repositioned and carefully tightened. It worked. Its starting current transient measured 28 A, and in operational mode 3.5 A. This load is normal for an old pump. For comparison, the brand new DIP7 spare drypump drew 25 A at start and 3.1 A in operational mode.
It is amazing how much punishment a slow-blow ceramic 10 A fuse can take [ 0215010.HXP ]. In the future one should measure the current pickup [ transient <100 ms ] after the seal change with a Fluke 330 Series Current Clamp. It was swapped in and the foreline pressure dropped to 24 mTorr after 4 hours. It is very good. TP3 rotational drive current 0.15 A at 50K rpm, 24C.
Quote: Gautam and Steve, Our TP3 drypump seal is at 360 mT [0.25 A load on small turbo] after one year. We tried to swap in the old spare drypump with a new tip seal. It was blowing its fuse, so we could not do it. The noisy aux drypump was turned on and opened to the TP3 foreline [ two drypumps are in the foreline now ]. The pressure is 48 mT with 0.17 A load on the small turbo.
Attachment 1: drypump_swap.png

14258   Tue Oct 16 00:44:29 2018   yuki   Update   Computer Scripts / Programs   loss measurements
The scripts for measuring the arm loss are in the directory "/opt/rtcds/caltech/c1/scripts/lossmap_scripts/armloss_scope".
• armloss_derefl_asdcpd_scope.py: gets data and makes an ascii file.
• armloss_AS_calc.py: calculates the arm loss from a selected set of files.
• armloss_calc_histogram.py: calculates the arm loss from selected files and makes a histogram.

14259   Wed Oct 17 09:31:24 2018   Steve   Update   PSL   main laser off
The main laser went off when the PSL doors were opened and closed. It was turned back on and the PSL is locked.
Attachment 1: Inno2wFlipped_off.png

14261   Thu Oct 18 00:27:37 2018   Koji   Update   SUS   SUS PD Whitening board inspection
[Gautam, Koji] As a part of the preparation for the replacement of c1susaux with Acromag, I made an inspection of the coil-OSEM transfer function measurements for the vertex SUSs. The TFs showed the typical f^-2 with the whitening on, except for ITMY UL (Attachment 1). Gautam told me that this has been a known issue for ~5 years. We made a thorough inspection/replacement of the components and identified the mechanism of the problem. It turned out that the inputs to the MAX333s are as listed below.
        Whitening ON   Whitening OFF
UL      ~12V           ~8.6V
LL      0V             15V
UR      0V             15V
LR      0V             15V
SD      0V             15V
The switching voltage for UL is obviously incorrect. We thought this comes from a broken BIO board and thus swapped the corresponding board. But the issue remained. There are 4 BIO boards in total on c1sus, so maybe we replaced the wrong board? Initially, we thought that the BIO can't drive the pull-up resistor of 5 kOhm from 15V to 0V (= 3 mA of current), so I replaced the pull-up resistor with 30 kOhm. But this did not help. These 30Ks are left on the board.
Attachment 1: 43.png

14262   Mon Oct 22 15:19:05 2018   Steve   Update   VAC   Maglev controller serviced
Gautam & Steve,
Our controller is back with Osaka maintenance completed. We swapped it in this morning.
Quote: TP-1 Osaka maglev controller [ model TCO10M, ser V3F04J07 ] needs maintenance. Alarm led on indicating that we need Lv2 service. The turbo and the controller are in good working order.
*****************************
Hi Steve,
Our maintenance level 2 service price is...... It consists of a complete disassembly of the controller for internal cleaning of all ICB’s, replacement of all main board capacitors, replacement of all internal cooling units, ROM battery replacement, re-assembly, and mandatory final testing to make sure it meets our factory specifications. Turnaround time is approximately 3 weeks. RMA 5686 has been assigned to Caltech’s returning TC010M controller. Attached please find our RMA forms.
Complete and return them to us via email, along with your PO, prior to shipping the controller.
Best regards,
Pedro Gutierrez
Osaka Vacuum USA, Inc.
510-770-0100 x 109
*************************************************
Our TP-1 TG390MCAB is 9 years old. What is the life expectancy of this turbo? The Osaka maglev turbopumps are designed with a 100,000-hour (or ~10 operating years) life span, but as you know most of our end-users are running their Osaka maglev turbopumps in excess of 10+, 15+ years continuously. The 100,000-hour design value is based upon the AL material being rotated at the given speed, but the design fudge factor has somehow elongated the practical life span. We should have the cost of a new maglev & controller in next year's budget. I put the quote into the wiki.
Attachment 1: our_controller_is_back.png

14263   Thu Oct 25 16:17:14 2018   Steve   Update   safety   safety training
Chub Osthelder received 40m specific basic safety training today.

14264   Wed Oct 31 17:54:25 2018   gautam   Update   VAC   CC1 hornet power connection restored
Steve reported to me that the CC1 Hornet gauge was not reporting the IFO pressure after some cable tracing at EX. I found that the power to the unit had been accidentally disconnected. I re-connected the power and manually turned on the HV on the CC gauge (perhaps this can be automated in the new vacuum paradigm). IFO pressure of 8e-6 torr is being reported now.
Attachment 1: cc1_Hornet.png

14266   Fri Nov 2 10:24:20 2018   Steve   Update   PEM   roof cleaning
Physical plant is cleaning our roof and gutters today.

14267   Fri Nov 2 12:07:16 2018   rana   Update   CDS   NDScope
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=44971
Let's install Jamie's new Data Viewer.

14268   Fri Nov 2 16:42:31 2018   aaron   Update   Computer Scripts / Programs   arm loss measurements
I'm continuing the arm loss measurements Yuki was making. I'm first familiarizing myself with the procedures for the measurement Johannes describes. I'm not very familiar with the medm screens, so I'm just kind of poking around and checking with Gautam. I do the following:
1. Turned Xarm ASS dither on, then off.
2. Turned X and Y ALS on, then off shortly after.
1. Realizing I needed some guidance, I found this page on lock acquisition on the wiki.
2. Gautam showed me how to align/lock the IFO so I could take some notes, and we locked the Y arm, misaligned X.
3. I put the PD back in the AS beam path to get the ASDC signal, and approximately centered the beam. This PD is on channel 1 of the scope, which is at 192.168.113.24.
4. I centered the beam onto the MC2 PD that Yuki had installed. This PD is on channel 2 of the scope.
1. Both scope channels are set to 1 V scale (I also had tried 500 mV, and it didn't seem to make a difference) and 10 s time axis spacing (maximum integration time, since we're looking for a DC effect. Is this what we want?)
2. The impedance for both channels is 1 MOhm.
5. I ran the script to start the loss measurement on the Y arm.
1. python2 armloss_dcrefl_asdcpd_scope.py 192.168.113.24 1 2 5 YARM
2. I'm reading ~15 (au?) for the MC channel and ~5% of that out the AS, which seems to make sense to me and looks to be about the ratio Yuki had when I checked the log files. However, I'm a bit confused by the normalization, because the maximum output of the MC PD is 10 V, and indeed the scope's display is reading under 10 V.
I've left the script running.
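For reference, here is a minimal Python sketch of the bookkeeping these scripts perform on the saved scope data: dark-subtract, normalize the ASDC power by the MC transmission (the relation quoted in elog 14248), and compare locked vs. misaligned states per iteration. The numbers below are placeholders, and the final conversion of the ratio into a ppm loss (the expressions referenced in elogs 12528/12854) is deliberately not reproduced here.

```python
# Minimal sketch (placeholder values, not the lab scripts) of the
# normalization and repeat statistics used for the arm loss data.
import numpy as np

T_ITM = 1.384e-2  # ITM power transmissivity quoted in elog 14254

def normalized_power(p_as, p_mc, as_dark, mc_dark):
    """P_hat = (P_AS - P_AS_dark) / (P_MC - P_MC_dark), as in elog 14248."""
    return (p_as - as_dark) / (p_mc - mc_dark)

# Hypothetical per-iteration scope readings (volts); real values come from
# the files written by armloss_dcrefl_asdcpd_scope.py.
as_locked     = np.array([2.043, 2.041, 2.045])
mc_locked     = np.array([2.695, 2.690, 2.700])
as_misaligned = np.array([2.045, 2.046, 2.044])
mc_misaligned = np.array([2.665, 2.667, 2.664])
as_dark, mc_dark = 0.00195, 0.0279

p_l = normalized_power(as_locked, mc_locked, as_dark, mc_dark)
p_m = normalized_power(as_misaligned, mc_misaligned, as_dark, mc_dark)
ratio = p_l / p_m

print("ratio per iteration:", ratio)
print("mean +/- std: %.4f +/- %.4f" % (ratio.mean(), ratio.std(ddof=1)))

# Per elog 14251, a ratio exceeding 1 + T_ITM would imply a (non-physical)
# negative loss, usually a sign the other arm was not misaligned enough.
if np.any(ratio > 1 + T_ITM):
    print("warning: ratio > 1 + T_ITM for some iterations")
```

The spread of the per-iteration ratios is what sets the ~50 ppm uncertainty quoted in elog 14254, so keeping that fluctuation below ~1.6% is the practical requirement.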
14269   Fri Nov 2 19:25:16 2018   gautam   Update   Computer Scripts / Programs   loss measurements
Some facts which should be considered when doing this measurement and evaluating the associated uncertainty:
1. When Johannes did the measurement, there was no light from the AS port diverted to the OMC. This represents ~70% loss in the absolute amount of power available for this measurement. I estimate ~1 W * Tprm * Ritm * Tbs * Rbs * Tsrm * OMCsplit ~ 300 uW, which should still be plenty, but the real parameter of interest is the difference in reflected power between the locked/no-cavity situations, and how that compares to the RMS of the scope readout. For comparison, the POX DC light level is expected to be ~20 uW, assuming a 600 ppm AR coating on the ITMs.
2. Even though the reflection from the arm not being measured may look completely misaligned on the AS camera, the PDA520 which is used at the AS port has a large active area, and so one must check on the oscilloscope that the other arm is truly misaligned and not hitting the photodiode, to avoid interference effects artificially bloating the uncertainty.
3. The PDA255 monitoring the MC transmission has a tiny active area. I'm not sure the beam has been centered on it anytime recently. If the beam is not well centered on that PD, and you normalize the measurements by "MC Transmission", you're likely to end up with a larger error.
Quote: This result has about 40% uncertainty in the XARM and 33% in the YARM (so big... ).

14270   Mon Nov 5 13:52:18 2018   aaron   Update   Computer Scripts / Programs   arm loss measurements
After running this script Friday night, I noticed Saturday that the data hadn't saved. Scrolling up in the terminal, I couldn't see where I'd run the script, so I thought I'd forgotten to run it as I was making last minute changes to the scope settings Friday before leaving. Monday it turns out I hadn't forgotten to run the script, but the script itself was getting hung up as it waited for ASS to settle, due to the offset on the ETM PIT or YAW setpoints. The script was waiting until both pitch and yaw settled to below 0.7, but yaw was reading ~15; I think this is normal, and it looks like Yuki had solved this problem by waiting for the DEMOD-OFFSET to become small, rather than just the DEMOD signal to be small. Since this is a solved problem, I think I might be using an old script, but I'm pretty sure I'm running the one in Johannes' folder that Yuki is referencing for example here. The scripts in /yutaro_scripts/ have this DEMOD-OFFSET functionality commented out, and anyway those scripts seem to do the 2D loss maps rather than 1D loss measurements. In the meantime I blocked the beams and ran the script in DARK mode. The script is saving data in /armloss/data/run_20181105/, and runs with no exceptions thrown. However, when I try to dither align the YARM, I get an error that "this is not a degree of freedom that has an ASS". I'm also getting some exceptions from MEDM about unavailable channels. It must have been something about donatella not initializing, because it's working on pianosa. I turned on YARM ADS from pianosa. Monitoring from dataviewer, I see that LSC-TRY_OUT has some spikes to 0.5, but it's mostly staying near 0. I tried returning to the previous frozen outputs, and also stepping around ETMY-[PIT/YAW] from the IFO_ALIGN screen, but didn't see much change in the behavior of LSC-TRY.
I missed the other controls Gautam was using to lock before, and I'm also unclear on whether ASS acts only on the angular dof, or also on length. I unblocked the beams after the DARK run was done.

14273   Tue Nov 6 10:03:02 2018   Steve   Update   Electronics   Contec board found
The Contec test board with Dsub37Fs was on the top shelf of E7.
Attachment 1: DSC01836.JPG

14274   Tue Nov 6 10:19:26 2018   aaron   Update   Computer Scripts / Programs   arm loss measurements
I'm checking out the data this morning, running armloss_AS_calc.py using the parameters Yuki used here. I made the following changes to the scripts (measurement script and calculator script):
• Included the 'hour' of the run in the armloss_dcrefl_* script. This way, we can run more than once a day without overwriting data.
• Changed the calculator script to loop over all iterations of locked/misaligned states, and calculate the loss for adjacent measurements.
• That is, the measurement script will make a measurement with the arm locked, then with it misaligned, and repeat that N times.
• The calculator now finds the loss for the nth iteration using *_n_locked and *_n_misaligned, and finds N separate loss measurements.
• The dark signal is also computed N times, though all of the dark measurements are made before running the arm scripts, so they could all be integrated together.
• All of these are saved in the same directory that the data was grabbed from.
I repeated the 'dark' measurements, because I need 20 files to run the script and the measurements before had the window on the scope set larger than the integration time in the script, so it was padded with bad values that were influencing the calculation. On running the script again, I'm getting negative values for the loss. I removed the beamstops from the PDs, and re-centered the beams on the PDs to repeat the YARM measurements.

14275   Tue Nov 6 15:23:48 2018   gautam   Update   IOO   IMC problematic
The IMC has been misbehaving for the last 5 hours. Why? I turned the WFS servos off. afaik, aaron was the last person to work on the IFO, so i'm not taking any further debugging steps so as to not disturb his setup.
Attachment 1: MCwonky.png

14276   Tue Nov 6 15:32:24 2018   Steve   Update   PSL   MC_Transmitted
I tried to plot a long trend of MC Transmitted today. I could not get farther back than 2017 Aug 4.
Quote: The mode cleaner was misaligned probably due to the earthquake (the drop in the MC transmitted value slightly after utc 7:38:52 as seen in the second plot). The plots show PMC transmitted and MC sum signals from 10th June 07:10:08 UTC over a duration of 17 hrs. The PMC was realigned at about 4-4:15 pm today by rana. This can be seen in the first plot.
Attachment 1: MC_Trans.png

14277   Tue Nov 6 19:02:35 2018   aaron   Update   IOO   IMC problematic
That was likely me. I had recentered the beam on the PD I'm using for the armloss measurements, and I probably moved the wrong steering mirror. The transmission from MC2 is sent to a steering mirror that directs it to the MC2 transmission QPD; the transmission from this steering mirror I direct to the armloss MC QPD (the second is what I was trying to adjust). Note: The MC2 trans QPD goes out to a cable that is labelled MC2 oplev. This confusion should be fixed. I realigned the MC and recentered the beam on the QPD. Indeed the beam on the MC2 QPD was up and left, and the lock was lost pretty quickly, possibly because the beam wasn't centered. Lock was unstable for a while, and I rebooted C1PSL once during this process because the slow machine was unresponsive.
When tweaking the alignment near MC2, take care not to bump the table, as this also changes the MC2 alignment. Once the MC was stably locked, I was able to maximize MC transmission at ~15,400 counts. I then centered the spot on the MC2 trans QPD, and transmission dropped to ~14,800 counts. After tweaking the alignment again, it was recovered to ~15,000 counts. Gautam then engaged the WFS servo and the beam was centered on the MC2 trans QPD; the transmission level dropped to ~14,900.
Attachment 1: 181106_MCTRANS.jpg

14279   Tue Nov 6 23:19:06 2018   gautam   Update   VAC   c1vac1 FAIL lights on (briefly)
Jon and I stuck an extender card into the eurocrate at 1X8 earlier today (~5pm PT), to see if the box was getting +24V DC from the Sorensen or not. Upon sticking the card in, the FAIL LEDs on all the VME cards came on. We immediately removed the extender card. Without any intervention from us, after ~1 minute, the FAIL LEDs went off again. Judging by the main volume pressure (Attachment #1) and the Vacuum MEDM screen (Attachment #2), this did not create any issues and the c1vac1 computer is still responsive. But Steve can perhaps run a check in the AM to confirm that this activity didn't break anything. Is there a reason why extender cards shouldn't be stuck into eurocrates?
Attachment 1: Screenshot_from_2018-11-06_23-18-23.png
Attachment 2: Screenshot_from_2018-11-06_23-19-26.png

14280   Wed Nov 7 05:16:16 2018   yuki   Update   Computer Scripts / Programs   arm loss measurements
Please check your data file and compare with those Johannes made last year. I think the power in your data file may have only three digits and fluctuate by about 2%, which brings a huge error. (see elog: 40m/14254)
Quote: On running the script again, I'm getting negative values for the loss.

14281   Wed Nov 7 08:32:32 2018   Steve   Update   VAC   c1vac1 FAIL lights on (briefly)...checked
The vacuum and MC are OK.
Quote: Jon and I stuck an extender card into the eurocrate at 1X8 earlier today (~5pm PT), to see if the box was getting +24V DC from the Sorensen or not. Upon sticking the card in, the FAIL LEDs on all the VME cards came on. We immediately removed the extender card. Without any intervention from us, after ~1 minute, the FAIL LEDs went off again. Judging by the main volume pressure (Attachment #1) and the Vacuum MEDM screen (Attachment #2), this did not create any issues and the c1vac1 computer is still responsive. But Steve can perhaps run a check in the AM to confirm that this activity didn't break anything. Is there a reason why extender cards shouldn't be stuck into eurocrates?
Attachment 1: Vac_MC_OK.png

14283   Wed Nov 7 19:20:53 2018   gautam   Update   Computers   Paola Battery Error
The VEA vertex laptop, paola, has a flashing orange indicator which I take to mean some kind of battery issue. When the laptop is disconnected from its AC power adaptor, it immediately shuts down. So this machine is kind of useless for its intended purpose of being a portable computer we can work with at optical tables. The actual battery diagnostics (using upower) don't report any errors.

14284   Wed Nov 7 19:42:01 2018   gautam   Update   General   IFO checkup and DRMI locking prep
Earlier today, I rebooted a few unresponsive VME crates (susaux, auxey). The IMC has been unhappy for a couple of days - the glitches in the MC suspensions are more frequent. I reset the dark offsets, minimized MCREFL by hand, and then re-centered the beam on the MC2 Trans QPD.
In this config, the IMC has been relatively stable today, although judging by the control room StripTool WFS control signal traces, the suspension glitches are still happening. Since we have to fix the attenuator issue anyways soon, we can do a touch-up on IMC WFS. I removed the DC PD used for loss measurements. I found that the AS beam path was disturbed - there is a need to change the alignment, this just makes it more work to get back to IFO locking as I have to check alignment onto the AS55 and AS110 PDs. Single arm locking worked with minimal effort - although the X arm dither alignment doesn't do the intended job of maximizing the transmission. Needs a checkup. PRMI locking (carrier resonant) was also pretty easy. Stability of the lock is good, locks hold for ~20 minutes at a time and only broke because I was mucking around. However, when the carrier is resonant, I notice a smeared scatter pattern on the ITMX camera that I don't remember from before. I wonder if the FF idea can be tested in the simpler PRMI config. After recovering these two simpler IFO configurations, I improved the cavity alignment by hand and with the ASS servos that work. Then I re-centered all the Oplev beams onto their respective QPDs and saved the alignment offsets. I briefly attemped DRMI locking, but had little success, I'm going to try a little later in the evening, so I'm leaving the IFO with the DRMI flashing about, LSC mode off. 14285   Wed Nov 7 23:07:11 2018 gautamUpdateLSCDRMI locking recovered I had some success today. I hope that the tweaks I made will allow working with the DRMI during the day as well, though it looks like the main limiting factor in lock duty cycle is angular stability of the PRC. • Since there has been some change in the light levels / in vacuum optical paths, I decided to be a bit more systematic. • Initial guess of locking gains / demod phases was what I had last year. • Then I misaligned SRM, and locked PRMI, for the sideband resonant in the PRC (but still no arm cavities, and using 1f Refl error signals). • Measured loop TFs, adjusted gains, re-enabled boosts. • Brought the SRM back into the picture. Decided to trigger SRCL loop on AS110I rather than the existing POP22I (because why should 2f1 signal buildup carry information about SRCL?). New settings saved to the configure script. Reduced MICH gain to account for the SRC cavity gain. • Re-measured loop TFs, re-adjusted gains. More analysis about the state of the loops tomorrow, but all loops have UGF ~100-120 Hz. • Ran some sensing lines - need to check my sensing matrix making script, and once I get the matrix elements, I can correct the error signal demod phasing as necessary. [Attachment #1]: Repeatable and reliable DRMI locks tonight, stability is mainly limited by angular glitches - I'm not sure yet if these are due to a suspect Oplev servo on the PRM, or if they're because of the tip-tilt PR2/PR3/SR2/SR3. [Attachment #2]: A pass at measuring the TF from SRCL error point to MICH error point via control noise re-injection. I was trying to measure down to 40 Hz, but lost the lock, and am calling it for the night. [Attachment #3]: Coherence between PRM oplev error point and beam spot motion on POP QPD. Note that the MICH actuation is not necessarily optimally de-coupled by actuating on the PRM and SRM yet (i.e. the latter two elements of the LSC output matrix are not precisely tuned yet). What is the correct way to make feedforward filters for this application? Swept-sine transfer function measurement? 
Or drive broadband noise at the SRCL error point and then do time-domain Wiener filter construction using SRCL error as the witness and MICH error as the target? Or some other technique? Does this even count as "feedforward" since the sensor is not truly "outside" the loop? Attachment 1: Screenshot_from_2018-11-07_23-05-58.png Attachment 2: SRCL2MICH_crosscpl.pdf Attachment 3: PRCangularCoh_rot.pdf 14286   Fri Nov 9 15:00:56 2018 gautamUpdateIOONo IFO beam as TT1 UL hijacked for REFL55 check This problem resurfaced. I'm doing the debugging. 6:30pm - "Solved" using the same procedure of stepping through the whitening gains with a small (10 DAC cts pk) signal applied. Simply stepping through the gains with input grounded doesn't seem to do the trick. Attachment 1: REFL55_wht_chk.png 14288   Sat Nov 10 17:32:33 2018 gautamUpdateLSCNulling MICH->PRCL and MICH->SRCL With the DRMI locked, I drove a line in MICH using the sensing matrix infrastructure. Then I looked at the error points of MICH, PRCL and SRCL. Initially, the sensing line oscillator output matrix for MICH was set to drive only the BS. Subsequently, I changed the --> PRM and --> SRM matrix elements until the line height in the PRCL and SRCL error signals was minimized (i.e. the change to PRCL and SRCL due to the BS moving, which is a geometric effect, is cancelled by applying the opposite actuation to the PRM/SRM respectively. Then I transferred these to the LSC output matrix (old numbers in brackets). MICH--> PRM = -0.335 (-0.2655) MICH--> SRM = -0.35 (+0.25) I then measured the loop TFs - all 3 loops had UGFs around 100 Hz, coinciding with the peaks of the phase bubbles. I also ran some sensing lines and did a sensing matrix measurement, Attachment #1 - looks similar to what I have obtained in the past, although the relative angles between the DoFs makes no sense to me. I guess the AS55 demod phase can be tuned up a bit. The demodulation was done offline - I mixed the time series of the actuator and sensor signals with a "local oscillator" cosine wave - but instead of using the entire 5 minute time series and low-passing the mixer output, I divvied up the data into 5 second chunks, windowed with a Tukey window, and have plotted the mean value of the resulting mixer output. Unrelated to this work: I re-aligned the PMC on the PSL table, mostly in Pitch. Attachment 1: sensMat_2018-11-10.pdf 14289   Sat Nov 10 17:40:00 2018 aaronUpdateIOOIMC problematic Gautam was doing some DRMI locking, so I replaced the photodiode at the AS port to begin loss measurements again. I increased the resolution on the scope by selecting Average (512) mode. I was a bit confused by this, since Yuki was correct that I had only 4 digits recorded over ethernet, which made me think this was an i/o setting. However the sample acquisition setting was the only thing I could find on the tektronix scope or in its manual about improving vertical resolution. This didn't change the saved file, but I found the more extensive programming manual for the scope, which confirms that using average mode does increase the resolution... from 9 to 14 bits! I'm not even getting that many. There's another setting for DATa:WIDth, that is the number of bytes per data point transferred from the scope. I tried using the *.25 scope instead, no better results. Changing the vertical resolution directly doesn't change this either. I've also tried changing most of the ethernet settings. 
I don't think it's something on the scripts side, because I'm using the same scripts that apparently generated the most recent of Johannes' and Yuki's files; I did look through for eg tds3014b.py, and didn't see the resolution explicitly set. Indeed, I get 7 bits of resolution as that function specifies, but most of them aren't filled by the scope. This makes me think the problem is on the scope settings. 14290   Mon Nov 12 13:53:20 2018 ranaUpdateIOOloss measurement: oscope vs CDS DAQ sstop using the ssscope, and just put the ssssignal into the DAQ with sssssome whitening. You'll get 16 bitsśšß. Quote: I increased the resolution on the scope by selecting Average (512) mode. I was a bit confused by this, since Yuki was correct that I had only 4 digits recorded over ethernet, which made me think this was an i/o setting. However the sample acquisition setting was the only thing I could find on the tektronix scope or in its manual about improving vertical resolution. This didn't change the saved file, but I found the more extensive programming manual for the scope, which confirms that using average mode does increase the resolution... from 9 to 14 bits! I'm not even getting that many. 14291   Tue Nov 13 16:15:01 2018 SteveUpdateVACrga scan pd81 at day 119 Attachment 1: pd81-d119.png Attachment 2: pd81-560Hz-d119.png 14292   Tue Nov 13 18:09:24 2018 gautamUpdateLSCInvestigation of SRCL-->MICH coupling Summary: I've been looking into the cross-coupling from the SRCL loop control point to the Michelson error point. [Attachment #1] - Swept sine measurement of transfer function from SRCL_OUT_DQ to MICH_IN1_DQ. Details below. [Attachment #2] - Attempt to measure time variation of coupling from SRCL control point to MICH error point. Details below. [Attachment #3] - Histogram of the data in Attachment #2. [Attachment #4] - Spectrogram of the duration in which data in #2 and #3 were collected, to investigate the occurrance of fast glitches. Hypothesis: (so that people can correct me where I'm wrong - 40m tests are on DRMI so "MICH" in this discussion would be "DARM" when considering the sites) • SRM motion creates noise in MICH. • The SRM motion may be naively decomposed into two contributions - • Category #1: "sensing noise induced" motion, which comes about because of the SRCL control loop moving the SRM due to shot noise (or any other sensing noise) of the SRCL PDH photodiode, and • Category #2: all other SRM motion. • We'd like to cancel the former contribution from DARM. • The idea is to measure the transfer function from SRCL control point to the MICH error point. Knowing this, we can design a filter so that the SRCL control signal is filtered and summed in at the MICH error point to null the SRCL coupling to MICH. • Caveats/questions: • Introducing this extra loop actually increases the coupling of the "all other" category of SRM motion to MICH. But the hypothesis is that the MICH noise at low frequencies, which is where this increased coupling is expected to matter, will be dominated by seismic/other noise contributions, and so we are not actually degrading the MICH sensitivity. • Knowing the nosie-budget for MICH and SRCL, can we AC couple the feedforward loop such that we are only doing stuff at frequencies where Category #1 is the dominant SRCL noise? Measurement details and next steps: Attachment #1 • This measurement was done using DTT swept sine. • Plotted TF is from SRCL_OUT to MICH_IN, so the SRCL loop shape shouldn't matter. 
• I expect the pendulum TF of the SRM to describe this shape - I've overlaid a 1/f^2 shape, it's not quite a fit, and I think the phase profile is due to a delay, but I didn't fit this. • I had to average at each datapoint for ~10 seconds to get coherence >0.9. • The whole measurement takes a few minutes. Attachments #2 and #3 • With the DRMI locked, I drove a sine wave at 83.13 Hz at the SRCL error point using awggui. • I ramped up the amplitude till I could see this line with an SNR of ~10 in the MICH error signal. • Then I downloaded ~10mins of data, demodulated it digitally, and low-passed the mixer output. • I had to use a pretty low corner frequency (0.1 Hz, second order butterworth) on the LPF, as otherwise, the data was too noisy. • Even so, the observed variation seems too large - can the coupling really change by x100? • The scatter is huge - part of the problem is that there are numerous glitches while the DRMI is locked. • As discussed at the meeting today, I'll try another approach of doing multiple swept-sines and using Craig's TFplotter utility to see what scatter that yields. Attachments #2 • Spectrogram generated with 1 second time strides, for the duration in which the 83 Hz line was driven. • There are a couple of large fast glitches visible. Attachment 1: TF_sweptSineMeas.pdf Attachment 2: digitalDemod.pdf Attachment 3: digitalDemod_hist.pdf Attachment 4: DRMI_LSCspectrogram.pdf 14293   Tue Nov 13 21:53:19 2018 gautamUpdateCDSRFM errors This problem resurfaced, which I noticed when I couldn't get the single arm locks going. The fix was NOT restarting the c1rfm model, which just brought the misery of all vertex FEs crashing and the usual dance to get everything back. Restarting the sender models (i.e. c1scx and c1scy) seems to have done the trick though. Attachment 1: RFMerrors.png 14294   Wed Nov 14 14:35:38 2018 SteveUpdateALARM emergency calling list for 40m Lab It is posted at the 40m wiki with Gautam' help. Printed copies posted around doors also. 14295   Wed Nov 14 18:58:35 2018 aaronUpdateDAQNew DAC for the OMC I began moving the AA and AI chassis over to 1X1/1X2 as outlined in the elog. The chassis were mostly filled with empty cables. There was one cable attached to the output of a QPD interface board, but there was nothing attached to the input so it was clearly not in use and I disconnected it. I also attach a picture of some of the SMA connectors I had to rotate to accommodate the chassis in their new locations. Update: The chassis are installed, and the anti-imaging chassis can be seen second from the top; the anti-aliasing chassis can be seen 7th from the top. I need to breakout the SCSI on the back of the AA chassis, because ADC breakout board only has a DB36 adapter available; the other cables are occupied by the signals from the WFS dewhitening outputs. Attachment 1: 6D079592-1350-4099-864B-1F61539A623F.jpeg Attachment 2: 5868D030-0B97-43A1-BF70-B6A7F4569DFA.jpeg 14297   Thu Nov 15 10:21:07 2018 aaronUpdateIOOIMC problematic I ran a BNC from the PD on the AS table along the cable rack to a free ADC channel on the LSC whitening board. I lay the BNC on top of the other cables in the rack, so as not to disturb anything. I also was careful not to touch the other cables on the LSC whitening board when I plugged in my BNC. The PD now reads out to... a mystery channel. The mystery channel goes then to c1lsc ADC0 channels 9-16 (since the BNC goes to input 8, it should be #16). 
To find the channel, I opened the c1lsc model and found that adc0 channel 15 (0-indexed in the model) goes to a terminator. Rather than mess with the LSC model, Gautam freed up C1:ALS-BEATY_FINE_I, and I'm reading out the AS signal there. I misaligned the x-arm then re-installed the AS PO PD, using the scope to center the beam then connecting it to the BNC to (first the mystery channel, then BEATY). I turned off all the lights. I went to misalign the x-arms, but the some of the control channels are white boxed. The only working screen is on pianosa. The noise on the AS signal is much larger than that on the MC trans signal, and the DC difference for misaligned vs locked states is much less than the RMS (spectrum attached); the coherence between MC trans and AS is low. However, after estimating that for ~30ppm the locked vs misaligned states should only be ~0.3-0.4% different, and double checking that we are well above ADC and dark noise (blocked the beam, took another spectrum) and not saturating the PD, these observations started to make more sense. To make the measurement in cds, I also made the following changes to a copy opf Johannes' assess_armloss_refl.py that I placed in /opt/rtcds/caltech/c1/scripts/lossmap_scripts/armloss_cds/   : • function now takes as argument the number of averages, averaging time, channel of the AS PD, and YARM|XARM|DARK. • made the data save to my directory, in /users/aaron/40m/data/armloss/ I started taking a measurement, but quickly realized that the mode cleaner has been locked to a higher order mode for about an hour, so I spend some time moving the MC. It would repeatedly lock on the 00 mode, but the alignment must be bad because the transmission fluctuates between 300 and 1400, and the lock only lasts about 5 minutes. Attachment 1: 181115_chansDown.png Attachment 2: PD_noise.png 14298   Fri Nov 16 00:47:43 2018 gautamUpdateLSCMore DRMI characterization Summary: • More DRMI characterization was done. • I was working on trying to improve the stability of the DRMI locks as this is necessary for any serious characterization. • Today I revived the PRC angular feedforward - this was a gamechanger, the DRMI locks were much more stable. It's probably worth spending some time improving the POP LSC/ASC sensing optics/electronics looking towards the full IFO locking. • Quantitatively, the angular fluctuations as witnessed by the POP QPD is lowered by ~2x with the FF on compared to off [Attachment #1, references are with FF off, live traces are with FF on]. • The first DRMI lock I got is already running 15 mins, looking stable. • Update: Out of the ~1 hour i've tried DRMI locking tonight, >50 mins locked! • I think the filters can be retrained and this performance improved, something to work on while we are vented. • Ran sensing lines, measured loop TFs, analysis tomorrow, but I think the phasing of the 1f PDs is now okay. • MICH in AS55 Q, demod phase = -92deg, +6dB wht gain. • PRCL in REFL11 I, demod phase = +18 deg, +18dB wht gain. • SRCL in REFL55 I, demod phase = -175 deg, +18dB wht gain. • Also repeated the line in SRCL-->witness in MICH test. • At least 10 minutes of data available, but I'm still collecting since the lock is holding. • This time I drove the line at ~124 Hz with awggui, since this is more a regime where we are sensing noise dominated. Prep for this work: • Reboots of c1psl, c1iool0, c1susaux. • Removed AS port PD loss measurement PD. 
• Initial alignment procedure as usual: single arms --> PRMI locked on carrier --> DRMI I was trying to get some pics of the optics as a zeroth-level reference for the pre-vent loss with the single arms locked, but since our SL7 upgrade, the sensoray won't work anymore . I'll try fixing this during the daytime. Attachment 1: PRCff.pdf Attachment 2: DRMI_withPRCff.png 14299   Fri Nov 16 10:26:12 2018 SteveUpdateVAC single viton O-rings The 40m vacuum envelope has one large single O-ring on the OOC west side. All other doors have double O-ring with annuloses. There are 3  spacers to protect o-ring. They should not be removed! The Cryo-pump static seal to VC1 also viton. All gate valves and right angle valve plates have single viton o-ring seal. Small single viton o-rings on all optical quality viewports. Helium will permiate through these fast. Leak checking time is limited to 5-10 minutes. All other seals are copper gaskits. We have 2 manual right angle with METAL-dynamic seal [ VATRING ] as  VV1 & RV1 Attachment 1: Single-O-ring.png ELOG V3.1.3-
# Simon Kristensen

## On shrinking targets for $\mathbb{Z}^m$ actions on tori

Research output: Working paper

Authors:

• Yann Bugeaud (Université Louis Pasteur, France)
• Stephen Harrap (University of York, United Kingdom)
• S. Kristensen (Department of Mathematical Sciences)
• Sanju Velani (University of York, United Kingdom)

Abstract: Let $A$ be an $n \times m$ matrix with real entries. Consider the set $\mathbf{Bad}_A$ of $\mathbf{x} \in [0,1)^n$ for which there exists a constant $c(\mathbf{x})>0$ such that for any $\mathbf{q} \in \mathbb{Z}^m$ the distance between $\mathbf{x}$ and the point $\{A \mathbf{q}\}$ is at least $c(\mathbf{x}) |\mathbf{q}|^{-m/n}$. It is shown that the intersection of $\mathbf{Bad}_A$ with any suitably regular fractal set is of maximal Hausdorff dimension. The linear form systems investigated in this paper are natural extensions of irrational rotations of the circle. Even in the latter one-dimensional case, the results obtained are new.

Original language: English
Place of publication: Århus
Publisher: Department of Mathematical Sciences, Aarhus University
Pages: 1-12
Number of pages: 12
Publication status: Published - 2008
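For orientation (this specialization is mine, not part of the abstract): in the one-dimensional case $m = n = 1$, with $A = (\alpha)$ for an irrational $\alpha$, the set above becomes

$$\mathbf{Bad}_\alpha \;=\; \Bigl\{\, x \in [0,1) \;:\; \exists\, c(x)>0 \ \text{such that}\ \| q\alpha - x \| \,\ge\, \frac{c(x)}{|q|} \ \ \text{for all } q \in \mathbb{Z}\setminus\{0\} \,\Bigr\},$$

where $\|\cdot\|$ denotes the distance to the nearest integer; this is the shrinking-target set attached to the irrational rotation of the circle mentioned at the end of the abstract.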
# Optimization/Simplex Method, proof of unique solution This was on my homework this week. I already turned it in but the professor hasn't posted the solutions yet so I'm genuinely curious what the answer is. ## Homework Statement The LP: http://i48.tinypic.com/25svp6u.png The problem: http://i50.tinypic.com/nnnm2w.png ## Homework Equations c^{T}d = z_{N}^{T}d_{N} sorry for the non-use of latex! I tried to use the ## ## and $environments but couldn't get it to work! ## The Attempt at a Solution Not sure how standard the notation is since we don't use a textbook, just the professor's notes. Anyway, I'm taking my first analysis course this semester so I'm getting decent with proofs. He gives us a big hint in the problem statement. So suppose there exists an i in the index set N such that ##z_{Ni} = 0##. Does this mean that we just let all the remaining components of z_N be strictly positive? I assumed we want to show that if z_N >= 0, we no longer have a unique solution so we require z_N > 0. Optimality occurs when z_N >= 0, but I interpreted the problem as wanting us to show that we need a stronger condition for uniqueness From this, I said that d_N must necessarily be the zero vector since we use the convention that: d_Nj = 1, if there exists a j0 such that z_Nj0 < 0 d_Nj = 0, otherwise Since we clearly have that z_N is non-negative, d_N must be zero. We have the relation (c^T)d = (z_N)^T*d_N (since d_N = 0) Then this means that we can find a feasible direction d such that (c^T)d = 0, ie we can step down to x' = x + td. The objective function value at x' is: (c^T)(x' + td) = (c^T)x' + t*(c^T)d = (c^T)x' + 0 which is a contradiction since we know x is a unique solution but we find that x' is also a solution. My proof-writing is sketchy but I think I understood the main jist. The problem is I don't get how the hint that we should assume that a single component of z_N is zero helps us. How is this any different than if I assumed for contradiction that z_N was strictly positive for all i? Because for strictly positive z_N, we'd still have d_N = 0. Last edited: ## Answers and Replies Ray Vickson Science Advisor Homework Helper Dearly Missed This was on my homework this week. I already turned it in but the professor hasn't posted the solutions yet so I'm genuinely curious what the answer is. ## Homework Statement The LP: http://i48.tinypic.com/25svp6u.png The problem: http://i50.tinypic.com/nnnm2w.png ## Homework Equations c^{T}d = z_{N}^{T}d_{N} sorry for the non-use of latex! I tried to use the ## ## and [itex] environments but couldn't get it to work! ## The Attempt at a Solution Not sure how standard the notation is since we don't use a textbook, just the professor's notes. Anyway, I'm taking my first analysis course this semester so I'm getting decent with proofs. He gives us a big hint in the problem statement. So suppose there exists an i in the index set N such that ##z_{Ni} = 0##. Does this mean that we just let all the remaining components of z_N be strictly positive? I assumed we want to show that if z_N >= 0, we no longer have a unique solution so we require z_N > 0. Optimality occurs when z_N >= 0, but I interpreted the problem as wanting us to show that we need a stronger condition for uniqueness From this, I said that d_N must necessarily be the zero vector since we use the convention that: d_Nj = 1, if there exists a j0 such that z_Nj0 < 0 d_Nj = 0, otherwise Since we clearly have that z_N is non-negative, d_N must be zero. 
We have the relation (c^T)d = (z_N)^T*d_N (since d_N = 0) Then this means that we can find a feasible direction d such that (c^T)d = 0, ie we can step down to x' = x + td. The objective function value at x' is: (c^T)(x' + td) = (c^T)x' + t*(c^T)d = (c^T)x' + 0 which is a contradiction since we know x is a unique solution but we find that x' is also a solution. My proof-writing is sketchy but I think I understood the main jist. The problem is I don't get how the hint that we should assume that a single component of z_N is zero helps us. How is this any different than if I assumed for contradiction that z_N was strictly positive for all i? Because for strictly positive z_N, we'd still have d_N = 0. The hint did not say zNi= 0 for a single i; it said zNi = 0 for some i, and that could mean one or two or three or ... . As to LaTeX: if you use "[t e x] your stuff [/t e x]" (but remove the spaces inside [ and ]) you will get a displayed result, like this: $$(c^T)(x' + td) = (c^T)x' + t*(c^T)d = (c^T)x' + 0$$ If, instead, you say "[i t e x] your stuff [/ i t e x]" (again, no spaces) you will get it in-line, like this: [itex](c^T)(x' + td) = (c^T)x' + t*(c^T)d = (c^T)x' + 0$ RGV Last edited: Yes, but I'm not entirely seeing how that initial supposition will help. Is it okay to suppose there exists an $i_{0} \in N$ such that $z_{Ni_{0}} = 0$ and that for all $i \ne i_{0}$ $z_{Ni} > 0$? Like I said, I thought the point of the proof was to show that $z_{N} \ge 0$ is not a strong enough condition to guarantee a unique solution. For what it's worth, the hint gives so much a way that I understand the direction I should be going but not how to make that initial leap. [[thanks for the tex tip btw. I have no idea why it wasn't working the first time.]] Ray Vickson Homework Helper Dearly Missed Yes, but I'm not entirely seeing how that initial supposition will help. Is it okay to suppose there exists an $i_{0} \in N$ such that $z_{Ni_{0}} = 0$ and that for all $i \ne i_{0}$ $z_{Ni} > 0$? Like I said, I thought the point of the proof was to show that $z_{N} \ge 0$ is not a strong enough condition to guarantee a unique solution. For what it's worth, the hint gives so much a way that I understand the direction I should be going but not how to make that initial leap. [[thanks for the tex tip btw. I have no idea why it wasn't working the first time.]] OK. Let me change the notation a bit, to make it easier to type. By re-labelling the variables if necessary, assume that x1, ..., xm are basic and xm+1,..., xn are non-basic in the optimal solution. However, let me re-name the non-basic variables as u1, ..., ur, where r = n-m. Thus, u = xN. The final form of the constraint equations are $$x_B = v - T u,$$ where $$v = B^{-1}b, \text{ and } T = B^{-1}N.$$ We have $$Z = \sum c_j x_j = V + \sum_j z_j u_j, \\ \text{where } V = c_B v = c_B B^{-1} b, \text{ and } z = c_N - c_B B^{-1}N = c_N - c_B T.$$ (For simplicity I take cB and cN to be row vectors.) In detail, we have: $$x_1 = v_1 - t_{11} u_1 - \cdots - t_{1 r} u_r\\ \vdots \\ x_m = v_m - t_{m1} u_1 - \cdots - t_{m r} u_r\\ Z = V + z_1 u_1 + \cdots + z_r u_r$$ We are assuming two things: (i) this solution is non-degenerate, so that vi > 0 for i = 1,2, ...,m; and (ii) it is optimal. Therefore, we need all zj ≥ 0 (because if some zp < 0 we can increase the variable up a bit and make the value of Z smaller (that is, we can reduce V). 
The non-degeneracy assumption is crucial here; without it (at a degenerate solution) we might have an optimal solution but a non-optimal basis. If some zp = 0, we can increase up a bit (i.e., by a truly positive amount) but with no change in Z, because the variable up does not actually enter into the expression for Z in this basis. So, if some component of z is zero we can find an optimal solution that is different from the one we started with, and that would contradict the uniqueness. Note: in applied modelling, we often use this type of argument to tell whether or not a solution is unique: if all the "reduced costs" are non-zero, the solution is unique, but if some reduced costs vanish there are alternative optima (at least in the non-degenerate case). RGV
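A tiny numerical illustration of that last point (my own toy example, not from the thread): with a vanishing reduced cost at a non-degenerate optimal basis, a whole edge of solutions shares the optimal objective value, so the optimum is not unique.

```python
import numpy as np

# Toy LP in standard form:  min  c^T x   s.t.  x1 + x2 = 1,  x >= 0.
c = np.array([1.0, 1.0])

# Take the basis {x1}: B = [1], N = [1], c_B = c_N = 1, so the reduced cost of the
# nonbasic variable x2 is  z_N = c_N - c_B B^{-1} N = 1 - 1*1 = 0, while v = B^{-1} b = 1 > 0
# (non-degenerate). Moving along the corresponding edge direction keeps Z constant:
x = np.array([1.0, 0.0])        # basic feasible solution with x1 basic
d = np.array([-1.0, 1.0])       # increase u = x2 by t, decrease x1 by t

for t in (0.0, 0.3, 1.0):
    print(x + t * d, c @ (x + t * d))   # objective stays 1.0 for every t in [0, 1]
```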
What is the remainder obtained when 63^25 is divided by 16?

Question (29 Oct 2018): What is the remainder obtained when $$63^{25}$$ is divided by 16?

A. -1
B. 0
C. 5
D. 10
E. 15

pandeyashwin (29 Oct 2018): $$63^{25} = (64-1)^{25} \equiv (-1)^{25} = -1 \pmod{16}$$, so the remainder is 16 - 1 = 15.

AnupamKT (07 Nov 2018): Please explain why we cannot consider option A = -1. As I see it, this question should have two answers, -1 and 15 (negative and positive remainders).

Reply (07 Nov 2018): $$63^{25}$$ can be written as $$(64-1)^{25}$$, and 64 is divisible by 16, so what is left is $$(-1)^{25}$$, which is -1. Always rewrite the numerator so that the bulk of it is divisible by the denominator. But a remainder can never be negative, so add the divisor: -1 + 16 = 15. Therefore E.

Reply (07 Nov 2018): The remainder is never negative. If you ever get a negative remainder, just add the divisor. Example: what is the remainder when 31 is divided by 16? 32 is completely divisible by 16, so 31 leaves a "negative remainder" of -1; adding the divisor gives -1 + 16 = 15.

Reply (07 Nov 2018): The units digit of powers of 3 cycles through 3, 9, 7, 1 and of powers of 7 through 7, 9, 3, 1. Now $$63^{25}/16 = (3 \cdot 3 \cdot 7)^{25}/16 \Rightarrow (3 \cdot 3 \cdot 7)/16 \Rightarrow 63/16$$, giving a remainder of 15.
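As a quick sanity check outside the thread, modular exponentiation confirms the answer (sketch in Python):

```python
# 63 ≡ -1 (mod 16), so 63**25 ≡ (-1)**25 ≡ -1 ≡ 15 (mod 16).
print(pow(63, 25, 16))   # -> 15, option E
print((-1) % 16)         # -> 15; Python already reports the non-negative remainder
```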
# How to calculate the output of this neural network?

What is the output value of the network for each of these inputs, and why? (A linear activation function is fine.)

[2, 3] [-1, 2] [1, 0] [3, 4]

My main question is how you take the 'backwards' directed paths into account.

The neural network in the image is a recurrent neural network (RNN). Because of the connection leading backward from h10 to h01, h10 has to be a "memory node" (mn), meaning it can store its value from the previous input. (The answer originally illustrated the basic functionality of an RNN with an animation, not reproduced here.) In the beginning, the storage of the mn is initialized with a value, probably 0. Now the first input is fed into the network:

• i0 = 2
• i1 = 3
• h00 = (i0 * 0.4) = 0.8
• h01 = (i1 * -0.9) + ("the stored value of h10" * 1.2) = -2.7 ("the stored value of h10" is 0 on the first run.)
• h10 = (h00 * 0.85) + (h01 * -0.2) = 1.22
• out = (h10 * 0.3) + (h01 * 0.1) = 0.096

Now you can feed the next input through the network, this time using 1.22 as "the stored value of h10", and so on. You can also add an activation function as you would for any other NN.

• Why is out calculated later than h10? Technically, out and h10 are on the same layer, just like h00 and h01. What is the algorithm for the calculation order? – andras Mar 31 '18 at 13:23
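A small script (my addition, not part of the original answer) that reproduces the calculation above and carries the stored h10 value forward through all four inputs:

```python
# Forward pass of the small RNN from the answer, with a linear activation.
# The weights are the ones quoted above; h10's value is carried to the next step.
inputs = [(2, 3), (-1, 2), (1, 0), (3, 4)]

h10_prev = 0.0                      # memory node initialised to 0
for i0, i1 in inputs:
    h00 = 0.4 * i0
    h01 = -0.9 * i1 + 1.2 * h10_prev
    h10 = 0.85 * h00 + (-0.2) * h01
    out = 0.3 * h10 + 0.1 * h01
    print(f"in=({i0},{i1})  h10={h10:.4f}  out={out:.4f}")   # first step gives out = 0.096
    h10_prev = h10                  # store h10 for the next input
```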
I haven’t had much luck tackling the pendulum balancing problem with Q-learning. There are many states in the system, and the output (motor speed) should really be a continuous variable, so it didn’t work well with a Q-learner that spits out discrete speeds, or even one that generates discretised faster / stay / slower actions. The problem is that by using conventional gradient descent methods common in deep learning, we are trying to ‘solve’ the weights of a neural network in such a way that the network learns how the transfer function of a system works, i.e., predict the output of a system given some input, rather than trying to find a strategy. While this may certainly be useful in modelling how physical systems like the pendulum work, in the sense that it predicts the orientation/velocities at the next time step given the variables of the current state, it may not be able to come up with a strategy for how to get to a certain desired state, especially when the current state (say the pendulum is completely inverted) is so far away from the desired state, a full 180 degrees. Not to mention that it’s got to stay up there once it swings all the way up.

I’ve begun to question the idea of using SGD / back-propagation to train networks to achieve tasks, as there may be too many local minima. In addition, I think the Q-learning algorithm described in the atari paper essentially hacks the memory element into a feed-forward network system. The main contribution from that paper is essentially the use of convolutional networks processing video game images, and having the back-prop work all the way from expected reward down to the features extracted during convolution, so it can run very fast on GPUs. I am thinking the more correct approach is to bite the bullet and use recurrent neural networks that can incorporate feedback and memory elements for task-based training. This may make it very difficult to train the recurrent networks using SGD / backprop. We don’t even really know what desired output we want the network to generate given the current input state.

(figure omitted; source: wikipedia)

During the evenings, I have been reading up on using genetic algorithm approaches to train neural networks. The field seems to be studied in great depth at the University of Texas and in the Swiss AI Lab‘s work on RNNs, and strangely I haven’t heard much about this work at Stanford or the University of Toronto (my alma mater!) or NYU with all the buzz about deep learning. A good introduction to the field I found was in these slides at the U of Texas NN lab. Digging through some older thesis publications, I found John Gomez’s thesis extremely helpful and well written, and as someone who has not implemented a Genetic Algorithm before, the pseudocode in the thesis brought to light how this stuff works.

I realised that these methods may be superior to Q-learning or backprop-based training for finding a strategy that requires a sequence of multiple actions. Even the simplest neuroevolution algorithm (‘conventional neuroevolution’), the plain vanilla algorithm with no extra bells and whistles, may be able to solve many interesting problems. What this simple algorithm does is:

1. Take a certain neural network architecture, feed-forward or even recurrent, and make N (say 100) of these neural nets, each randomised with different weights.
2.
Try to accomplish the task at hand (balance the pendulum), for each of the 100 networks, and then score how well each performs the task (assign the score of say the average angle squared during the life of the task, where zero degree is upright) 3. Sort the networks by scores achieved, and keep the top 20 networks, and throw the rest away.  For each network, dump all the weights and biases into a one-dimensional array of floats, in a consistent orderly way, and call that the chromosome of the network. 4. Generate 80 new chromosomes by mixing and matching the top 20 network’s chromosomes by performing simple random crossover and mutation.  The theory is that the ‘good stuff’ of how stuff should be done should be embedded in the chromosomes of the winners, and by combining the weights of the winners to generate new chromosomes, the hope is the ‘offsprings’ will also be good, or better than the parents.  From these 80 new chromosomes, generate 80 new networks. 5. Repeat tasks (2) -> (4) until the desired score of the best network is somewhat satisfactory. crossover and mutation example That’s the gist of the CNE algorithm.  It solves a big problem where we can easily define and score the goal, and not worry about backprop and what target values to train.  We can simply use the above algorithm to generate a motor speed for each time step and supply the network with current orientation/velocity states, and watch it search the solution space to find one that is satisfactory.  I like this method more in systems that contains many local maximums, so for SGD you just end up with the local maximum, while the NE algorithms has a better chance to find a much better local, or even global maximum.  In addition, it is relatively simple to incorporate recurrent neural network structures using CNE. There are still many problems with CNE in the literature, where the algorithm actually loses diversity and converges also to a local maximum solution, and much of the work in this subfield is to find more advanced algorithms (in the thesis above, ESP and NEAT are named). I will try to hack convnet.js and implement a simple CNE trainer that can train a network to achieve some score, and put it to test later to see if the pendulum problem may be attacked using these methods. ### Citation If you find this work useful, please cite it as: @article{ha2015ne,   title   = "neuroevolution algorithms",   author  = "Ha, David",   journal = "blog.otoro.net",   year    = "2015",   url     = "http://blog.otoro.net/2015/01/27/neuroevolution-algorithms/" }
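A minimal sketch of those five steps, assuming the network weights have already been flattened into a single vector and that `score_fn` runs the balancing task for one chromosome and returns a fitness (higher is better); every name and constant here is illustrative, not from the post:

```python
import numpy as np

def evolve(score_fn, n_weights, pop_size=100, n_keep=20, n_gen=50,
           mut_rate=0.1, mut_scale=0.5):
    """Plain 'conventional neuroevolution' loop as described in the steps above."""
    pop = [np.random.randn(n_weights) for _ in range(pop_size)]
    for gen in range(n_gen):
        scores = np.array([score_fn(w) for w in pop])
        order = np.argsort(scores)[::-1]            # best chromosomes first
        elite = [pop[i] for i in order[:n_keep]]    # keep the top n_keep, discard the rest
        children = []
        while len(children) < pop_size - n_keep:
            a, b = (elite[np.random.randint(n_keep)] for _ in range(2))
            cut = np.random.randint(1, n_weights)           # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = np.random.rand(n_weights) < mut_rate     # random mutation
            child[mask] += mut_scale * np.random.randn(mask.sum())
            children.append(child)
        pop = elite + children
    return max(pop, key=score_fn)
```

Plugging in a `score_fn` that simulates the pendulum for a fixed time and returns, say, the negative mean squared angle would reproduce the setup described above.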
# Moogsoft Docs

## Configure the Observe LAM

The Observe LAM posts data to Moogsoft AIOps when an event occurs in Moogsoft Observe. You can install a basic Observe integration via the UI. See Observe for integration steps.

Configure the Observe LAM if you want to configure custom properties, set up high availability or configure advanced options that are not available in the UI integration.

## Before You Begin

Before you set up your Observe LAM, ensure you have met the following requirements:

• If you are using an on-prem version of Moogsoft AIOps, you have configured it with a valid SSL certificate.
• Your Observe system can make requests to external endpoints over port 443.

The LAM configuration file can also point to a custom logging configuration, for example:

```
log_config:
{
    configuration_file: "$MOOGSOFT_HOME/config/logging/custom.log.json"
}
```

## Configure for High Availability

Configure the Observe LAM for high availability if required. See Integrations HA Configuration for details.

## Configure LAMbot Processing

The Observe LAMbot processes and filters events before sending them to the Message Bus. You can customize or bypass this processing if required. You can also load JavaScript files into the LAMbot and execute them. See LAMbot Configuration for more information.

An example Observe LAM filter configuration is shown below.

```
filter:
{
    presend: "ObserveLam.js",
    modules: [ "CommonUtils.js" ]
}
```

## Map LAM Properties

You can configure custom mappings in the Observe LAMbot. See Introduction to Integrations for details.

By default, Observe event properties map to Moogsoft AIOps Observe LAM properties as follows:

| Observe Event Property | Observe LAM Event Property |
| --- | --- |
| agent | $agent |
| agent_location | $agent_location |
| agent_time | $agent_time |
| class | $class |
| description | $description |
| external_id | $external_id |
| manager | $manager |
| severity | $severity |
| signature | $signature |
| source | $source |
| source_id | $source_id |
| type | $type |

## Start and Stop the LAM

Restart the Observe LAM to activate any changes you make to the configuration file or LAMbot. The LAM service name is observelamd. See Control Moogsoft AIOps Processes for the commands to start, stop and restart the LAM.
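Returning to the Map LAM Properties table above: purely to illustrate the default mapping (this is not Moogsoft code), it can be thought of as a simple dictionary applied to each incoming event. The sample event values below are made up:

```python
# Default Observe -> Moogsoft AIOps Observe LAM field mapping, written out as data.
FIELD_MAP = {
    "agent": "$agent", "agent_location": "$agent_location", "agent_time": "$agent_time",
    "class": "$class", "description": "$description", "external_id": "$external_id",
    "manager": "$manager", "severity": "$severity", "signature": "$signature",
    "source": "$source", "source_id": "$source_id", "type": "$type",
}

# Hypothetical incoming Observe event (values invented for illustration).
observe_event = {"source": "host-01", "severity": 3, "description": "disk latency high"}

moog_event = {FIELD_MAP[k]: v for k, v in observe_event.items() if k in FIELD_MAP}
print(moog_event)   # {'$source': 'host-01', '$severity': 3, '$description': 'disk latency high'}
```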
Our task was to develop an algorithm for automatic road detection in radar images. The challenge is that radar images differ from optical ones. In particular, in the case of synthetic aperture radar (SAR), the image formation process is accomplished via coherent processing of the received signals backscattered from the Earth's surface. As a result, multiplicative speckle noise appears in SAR images. It complicates the analysis of the image content and the extraction of the features of interest.

The main idea of the road detection is that a road has some principal features. In particular:

– the length of the road is much greater than its width
– roads appear as linear segments with different inclinations in radar images
– the pixel intensity gradients are almost constant along the road, while changing sharply across the road edges

We have used the above ideas in the developed algorithm. Let's consider an example of a SAR image with roads. We can observe some road segments together with other objects. In order to suppress the noise, we have applied the bilateral filter (https://app.box.com/s/msuvlgq15kn1v%20bkbu7z0xnck5joaw8g4). The filter is based on both spatial and range weighting of the image pixels. The filter weights are determined as

$$w(i,j,k,l) = \exp\!\left(-\frac{(i-k)^2+(j-l)^2}{2\sigma_{d}^{2}}\right)\exp\!\left(-\frac{(I(i,j)-I(k,l))^2}{2\sigma_{r}^{2}}\right),$$

where the first term corresponds to the spatial weighting with variance $\sigma_{d}^{2}$ and the second term is related to the intensity range weighting with variance $\sigma_{r}^{2}$. Thus, the filtered image can be calculated as follows:

$$I_{b}(i,j) = \frac{\sum_{k=1}^{N_{w}}\sum_{l=1}^{N_{w}} I(k,l)\, w(i,j,k,l)}{\sum_{k=1}^{N_{w}}\sum_{l=1}^{N_{w}} w(i,j,k,l)}.$$

Fig. 2 shows an example of the bilateral filtering:

Fig. 2. Radar image after bilateral filtering (image after filtering and two patches)

Comparing the two patches, we can clearly see that the road edges are preserved while the high-frequency noise has been suppressed. In order to catch the road edges we utilized the stroke width transform (SWT) algorithm. The original method was developed for text detection in natural scenes in optical images (link to the original paper). In order to construct the SWT image, we need the gradient direction image and the corresponding Canny edge map. Fig. 3 illustrates all the calculated images:

Fig. 3. Construction of SWT image (a – gradient direction image, b – Canny edge map, c – SWT image)

In order to additionally increase the detection rate, we have integrated a contour detector into the road detection algorithm. A specially developed image gradient analysis procedure was applied to each of the found contours. Thus, a fusion of the SWT and contour detectors provided the expected result.

Fig. 4. Example of extracted contours and detected road segments

In Fig. 4, the top picture illustrates the detected contours and the bottom picture contains the automatically detected road segments.

We presented the above ideas at the International Radar Symposium (IRS-2015) in Dresden and showed how to use the detected roads for the estimation of moving target parameters: "Multi-look SAR processing with road location and moving target parameters estimation".
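A direct (unoptimised) Python rendering of the bilateral filter defined by the two weighting terms above; the window size and the two variances are arbitrary illustration values, not the ones used for the SAR images:

```python
import numpy as np

def bilateral_filter(img, half_window=3, sigma_d=2.0, sigma_r=0.1):
    """Bilateral filter computed straight from the weight formula (slow, for illustration)."""
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            num, den = 0.0, 0.0
            for k in range(max(0, i - half_window), min(rows, i + half_window + 1)):
                for l in range(max(0, j - half_window), min(cols, j + half_window + 1)):
                    # Spatial weight times intensity-range weight, as in the equation above.
                    w = np.exp(-((i - k) ** 2 + (j - l) ** 2) / (2 * sigma_d ** 2)) \
                        * np.exp(-((img[i, j] - img[k, l]) ** 2) / (2 * sigma_r ** 2))
                    num += img[k, l] * w
                    den += w
            out[i, j] = num / den
    return out

# Example: smooth a small noisy patch while keeping a sharp edge.
patch = np.hstack([np.zeros((8, 4)), np.ones((8, 4))]) + 0.05 * np.random.randn(8, 8)
print(bilateral_filter(patch).round(2))
```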
# Cannot run TexLive Manager for TeXLive 2008 I have simultaneous implementations of TL 2008, TL 2010 and TL 2011 on my Windows XP machine. However after installing TL 2011, I find that I cannot run the TeXLive Manager for 2008. The path for the TL 2008 manager is still correct: ``````C:\>"C:\Program Files\texlive\2008\bin\win32\tlmgr.bat" gui `````` A dos window pops up and gives me this message: ``````Loading local TeX Live Database This may take some time, please wait! TeXLive::TLUtils::setup_programs (w32) failed at C:/texlive/2011/tlpkg/TeXLive/TLUtils.pm line 2192. C:\texlive\2011\tlpkg\installer\wget\wget.exe --version failed (status 256): Output is: wget: WGETRC points to C:/texlive/2011/tlpkg/installer/wgetrc, which doesn't exist. Couldn't set up the necessary programs. Installation of packages is not supported. Please report to [email protected]. Continuing anyway ... Completed. `````` Any suggestions as to how to correct this behaviour? Updated: Contents of tlmgr.bat ``````@echo off rem tl-w32-starter.bat rem universal script starter, batch file part rem this program calls the tl-w32-wrapper.texlua setlocal set ownpath=%~dp0% texlua "%ownpath%tl-w32-wrapper.texlua" "%~dpn0" %* endlocal `````` - What does `tlmgr.bat` look like? It seems to be pointing to your `/texlive/2011` folder rather than `/texlive/2008`. –  Werner Dec 27 '11 at 8:33 @Werner Thanks. I've included its contents in the post. –  yCalleecharan Dec 27 '11 at 8:41 Does it fix things if you add the absolute path to `texlua`? That is, use `C:\Program Files\texlive\2008\bin\win32\texlua` instead of just `texlua`. –  Werner Dec 27 '11 at 8:47 @Werner. Yes perfect! it works. Please convert your comment into an answer here and I will accept it. –  yCalleecharan Dec 27 '11 at 8:58 add comment ## 2 Answers Since you are running multiple versions of TeXLive, executables from the most recent installed version will be used as default. Providing an absolute path to the specific TeX Live distribution should fix your problem. That is, direct `texlua` to actually use ```C:\Program Files\texlive\2008\bin\win32\texlua ``` in your `tlmgr.bat` for TeX Live 2008. - Thanks again and 1 vote up. –  yCalleecharan Dec 27 '11 at 9:10 add comment It should be quite obvious from the error message that your system is set up to use TL'11 and not TL'08. The suggestion given in another answer to "fix" `tlmgr.bat` by using the absolute paths to `texlua` does not really solve the problem -- `tlmgr` may still need some other utilities (e.g., `kpsewhich`) and will still end up using the wrong one. It pains me to see such advice as an accepted answer. Altering the original files is never a good idea and should not be encouraged except as the last resort. The correct solution is to set up the `PATH` variable, so that the right TL utilities are found. You can do that either system-wide, or per command interpreter (`cmd.exe`) session, or with some wrappers, or whatever suits you best. The assumption is that users wishing to manage multiple versions of TL already know their platform well enough to properly switch between different TL versions. - add comment
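Following the second answer, a sketch of the PATH-based alternative for a single `cmd.exe` session (assuming the default install locations quoted in the question): prepend the TL 2008 binaries to `PATH` before calling `tlmgr`, instead of editing `tlmgr.bat`.

```
set "PATH=C:\Program Files\texlive\2008\bin\win32;%PATH%"
tlmgr gui
```

With this, `tlmgr`, `texlua`, `kpsewhich` and the other TL 2008 utilities are all found first for that session, while the system-wide default (TL 2011) is left untouched.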
If you want to convert a decimal to a percent, multiply it by 100; one percent is equal to one hundredth, 1% = 1/100. Going the other way, a percent becomes a fraction by dividing by 100 and reducing: for example, 56% = 56/100, and with gcd = 4 this reduces to 14/25, so 56% = 56/100 = 14/25.

Converting a fraction to a percent is one of the simplest math operations you can do:

Step #1: Multiply the fraction by 100 (e.g. for 3/5: 3/5 × 100 = 3/5 × 100/1 = 300/5).
Step #2: Reduce when possible (300/5 = 60) and add the percent sign: 3/5 = 60%.

Equivalently, divide the numerator by the denominator to get a decimal and then multiply by 100 — the formula is numerator/denominator × 100. Example 1: (1/2) × 100 = 50%. The same works for 3/4 = 75% and 7/8 = 87.5%. For a simple fraction whose denominator easily multiplies up to 100, you can instead rewrite it as an equivalent fraction with denominator 100 and read the percent off directly, e.g. 1/2 = 50/100 = 50%.

A percent is really just a special way of expressing a fraction as a number out of 100. For instance, there are four suits in a deck of cards — hearts, diamonds, clubs, and spades — so the probability of randomly choosing a heart from a shuffled deck is 1/4 = 25/100 = 25%.

Percents also convert back the other way. To convert a percent to a fraction, write the percent over 100 and reduce to lowest terms; if the percent is not a whole number, multiply top and bottom by 10 for every digit after the decimal point (12.5% = 12.5/100 = 125/1000 = 1/8). To convert a percent to a decimal, just move the decimal point two places to the left.

Mixed numbers need one extra step: first convert the mixed fraction to an improper fraction, then proceed as above. For example, 4 ⅔ = 14/3, and 14/3 × 100 = 1400/3 ≈ 466.67%.
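As a programmatic check of the rules above (a sketch using Python's standard fractions module):

```python
from fractions import Fraction

def to_percent(frac):
    """Multiply a fraction by 100 to express it as a percent."""
    return float(frac * 100)

print(to_percent(Fraction(3, 5)))        # 60.0      -> 3/5 = 60%
print(to_percent(Fraction(7, 8)))        # 87.5      -> 7/8 = 87.5%
print(to_percent(4 + Fraction(2, 3)))    # 466.66... -> 4 2/3 ≈ 466.67%
print(Fraction("12.5") / 100)            # 1/8       -> 12.5% written as a fraction
```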
The remaining notes repeat the same procedures in worksheet form. Use the TAB key and the SHIFT+TAB keys, or the mouse, to move from problem to problem, and type in the equivalent percent for each fraction; for example, 3/5 × 100/1 = 300/5 = 60, so 3/5 = 60%. A percent is equal to one hundredth: 1% = 1/100. To convert a percent to a fraction, divide the percent by 100 and reduce the result; reducing with gcd = 4, for instance, gives 56% = 56/100 = 14/25. To convert a percent to a decimal instead, remove the percent sign and move the decimal point two places to the left. For a mixed fraction there is one extra step: first convert it to an improper fraction, then multiply by 100 and add the percent sign to get the percent value.
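Both remaining cases, percent to fraction and mixed fraction to percent, can be sketched the same way. The short Python example below is again purely illustrative (the forum question quoted above asks about PHP; the helper names here are invented for this sketch):

```python
from fractions import Fraction

def percent_to_fraction(percent: str) -> Fraction:
    """Divide the percent by 100; Fraction reduces to lowest terms automatically."""
    return Fraction(percent) / 100

def mixed_to_percent(whole: int, numerator: int, denominator: int) -> str:
    """Rewrite a mixed number as an improper fraction, then convert to a percent."""
    improper = Fraction(whole * denominator + numerator, denominator)
    return f"{float(improper * 100):g}%"

print(percent_to_fraction("56"))   # 14/25, matching 56% = 56/100 = 14/25
print(mixed_to_percent(4, 2, 3))   # 4 2/3 = 14/3, about 466.667%
```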
# Is every regular star compact metaLindelöf space compact?

A topological space $X$ is said to be star compact if whenever $\mathscr{U}$ is an open cover of $X$, there is a compact subspace $K$ of $X$ such that $X = \operatorname{St}(K,\mathscr{U})$. Star compactness is stronger than pseudocompactness and weaker than countable compactness. Is every regular star compact metaLindelöf space compact?

Here I give a proof for the question; however, I am not sure that it is right. Could you help me? Thanks for any comments.

Proof: Let $X$ be a star compact metaLindelöf space. To prove that $X$ is compact it suffices to show that $X$ is Lindelöf, since every regular star compact Lindelöf space is compact. Suppose not. Then there exists an open cover $\mathcal U$ of $X$ that contains no countable subcover. Since $X$ is metaLindelöf, $\mathcal U$ has a point-countable open refinement $\mathcal U'$. For the open cover $\mathcal U'$ of $X$ there exists a closed discrete set $D$ of $X$ such that $\operatorname{St}(D, \mathcal U') = X$. To see it, choose inductively a point $x_\alpha \notin \operatorname{St}(\{x_\beta: \beta < \alpha\}, \mathcal U')$. If $\lambda$ is the first ordinal for which this choice is impossible, then $D=\{x_\alpha: \alpha < \lambda \}$ is a closed discrete subset of $X$ and $\operatorname{St}(D, \mathcal U') = X$. It is evident that $|D| > \omega$, since otherwise $\mathcal U$ would contain a countable subcover. Let $\mathcal W=\{\operatorname{St}(x_\alpha, \mathcal U'): x_\alpha \in D\}$. Clearly, $\mathcal W$ is a new point-countable open cover of $X$ whose cardinality is greater than $\omega$. Since $X$ is star compact, there is a compact subset $K$ of $X$ such that $\operatorname{St}(K, \mathcal W) = X$. It is not difficult to see that, for each $x_\alpha \in D$, $K \cap \operatorname{St}(x_\alpha, \mathcal U') \not= \emptyset$. Pick a point $y_\alpha \in K \cap \operatorname{St}(x_\alpha, \mathcal U')$ and let $K'=\{y_\alpha: \alpha < \lambda\} \subset K$. It is clear that $|K'| > \omega$; otherwise we could conclude that $|\mathcal W| \le \omega$, contradicting $|\mathcal W| > \omega$. Finally we prove that $K'$ is a closed discrete subset of $K$. To see this, it suffices to show that for any $z \in X \setminus K'$ there is a neighborhood $U$ of $z$ such that $U \cap K'=\emptyset$. In fact, since $\mathcal W$ covers $X$, there exists a point $x_\alpha \in D$ such that $z \in \operatorname{St}(x_\alpha, \mathcal U')$. Choose an open set $U$ containing $z$ such that $y_\alpha \notin U$ and $U \subset \operatorname{St}(x_\alpha, \mathcal U')$. Clearly, $U \cap K' = \emptyset$. Now we can conclude that $K'$ is an infinite closed discrete subset of $K$. This contradicts the compactness of $K$. This completes the proof.

- I think that you need to choose $K'$ a little more carefully in order to ensure that it’s discrete: use $\mathscr{W}$ to choose $K'\subseteq K$ the same way that you used $\mathscr{U}'$ to choose $D\subseteq X$. Then for each $y\in K'$ you can choose some $\alpha(y)<\lambda$ such that $y\in\operatorname{St}(x_{\alpha(y)},\mathscr{U}')$ and know that $K'\cap\operatorname{St}(x_{\alpha(y)},\mathscr{U}')=\{y\}$. ... – Brian M. Scott Dec 5 '13 at 18:46
- ... And if $z\in X\setminus K'$, there is a $y\in K'$ such that $z\in\operatorname{St}(x_{\alpha(y)},\mathscr{U}')$, and $\big(\operatorname{St}(x_{\alpha(y)},\mathscr{U}')\big)\setminus\{y\}$ is then a nbhd of $z$ disjoint from $K'$. – Brian M. Scott Dec 5 '13 at 18:47
- @BrianM.Scott: Thanks for your nice comment. Do you think the proof is okay? By the same method, can we prove that every star-Lindelöf metaLindelöf space is Lindelöf, since a Lindelöf space cannot contain a closed discrete subset of cardinality greater than $\omega$? – Paul Dec 6 '13 at 0:50
- With the change at the end that I suggested it would be okay. Or you can get it as a corollary of the result that you suggested, that a star-Lindelöf, metaLindelöf space is Lindelöf; I’ve posted a short proof of that, using basically the same techniques. – Brian M. Scott Dec 6 '13 at 18:53

$\newcommand{\st}{\operatorname{st}}$You can also derive the result as a corollary of the result that a $T_1$, star-Lindelöf, metaLindelöf space is Lindelöf, which can be proved using the same ideas, but perhaps even more easily. Suppose that $X$ is $T_1$, star-Lindelöf and metaLindelöf. Let $\mathscr{U}$ be an open cover of $X$; $\mathscr{U}$ has a point-countable open refinement $\mathscr{V}$, and there is a Lindelöf $A\subseteq X$ such that $\st(A,\mathscr{V})=X$. Recursively construct a set $A_0=\{x_\xi:\xi<\alpha\}\subseteq A$ such that $\bigcup_{\xi<\alpha}\st(x_\xi,\mathscr{V})=X$ and $x_\eta\notin\st(x_\xi,\mathscr{V})$ whenever $\xi<\eta<\alpha$. For $\xi<\alpha$ let $W_\xi=\st(x_\xi,\mathscr{V})$; $W_\xi\cap A_0=\{x_\xi\}$, so $A_0$ is a closed, discrete subset of $X$. But then $A_0$ is a closed subset of the Lindelöf subspace $A$, so $A_0$ is Lindelöf and must therefore be countable, $\{V\in\mathscr{V}:V\cap A_0\ne\varnothing\}$ is a countable subcover of $\mathscr{V}$, and $X$ is Lindelöf.
# pandoc 1.19.2.1

jgm released this Jan 31, 2017

- Require skylighting >= 0.1.1.4.
- Adjust test output for skylighting version.
- Relax upper bounds on blaze-html and blaze-markup.

# pandoc 1.19.2

jgm released this Jan 29, 2017

- Use skylighting library instead of highlighting-kate for syntax highlighting. Skylighting is faster and more accurate (#3363). Later we'll be able to add features like warning messages, dynamic loading of xml syntax definitions, and dynamic loading of themes.
- Added a new highlight style, breezeDark.
- Text.Pandoc.Highlighting: Update list of listings languages (#3374). This allows more languages to be used when using the --listings option.
- OpenDocument writer:
  - Small refactoring. Removed separate 'parent' parameter in paraStyle.
  - Don't proliferate text styles unnecessarily (#3371). This change makes the writer create only as many temporary text styles as are absolutely necessary. It also consolidates adjacent nodes with the same style.
- Allow short hand for single-line raw blocks (Albert Krewinkel, #3366). Single-line raw blocks can be given via #+FORMAT: raw line, where FORMAT must be one of latex, beamer, html, or texinfo.
- Accept org-ref citations followed by commas (Albert Krewinkel). Bugfix for an issue which, whenever the citation was immediately followed by a comma, prevented correct parsing of org-ref citations.
- Ensure emphasis markup can be nested. Nested emphasis markup (e.g. /*strong and emphasized*/) was interpreted incorrectly in that the inner markup was not recognized.
- Remove pipe char irking the haddock coverage tool (Albert Krewinkel).
- Docx reader: Empty header should be list of lists (Jesse Rosenthal). In the past, the docx reader wrote an empty header as an empty list. It should have the same width as a row (and be filled with empty cells).
- Improved handling of display math (#3362). Sometimes display math is indented with more than one colon. Previously we handled these cases badly, generating definition lists and missing the math.
- Fix quotation mark parsing (#3336, tgkokk). Change MediaWiki reader's behavior when the smart option is parsed to match other readers' behavior.
- Fixed -f markdown_github-hard_line_breaks+escaped_line_breaks (#3341). Previously this did not properly enable escaped line breaks.
- Disallow space between inline code and attributes (#3326, #3323, Mauro Bieg).
- DocBook5 writer: make id attribute xml:id, fixes #3329 (#3330, Mauro Bieg).
- Added some test cases for ODT reader (#3306, #3308, Hubert Plociniczak).
- LaTeX writer: allow tables with empty cells to count as "plain." This addresses a problem of too-wide tables when empty cells are used. Thanks to Joost Kremers for reporting the issue.
- Org writer: prefix footnote numbers with fn: (Albert Krewinkel). Unprefixed numbers were used by older org-mode versions, but are no longer supported.
- HTML writer: don't process pars with empty RawInline (#1040, #3327, Mauro Bieg).
- Markdown writer: Fix display math with --webtex (#3298).
- Fix sample.lua so it properly handles raw blocks/inlines (#3358, bumper314).
- Templates:
  - default.latex: Moved geometry after hyperref (Václav Haisman). Otherwise PDF sizes can be wrong in some circumstances.
  - Copied a few changes from default.latex to default.beamer (Wandmalfarbe).
  - default.latex, default.beamer: Changed position of \VerbatimNotes and fancyvrb.
    This fixes hyperlinks on footnotes in documents that contain verbatim in notes (#3361). (Note: the beamer template was updated to match the LaTeX template, but at this point verbatim in notes seems not to work in beamer.)
  - default.latex: Allow passing microtypeoptions to microtype (Václav Haisman).
  - default.latex: Add hyphen option to url package.
  - default.docbook5: Fix namespace declarations (Mauro Bieg).
- Moved make_osx_package.sh to osx/ directory.
- Travis continuous integration:
  - Fix false positives with dist build.
  - Speed improvements (Kolen Cheung, #3304, #3357).
- MANUAL.txt:
  - Clarify that blank space is needed around footnotes (#3352).
  - Fixed typo (#3351, Alexey Rogechev).
  - Note that --wrap=auto does not work in HTML output.
  - Default --columns width is 72, not 80.
  - Fixed broken links (#3316, Kolen Cheung).
  - Document usage of @* in nocite section (#3333, John Muccigrosso).
- INSTALL.md:
  - Indent code so it's properly formatted (#3335, Bheesham Persaud).
  - Added instructions for extracting binary from OSX, Windows packages.
- CONTRIBUTING.md: Describe labels currently used in issue tracker (Albert Krewinkel). The labels have changed over time; the list of labels is updated to reflect the current set of labels used in the issue tracker.
- Bumped version bounds for dependencies.

# pandoc 1.19.1

jgm released this Dec 10, 2016

- Set PANDOC_VERSION environment variable for filters (#2640). This allows filters to check the pandoc version that produced the JSON they are receiving.
- Docx reader: Ensure one-row tables don't have header (#3285, Jesse Rosenthal). Tables in MS Word are set by default to have special first-row formatting, which pandoc uses to determine whether or not they have a header. This means that one-row tables will, by default, have only a header -- which we imagine is not what people want. This change ensures that a one-row table is not understood to be a header only. Note that this means that it is impossible to produce a header-only table from docx, even though it is legal pandoc. But we believe that in nearly all cases, it will be an accidental (and unwelcome) result.
- Fixed some bad regressions in HTML table parser (#3280). This regression led to the introduction of empty rows in some circumstances.
- Understand style=width: as well as width in col (#3286).
- Print warnings when keys, substitutions, notes not found. Previously the parsers failed and we got raw text. Now we get a link with an empty URL, or empty inlines in the case of a note or substitution.
- Man writer: Ensure that periods are escaped at beginning of line (#3270).
- LaTeX writer: Fix unnumbered headers when used with --top-level-division (#3272, Albert Krewinkel). Fix interaction of top-level divisions part or chapter with unnumbered headers when emitting LaTeX. Headers are ensured to be written using starred commands (like \subsection*{}).
- LaTeX template: use comma not semicolon to separate keywords for pdfkeywords. Thanks to Wandmalfarbe.
- Markdown writer: Fixed incorrect word wrapping (#3277). Previously pandoc would sometimes wrap lines too early due to this bug.
- Text.Pandoc.Pretty: Added afterBreak [API change]. This makes it possible to insert escape codes for content that needs escaping at the beginning of a line.
- Removed old MathMLInHTML.js from 2004, which should no longer be needed for MathML with modern browsers.
- Fixed tests with dynamic linking (#2709).
- Makefile: Use stack instead of cabal for targets.
  This is just a convenience for developers.
- Fixed bash completion of filenames with space (#2749).
- MANUAL: improved documentation on how to create a custom reference.docx.
- Fix minor spelling typos in the manual (#3273, Anthony Geoghegan).

# pandoc 1.19

jgm released this Nov 30, 2016

- Changed resolution of filter paths.
  - We now first treat the argument of --filter as a full (absolute or relative) path, looking for a program there. If it's found, we run it.
  - If not, and if it is a simple program name or a relative path, we try resolving it relative to $DATADIR/filters.
  - If this fails, then we treat it as a program name and look in the user's PATH.
  - Removed a hardcoded '/' that may have caused problems with Windows paths. Previously if you did --filter foo and you had foo in your path and also an executable foo in your working directory, the one in the path would be used. Now the one in the working directory is used. In addition, when you do --filter foo/bar.hs, pandoc will now find a filter $DATADIR/filters/foo/bar.hs -- assuming there isn't a foo/bar.hs relative to the working directory.
- Allow file:// URIs as arguments (#3196). Also improved default reader format detection. Previously with a URI ending in .md or .markdown, pandoc would assume HTML input. Now it treats these as markdown.
- Allow to overwrite top-level division type heuristics (#3258, Albert Krewinkel). Pandoc uses heuristics to determine the most reasonable top-level division type when emitting LaTeX or Docbook markup. It is now possible to overwrite this implicitly set top-level division via the top-level-division command line parameter.
- Text.Pandoc.Options [API changes]:
  - Removed writerStandalone field in WriterOptions, made writerTemplate a Maybe value. Previously setting writerStandalone = True did nothing unless a template was provided in writerTemplate. Now a fragment will be generated if writerTemplate is Nothing; otherwise, the specified template will be used and standalone output generated.
  - Division has been renamed TopLevelDivision (#3197). The Section, Chapter, and Part constructors were renamed to TopLevelSection, TopLevelChapter, and TopLevelPart, respectively. An additional TopLevelDefault constructor was added, which is now also the new default value of the writerTopLevelDivision field in WriterOptions.
- Improved error message if a wrong argument is given to --top-level-division.
- Use new module from texmath to lookup MS font codepoints in Docx reader. Removed unexported module Text.Pandoc.Readers.Docx.Fonts. Its code now lives in texmath (0.9).
- DocBook reader: Fixed xref lookup (#3243). It previously only worked when the qnames lacked the docbook namespace URI.
- Improved table parsing (#3027). We now check explicitly for non-1 rowspan or colspan attributes, and fail when we encounter them. Previously we checked that each row had the same number of cells, but that could be true even with rowspans/colspans. And there are cases where it isn't true in tables that we can handle fine -- e.g. when a tr element is empty. So now we just pad rows with empty cells when needed.
- Treat [itex] as MathML by default unless something else is explicitly specified in xmlns. Provided it parses as MathML, of course. Also fixed the default, which should be inline math if no display attribute is used.
- Only treat "a" element as link if it has href (#3226). Otherwise treat as span.
- Add a placeholder value for CHART. We wrap [CHART] in a <span class="chart">.
  Note that it maps to inlines because, in docx, anything in a drawing tag can be part of a larger paragraph.
- Be more specific in parsing images. We not only want w:drawing, because that could also include charts. Now we specify w:drawing/pic:pic. This shouldn't change behavior at all, but it's a first step toward allowing other sorts of drawing data as well.
- Abstract out function to avoid code repetition.
- Update tests for img title and alt (#3204).
- Handle Alt text and titles in images. We use the "description" field as alt text and the "title" field as title. These can be accessed through the "Format Picture" dialog in Word.
- Docx reader utils: handle empty namespace in elemName. Previously, if given an empty namespace (elemName ns "" "foo"), elemName would output a QName with a Just "" namespace. This is never what we want. Now we output a Nothing. If someone does want a Just "" in the namespace, they can enter the QName value explicitly.
- Inline code when text has a special style (Hubert Plociniczak). When a piece of text has the text style Source_Text, then we assume that this is a piece of the document that represents code that needs to be inlined. Adapted the writer to also reflect that change. Previously it was just writing 'preformatted' text using a non-distinguishable font style. Code blocks are still not recognized by the ODT reader; that's a separate issue.
- Infer table's caption from the paragraph (#3224, Hubert Plociniczak). The ODT reader always put empty captions for the parsed tables. This commit (1) checks paragraphs that follow the table definition, (2) treats specially a paragraph with a style named 'Table', and (3) does some postprocessing of the paragraphs that combines tables followed immediately by captions. The ODT writer used the TableCaption style for the caption paragraph. This commit follows the OpenOffice approach, which allows for appending captions to a table but uses a built-in style named Table instead of TableCaption. Users of a custom reference.odt should change the style's name from TableCaption to Table.
- ODT reader: Infer tables' header props from rows (#3199, Hubert Plociniczak). The ODT reader simply provided an empty header list, which meant that the contents of the whole table, even if not empty, was simply ignored. While we still do not infer headers, we at least have to provide default properties of columns.
- Allow reference link labels starting with @... if citations extension disabled (#3209). Example: in `[link text][@a]`, link text isn't hyperlinked because `[@a]` is parsed as a citation. Previously this happened whether or not the citations extension was enabled. Now it happens only if the citations extension is enabled.
- Allow alignments to be specified in Markdown grid tables. For example:

      +---------+---------------+--------------------+
      | Right   | Left          | Centered           |
      +========:+:==============+:==================:+
      | Bananas | 1.34          | built-in wrapper   |
      +---------+---------------+--------------------+

- Allow Small Caps elements to be created using bracketed spans (as they already can be using HTML-syntax spans) (#3191, Kolen Cheung).
- LaTeX reader:
  - Don't treat \vspace and \hspace as block commands (#3256). Fixed an error which came up, for example, with \vspace inside a caption. (Captions expect inlines.)
  - Improved table handling. We can now parse all of the tables emitted by pandoc in our tests. The only thing we don't get yet are alignments and column widths in more complex tables. See #2669.
  - Limited support for minipage.
  - Allow for []s inside LaTeX optional args.
  - Handle BVerbatim from fancyvrb (#3203).
  - Handle hungarumlaut (#3201).
  - Allow beamer-style <...> options in raw LaTeX (also in Markdown) (#3184). This allows use of things like \only<2,3>{my content} in Markdown that is going to be converted to beamer.
- Use pre-wrap for code in dzslides template (Nicolas Porcel). Otherwise overly long code will appear on every slide.
- Org reader (Albert Krewinkel):
  - Respect column width settings (#3246). Table column properties can optionally specify a column's width with which it is displayed in the buffer. Some exporters, notably the ODT exporter in org-mode v9.0, use these values to calculate relative column widths. The org reader now implements the same behavior. Note that the org-mode LaTeX and HTML exporters in Emacs don't support this feature yet, which should be kept in mind by users who use the column width parameters.
  - Allow HTML attribs on non-figure images (#3222). Images which are the only element in a paragraph can still be given HTML attributes, even if the image does not have a caption and is hence not a figure. The following will set the width attribute of the image to 50%:

        #+ATTR_HTML: :width 50%
        [[file:image.jpg]]

  - Support ATTR_HTML for special blocks (#3182). Special blocks (i.e. blocks with unrecognized names) can be prefixed with an ATTR_HTML block attribute. The attributes defined in that meta-directive are added to the Div which is used to represent the special block.
  - Support the todo export option. The todo export option allows toggling the inclusion of TODO keywords in the output. Setting this to nil causes TODO keywords to be dropped from headlines. The default is to include the keywords.
  - Add support for todo-markers. Headlines can have optional todo-markers, which can be controlled via the #+TODO, #+SEQ_TODO, or #+TYP_TODO meta directive. Multiple such directives can be given, each adding a new set of recognized todo-markers. If no custom todo-markers are defined, the default TODO and DONE markers are used. Todo-markers are conceptually separate from headline text and are hence excluded when autogenerating headline IDs. The markers are rendered as spans and labelled with two classes: one class is the marker's name, the other signals the todo-state of the marker (either todo or done).
- LaTeX writer:
  - Use \autocites* when "suppress-author" citation used.
  - Ensure that simple tables have simple cells (#2666). If cells contain more than a single Plain or Para, then we need to set nonzero widths and put contents into minipages.
  - Remove invalid inlines in sections (#3218, Hubert Plociniczak).
- Markdown writer:
  - Fix calculation of column widths for aligned multiline tables (#1911, Björn Peemöller). This also fixes excessive CPU and memory usage for tables when --columns is set in such a way that cells must be very tiny. Now cells are guaranteed to be big enough so that single words don't need to line break, even if this pushes the line length above the column width.
  - Use bracketed form for native spans when bracketed_spans enabled (#3229).
  - Fixed inconsistent spacing issue (#3232). Previously a tight bullet sublist got rendered with a blank line after, while a tight ordered sublist did not. Now we don't get the blank line in either case.
  - Fix escaping of spaces in super/subscript (#3225). Previously two backslashes were inserted, which gave a literal backslash.
  - Adjust widths in Markdown grid tables so that they match on round-trip.
- Docx writer:
  - Give full detail when there are errors converting tex math.
  - Handle title text in images (Jesse Rosenthal). We already handled alt text. This just puts the image "title" into the docx "title" attr.
  - Fixed XML markup for empty cells (#3238). Previously the Compact style wasn't being applied properly to empty cells.
- HTML writer: Updated renderHtml import from blaze-html.
- Text.Pandoc.Pretty:
  - Fixed some bugs that caused blank lines in tables (#3251). The bugs caused spurious blank lines in grid tables when we had things like `blankline $$ blankline`.
  - Add exported function minOffset [API change] (Björn Peemöller).
  - Added error message for illegal call to block (Björn Peemöller).
- Text.Pandoc.Shared: Put warn in MonadIO.
- fetchItem: Better handling of protocol-relative URL (#2635). If the URL starts with // and there is no "base URL" (as there would be if a URL were used on the command line), then default to http:.
- Export Text.Pandoc.getDefaultExtensions [API change] (#3178).
- In --version, trap error in getAppUserDataDirectory (#3241). This fixes a crash with pandoc --version on unusual systems with no real user (e.g. SQL Server 2016).
- Added weigh-pandoc for memory usage diagnostics (#3169).
- Use correct mime types for woff and woff2 (#3228).
- Remove make_travis_yml.hs (#3235, Kolen Cheung).
- changelog: Moved an item that was misplaced in the 1.17.2 section to the 1.18 section where it belongs.
- CONTRIBUTING.md: minor change in wording and punctuation (#3252, Kolen Cheung).
- Further revisions to manual for --version changes (#3244).

# pandoc 1.18

jgm released this Oct 26, 2016

- Added --list-input-formats, --list-output-formats, --list-extensions, --list-highlight-languages, and --list-highlight-styles (#3173). Removed list of highlighting languages from --version output. Removed list of input and output formats from default --help output.
- Added --reference-location=block|section|document option (Jesse Rosenthal). This determines whether Markdown link references and footnotes are placed at the end of the document, the end of the section, or the end of the top-level block.
- Added --top-level-division=section|chapter|part (Albert Krewinkel). This determines what a level-1 header corresponds to in LaTeX, ConTeXt, DocBook, and TEI output. The default is section. The --chapters option has been deprecated in favor of --top-level-division=chapter.
- Added LineBlock constructor for Block (Albert Krewinkel). This is now used in parsing RST and Markdown line blocks, DocBook linegroup/line combinations, and Org-mode VERSE blocks. Previously Para blocks with hard linebreaks were used. LineBlocks are handled specially in the following output formats: AsciiDoc (as [verse] blocks), ConTeXt (\startlines/\endlines), HTML (div with a style), Markdown (line blocks if line_blocks is enabled), Org-mode (VERSE blocks), RST (line blocks). In other output formats, a paragraph with hard linebreaks is emitted.
- Allow binary formats to be written to stdout (but not to tty) (#2677). Only works on posix, since we use the unix library to check whether output is to tty. On Windows, pandoc works as before and always requires an output file parameter for binary formats.
- Changed JSON output format (Jesse Rosenthal). Previously we used generically generated JSON, but this was subject to change depending on the version of aeson pandoc was compiled with.
  To ensure stability, we switched to using manually written ToJSON and FromJSON instances, and encoding the API version. Note: pandoc filter libraries will need to be revised to handle the format change. Here is a summary of the essential changes:
  - The toplevel JSON format is now `{"pandoc-api-version" : [MAJ, MIN, REV], "meta" : META, "blocks": BLOCKS}` instead of `[{"unMeta": META}, [BLOCKS]]`. Decoding fails if the major and minor version numbers don't match.
  - Leaf nodes no longer have an empty array for their "c" value. Thus, for example, a Space is encoded as `{"t":"Space"}` rather than `{"t":"Space","c":[]}` as before.
- Removed tests/Tests/Arbitrary.hs and added a Text.Pandoc.Arbitrary module to pandoc-types (Jesse Rosenthal). This makes it easier to use QuickCheck with pandoc types outside of pandoc itself.
- Add bracketed_spans Markdown extension, enabled by default in pandoc markdown. This allows you to create a native span using this syntax: `[Here is my span]{#id .class key="val"}`.
- Added angle_brackets_escapable Markdown extension (#2846). This is needed because github flavored Markdown has a slightly different set of escapable symbols than original Markdown; it includes angle brackets.
- Export Text.Pandoc.Error in Text.Pandoc [API change].
- Print highlighting-kate version in --version.
- Text.Pandoc.Options:
  - Extension has new constructors Ext_bracketed_spans and Ext_angle_brackets_escapable [API change].
  - Added ReferenceLocation type [API change] (Jesse Rosenthal).
  - Added writerReferenceLocation field to WriterOptions (Jesse Rosenthal).
- --filter: we now check $DATADIR/filters for filters before looking in the path (#3127, Jesse Rosenthal, thanks to Jakob Voß for the idea). Filters placed in this directory need not be executable; if the extension is .hs, .php, .pl, .js, or .rb, pandoc will run the right interpreter.
- For --webtex, replace deprecated Google Chart API by CodeCogs as default (Kolen Cheung).
- Removed raw_tex extension from markdown_mmd defaults (Kolen Cheung).
- Execute .js filters with node (Jakob Voß).
- Support bc.. extended code blocks (#3037). Also, remove trailing newline in code blocks (consistently with Markdown reader).
- Improve table parsing. We now handle cell and row attributes, mostly by skipping them. However, alignments are now handled properly. Since in pandoc alignment is per-column, not per-cell, we try to divine column alignments from cell alignments. Table captions are also now parsed, and textile indicators for thead and tfoot no longer cause parse failure. (However, a row designated as tfoot will just be a regular row in pandoc.)
- Improve definition list parsing. We now allow multiple terms (which we concatenate with linebreaks). An exponential parsing bug (#3020) is also fixed.
- Disallow empty URL in explicit link (#3036).
- Use Div instead of BlockQuote for admonitions (#3031). The Div has class admonition and (if relevant) one of the following: attention, caution, danger, error, hint, important, note, tip, warning. Note: This will change the rendering of some RST documents! The word ("Warning", "Attention", etc.) is no longer added; that must be done with CSS or a filter.
- A Div is now used for sidebar as well.
- Skip whitespace before note (Jesse Rosenthal, #3163). RST requires a space before a footnote marker. We discard those spaces so that footnotes will be adjacent to the text that comes before it. This is in line with what rst2latex does.
- Allow empty lines when parsing line blocks (Albert Krewinkel).
- Allow empty lines when parsing line blocks (Albert Krewinkel).
- Allow attributes on autolinks (#3183, Daniele D'Orazio).
- More robust parsing of unknown environments (#3026). We no longer fail on things like ^ inside options for tikz.
- Be more forgiving of non-standard characters, e.g. ^ outside of math. Some custom environments give these a meaning, so we should try not to fall over when we encounter them.
- Drop duplicate * in bibtexKeyChars (Albert Krewinkel).
- Fix for unquoted attribute values in mediawiki tables (#3053). Previously an unquoted attribute value in a table row could cause parsing problems.
- Improved treatment of verbatim constructions (#3055). Previously these yielded strings of alternating Code and Space elements; we now incorporate the spaces into the Code. Emphasis etc. is still possible inside these.
- Properly interpret XML tags in pre environments (#3042). They are meant to be interpreted as literal text.
- EPUB reader: don't add root path to data: URIs (#3150). Thanks to @lep for the bug report and patch.
- Preserve indentation of verse lines (#3064). Leading spaces in verse lines are converted to non-breaking spaces, so indentation is preserved.
- Ensure image sources are proper links. Image sources, such as those in plain images, image links, or figures, must be proper URIs or relative file paths to be recognized as images. This restriction is now enforced for all image sources. This also fixes the reader's usage of uncleaned image sources, which led to file: prefixes not being deleted from figure images. Thanks to @bsag for noticing this bug.
- Trim verse lines properly (Albert Krewinkel).
- Extract meta parsing code to module. Parsing of meta-data is well separable from other block parsing tasks. Moving it into a new module keeps files small and the code clearly arranged.
- Read markup only for special meta keys. Most meta-keys should be read as normal string values; only a few are interpreted as marked-up text.
- Allow multiple, comma-separated authors. Multiple authors can be specified in the #+AUTHOR meta line if they are given as a comma-separated list.
- Give precedence to later meta lines. The last meta-line of any given type is the significant line. Previously the value of the first line was kept, even if more lines of the same type were encountered.
- Read LaTeX_header as header-includes. LaTeX-specific header commands can be defined in #+LaTeX_header lines. They are parsed as format-specific inlines to ensure that they will only show up in LaTeX output.
- Set documentclass meta from LaTeX_class.
- Set classoption meta from LaTeX_class_options.
- Read HTML_head as header-includes. HTML-specific head content can be defined in #+HTML_head lines. They are parsed as format-specific inlines to ensure that they will only show up in HTML output.
- Respect author export option. The author option controls whether the author should be included in the final markup. Setting #+OPTIONS: author:nil will drop the author from the final meta-data output.
- Respect email export option. The email option controls whether the email meta-field should be included in the final markup. Setting #+OPTIONS: email:nil will drop the email field from the final meta-data output.
- Respect creator export option. The creator option controls whether the creator meta-field should be included in the final markup. Setting #+OPTIONS: creator:nil will drop the creator field from the final meta-data output.
  Org-mode recognizes the special value comment for this field, causing the creator to be included in a comment. This is difficult to translate to Pandoc internals and is hence interpreted the same as other truish values (i.e. the meta field is kept if it's present).
- Respect unnumbered header property (#3095). Sections with the unnumbered property should, as the name implies, be excluded from the automatic numbering of sections provided by some output formats. The Pandoc convention for this is to add an "unnumbered" class to the header. The reader treats properties as key-value pairs per default, so a special case is added to translate the above property to a class instead.
- Allow figure with empty caption (Albert Krewinkel, #3161). A #+CAPTION attribute before an image is enough to turn an image into a figure. This wasn't the case because the parseFromString function, which processes the caption value, would fail on empty values. Adding a newline character to the caption value fixes this.
- Use XML convenience functions (Jesse Rosenthal). The functions isElem and elemName (defined in Docx/Util.hs) make the code a lot cleaner than the original XML.Light functions, but they had been used inconsistently. This puts them in wherever applicable.
- Handle anchor spans with content in headers. Previously, we would only be able to figure out internal links to a header in a docx if the anchor span was empty. We change that to read the inlines out of the first anchor span in a header.
- Let headers use existing id. Previously we always generated an id for headers (since they wouldn't bring one from Docx). Now we let it use an existing one if possible. This should allow us to recurse through anchor spans.
- Use all anchor spans for header ids. Previously we only used the first anchor span to affect header ids. This allows us to use all the anchor spans in a header, whether they're nested or not (#3088).
- Test for nested anchor spans in header. This ensures that anchor spans in headers with content (or with other anchor spans inside) will resolve to links to a header id properly.
- Include list's starting value. Previously the starting value of the lists' items had been hardcoded to 1. In reality ODT's list style definition can provide a new starting value in one of its attributes.
- Infer caption from the text following the image. A frame can contain other frames with text boxes.
- Add fig: to title for Image with a caption (as expected by pandoc's writers).
- Basic support for images in ODT documents.
- Don't duplicate text for anchors (#3143). When creating an anchor element we were adding its representation as well as the original content, leading to text duplication.
- DocBook writer: Include an anchor element when a div or span has an id (#3102). Note that DocBook does not have a class attribute, but at least this provides an anchor for internal links.
- LaTeX writer:
  - Don't use * for unnumbered paragraph, subparagraph. The starred variants don't exist. This helps with part of #3058: it gets rid of the spurious *s, but we still have numbers on the 4th and 5th level headers.
  - Properly escape backticks in verbatim (#3121, Jesse Rosenthal). Otherwise they can cause unintended ligatures like ?`.
  - Handle NARROW NO-BREAK SPACE into LaTeX (Vaclav Zeman) as \,.
  - Don't include [htbp] placement for figures (#3103, Václav Haisman). This allows figure placement defaults to be changed by the user in the template.
- HTML writer (slide show formats): In slide shows, don't change slide title to level 1 header (#2221).
- TEI writer: remove heuristic to detect book template (Albert Krewinkel). TEI doesn't have <book> elements but only generic <divN> division elements. Checking the template for a trailing </book> is nonsensical.
- MediaWiki writer: transform filename with underscores in images (#3052). foo bar.jpg becomes foo_bar.jpg. This was already done for internal links, but it also needs to happen for images.
- ICML writer: replace partial function (!!) in table handling (#3175, Mauro Bieg).
- Man writer: allow section numbers that are not a single digit (#3089).
- AsciiDoc writer: avoid unnecessary use of "unconstrained" emphasis (#3068). In AsciiDoc, you must use a special form of emphasis (double __) for intraword emphasis. Pandoc was previously using this more than necessary.
- EPUB writer: use stringify instead of plain writer for metadata (#3066). This means that underscores won't be used for emphasis, or CAPS for bold. The metadata fields will just have unadorned text.
- Docx writer:
  - Implement user-defined styles (Jesse Rosenthal). Divs and Spans with a custom-style key in the attributes will apply the corresponding key to the contained blocks or inlines.
  - Clean up and streamline RTL behavior (Jesse Rosenthal, #3140). You can set dir: rtl in YAML metadata, or use -M dir=rtl on the command line. For finer-grained control, you can set the dir attribute in Div or Span elements.
- Org writer (Albert Krewinkel):
  - Remove blank line after figure caption. Org-mode only treats an image as a figure if it is directly preceded by a caption.
  - Ensure blank line after figure. An Org-mode figure should be surrounded by blank lines. The figure would be recognized regardless, but images in the following line would unintentionally be treated as figures as well.
  - Ensure link targets are paths or URLs. Org-mode treats links as document internal searches unless the link target looks like a URL or file path, either relative or absolute. This change ensures that this is always the case.
  - Translate language identifiers. Pandoc and Org-mode use different programming language identifiers. An additional translation between those identifiers is added to avoid unexpected behavior. This fixes a problem where language specific source code would sometimes be output as example code.
  - Drop space before footnote markers (Albert Krewinkel, #3162). The writer no longer adds an extra space before footnote markers.
- Markdown writer:
  - Don't emit HTML for tables unless raw_html extension is set (#3154). Emit [TABLE] if no suitable table formats are enabled and raw HTML is disabled.
  - Check for the raw_html extension before emitting a raw HTML block.
  - Abstract out note/ref function (Jesse Rosenthal).
- HTML, EPUB, slidy, revealjs templates: Use <p> instead of <h1> for subtitle, author, date (#3119). Note that, as a result of this change, authors may need to update CSS.
- revealjs template: Added notes-server option (jgm/pandoc-templates#212, Yoan Blanc).
- Beamer template:
  - Restore whitespace between paragraphs. This was a regression in the last release (jgm/pandoc-templates#207).
  - Added themeoptions variable (Carsten Gips).
  - Added beamerarticle variable. This causes the beamerarticle package to be loaded in beamer, to produce an article from beamer slides. (Carsten Gips)
  - Added support for fontfamilies structured variable (Artem Klevtsov).
  - Added hypersetup options (Jake Zimmerman).
- LaTeX template:
  - Added dummy definition for \institute. This isn't a standard command, and we want to avoid a crash when institute is used with the default template.
  - Define default figure placement (Václav Haisman), since pandoc no longer includes [htbp] for figures. Users with custom templates will want to add this. See #3103.
  - Use footnote package to fix notes in tables (jgm/pandoc-templates#208, Václav Haisman).
- Moved template compiling/rendering code to a separate library, doctemplates. This allows the pandoc templating system to be used independently.
- Text.Pandoc.Error: Fix out of index error in handleError (Matthew Pickering). The fix is to not try to show the exact line when it would cause an out-of-bounds error as a result of included files.
- Text.Pandoc.Shared: Add linesToBlock function (Albert Krewinkel).
- Text.Pandoc.Parsing.emailAddress: tighten up parsing of email addresses. Technically **@user is a valid email address, but if we allow things like this, we get bad results in markdown flavors that autolink raw email addresses (see #2940). So we exclude a few valid email addresses in order to avoid these more common bad cases.
- Text.Pandoc.PDF: Don't crash with nonexistent image (#3100). Instead, emit the alt text, emphasized. This accords with what the ODT writer currently does. The user will still get a warning about a nonexistent image.
- Fix example in API documentation (#3176, Thomas Weißschuh).
- Tell where to get tarball in INSTALL (#3062).
- Replace COPYING with Markdown version COPYING.md from GNU (Kolen Cheung).
- MANUAL.txt:
  - Put note on structured vars in separate paragraph (#2148, Albert Krewinkel). Make it clearer that structured author variables require a custom template.
  - Note that --katex works best with html5 (#3077).
  - Fix the LaTeX and EPUB links in manual (Morton Fox).
  - Document biblio-title variable.
- Improve spacing of footnotes in --help output (Waldir Pimenta).
- Update KaTeX to v0.6.0 (Kolen Cheung).
- Allow latest dependencies.
- Use texmath 0.8.6.6 (#3040).
- Allow http-client 0.4.30, which is the version in stackage lts. Previously we required 0.5. Remove CPP conditionals for earlier versions.
- Remove support for GHC < 7.8 (Jesse Rosenthal).
  - Remove Compat.Monoid.
  - Remove an inline monad compatibility macro.
  - Remove Text.Pandoc.Compat.Except.
  - Remove directory compat.
  - Change constraint on mtl.
  - Remove unnecessary CPP condition in UTF8.
  - Bump base lower bound to 4.7.
  - Remove 7.6 build from .travis.yaml.
  - Bump supported ghc version in CONTRIBUTING.md.
  - Remove GHC 7.6 from list of tested versions (Albert Krewinkel).
  - Remove TagSoup compat.
  - Add EOL note to time compat module. Because time 1.4 is a boot library for GHC 7.8, we will support the compatibility module as long as we support 7.8. But we should be clear about when we will no longer need it.
  - Remove blaze-html CPP conditional.
  - Remove unnecessary CPP in custom Prelude.

# pandoc 1.17.2

jgm released this Jul 17, 2016

- Added Zim Wiki writer, template and tests. zimwiki is now a valid output format. (Alex Ivkin)
- Changed email-obfuscation default to no obfuscation (#2988).
  - writerEmailObfuscation in defaultWriterOptions is now NoObfuscation.
  - the default for the command-line --email-obfuscation option is now none.
- Docbook writer: Declare xlink namespace in Docbook5 output (Ivo Clarysse).
- Org writer:
  - Support arbitrary raw inlines (Albert Krewinkel).
    Org mode allows arbitrary raw inlines ("export snippets" in Emacs parlance) to be included as @@format:raw foreign format text@@.
  - Improve Div handling (Albert Krewinkel). Handling of Div blocks is changed to make the output look more like idiomatic org mode:
    - Div-wrapped content is output as-is if the div's attribute is the null attribute.
    - Div containers with an id but neither classes nor key-value pairs are unwrapped and the id is added as an anchor.
    - Divs with classes associated with greater block elements are wrapped in a #+BEGIN...#+END block.
    - The old behavior for Divs with more complex attributes is kept.
- HTML writer: Better support for raw LaTeX environments (#2758). Previously we just passed all raw TeX through when MathJax was used for HTML math. This passed through too much. With this patch, only raw LaTeX environments that MathJax can handle get passed through. This patch also causes raw LaTeX environments to be treated as math, when possible, with MathML and WebTeX output.
- Markdown writer: use raw HTML for simple, pipe tables with linebreaks (#2993). Markdown line breaks involve a newline, and simple and pipe tables can't contain one.
- Make --webtex work with the Markdown writer (#1177). This is a convenient option for people using websites whose Markdown flavors don't provide for math.
- Docx writer:
  - Set paragraph to FirstPara after display math (Jesse Rosenthal). We treat display math like block quotes, and apply FirstParagraph style to paragraphs that follow them. These can be styled as the user wishes. (But, when the user is using indentation, this allows for paragraphs to continue after display math without indentation.)
  - Use actual creation time as doc prop (Jesse Rosenthal). Previously, we had used the user-supplied date, if available, for Word's document creation metadata. This could lead to weird results, as in cases where the user post-dates a document (so the modification might be prior to the creation). Here we use the actual computer time to set the document creation.
- LaTeX writer:
  - Don't URI-escape image source (#2825). Usually this is a local file, and replacing spaces with %20 ruins things.
  - Allow 'standout' as a beamer frame option (#3007): `## Slide title {.standout}`.
- RST reader: Fixed links with no explicit link text. The link `` `foo`_ `` should have foo as both its link text and its URL. See the RST spec at http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#embedded-uris-and-aliases. Closes Debian #828167 -- reported by Christian Heller.
- Fixed attributes (#2984). Attributes can't be followed by a space. So, `_(class)emph_` but `_(noclass) emph_`.
- Fixed exponential parsing bug (#3020).
- Fix overly aggressive interpretation as images (#2998). Spaces are not allowed in the image URL in textile.
- Fix \cite so it is a NormalCitation not AuthorInText.
- Strip off double quotes around image source if present (#2825). Avoids interpreting these as part of the literal filename.
- Add semicolon to list of special chars (Albert Krewinkel). Semicolons are used as special characters in citations syntax. This ensures the correct parsing of Pandoc-style citations: `[prefix; @key; suffix]`. Previously, parsing would have failed unless there was a space or other special character as the last character.
- Add support for "Berkeley-style" cites (Albert Krewinkel, #1978). A specification for an official Org-mode citation syntax was drafted by Richard Lawrence and enhanced with the help of others on the orgmode mailing list.
  Basic support for this citation style is added to the reader.
- Support arbitrary raw inlines (Albert Krewinkel). Org mode allows arbitrary raw inlines ("export snippets" in Emacs parlance) to be included as @@format:raw foreign format text@@.
- Remove partial functions (Albert Krewinkel, #2991). Partial functions like head lead to avoidable errors and should be avoided. They are replaced with total functions.
- Support figure labels (Albert Krewinkel, #2496, #2999). Figure labels given as #+LABEL: thelabel are used as the ID of the respective image. This allows e.g. the LaTeX writer to add proper \label markup.
- Improve tag and properties type safety (Albert Krewinkel). Specific newtype definitions are used to replace stringly typing of tags and properties. Type safety is increased while readability is improved.
- Parse as headlines, convert to blocks (Albert Krewinkel). Emacs org-mode is based on outline-mode, which treats documents as trees with headlines as nodes. The reader is refactored to parse into a similar tree structure. This simplifies transformations acting on document (sub-)trees.
- Refactor comment tree handling (Albert Krewinkel). Comment trees were handled after parsing, as pattern matching on lists is easier than matching on sequences. The new method of reading documents as trees allows for more elegant subtree removal.
- Support archived trees export options (Albert Krewinkel). Handling of archived trees can be modified using the arch option. Archived trees are either dropped, exported completely, or collapsed to include just the header when the arch option is nil, non-nil, or headline, respectively.
- Put export setting parser into module (Albert Krewinkel). Export option parsing is distinct enough from general block parsing to justify putting it into a separate module.
- Support headline levels export setting (Albert Krewinkel). The depths of headlines can be modified using the H option. Deeper headlines will be converted to lists.
- Replace ugly code with view pattern (Albert Krewinkel). Some less-than-smart code required a pragma switching off overlapping pattern warnings in order to compile seamlessly. Using view patterns makes the code easier to read and also doesn't require overlapping pattern checks to be disabled.
- Fix parsing of verbatim inlines (Albert Krewinkel, #3016). Org rules for allowed characters before or after markup chars were not checked for verbatim text. This resulted in wrong parsing outcomes if the verbatim text contained e.g. space-enclosed markup characters as part of the text (=is_substr = True=). Forcing the parser to update the positions of allowed/forbidden markup border characters fixes this.
- LaTeX template: fix for obscure hyperref/xelatex issue. Here's a minimal case:

      \documentclass[]{article}
      \usepackage{hyperref}
      \begin{document}
      \section{\%á}
      \end{document}

  Without this change, this fails on the second invocation of xelatex. This affects inputs like `# %á` with pdf output via xelatex.
- trypandoc: call results 'html' instead of 'result'. This is for better compatibility with babelmark2.
- Document MultiMarkdown as input/output format (Albert Krewinkel, #2973). MultiMarkdown was only mentioned as a supported Markdown dialect but not as a possible input or output format. A brief mention is added everywhere the other supported markdown dialects are mentioned.
- Document Org mode as a format containing raw HTML (Albert Krewinkel). Raw HTML is kept when the output format is Emacs Org mode.
• Implement RawInline and RawBlock in sample lua custom writer (#2985). • Text.Pandoc.Shared: • Introduce blocksToInlines function (Jesse Rosenthal). This is a lossy function for converting [Block] -> [Inline]. Its main use, at the moment, is for docx comments, which can contain arbitrary blocks (except for footnotes), but which will be converted to spans. This is, at the moment, pretty useless for everything but the basic Para and Plain comments. It can be improved, but the docx reader should probably emit a warning if the comment contains more than this. • Add BlockQuote to blocksToInlines (Jesse Rosenthal). • Add further formats for normalizeDate (Jesse Rosenthal). We want to avoid illegal dates -- in particular years with greater than four digits. We attempt to parse series of digits first as %Y%m%d, then %Y%m, and finally %Y. • normalizeDate should reject illegal years (Jesse Rosenthal). We only allow years between 1601 and 9999, inclusive. The ISO 8601 actually says that years are supposed to start with 1583, but MS Word only allows 1601-9999. This should stop corrupted word files if the date is out of that range, or is parsed incorrectly. • Improve year sanity check in normalizeDate (Jesse Rosenthal). Previously we parsed a list of dates, took the first one, and then tested its year range. That meant that if the first one failed, we returned nothing, regardless of what the others did. Now we test for sanity before running msum over the list of Maybe values. Anything failing the test will be Nothing, so will not be a candidate. • Add simple comment functionality. (Jesse Rosenthal). This adds simple track-changes comment parsing to the docx reader. It is turned on with --track-changes=all. All comments are converted to inlines, which can list some information. In the future a warning will be added for comments with formatting that seems like it will be excessively denatured. Note that comments can extend across blocks. For that reason there are two spans: comment-start and comment-end. comment-start will contain the comment. comment-end will always be empty. The two will be associated by a numeric id. • Enable warnings in top-level reader (Jesse Rosenthal). Previously we had only allowed for warnings in the parser. Now we allow for them in the Docx.hs as well. The warnings are simply concatenated. • Add warning for advanced comment formatting. (Jesse Rosenthal). We can't guarantee we'll convert every comment correctly, though we'll do the best we can. This warns if the comment includes something other than Para or Plain. • Add tests for warnings. (Jesse Rosenthal). • Add tests for comments (Jesse Rosenthal). We test for comments, using all track-changes options. Note that we should only output comments if --track-changes=all. We also test for emitting warnings if there is complicated formatting. • Improved Windows installer - don't ignore properties set on command-line. See #2708. Needs testing to see if this resolves the issue. Thanks to @nkalvi. • Process markdown extensions on command line in L->R order (#2995). Previously they were processed, very unintuitively, in R->L order, so that markdown-tex_math_dollars+tex_math_dollars had tex_math_dollars disabled. • Added secnumdepth variable to LaTeX template (#2920). • Writers: treat SoftBreak as space for stripping (Jesse Rosenthal) In Writers.Shared, we strip leading and trailing spaces for display math. Since SoftBreak's are treated as spaces, we should strip those too. 
• beamer, latex templates: pass biblatexoptions directly in package load. This allows runtime options to be used. Fixes jgm/pandoc-citeproc#201 • CPP workaround for deprecation of parseUrl in http-client. • Removed some redundant class constraints. • make_oxs_package.sh - use OSX env variable. • Added winpkg target to Makefile. This downloads the windows package from appveyor and signs it using the key. • Document Org mode as a format containing raw TeX (Albert Krewinkel). Raw TeX is kept verbatim when the output format is Emacs Org mode. • Support math with haddock-library >= 1.4. • Removed -rtsopts from library stanza. It has no effect, and Hackage wouldn't accept the package. • Update library dependency versions. # pandoc 1.17.1 (released by jgm, Jun 4, 2016) • New output format: docbook5 (Ivo Clarysse). • Text.Pandoc.Options: Add writerDocBook5 to WriterOptions (API change). • Org writer: • Add :PROPERTIES: drawer support (Albert Krewinkel, #1962). This allows header attributes to be added to org documents in the form of :PROPERTIES: drawers. All available attributes are stored as key/value pairs. This reflects the way the org reader handles :PROPERTIES: blocks. • Add drawer capability (Carlos Sosa). For the implementation of the Drawer element in the Org Writer, we make use of a generic Block container with attributes. The presence of a drawer class defines that the Div constructor is a drawer. The first class defines the drawer name to use. The key-value list in the attributes defines the keys to add inside the Drawer. Lastly, the list of Block elements contains miscellaneous block elements to add inside of the Drawer. • Use CUSTOM_ID in properties (Albert Krewinkel). The ID property is reserved for internal use by Org-mode and should not be used. The CUSTOM_ID property is to be used instead; it is converted to the ID property for certain export formats. • LaTeX writer: • Ignore --incremental unless output format is beamer (#2843). • Fix polyglossia to babel env mapping (Mauro Bieg, #2728). Allow for optional argument in square brackets. • Recognize la-x-classic as Classical Latin (Andrew Dunning). This allows one to access the hyphenation patterns in CTAN's hyph-utf8. • Add missing languages from hyph-utf8 (Andrew Dunning). • Improve use of \strut with \minipage inside tables (Jose Luis Duran). This improves spacing in multiline tables. • Use {} around options containing special chars (#2892). • Avoid lazy foldl. • Don't escape underscore in labels (#2921). Previously they were escaped as ux5f. • brazilian -> brazil for polyglossia (#2953). • HTML writer: Ensure mathjax link is added when math appears in footnote (#2881). Previously if a document only had math in a footnote, the MathJax link would not be added. • EPUB writer: set navpage variable on nav page. This allows templates to treat it differently. • DocBook writer: • Use docbook5 if writerDocbook5 is set (Ivo Clarysse). • Properly handle ulink/link (Ivo Clarysse). • Unescape URIs in spine (#2924). • Parse moveTo and moveFrom (Jesse Rosenthal). moveTo and moveFrom are track-changes tags that are used when a block of text is moved in the document. We now recognize these tags and treat them the same as insert and delete, respectively. So, --track-changes=accept will show the moved version, while --track-changes=reject will show the original version. • Tests for track-changes moving (Jesse Rosenthal).
• ODT, EPUB, Docx readers: throw PandocError on unzip failure (Jesse Rosenthal). Previously, readDocx, readEPUB, and readOdt would error out if zip-archive failed. We change the archive extraction step from toArchive to toArchiveOrFail, which returns an Either value. • Markdown, HTML readers: be more forgiving about unescaped & in HTML (#2410). We are now more forgiving about parsing invalid HTML with unescaped & as raw HTML. (Previously any unescaped & would cause pandoc not to recognize the string as raw HTML.) • Fix pandoc title blocks with lines ending in 2 spaces (#2799). • Added -s to markdown-reader-more test. • HTML reader: fixed bug in pClose. This caused exponential parsing behavior in documents with unclosed tags in dl, dd, dt. • MediaWiki reader: Allow spaces before ! in MediaWiki table header (roblabla). • RST reader: Support :class: option for code block in RST reader (Sidharth Kapur). • Org reader (all Albert Krewinkel, except where noted otherwise): • Stop padding short table rows. Emacs Org-mode doesn't add any padding to table rows. The first row (header or first body row) is used to determine the column count, no other magic is performed. • Refactor rows-to-table conversion. This refactors the code converting a list of table lines to an org table ADT. The old code was simplified and is now slightly less ugly. • Fix handling of empty table cells, rows (Albert Krewinkel, #2616). This fixes Org mode parsing of some corner cases regarding empty cells and rows. Empty cells weren't parsed correctly, e.g. ||| should be two empty cells, but would be parsed as a single cell containing a pipe character. Empty rows were parsed as alignment rows and dropped from the output. • Fix spacing after LaTeX-style symbols. The org-reader was dropping space after unescaped LaTeX-style symbol commands: \ForAll \Auml resulted in ∀Ä but should give ∀ Ä instead. This seems to be because the LaTeX-reader treats the command-terminating space as part of the command. Dropping the trailing space from the symbol-command fixes this issue. • Print empty table rows. Empty table rows should not be dropped from the output, so row-height is always set to be at least 1. • Move parser state into separate module. The org reader code has become large and confusing. Extracting smaller parts into submodules should help to clean things up. • Add support for sub/superscript export options. Org-mode allows specifying export settings via #+OPTIONS lines. Disabling simple sub- and superscripts is one of these export options; this option is now supported. • Support special strings export option. Parsing of special strings (like ... as ellipsis or -- as en dash) can be toggled using the - option. • Support emphasized text export option. Parsing of emphasized text can be toggled using the * option. This influences parsing of text marked as emphasized, strong, strikeout, and underline. Parsing of inline math, code, and verbatim text is not affected by this option. • Support smart quotes export option. Reading of smart quotes can be toggled using the ' option. • Parse but ignore export options. All known export options are parsed but ignored. • Refactor block attribute handling. A parser state attribute was used to keep track of block attributes defined in meta-lines. Global state is undesirable, so block attributes are no longer saved as part of the parser state. Old functions and the respective part of the parser state are removed. • Use custom anyLine.
Additional state changes need to be made after a newline is parsed, otherwise markup may not be recognized correctly. This fixes a bug where markup after certain block-types would not be recognized. • Add support for ATTR_HTML attributes (#1906). Arbitrary key-value pairs can be added to some block types using a #+ATTR_HTML line before the block. Emacs Org-mode only includes these when exporting to HTML, but since we cannot make this distinction here, the attributes are always added. The functionality is now supported for figures. • Add :PROPERTIES: drawer support (#1877). Headers can have optional :PROPERTIES: drawers associated with them. These drawers contain key/value pairs like the header's id. The reader adds all listed pairs to the header's attributes; id and class attributes are handled specially to match the way Attr are defined. This also changes behavior of how drawers of unknown type are handled. Instead of including all unknown drawers, those are not read/exported, thereby matching current Emacs behavior. • Use CUSTOM_ID in properties. See above on Org writer changes. • Respect drawer export setting. The d export option can be used to control which drawers are exported and which are discarded. Basic support for this option is added here. • Ignore leading space in org code blocks (Emanuel Evans, #2862). Also fix up tab handling for leading whitespace in code blocks. • Support new syntax for export blocks. Org-mode version 9 uses a new syntax for export blocks. Instead of #+BEGIN_<FORMAT>, where <FORMAT> is the format of the block's content, the new format uses #+BEGIN_export <FORMAT> instead. Both types are supported. • Refactor BEGIN...END block parsing. • Fix handling of whitespace in blocks, allowing content to be indented less than the block header. • Support org-ref style citations. The org-ref package is an org-mode extension commonly used to manage citations in org documents. Basic support for the cite:citeKey and [[cite:citeKey][prefix text::suffix text]] syntax is added. • Split code into separate modules, making for cleaner code and better decoupling. • Added docbook5 template. • --mathjax improvements: • Use new CommonHTML output for MathJax (updated default MathJax URL, #2858). • Change default mathjax setup to use TeX-AMS_CHTML configuration. This is designed for cases where the input is always TeX and maximal conformity with TeX is desired. It seems to be smaller and load faster than what we used before. See #2858. • Bumped upper version bounds to allow use of latest packages and compilation with ghc 8. • Require texmath 0.8.6.2. Closes several texmath-related bugs (#2775, #2310, #2824). This fixes behavior of roots, e.g. \sqrt[3]{x}, and issues with sub/superscript positioning and matrix column alignment in docx. • Clarified documentation of implicit_header_references (#2904). • Improved documentation of --columns option. • Added appveyor setup, with artefacts (Jan Schulz). • stack.yaml versions: Use proper flags for texmath, pandoc-citeproc. • LaTeX template: support for custom font families (vladipus). Needed for correct polyglossia operation with Cyrillic fonts and perhaps can find some other usages. Example usage in YAML metadata:
fontfamilies:
- name: \cyrillicfont
  font: Liberation Serif
- name: \cyrillicfonttt
  options: Scale=MatchLowercase
  font: Liberation
• Create unsigned msi as build artifact in appveyor build. • On travis, test with ghc 8.0.1; drop testing for ghc 7.4.1. # pandoc 1.17.0.3 (Mar 25, 2016) • Updated changelog.
# pandoc 1.17.0.2 (released by jgm, Mar 23, 2016) • Fixed serious regression in htmlInBalanced, which caused newlines to be omitted in some raw HTML blocks in Markdown (#2804). • File scope is no longer used when there are no input files (i.e., when input comes from stdin). Previously file scope was triggered when the json reader was specified and input came from stdin, and this caused no output to be produced. (Fix due to Jesse Rosenthal; thanks to Fedor Sheremetyev for calling the bug to our attention.)
# Creating domain-specific languages in Julia using macros Since the beginning of Julia, it has been tempting to use macros to write domain-specific languages (DSLs), i.e. to extend Julia syntax to provide a simpler interface to create Julia objects with complicated behaviour. The first, and still most extensive, example is JuMP. Since the fix for the infamous early Julia issue #265, which was incorporated in Julia 0.6, some previous methods for creating DSLs in Julia, mainly involving eval, ceased to work. In this post, we will describe a recommended pattern (i.e., a reusable structure) for creating DSLs without the use of eval, using syntax suitable for Julia 0.6 and later versions; it is strongly recommended to upgrade to Julia 0.6. ## Creating a Model object containing a function This blog post arose from a question in the JuliaCon 2017 hackathon about the Modia modelling language, where there is a @model macro. Here we will describe the simplest possible version of such a macro, which will create a Model object that contains a function, and is itself callable. First we define the Model object. It is tempting to write it like this: struct NaiveModel f::Function end We can then create an instance of the NaiveModel type (i.e., an object of that type) using the default constructor, e.g. by passing it an anonymous function: julia> m1 = NaiveModel(x -> 2x) NaiveModel(#1) and we can call the function using julia> m1.f(10) 20 If we wish instances like m to themselves behave like functions, we can overload the call syntax on the NaiveModel object: julia> (m::NaiveModel)(x) = m.f(x) so that we can now just write julia> m1(10) 20 ## Parametrising the type Since Function is an abstract type, for performance we should not have a field of this type inside our object. Rather, we parametrise the type using the type of the function: struct Model{F} f::F end (m::Model)(x) = m.f(x) julia> m2 = Model(x->2x) Model{##3#4}(#3) julia> m2(10) 20 Let’s compare the performance: julia> using BenchmarkTools julia> @btime m1(10); 41.482 ns (0 allocations: 0 bytes) julia> @btime m2(10); 20.212 ns (0 allocations: 0 bytes) Indeed we have removed some overhead in the second case. ## Manipulating expressions We wish to define a macro that will allow us to use a simple syntax, of our choosing, to create objects. Suppose we would like to use the syntax julia> @model 2x to define a Model object containing the function x -> 2x. Note that 2x on its own is not valid Julia syntax for creating a function; the macro will allow us to use this simplified syntax for our own purposes. Before getting to macros, let’s first build some tools to manipulate the expression 2x in the correct way to build a Model object from it, using standard Julia functions. First, let’s create a function to manipulate our expression: function make_function(ex::Expr) return :(x -> $ex) end julia> ex = :(2x); julia> make_function(ex) :(x->begin # In[12], line 2: 2x end) Here, we have created a Julia expression called ex, which just contains the expression 2x that we would like for the body of our new function, and we have passed this expression into make_function, which wraps it into a complete anonymous function. This assumes that ex is an expression containing the variable x and makes a new expression representing an anonymous function with the single argument x. (See e.g. my JuliaCon 2017 tutorial for an example of how to walk through the expression tree in order to extract automatically the variables that it contains.) 
Now let’s define a function make_model that takes a function, wraps it, and passes it into a Model object: function make_model(ex::Expr) return :(Model($ex)) end julia> make_model(make_function(:(2x))) :(Model((x->begin # In[12], line 2: 2x end))) If we evaluate this “by hand”, we see that it correctly creates a Model object: julia> m3 = eval(make_model(make_function(:(2x)))) Model{##7#8}(#7) julia> m3(10) 20 ## Macros However, this is ugly and clumsy. Instead, we now wrap everything inside a macro. A macro is a code manipulator: it eats code, massages it in some way (possibly including completely rewriting it), and spits out the new code that was produced. This makes macros an incredibly powerful (and, therefore, dangerous) tool when correctly used. In the simplest case, a macro takes as argument a single Julia Expr object, i.e. an unevaluated Julia expression (i.e., a piece of Julia code). It manipulates this expression object to create a new expression object, which it then returns. The key point is that this returned expression is “spliced into” the newly-generated code in place of the old code. The compiler will never actually see the old code, only the new code. macro model(ex) @show ex @show typeof(ex) return nothing end This just shows the argument that it was passed and exits, returning an empty expression. julia> m4 = @model 2x ex = :(2x) typeof(ex) = Expr We see that the Julia Expr object has been automatically created from the explicit code that we typed. Now we can plug in our previous functions to complete the macro’s functionality: julia> macro model(ex) return make_model(make_function(ex)) end @model (macro with 1 method) julia> m5 = @model 2x Model{##7#8}(#7) julia> m5(10) 20 To check that the macro is doing what we think it is, we can use the @macroexpand command, which itself is a macro (as denoted by the initial @): julia> @macroexpand @model 2x :((Main.Model)((#71#x->begin # In[12], line 2: 2#71#x end))) ## Macro “hygiene” However, our macro has an issue, called macro “hygiene”. This has to do with where variables are defined. Let’s put everything we have so far inside a module: module Models export Model, @model struct Model{F} f::F end (m::Model)(x) = m.f(x) function make_function(ex::Expr) return :(x -> $ex) end function make_model(ex::Expr) return :(Model($ex)) end macro model(ex) return make_model(make_function(ex)) end end Now we import the module and use the macro: julia> using Models julia> m6 = @model 2x; julia> m6(10) 20 So far so good. But now let’s try to include a global variable in the expression: julia> a = 2; julia> m7 = @model 2*a*x Models.Model{##7#8}(#7) julia> m7(10) UndefVarError: a not defined Stacktrace: [1] #7 at ./In[1]:12 [inlined] [2] (::Models.Model{##7#8})(::Int64) at ./In[1]:9 We see that it cannot find a. Let’s see what the macro is doing: julia> @macroexpand @model 2*a*x :((Models.Model)((#4#x->begin # In[1], line 12: 2 * Models.a * #4#x end))) We see that Julia is looking for Models.a, i.e. a variable a defined inside the Models module. To fix this problem, we must write an “unhygienic” macro, by “escaping” the code, using the esc function. 
This is a mechanism telling the compiler to look for variable definitions in the scope from which the macro is called (here, the current module Main), rather than the scope where the macro is defined (here, the Models module): module Models2 export Model, @model struct Model{F} f::F end (m::Model)(x) = m.f(x) function make_function(ex::Expr) return :(x -> $ex) end function make_model(ex::Expr) return :(Model($ex)) end macro model(ex) return make_model(make_function(esc(ex))) end end julia> using Models2 julia> a = 2; julia> m8 = @model 2*a*x Models2.Model{##3#4}(#3) julia> m8(10) 40 This is the final, working version of the macro. ## Conclusion We have successfully completed our task: we have seen how to create a macro that enables a simple syntax for creating a Julia object that we can use later. For some more in-depth discussion of metaprogramming techniques and macros, see my video tutorial Invitation to intermediate Julia, given at JuliaCon 2016:
# Sort a multidimensional list row-wise [closed]

Suppose I have a multidimensional list as shown below:

RandomInteger[{1,10},{3,7}]

I obtain the results:

$$\left( \begin{array}{ccccccc} 3 & 8 & 3 & 1 & 1 & 2 & 6 \\ 9 & 1 & 9 & 4 & 2 & 3 & 5 \\ 8 & 10 & 9 & 9 & 2 & 1 & 8 \\ \end{array} \right)$$

I applied the function Sort[%]:

$$\left( \begin{array}{ccccccc} 3 & 8 & 3 & 1 & 1 & 2 & 6 \\ 8 & 10 & 9 & 9 & 2 & 1 & 8 \\ 9 & 1 & 9 & 4 & 2 & 3 & 5 \\ \end{array} \right)$$

But the problem is, I'm trying to sort row-wise. I've played around with other functions but I'm not quite getting what I want. Can someone guide me please?

## closed as off-topic by Artes, Kuba, m_goldberg, ubpdqn, bobthechemist Jul 9 '14 at 14:18

row-wise means? – Apple Jul 9 '14 at 12:50
Try Sort /@ % instead? – Verbeia Jul 9 '14 at 12:51
Thanks @Verbeia. That works great. – Afloz Jul 9 '14 at 12:55
Formatted question description: https://leetcode.ca/all/1461.html

# 1461. Check If a String Contains All Binary Codes of Size K (Medium)

Given a binary string s and an integer k. Return True if every binary code of length k is a substring of s. Otherwise, return False.

Example 1:
Input: s = "00110110", k = 2
Output: true
Explanation: The binary codes of length 2 are "00", "01", "10" and "11". They can be all found as substrings at indices 0, 1, 3 and 2 respectively.

Example 2:
Input: s = "00110", k = 2
Output: true

Example 3:
Input: s = "0110", k = 1
Output: true
Explanation: The binary codes of length 1 are "0" and "1", it is clear that both exist as a substring.

Example 4:
Input: s = "0110", k = 2
Output: false
Explanation: The binary code "00" is of length 2 and doesn't exist in the array.

Example 5:
Input: s = "0000000001011100", k = 4
Output: false

Constraints:
• 1 <= s.length <= 5 * 10^5
• s consists of 0's and 1's only.
• 1 <= k <= 20

Related Topics: String, Bit Manipulation

## Solution 1. Sliding window

We can use a sliding window to keep track of the substring of length k, and mark the corresponding binary representation n as visited. Then we check if all the numbers in range [0, 2^k) are visited.

    // OJ: https://leetcode.com/problems/check-if-a-string-contains-all-binary-codes-of-size-k/
    // Time: O(N + min(N, 2^K))
    // Space: O(2^K)
    class Solution {
    public:
        bool hasAllCodes(string s, int k) {
            vector<bool> v(1 << k);
            int n = 0, mask = (1 << k) - 1;
            for (int i = 0; i < s.size(); ++i) {
                n = (n << 1) & mask | (s[i] - '0');
                if (i >= k - 1) v[n] = true;
            }
            for (int i = 0; i < (1 << k); ++i) {
                if (!v[i]) return false;
            }
            return true;
        }
    };

## Solution 2. Sliding window

Same idea as Solution 1, but using unordered_set to store the visited info and we just need to check the size of the set in the end.

    // OJ: https://leetcode.com/problems/check-if-a-string-contains-all-binary-codes-of-size-k/
    // Time: O(N)
    // Space: O(2^K)
    class Solution {
    public:
        bool hasAllCodes(string s, int k) {
            unordered_set<int> st;
            int n = 0, mask = (1 << k) - 1;
            for (int i = 0; i < s.size(); ++i) {
                n = (n << 1) & mask | (s[i] - '0');
                if (i >= k - 1) st.insert(n);
                if (st.size() == (1 << k)) return true;
            }
            return false;
        }
    };

Java

    class Solution {
        public boolean hasAllCodes(String s, int k) {
            int length = s.length();
            int counts = (int) Math.pow(2, k);
            if (length - k + 1 < counts) return false;
            Set<String> set = new HashSet<String>();
            StringBuffer sb = new StringBuffer();
            for (int i = 0; i < k; i++) sb.append(s.charAt(i));
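            // The source snippet is cut off at this point; the lines below are a plausible
            // completion (not the author's verified code) of the same sliding-window idea:
            // collect every k-length substring and compare the number of distinct codes with 2^k.
            set.add(sb.toString());
            for (int i = k; i < length; i++) {
                sb.deleteCharAt(0).append(s.charAt(i)); // slide the window forward by one character
                set.add(sb.toString());                 // record this k-length code
            }
            return set.size() == counts;
        }
    }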
# In the context of quantum computation, what is the evidence that querying classical oracles in superposition makes sense?

In quantum computation it is often assumed that if $$f$$ denotes some (classical) Boolean circuit $$\{0,1\}^n \rightarrow \{0,1\}$$, then a quantum circuit can have oracle access to $$f$$, that is, the quantum algorithm can query it in superposition via
$$\lvert x, y\rangle\ \mapsto\ \lvert x, f(x)\oplus y\rangle$$
What evidence is there that this (i.e. querying in superposition) is indeed a physically reasonable assumption? Given black-box access to some $$f$$, how would one go about integrating this into a quantum circuit?
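A standard way to see why this assumption is considered physically reasonable is to note that the map $$\lvert x, y\rangle \mapsto \lvert x, f(x)\oplus y\rangle$$ is a permutation of the computational basis states, hence a unitary (reversible) operation like any other gate; given a classical circuit for $$f$$, the same permutation can be built out of reversible (Toffoli-style) logic. The sketch below is not part of the original question; it is a minimal illustration of the permutation/unitarity point in plain numpy, with all names chosen here purely for illustration.

```python
# Minimal sketch (illustrative only): the classical-oracle map
# |x, y> -> |x, y XOR f(x)> is a permutation matrix, hence unitary.
import numpy as np

def oracle_matrix(f, n):
    """Build U_f acting on n input qubits plus one ancilla (output) qubit."""
    dim = 2 ** (n + 1)
    U = np.zeros((dim, dim))
    for x in range(2 ** n):
        for y in range(2):
            col = (x << 1) | y              # basis state |x, y>
            row = (x << 1) | (y ^ f(x))     # its image   |x, y XOR f(x)>
            U[row, col] = 1.0
    return U

n = 2
f = lambda x: x & 1                                   # an arbitrary example Boolean function
U = oracle_matrix(f, n)
assert np.allclose(U @ U.T, np.eye(2 ** (n + 1)))     # permutation matrix => unitary

# "Querying in superposition": apply U_f once to a uniform superposition over x with y = 0.
state = np.zeros(2 ** (n + 1))
state[0::2] = 1 / np.sqrt(2 ** n)                     # sum_x |x, 0> / sqrt(2^n)
print(U @ state)                                      # amplitude now sits on |x, f(x)> for every x
```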
# Question: Calculate the inflation rate for 2007–2008, 2008–2009, 2014–2015 and 2015–2016

###### Question details

1. Calculate the inflation rate for 2007–2008: 2008–2009: 2014–2015: 2015–2016:

2. In 2016, there were approximately 159.2 million people in the labor force and the unemployment rate was 4.9 percent. If the unemployment rate had been 4.5 percent instead of 4.9 percent,
a. How many fewer workers would have been unemployed? ___ million
b. How many more would have been employed? ___ million
Article | Open | Published: # Chromatin Protamination and Catsper Expression in Spermatozoa Predict Clinical Outcomes after Assisted Reproduction Programs ## Abstract Identification of parameters predicting assisted reproductive technologies (ARTs) success is a major goal of research in reproduction. Quality of gametes is essential to achieve good quality embryos and increase the success of ARTs. We evaluated two sperm parameters, chromatin maturity and expression of the sperm specific calcium channel CATSPER, in relation to ART outcomes in 206 couples undergoing ARTs. Chromatin maturity was evaluated by Chromomycin A3 (CMA3) for protamination and Aniline Blue (AB) for histone persistence and CATSPER expression by a flow cytometric method. CMA3 positivity and CATSPER expression significantly predicted the attainment of good quality embryos with an OR of 6.6 and 14.3 respectively, whereas AB staining was correlated with fertilization rate. In the subgroup of couples with women ≤35 years, CATSPER also predicted achievement of clinical pregnancy (OR = 4.4). Including CMA3, CATSPER and other parameters affecting ART outcomes (female age, female factor and number of MII oocytes), a model that resulted able to predict good embryo quality with high accuracy was developed. CMA3 staining and CATSPER expression may be considered two applicable tools to predict ART success and useful for couple counseling. This is the first study demonstrating a role of CATSPER expression in embryo development after ARTs programs. ## Introduction Infertility is a condition of global proportion affecting about 15% of couples1, expected to increase in the future. Assisted reproduction technologies (ARTs) are a valid and widely used treatment option for couple infertility. Although huge improvements in outcomes of ARTs have been made in the last few years, the successful pregnancy rate remains quite low, averaging, in European countries, 29.6% for in vitro fertilization (IVF) and 27.8% for intracytoplasmic sperm injection (ICSI)2. Lack of ART success implicates financial burden on both health services and patients and impacts negatively on life quality of couples. Failures in ART can be attributed to embryonic factors, as embryo quality plays a crucial role in the attainment of pregnancy3. Currently, no reliable markers are available to predict embryo quality and other early ART outcomes, which are related directly to the quality of the couple gametes. At present, male gamete assessment is based on semen analysis which is poorly predictive of both natural4 and assisted5,6 reproduction, as the semen of about 20–30% of normozoospermic men has low fertilizing ability7. Indeed, beside normal motility and morphology, a spermatozoon must have intact DNA and other essential features to be able to fertilize the oocyte and to allow a correct embryo development. Since identification/assessment of molecular markers of oocyte quality is basically unfeasible, the identification of sperm markers able to predict ART outcomes represents a priority to reduce negative psychological and economic consequences to the couples. In the present study, we focused on chromatin maturity status and expression of the calcium channel CATSPER, two essential features for correct sperm function. During spermatogenesis histones are replaced by protamines to stabilize chromatin structure8. 
This process organizes sperm DNA into a tightly packed structure that preserves the paternal genome during transit through the male and female genital tracts until the interaction with the oocyte. Although about 15% of histones are physiologically retained by spermatozoa, a greater persistence of histones or a decreased protamination is an index of chromatin immaturity that can affect sperm quality and fertilizing capacity9. Among the tests used to evaluate sperm chromatin compaction, Chromomycin A3 (CMA3) and Aniline Blue (AB) stainings are two simple, low-cost and easy-to-perform methods. CMA3 competes with protamines for binding to the DNA minor groove, providing an indirect measure of the protamination state. AB staining assesses histone persistence by binding to lysine residues of these nuclear proteins. Although several studies10,11 evaluated the impact of chromatin compaction on ARTs, the heterogeneity of the ART outcomes taken into account and the lack of consideration of possible confounding factors such as female age and female infertility factors do not allow clear conclusions to be drawn about the predictive ability of this parameter. Indeed, it is known that female age is an important predictor of ART success; in particular, the probability of pregnancy decreases markedly after the age of 35 years12,13. Similarly, some female infertility causes (i.e. poor ovarian reserve, endometriosis and polycystic ovary syndrome) are associated with reduced chance of fertilization, implantation and pregnancy in ARTs14,15,16. Another key limiting factor in female fertility is oocyte maturation, determined by the acquisition of a series of competencies during follicular development which allow reaching the metaphase II (MII) stage17. Studies on animal models highlighted the key role of the sperm-specific calcium channel CATSPER (Cationic Channel of Sperm) in the development of hyperactivated motility18,19,20, an essential sperm characteristic. The functional channel is formed by four homologous subunits (CATSPER 1–4) and at least three auxiliary subunits21,22. In the mouse, Qi et al.19 demonstrated that lack of any CATSPER subunits leads to complete absence of the channel in mature spermatozoa. Recently, we demonstrated a positive correlation between the level of expression of the CATSPER1 subunit (measured by a flow cytometric method23,24) and sperm number, progressive motility and hyperactivation24, suggesting that expression of the channel may be indicative of sperm quality. On the other hand, the few men with deletions in CATSPER subunit genes, leading to absence of a functional channel in spermatozoa, are infertile and show poor semen quality25,26. However, it is currently unknown whether CATSPER expression is implicated in the human fertilization process or related to ART outcomes. We here assessed chromatin maturity status (by CMA3 and AB staining) and CATSPER1 expression (by a flow cytometric method) in semen samples from male partners of 206 couples undergoing ART treatments. To determine whether these male molecular markers may be predictive of ART success, we evaluated their association with ART outcomes, both as single tests and in combination, taking into account several confounding factors in the statistical analysis. ## Results ### Chromatin compaction and ART outcomes Age and semen parameter values of the male partners of the 206 couples included in the study are shown in Table 1.
None of the semen parameters evaluated on the day of pick up nor male age were related to early ART outcomes, pregnancy achievement or delivery (not shown). In addition, semen parameters on the day of pick up were similar in groups with EQA <50% or ≥50%, FR <80% or ≥80% and in couples achieving or not clinical pregnancy or ending or not with delivery (Table 1), confirming lack or poor predictivity of ART outcomes by semen parameters. The median percentages of spermatozoa showing chromatin immaturity revealed by AB (n = 163) and CMA3 (n = 149) techniques were 20.0% [13.0–28.0] and 23.0% [16.0–33.5], respectively. The two measures were significantly correlated (r = 0.5, p < 0.0001, n = 147). Correlations between levels of CMA3 and AB staining and ART outcomes are reported in Table 2. The percentage of CMA3 positive spermatozoa resulted negatively associated with EQA (Table 2) even after adjusting for female age, female factor and number of MII oocytes (adj. β = −0.2, p = 0.04). No significant correlations were found between CMA3 and other ART outcomes. After categorizing couples according to the percentage of embryos with A quality (EQA ≥50% and EQA <50%), CMA3 positivity was significantly lower when embryo quality was higher (EQA <50%: 23.0 [16.5–34.5], n = 133; EQA ≥50%: 12.0 [8.5–22.0], n = 9, p = 0.005, Fig. 1A, middle panel). The difference was confirmed in a confounder-adjusted model (p = 0.02). To establish a CMA3 value able to predict an EQA ≥50%, ROC analysis was performed (Fig. 1A, lower panel). At a threshold of 19.5%, CMA3 predicted the attainment of EQA ≥50% with 78% sensitivity and 65% specificity. Applying a logistic regression model, we found that the probability of obtaining EQA ≥50% was higher when the CMA3 positivity was ≤19.5% (OR = 6.6, CI 95%: 1.29–33.63, p = 0.02). AB positivity was negatively associated with FR (Table 2 and Fig. 1B, middle panel), even in a confounder-adjusted model (adj. β = −0.2, p = 0.02). No correlation was observed with other ART outcomes (Table 2). To determine the accuracy of AB in predicting the FR, we used ROC as a binary classifier system choosing a value of 80% FR, which corresponds to median value of the cohort (Fig. 1B, lower panel). At a threshold of 25.5%, AB predicted FR ≥80% with a good sensitivity (78%) but low specificity (41%) (Fig. 1B, lower panel). A post hoc binary logistic regression analysis indicated that the probability of obtaining an FR ≥80% was higher when the AB positivity was ≤25.5%, with an OR of 2.3 (CI 95%: 1.19–4.79, p = 0.01). To further investigate whether female age affects the association between CMA3 and AB positivity and ART outcomes, a subgroup analysis was performed according to women age ≤35 or >35 years (the median age of our cohort and the threshold above which the risk of miscarriage and chromosomal aberrations significantly increase27,28 and the probability of pregnancy decreases12,13). In couples with women ≤35 years (n = 115), the subgroup analysis confirmed the significant difference in CMA3 positivity between EQA <50% and EQA ≥50% (not shown) as well as the correlation between AB positivity and FR (not shown) found in the entire cohort. In addition, the OR to predict the achievement of an EQA ≥50% for CMA3 threshold of 19.5 increased to 10.7 (CI 95%: 1.78–97.74, p = 0.04), and the OR to obtain an FR ≥80% for the AB threshold of 25.5% to 2.8 (CI 95%: 1.12–6.95, p = 0.03). 
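The cut-offs quoted above come from ROC analysis of each marker against a binary outcome. As a purely illustrative aside (not the authors' code, and using synthetic data), the sketch below shows one common way such a threshold can be chosen, by maximizing Youden's J = sensitivity + specificity - 1 over candidate cut-offs; all variable names and numbers are invented for the example.

```python
# Illustrative only: choosing a marker cut-off from a ROC-style scan (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
outcome = rng.integers(0, 2, size=200)                    # 1 = outcome reached (e.g. EQA >= 50%)
marker = rng.normal(np.where(outcome == 1, 18, 26), 6)    # toy marker: lower values favour the outcome

best = None
for t in np.unique(marker):
    pred_pos = marker <= t                                # "test positive" = marker at or below cut-off
    sens = np.mean(pred_pos[outcome == 1])                # sensitivity
    spec = np.mean(~pred_pos[outcome == 0])               # specificity
    j = sens + spec - 1                                   # Youden's J
    if best is None or j > best[0]:
        best = (j, t, sens, spec)

print("cut-off %.1f: sensitivity %.2f, specificity %.2f" % (best[1], best[2], best[3]))
```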
### CATSPER1 expression and ART outcomes In the 141 male partners of the cohort, the median value of CATSPER1 expression was 4.5 [3.5–5.8]. At a first glance, CATSPER1 expression was found to be correlated with no ART outcome (not shown). However, after adjustment for female age, female factor and number of MII oocytes, a positive correlation between CATSPER1 expression and EQA was unmasked (adj. β = 0.2, p = 0.03). Figure 2B shows that CATSPER1 MFI was significantly higher in the group with EQA ≥50% (5.3 [4.3–6.9, n = 16] in EQA ≥50% vs 4.3 [3.4–5.6, n = 120] in EQA <50%, p = 0.002). The difference was also confirmed after adjusting for confounders (p = 0.03). A ROC curve analysis was performed to determine the threshold of CATSPER1 expression associated with EQA ≥50% (Fig. 2C). We found that the attainment of a good embryo quality was predicted with a specificity of 91% and a sensitivity of 44% at the CATSPER1 value of 6.74. By binary logistic regression, we found that above this threshold (CATSPER1 ≥6.74) the OR to obtain an EQA ≥50% was 14.3 (CI 95%: 3.50–58.09; p < 0.0001). In the subgroup of younger women (≤35 years), CATSPER1 expression was significantly correlated with EQA, IR and PR (Table 3), even after adjusting for female factor and number of MII oocytes (EQA: adj. β = 0.3, p = 0.005; IR: adj. β = 0.3, p = 0.03; PR: adj. β = 0.2, p = 0.05). As in the entire cohort, CATSPER1 MFI was higher in couples with EQA ≥50% (6.8 [4.5–7.2, n = 9] vs 4.1 [3.4–5.3, n = 64] in EQA <50%, p = 0.006), even in a confounder-adjusted model, p = 0.02. In addition, in this subgroup, CATSPER1 expression was higher in couples achieving clinical pregnancy (5.2 [3.7–6.8, n = 20] vs 4.05 [3.2–5.1, n = 45] in the non-pregnant, p = 0.02; Fig. 2D). This difference was confirmed after adjusting for female factor and number of MII oocytes (p = 0.05). CATSPER expression was also slightly, but not significantly, higher in couples ending with a delivery (median values: 5.9 [3.7–6.9, n = 13] vs 4.6 [3.6–6.4, n = 7], p = ns). Logistic regression analysis showed that when CATSPER1 MFI was ≥6.74 (see above) the odds to obtain EQA ≥50% (OR = 17.7, CI 95%: 3.07–102.07, p = 0.001) and to achieve clinical pregnancy (OR = 4.4, CI 95%: 1.08–18.21, p = 0.04) were higher. ### Development of an Embryo quality prediction model The results concerning the ability of both CMA3 and CATSPER1 levels to predict EQA ≥50% prompted us to build up an embryo quality prediction model. As mentioned above, embryo quality is considered, indeed, as a strong predictor of implantation, pregnancy and live birth after ARTs3,13,29,30. In particular, in our cohort, couples with an EQA ≥50% had 3.17 higher probability of pregnancy (CI 95%: 1.04–9.62, p = 0.04). Considering previously published studies regarding the clinical significance of female parameters on ART outcomes, the model not only included CMA3 positivity and CATSPER1 expression, but also female age12,13, female factor14,15,16 and number of MII oocytes17. Table 4 reports coefficients from the logistic regression model that could be used to calculate a probability of obtaining an EQA ≥50% for all couples. The equation describing the probability of developing good embryos is: $${\rm{P}}={e}^{x}\div(1+{e}^{x})$$ with x = b0 + b1*p1 + b2*p2 + b3*p3 + b4*p4+b5*p5, where p1…p5 are our predictors’ values and b0…b5 are the coefficients derived from the model (Table 4). We assessed the discrimination of the predictive model by calculating the area under the ROC curve (Fig. 
3), which indicates that our model is able to predict the achievement of an EQA ≥50% with good probability. The goodness of fit of the model was evaluated using the Hosmer–Lemeshow test. This test demonstrated no statistically significant difference between the predicted and observed values (χ² = 3.107, p = 0.927). ## Discussion Despite the considerable progress made in recent years, the rate of clinical pregnancy after ARTs remains low (about 30%). The identification of one or more parameters able to predict the outcomes of ARTs appears mandatory to increase the success rate, avoid psychological stress, optimize couple counselling and reduce costs. Among the critical steps for ART success, the development of a good quality embryo appears of utmost importance, as it is highly related to the attainment of clinical pregnancy, as demonstrated in the current and previous studies3. We show here that two sperm parameters, chromatin compaction as evaluated by CMA3 staining and expression of the sperm-specific calcium channel subunit CATSPER1, show significant correlations with development of good quality embryos, suggesting a certain degree of dependence of embryo quality on the two parameters. Based on the evaluation of the two sperm parameters, and including other parameters that are known to affect embryo quality, we developed a model that was able to predict the ability of couples to obtain good quality embryos with high accuracy. In addition, we demonstrated that expression of CATSPER1 unveils the importance of sperm quality in pregnancy achievement in couples with female age below 35 years. In our study, chromatin maturity status has been assessed by two methods, CMA3 and AB, which are widely described in the literature10,11. Theoretically the two methods should evaluate the same sperm aspect and, indeed, a positive relationship is present between the results obtained with the two techniques (refs31,32 and present study). However, the correlation is not as tight as expected (r = 0.5 in the present study and 0.4 in the Iranpour et al.31 study), suggesting that histone retention (evaluated by AB) does not necessarily correspond to a decrease of protamination (evaluated by CMA3) and vice versa. This conclusion is strengthened by the demonstration that CMA3 and AB results are differently related to ART outcomes (present study and refs31,33). In particular, we found that whereas CMA3 values are associated with the development of good quality embryos, AB staining is only weakly associated with fertilization rate. Our data on the association between CMA3 positivity and embryo quality is in apparent contrast with two previous studies, where such correlation was not observed34,35. Of note, the latter studies were conducted on a small number of couples and without considering female factor and/or female age34 or simply excluding female factors from the analysis35. We now show not only that CMA3 predicts achievement of EQA ≥50% in the entire cohort, but also that the OR for this prediction is almost twice as high in couples with younger women. This suggests that female age can mask the sperm contribution to embryo development and highlights the need to consider female age and female factors when analysing the impact of sperm characteristics on ART results.
Protamines are important actors in the fertilization process: they are exchanged for maternal nucleosomes shortly after fertilization36 and this process may be important for reprogramming to totipotency of the zygote37. At difference with other studies31,32,33,38,39 we did not observe any relationship between CMA3 and fertilization rate, probably because of the high FR obtained in our study (about 80%). However, a negative relationship with FR was observed for AB staining, suggesting that the chromatin status may impact the ability of spermatozoa to fertilize the oocyte. It appears from our study that histones persistence, rather than protamination, is more important in the fertilization process. Other groups did not find any associations between sperm AB staining and ART outcomes probably due to the small number of included couples31,32,40. Further studies are needed to better understand the role of histone persistence in fertilization. Overall our results remark the importance of chromatin maturation for ART success. On the other hand, an incorrect chromatin compaction exposes spermatozoa to DNA damage41 which can impact ART outcomes42,43,44. With regard to most assays assessing sperm DNA fragmentation, the tests detecting chromatin immaturity used in our study are easy and rapid to perform, require a low number of cells and no technologically advanced instruments, hence they could be performed in any ART centre. All these advantages make CMA3 and AB methods valuable tools to support routine semen analysis in the diagnosis of male partner of infertile couples. The present study is the first one evaluating the association between CATSPER1 protein expression (likely reflecting the expression of the entire CATSPER channel19) and ART outcomes. A role of CATSPER channel in sperm-oocyte interaction is expected since KO mice for any CATSPER subunit are infertile19 because their spermatozoa are unable to develop hyperactivated motility and to penetrate the zona pellucida. In addition, men with mutations/deletions in genes encoding for CATSPER subunits show fertility problems26 likely due to absence of a functional channel25. In human spermatozoa CATSPER is activated with a non-genomic mechanism45 by progesterone46,47, a hormone present in high concentrations at fertilization site, by a rise of intracellular pH, such as induced by 4-aminopyridine48, and by other components present in follicular fluid49. Sperm responsiveness to both progesterone and 4-aminopyridine are correlated to fertilization rate in ART programs48,50,51. We found here that CATSPER1 expression levels are higher in couples with good embryo quality in the entire cohort and in couples achieving clinical pregnancy in the subgroup of young women. Most importantly, we demonstrate that, at the threshold of ≥6.74 MFI, CATSPER1 expression predicts development of good embryos with an OR of 14.3. This result indicates that a higher expression of the channel in sperm is important for a correct human embryo development. However, whether CATSPER channel is indicative of a sperm characteristic implicated in embryo development or is required itself for the process is presently unknown. In a previous paper by our group we demonstrated that the percentage of human spermatozoa expressing CATSPER1 predicts with high accuracy the sperm ability to hyperactivate24, suggesting that the channel is involved in the capacitation process leading to hyperactivation in vitro. 
It has been demonstrated that spermatozoa of CATSPER KO mice partially reacquire the ability to fertilize the oocyte by in vitro fertilization and to develop blastocysts when capacitation is accelerated by the addition of the calcium ionophore A23187, evidencing a crucial role of CATSPER in capacitation52. Recently, it has been demonstrated in cattle that artificial induction of capacitation is important for blastocyst formation also when fertilization is obtained with ICSI53. Capacitation is essential for mammalian male fertility, as demonstrated by several KO models where the process is impaired54,55. Interestingly, Navarrete et al.52 showed that artificial induction of capacitation in some of these models (including, as mentioned above, CATSPER) restores the fertilizing ability of spermatozoa in vitro. We speculate that a higher CATSPER1 expression leads to a higher degree of capacitation of spermatozoa, which is important for embryo development. Whereas the reason why lack of capacitation can affect fertilization competence in in vitro fertilization is understandable (lack of hyperactivated motility, inability to respond to acrosome reaction stimuli), why it might influence embryo development is obscure. Capacitated spermatozoa show many features that may affect oocyte activation and embryo development56, including elevated intracellular calcium levels57, tyrosine phosphorylation of proteins58,59, and modifications of the extracellular membrane composition60. Lack of or low CATSPER expression may result in an alteration of the capacitation-related sperm calcium balance that could impact oocyte activation. A recent study demonstrated that sperm expression of the protein PAWP is associated with embryo development in couples undergoing ICSI61 although, also in this case, the mechanisms involved in such action are poorly defined. Of note, a recent study demonstrated the importance of the sperm calcium channel TRP-3 in mediating the calcium wave occurring at fertilization in the C. elegans oocyte62. To assess the role of the CATSPER channel during fertilization and embryo development, further studies in animal models are necessary. The involvement of CATSPER1 expression in EQ is reinforced by our results within the younger women subgroup, where the same parameter was associated with and predicted pregnancy achievement with an OR of 4.4. The association between CATSPER1 and pregnancy achievement could simply reflect the higher embryo quality obtained in subjects with higher CATSPER1 expression. The relationship between CATSPER1 and pregnancy is observed in the subgroup with younger women, where the probability of pregnancy is higher and the contribution of the male factor is likely unmasked. None of the sperm parameters evaluated in our study were related to delivery rate. It should be considered that there are many factors that may influence a term pregnancy and the birth of a healthy child, some of which could be independent of the fact that clinical pregnancy has been obtained by ARTs. Since both CMA3 and CATSPER1 are able to discriminate between low and high quality embryos, we introduced these markers in a model including also female parameters (female age, female factor and number of MII oocytes) in order to improve the prediction of EQA. The probability of developing a good quality embryo derived from this model is more accurate than that obtained from the single parameters. Moreover, in the present study, the predicted and observed values did not differ, confirming the reliability of the model.
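To make the use of such a model concrete, the short sketch below shows how the logistic equation reported in the Results, P = e^x / (1 + e^x) with x = b0 + b1*p1 + ... + b5*p5, would be applied to a new couple. It is only an illustration, not the authors' code: the coefficient values and the predictor ordering are placeholders, not the values reported in Table 4.

```python
# Illustrative sketch of applying the embryo-quality prediction model (placeholder coefficients).
import math

def predict_eqa_probability(coeffs, predictors):
    """coeffs = [b0, b1, ..., b5]; predictors = [p1, ..., p5]."""
    x = coeffs[0] + sum(b * p for b, p in zip(coeffs[1:], predictors))
    return math.exp(x) / (1.0 + math.exp(x))   # logistic link: P = e^x / (1 + e^x)

# Hypothetical ordering: [CMA3 %, CATSPER1 MFI, female age, female factor (0/1), number of MII oocytes]
example_coeffs = [-1.2, -0.05, 0.4, -0.08, -0.6, 0.1]   # NOT the values from Table 4
example_couple = [18.0, 6.8, 33, 0, 9]
print(round(predict_eqa_probability(example_coeffs, example_couple), 2))
```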
Several prediction models focusing on embryo quality, clinical pregnancy or live births (using IVF or ICSI) as primary outcome are present in the literature. Such studies include female factors (age, causes of infertility, hormone levels etc.) and/or general couple data (history and type of infertility) omitting entirely male parameters or simply considering the presence/absence of male factor infertility63,64,65,66. To our knowledge, this is the first study to present a novel model encompassing both female parameters and sperm intrinsic characteristics, such as the chromatin maturity status and expression of CATSPER1, as available prognostic factors in prediction models for IVF/ICSI outcomes. This study has the strength of determining the impact of chromatin immaturity and CATSPER1 expression on ART outcomes taking into account female age, female factor and number of MII oocytes as confounders in the statistical analysis in a large number of couples. The study has some limitations. In particular, CATSPER1 expression was evaluated only in subjects with a sufficient number of spermatozoa to allow the determination (i.e. when it was possible to harvest 10 million spermatozoa from the entire ejaculate before selection for ARTs). Thus, patients with a low initial sperm number or severe male factor were not included. Since CATSPER1 expression is positively related with sperm number24, it is possible that subjects with low CATSPER1 expression were less represented in our cohort. In conclusion, our study demonstrates that sperm histone retention plays a role in oocyte fertilization, whereas sperm protamine content and expression of CATSPER1 are involved in the development of good quality embryos. Combining the latter two markers with female age and female factor, we developed a prediction model of embryo quality which could be applicable in clinical practice and in the management of couples undergoing ARTs. ## Materials and Methods ### Study design and participants The experimental protocol has been approved by the internal ethical committee of Demetra ART Center of Florence (Italy). We enrolled in a prospective cohort study 206 consecutive couples undergoing ART cycles at the Demetra ART Center of Florence (Italy) from March 2015 to October 2016. The obtainment of an informed written consent from the couples was the only criterion for inclusion in the study. All the couples were informed that, after the normal clinical practice for the ART treatment, the eventual remaining semen or selected spermatozoa would be used for the study. The infertility diagnosis was: 56% female factor, 18% male factor, 10% male and female factor in combination and 16% unexplained. 156 couples were treated with ICSI and 50 with IVF. In 30 couples it was not possible to perform fresh transfer. Indications for deferred embryo transfer were: risk of ovarian hyperstimulation syndrome67, elevated progesterone levels (≥1.5 ng/ml) and inadequate endometrium on the trigger day68 in 20, 4 and 6 cases, respectively. To avoid a potential confounding bias due to different embryo transfer (fresh or frozen), implantation rate, pregnancy rate and delivery rate were calculated only for fresh embryo transfer (176/206). We transferred 1 embryo in 45 cases (26%), 2 in 124 (70%) and 3 in 7 (4%). The median age of subjects was 35 [23–43] and 38 [27–55] years for female and male partners, respectively. 
### Ovarian stimulation, IVF, ICSI, and Embryo Development All the patients were treated according to the standard ovarian stimulation protocols of the clinic: 1) midluteal-phase GnRH-agonist (triptorelin, Decapeptyl, Ipsen Pharma) long protocol, followed by gonadotropin stimulation; 2) follicular phase GnRH-agonist/flare protocol (triptorelin, Decapeptyl, Ipsen Pharma), started with gonadotropin stimulation; 3) short protocol including gonadotropin stimulation from day 2 of the cycle, combined with a flexible antagonist protocol (cetrorelix 0.25 mg/day Cetrotide, Merck Serono or ganirelix 0.25 mg, Orgalutran, MSD Italia). In all cases follicle stimulation was performed with individual dosage of recombinant follicle stimulating hormone r–FSH (Gonal F, Merk Serono or Puregon, MSD Italia) or of highly purified human menopausal gonadotropin hMG (Meropur, Ferring), with a starting dose ranging from 150 to 450 IU, according to age, body mass index, ovarian reserve index and response to previous ovarian stimulation. The dose was then modified according to the ovarian response (determined by serum estradiol levels and ultrasound evaluation at 2 days interval), until at least two follicles reached 17 mm in mean diameter. Finally, oocytes maturation was induced by injection of 5000 IU of u-hCG (Gonasi, Ibsa Farmaceutici Italia) or 250 µg r-hCG (Ovitrelle, Merck Serono). Gonadotropin stimulation and GnRH-agonist or GnRH-antagonist was continued until the day of hCG triggering, when progesterone levels were measured. Oocytes retrieval was performed about 35 hours later by sonographically guided puncture of the follicles, under sedation and local anesthesia. For IVF, undecumulated oocytes were incubated overnight with about 50.000 spermatozoa/oocyte in Continuous Single Culture® Complete medium (Irvine Scientific, Santa Ana, CA, USA). For ICSI, Nikon Eclipse TE2000-S microscope equipped with Narishige IM-9B Microinjector was used. After 18 ± 1 hours from insemination (IVF) or after 17 ± 1 hours from microinjection (ICSI), oocytes were assessed for 2 pro-nuclei presence. Continuous Single Culture® Complete medium was used for embryo culture. After 24 ± 1, 44 ± 1 and 68 ± 1 hours, pace of division, degree of fragmentation, size and symmetry of the blastomeres were evaluated by Nikon Eclipse TE2000-S microscope (Nikon, Tokyo, Japan). Embryos were incubated in a MINC benchtop incubator (Cook Medical, Bloomington, USA) at 37 °C, 6% CO2 and 5% O2 and were scored according to the criteria detailed in Supplementary Fig. 1. Embryos showing the best properties were classified into A class. Embryos showing slight deviation in the degree of fragmentation (5–30%), symmetry and division pace were classified into B and B/C (Supplemental Fig. 1). More considerable deviations were the cause for classifying them into C and D. Degenerated or arrested embryos (type E) were not transferred. Surplus transferable embryos were cryopreserved. After 3 days post oocyte retrieval, embryos were transferred into the uterus. Luteal support was given to all patients, administered as intravaginal micronized progesterone (Progeffik, 200 mg three times daily, EFFIK Italia), from the day after oocyte pick up until 12 days after embryo transfer, when serum hCG was measured. In case of positive hCG levels, clinical pregnancy was verified by ultrasound about 15 days later. ### Sperm preparation Semen samples were collected by masturbation after 2–7 days of abstinence on the day of oocyte insemination. 
Sperm number, progressive motility and morphology were evaluated after liquefaction at 37 °C, according to WHO criteria69. Briefly, sperm number was evaluated by improved Neubauer chamber after appropriate dilution, motility by Nikon Eclipse TE2000 microscope scoring at least 100 spermatozoa/slide and morphology after Diff-Quick staining69. Sperm selection for oocyte insemination was performed by swim up (95 samples) or density gradient centrifugation (111 samples), according to sample characteristics. Swim up was performed by washing seminal fluid with Sperm Wash Medium (Irvine, Santa Ana, CA, USA) supplemented with 1% human serum albumin (HSA), and centrifuging at 300 g for 10 min. The obtained pellet was gently layered with 1 ml of the same medium and incubated at 37 °C. After 45 min, 800 µl of the upper medium phase was collected. Density gradient centrifugation was performed layering 1 ml semen samples on 1 ml 45% and 90% stratified PureSperm (Nidacon, Gothenberg, Sweden) fractions (prepared in Sperm Wash Medium /HSA medium) and centrifuged at 300 g for 10 min at room temperature (RT). The resulting pellet was collected and transferred to separate test tubes. Then, each fraction was washed with 1 ml of Sperm Wash/HSA medium and then re-suspended in the same medium. After selection, the obtained fraction was checked for sperm count and motility, kept at 37 °C in the same medium and used to inseminate the oocytes within 15 minutes from selection. ### Sperm chromatin immaturity Sperm chromatin immaturity was evaluated in selected spermatozoa remaining after oocyte insemination by AB (n = 149) and CMA3 (n = 163) staining. After sperm selection and fixation in paraformaldehyde [PFA, 500 µL, 4% in phosphate-buffered saline (PBS) pH 7.4, for 30 min at RT], 4 × 105 spermatozoa were stained with 100 µL of CMA3 (Sigma Aldrich, St Louis, MO, USA) solution [0.25 mg/mL in McIlvane’s buffer (0.2 M Na2HPO4, 0.1 M citric acid), pH 7.0, containing 10 mM MgCl2], for 20 min at RT in the dark. Cells were then washed and resuspended in 10 µL of McIlvane’s buffer, pH 7.0, containing 10 mM MgCl2, smeared on slide, air-dried and mounted with PBS: glycerol (1:1). Two hundred spermatozoa were analyzed on each slide by fluorescence microscope (Axiolab A1 FL; Carl Zeiss, Milan, Italy), equipped with Filter set 49 and an oil immersion 100x magnification objective. Two types of staining patterns were identified: bright green fluorescence of the sperm head (abnormal chromatin packaging) and weak green staining (normal chromatin packaging) (Fig. 1A)70. AB staining, which selectively stains lysine-rich histones71 was performed as previously described70. Briefly, after fixation in 4% PFA, 1 × 105 spermatozoa were smeared on slide, air-dried and then stained with 5% aqueous AB (Sigma Aldrich, St Louis, MO, USA) mixed with 4% acetic acid (pH 3.5) for 5 min72 at RT. Two hundred spermatozoa were analyzed on each slide under a light microscope (Leica DM LS; Leica, Wetzlar, Germany). Spermatozoa showing dark-blue staining were considered as AB positive (Fig. 1B)72. ### Detection of CATSPER1 The extent of CATSPER1 expression in spermatozoa was determined in whole semen (n = 141) remaining after sperm preparation for ART, by an immunofluorescence-flow cytometric method, as previously described23,24. 
10 × 106 unselected spermatozoa were fixed in 4% PFA and washed twice in 1% NGS (normal goat serum, Sigma Aldrich, St Louis, MO, USA)-PBS, before permeabilization with 0.1% Triton X-100 in 100 µL 0.1% sodium citrate for 4 min in ice. After splitting into three identical aliquots, sperm samples were incubated for 1 hour at RT either with anti-CATSPER1 antibody (4 µg/ml, test sample, Santa Cruz Biotechnology, Dallas, TX, USA) or normal rabbit serum (4 µg/ml, Signet Laboratories, Hayward, CA, USA), the latter for negative control. The samples were washed twice in 1% NGS- PBS, and subsequently were incubated for 1 hour in the dark with goat anti-rabbit IgG-FITC (Southern Biotech, Birmingham, AL, USA) diluted 1:100 in 1% NGS-PBS. After two washing procedures, spermatozoa were resuspended in 300 µL PBS and incubated in the dark for 15 min at RT with 4.5 µL Propidium Iodide (PI, 50 µg/ml in PBS) to stain the nuclei. The third aliquot of spermatozoa was prepared with the same procedure but omitting the PI staining, for instrumental compensation. Samples were acquired using a flow cytometer (FACScan, Becton Dickinson, Mountain View, CA, USA) equipped with a 15-mW argon ion laser used at 488 nm for excitation. Green fluorescence of FITC-conjugated goat anti-mouse IgG was revealed by an FL-1 (515–555-nm wavelength band) detector; red fluorescence of PI was detected by an FL-2 (563–607-nm wavelength band) detector. We acquired 8000 nucleated events (i.e. the events stained with PI) in the gate of the characteristic forward scatter/side scatter region of sperm cells73. CATSPER1 expression in the different samples was expressed as median fluorescence intensity (MFI), calculated by the ratio between the median intensity of cells of the test sample and the median intensity of cells of the corresponding negative control (a fluorescence histogram depicting a negative control and a test sample is shown in Fig. 2A). Spermatozoa stained with the anti-CATSPER1 antibody used in our experiments were observed using Axiolab A1 FL (Carl Zeiss, Milan, Italy) fluorescence microscope using an oil immersion 100x magnification objective. The staining reveals a patchy and punctate pattern in the tail of most CATSPER positive spermatozoa (Fig. 2B)23,24. ### Statistical analysis The following ART outcomes were considered: fertilization rate (FR, number of fertilized oocytes/number of inseminated oocytes); cleavage rate (CR, number of embryos/number of fertilized oocytes); good embryo quality (EQA, number of embryos of A quality/number of total embryos); implantation rate (IR, number of gestational sac with fetal heart beat/number of transferred embryos); pregnancy rate (PR, number of clinical pregnancy/number of transferred embryos) and delivery rate (DR, number of delivery/number of clinical pregnancy). Data were analyzed with SPSS (Statistical Package for the Social Sciences, Chicago, IL, USA), version 24.0 for Windows. Continuous variables that were found to be not normally distributed after the Kolmogorov-Smirnov test were expressed as median (interquartile range- IQR) value. Considering that no statistically significant differences were observed between IVF and ICSI for each outcome (not shown), the analysis was conducted in the entire cohort. Correlations were assessed using Spearman’s methods and Mann–Whitney U test was used for comparisons between groups. 
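To make the quantities defined above concrete, the snippet below sketches the MFI ratio used to quantify CATSPER1 expression and the ART outcome rates listed in the Statistical analysis section. This is a minimal illustration, not the study's analysis code; all function names, variable names and example figures are hypothetical.

```python
# Minimal sketch (not the authors' code): the MFI ratio for CATSPER1 expression
# and the ART outcome rates as defined in the text. Names are hypothetical.
import numpy as np

def mfi_ratio(test_intensities, control_intensities):
    """CATSPER1 expression as MFI: median intensity of the test sample divided by
    the median intensity of the corresponding negative control."""
    return np.median(test_intensities) / np.median(control_intensities)

def outcome_rates(n_inseminated, n_fertilized, n_embryos, n_grade_a,
                  n_transferred, n_sacs_with_heartbeat, n_pregnancies, n_deliveries):
    """ART outcomes as defined above: FR, CR, EQA, IR, PR, DR."""
    return {
        "FR": n_fertilized / n_inseminated,        # fertilization rate
        "CR": n_embryos / n_fertilized,            # cleavage rate
        "EQA": n_grade_a / n_embryos,              # good (A) embryo quality
        "IR": n_sacs_with_heartbeat / n_transferred,  # implantation rate
        "PR": n_pregnancies / n_transferred,       # clinical pregnancy rate
        "DR": n_deliveries / n_pregnancies,        # delivery rate
    }
```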
Multivariate analysis was performed to adjust data for confounding factors known to influence ART outcomes, such as female age12,13, female factors14,15,16 and number of MII oocytes17. For female factors only poor ovarian reserve, endometriosis and polycystic ovary syndrome were considered as, in ARTs, all other female factors are overcome by embryo transfer. We used receiver operating characteristic (ROC) curve analysis to test the accuracy (as area under the curve, AUC) with 95% confidence interval, the sensitivity and the specificity, as well as to identify cut-off values of the different sperm variables (AB, CMA3 and CATSPER1) in predicting ART outcomes. Logistic regression was used to estimate adjusted odds ratios (OR) with 95% confidence intervals (CI). Prediction models were constructed for those outcomes resulting correlated to one or more evaluated parameters. Predictors significantly associated with such outcomes were analyzed at multivariable logistic regression including in the model female age, female factor and number of MII oocytes as covariates. The performance of the models was quantified with respect to discrimination74, i.e. how the goodness of the model is able to distinguish between the two groups achieving or not the outcome, which was quantified with the ROC AUC. The reliability of the prediction produced by the model was statistically tested by the Hosmer-Lemeshow goodness-of-fit test. All statistical tests were 2-sided, and P values of ≤0.05 were considered statistical significant. ### Data availability The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## References 1. 1. Agarwal, A., Mulgund, A., Hamada, A. & Chyatte, M. R. A unique view on male infertility around the globe. Reprod. Biol. Endocrinol. 26, 13–37 (2015). 2. 2. European IVF-Monitoring Consortium (EIM) for the European Society of Human Reproduction and Embryology (ESHRE), et al. Assisted reproductive technology in Europe, 2013: results generated from European registers by ESHRE. Hum. Reprod. (2017). 3. 3. Cai, Q., Wan, F., Appleby, D., Hu, L. & Zhang, H. Quality of embryos transferred and progesterone levels are the most important predictors of live birth after fresh embryo transfer: a retrospective cohort study. J. Assist. Reprod. Genet. 31, 185–194 (2014). 4. 4. Leushuis, E. et al. Semen analysis and prediction of natural conception. Hum. Reprod. 29, 1360–1367 (2014). 5. 5. Hotaling, J. M., Smith, J. F., Rosen, M., Muller, C. H. & Walsh, T. J. The relationship between isolated teratozoospermia and clinical pregnancy after in vitro fertilization with or without intracytoplasmic sperm injection: a systematic review and meta-analysis. Fertil. Steril. 95, 1141–1145 (2011). 6. 6. Shabtaie, S. A., Gerkowicz, S. A., Kohn, T. P. & Ramasamy, R. Role of Abnormal Sperma Morphology in Predicting Pregnancy Outcomes. Curr. Urol. Rep. 17, 67 Review (2016). 7. 7. Liu, D. Y. & Baker, H. W. Disordered zona pellucida-induced acrosome reaction and failure of in vitro fertilization in patients with unexplained infertility. Fertil. Steril. 79, 74–80 (2003). 8. 8. Shaman, J. A., Prisztoka, R. & Ward, W. S. Topoisomerase IIB and an extracellular nuclease interact to digest sperm DNA in an apoptotic-like manner. Biol. Reprod. 75, 741–748 (2006). 9. 9. Ward, W. S. & Coffey, D. S. 
DNA packaging and organization in mammalian spermatozoa: comparison with somatic cells. Biol. Reprod. 44, 569–574 Review. (1991). 10. 10. Tavalaee, M., Razavi, S. & Nasr-Esfahani, M. H. Influence of sperm chromatin anomalies on assisted reproductive technology outcome. Fertil. Steril. 91, 1119–1126 (2009). 11. 11. Irez, T. et al. Investigation of the association between the outcomes of sperm chromatin condensation and decondensation tests, and assisted reproduction techniques. Andrologia. 47, 438–447 (2015). 12. 12. Sharma, V., Allgar, V. & Rajkhowa, M. Factors influencing the cumulative conception rate and discontinuation of in vitro fertilization treatment for infertility. Fertil. Steril. 78, 40–46 (2002). 13. 13. Cai, Q. F., Wan, F., Huang, R. & Zhang, H. W. Factors predicting the cumulative outcome of IVF/ICSI treatment: a multivariable analysis of 2450 patients. Hum. Reprod. 26, 2532–2540 (2011). 14. 14. Broekmans, F. J., Kwee, J., Hendriks, D. J., Mol, B. W. & Lambalk, C. B. A systematic review of tests predicting ovarian reserve and IVF outcome. Hum. Reprod. Update. 12, 685–718 Review (2006). 15. 15. Barnhart, K., Dunsmoor-Su, R. & Coutifaris, C. Effect of endometriosis on in vitro fertilization. Fertil. Steril. 77, 1148–1155 (2002). 16. 16. Qiao, J. & Feng, H. L. Extra- and intra-ovarian factors in polycystic ovary syndrome: impact on oocyte maturation and embryo developmental competence. Hum. Reprod. Update. 17, 17–33 Review (2011). 17. 17. Swain, J. E. & Pool, T. B. ART failure: oocyte contributions to unsuccessful fertilization. Hum. Reprod. Update. 14, 431–446 Review (2008). 18. 18. Ren, D. et al. A sperm ion channel required for sperm motility and male fertility. Nature. 413, 603–609 (2001). 19. 19. Qi, H. et al. All four CatSper ion channel proteins are required for male fertility and sperm cell hyperactivated motility. Proc. Natl. Acad. Sci. USA 104, 1219–1223 (2007). 20. 20. Jin, J. et al. Catsper3 and Catsper4 are essential for sperm hyperactivated motility and male fertility in the mouse. Biol. Reprod. 77, 37–44 (2007). 21. 21. Ren, D. & Xia, J. Calcium signaling through CatSper channels in mammalian fertilization. Physiology (Bethesda) 25, 165–175 (2010). 22. 22. Chung, J. J., Navarro, B., Krapivinsky, G., Krapivinsky, L. & Clapham, D. E. A novel gene required for male fertility and functional CATSPER channel formation in spermatozoa. Nat. Commun. 2, 153 (2011). 23. 23. Tamburrino, L. et al. The CatSper calcium channel in human sperm: relation with motility and involvement in progesterone-induced acrosome reaction. Hum Reprod. 29, 418–428 (2014). 24. 24. Tamburrino, L. et al. Quantification of CatSper1 expression in human spermatozoa and relation to functional parameters. Hum. Reprod. 30, 1532–1544 (2015). 25. 25. Smith, J. F. et al. Disruption of the principal, progesterone-activated sperm Ca2+ channel in a CatSper2-deficient infertile patient. Proc. Natl. Acad. Sci. USA 110, 6823–6828 (2013). 26. 26. Hildebrand, M. S. et al. Genetic male infertility and mutation of CATSPER ion channels. Eur. J. Hum. Genet. 18, 1178–1184 (2010). 27. 27. Heffner, L. J. Advanced maternal age–how old is too old? N. Engl. J. Med. 351, 1927–1929 (2004). 28. 28. Franasiak, J. M. et al. The nature of aneuploidy with increasing age of the female partner: a review of 15,169 consecutive trophectoderm biopsies evaluated with comprehensive chromosomal screening. Fertil. Steril. 101, 656–663.e1 Review. (2014). 29. 29. van Loendersloot, L. L. et al. 
Predictive factors in in vitro fertilization (IVF): a systematic review and meta-analysis. Hum. Reprod. Update. 16, 577–589 (2010). 30. 30. Volpes, A. et al. Number of good quality embryos on day 3 is predictive for both pregnancy and implantation rates in in vitro fertilization/intracytoplasmic sperm injection cycles. Fertil. Steril. 82, 1330–1336 (2004). 31. 31. Iranpour, F. G. Impact of sperm chromatin evaluation on fertilization rate in intracytoplasmic sperm injection. Adv. Biomed. Res. 3, 229 (2014). 32. 32. Razavi, S., Nasr-Esfahani, M. H., Mardani, M., Mafi, A. & Moghdam, A. Effect of human sperm chromatin anomalies on fertilization outcome post-ICSI. Andrologia. 35, 238–243 (2003). 33. 33. Nasr-Esfahani, M. H., Razavi, S. & Mardani, M. Relation between different human sperm nuclear maturity tests and in vitro fertilization. J. Assist. Reprod. Genet. 18, 219–225 (2001). 34. 34. Nasr-Esfahani, M. H. et al. Effect of sperm DNA damage and sperm protamine deficiency on fertilization and embryo development post-ICSI. Reprod. Biomed. Online. 11, 198–205 (2005). 35. 35. Sadeghi, M. R. et al. Relationship between sperm chromatin status and ICSI outcome in men with obstructive azoospermia and unexplained infertile normozoospermia. Rom. J. Morphol. Embryol. 52, 645–651 (2011). 36. 36. Rodman, T. C., Pruslin, F. H., Hoffmann, H. P. & Allfrey, V. G. Turnover of basic chromosomal proteins in fertilized eggs: a cytoimmunochemical study of events in vivo. J. Cell. Biol. 90, 351–361 (1981). 37. 37. Okada, Y. & Yamaguchi, K. Epigenetic modifications and reprogramming in paternal pronucleus: sperm, preimplantation embryo, and beyond. Cell. Mol. Life. Sci. 74, 1957–1967 Review. (2017). 38. 38. Esterhuizen, A. D., Franken, D. R., Lourens, J. G., Prinsloo, E. & van Rooyen, L. H. Sperma chromatin packaging as an indicator of in-vitro fertilization rates. Hum. Reprod. 15, 657–661 (2000). 39. 39. Nasr-Esfahani, M. H., Razavi, S., Mozdarani, H., Mardani, M. & Azvagi, H. Relationship between protamine deficiency with fertilization rate and incidence of sperma premature chromosomal condensation post-ICSI. Andrologia. 36, 95–100 (2004). 40. 40. Hammadeh, M. E. et al. The effect of chromatin condensation (aniline blue staining) and morphology (strict criteria) of human spermatozoa on fertilization, cleavage and pregnancy rates in an intracytoplasmic sperm injection programme. Hum. Reprod. 11, 2468–2471 (1996). 41. 41. Muratori, M. et al. Investigation on the Origin of Sperm DNA Fragmentation: Role of Apoptosis, Immaturity and Oxidative Stress. Mol. Med. 21, 109–122 (2015). 42. 42. Simon, L., Zini, A., Dyachenko, A., Ciampi, A. & Carrell, D. T. A systematic review and meta-analysis to determine the effect of sperm DNA damage on in vitro fertilization and intracytoplasmic sperm injection outcome. Asian. J. Androl. 19, 80–90 (2017). 43. 43. Cissen, M. et al. Sperm DNA Fragmentation and Clinical Outcomes of Medically Assisted Reproduction: A Systematic Review and Meta-Analysis. PLoS One. 11, e0165125 (2016). 44. 44. Tamburrino, L. et al. Mechanisms and clinical correlates of sperm DNA damage. Asian J. Androl. 14, 24–31 (2012). 45. 45. Miller, M. R. et al. Unconventional endocannabinoid signaling governs sperm activation via the sex hormone progesterone. Science. 352, 555–559 (2016). 46. 46. Lishko, P. V., Botchkina, I. L. & Kirichok, Y. Progesterone activates the principal Ca2+ channel of human sperm. Nature. 471, 387–391 (2011). 47. 47. Strünker, T. et al. 
The CatSper channel mediates progesterone-induced Ca2+ influx in human sperm. Nature. 471, 382–386 (2011). 48. 48. Alasmari, W. et al. The clinical significance of calcium-signalling pathways mediating human sperm hyperactivation. Hum. Reprod. 28, 866–876 (2013). 49. 49. Brown, S. G. et al. Depolarization of sperm membrane potential is a common feature of men with subfertility and is associated with low fertilization rate at IVF. Hum. Reprod. 31, 1147–1157 (2016). 50. 50. Krausz, C. et al. Intracellular calcium increase and acrosome reaction in response to progesterone in human spermatozoa are correlated with in-vitro fertilization. Hum. Reprod. 10, 120–124 (1995). 51. 51. Krausz, C. et al. Two functional assays of sperm responsiveness to progesterone and their predictive values in in-vitro fertilization. Hum. Reprod. 11, 1661–1667 (1996). 52. 52. Navarrete, F. A. et al. Transient exposure to calcium ionophore enables in vitro fertilization in sterile mouse models. Sci. Rep. 6, 33589 (2016). 53. 53. Águila, L., Zambrano, F., Arias, M. E. & R, F. Sperm capacitation pretreatment positively impacts bovine intracytoplasmic sperm injection. Mol. Reprod. Dev. [Epub ahead of print] (2017). 54. 54. Santi, C. M. et al. The SLO3 sperm-specific potassium channel plays a vital role in male fertility. FEBS. Lett. 584, 1041–1046 (2010). 55. 55. Hess, K. C. et al. The “soluble” adenylyl cyclase in sperm mediates multiple signaling events required for fertilization. Dev. Cell. 9, 249–259 (2005). 56. 56. Martin, J. H., Bromfield, E. G., Aitken, R. J. & Nixon, B. Biochemical alterations in the oocyte in support of early embryonic development. Cell. Mol. Life. Sci. 74, 469–485 Review (2017). 57. 57. Baldi, E. et al. Intracellular calcium accumulation and responsiveness to progesterone in capacitating human spermatozoa. J. Androl. 12, 323–330 (1991). 58. 58. Asquith, K. L., Baleato, R. M., McLaughlin, E. A., Nixon, B. & Aitken, R. J. Tyrosine phosphorylation activates surface chaperones facilitating sperm-zona recognition. J. Cell. Sci. 117, 3645–3657 (2004). 59. 59. Barbonetti, A. et al. Dynamics of the global tyrosine phosphorylation during capacitation and acquisition of the ability to fuse with oocytes in human spermatozoa. Biol. Reprod. 79, 649–656 (2008). 60. 60. Redgrove, K. A. et al. Investigation of the mechanisms by which the molecular chaperone HSPA2 regulates the expression of sperm surface receptors involved in human sperm-oocyte recognition. Mol. Hum. Reprod. 19, 120–135 (2013). 61. 61. Aarabi, M. et al. Sperm content of postacrosomal WW binding protein is related to fertilization outcomes in patients undergoing assisted reproductive technology. Fertil. Steril. 102, 440–447 (2014). 62. 62. Takayama, J. & Onami, S. The Sperm TRP-3 Channel Mediates the Onset of a Ca(2+) Wave in the Fertilized C. elegans Oocyte. Cell. Rep. 15, 625–637 (2016). 63. 63. Vaegter, K. K. et al. Which factors are most predictive for live birth after in vitro fertilization and intracytoplasmic sperm injection (IVF/ICSI) treatments? Analysis of 100 prospectively recorded variables in 8,400 IVF/ICSI single-embryo transfers. Fertil. Steril. 107, 641–648.e2 (2017). 64. 64. Roberts, S. A. et al. Embryo and uterine influences on IVF outcomes: an analysis of a UK multi-centre cohort. Hum. Reprod. 25, 2792–2802 (2010). 65. 65. Lintsen, A. M. et al. Predicting ongoing pregnancy chances after IVF and ICSI: a national prospective study. Hum. Reprod. 22, 2455–2462 (2007). 66. 66. Elizur, S. E. et al. 
Factors predicting IVF treatment outcome: a multivariate analysis of 5310 cycles. Reprod. Biomed. Online. 10, 645–649 (2005). 67. 67. Shapiro, B. S. et al. Evidence of impaired endometrial receptivity after ovarian stimulation for in vitro fertilization: a prospective randomized trial comparing fresh and frozen-thawed embryo transfer in normal responders. Fertil. Steril. 96, 344–348 (2011). 68. 68. Roque, M., Valle, M., Guimarães, F., Sampaio, M. & Geber, S. Freeze-all policy: fresh vs. frozen-thawed embryo transfer. Fertil. Steril. 103, 1190–1193 (2015). 69. 69. World Health Organization. WHOLaboratory Manual for the Examination and Processing of Human Semen, (5th edn.) Cambridge, UK: Cambridge University Press (2010). 70. 70. Marchiani, S. et al. Characterization and sorting of flow cytometric populations in human semen. Andrology. 2, 394–401 (2014). 71. 71. Auger, J., Mesbah, M., Huber, C. & Dadoune, J. P. Aniline blue staining as a marker of sperm chromatin defects associated with different semen characteristics discriminates between proven fertile and suspected infertile men. Int. J. Androl. 13, 452–462 (1990). 72. 72. Franken, D. R., Franken, C. J., de la Guerre, H. & de Villiers, A. Normal sperm morphology and chromatin packaging: comparison between aniline blue and chromomycin A3 staining. Andrologia 31, 361–366 (1999). 73. 73. Muratori, M. et al. Nuclear staining identifies two populations of human sperm with different DNA fragmentation extent and relationship with semen parameters. Hum. Reprod. 23, 1035–1043 (2008). 74. 74. Swets, J. A. Measuring the accuracy of diagnostic systems. Science. 240, 1285–1293 Review. (1988). ## Acknowledgements The study was supported by grants from Italian Ministry of University and Scientific Research (PRIN project to E.B., prot number: 2015XSNA83_008) and University of Florence. We thank Dr. Claudia Livi and Dr. Elisabetta Chelo of Centro Procreazione Assistita “Demetra” for helpful advice in couples recruitment and collection of data and Dr. Monica Muratori (Dept. of Experimental and Clinical Biomedical Sciences, University of Florence) for helpful advice in methods set up. ## Author information ### Author notes 1. S. Marchiani and L. Tamburrino contributed equally to this work. ### Affiliations 1. #### Dept. of Experimental and Clinical Medicine, Center of Excellence DeNothe, University of Florence, Florence, Italy • S. Marchiani • , L. Tamburrino • , R. Dolce •  & E. Baldi 2. #### Centro Procreazione Assistita “Demetra”, Florence, Italy • F. Benini • , L. Fanfani •  & S. Pellegrini 3. #### Dept. of Experimental and Clinical Biomedical Sciences “Mario Serio”, Center of Excellence DeNothe, University of Florence, Florence, Italy • G. Rastrelli •  & M. Maggi ### Contributions S.M. and L.T. designed the study, performed experiments of chromatin maturity and CATSPER1 expression and data analysis, interpreted the results, and drafted the article; F.B. was responsible for sperm preparation, IVF and ICSI procedures; L.F. and R.D. performed the experiments and collected the data; G.R. contributed to statistical analysis and revised the article; M.M. revised critically the article; S.P. supervised all phases of ART treatments, provided data and revised critically the article; E.B. conceived and designed the study, contributed to draft the article and revised critically the article. All the Authors approved the final version to be submitted. ### Competing Interests The authors declare that they have no competing interests. 
### Corresponding authors Correspondence to S. Marchiani or E. Baldi. ## Electronic supplementary material ### DOI https://doi.org/10.1038/s41598-017-15351-3
# Article Keywords: prime ring; semiprime ring; derivation; Jordan derivation; Jordan triple derivation; left (right) centralizer; left (right) Jordan centralizer; centralizer Summary: The main result: Let $R$ be a $2$-torsion free semiprime ring and let $T:R\rightarrow R$ be an additive mapping. Suppose that $T(xyx) = xT(y)x$ holds for all $x,y\in R$. In this case $T$ is a centralizer. References: [1] Brešar M., Vukman J.: Jordan derivations on prime rings. Bull. Austral. Math. Soc. 37 (1988), 321-322. MR 0943433 [2] Brešar M.: Jordan derivations on semiprime rings. Proc. Amer. Math. Soc. 104 (1988), 1003-1006. MR 0929422 [3] Brešar M.: Jordan mappings of semiprime rings. J. Algebra 127 (1989), 218-228. MR 1029414 [4] Cusack J.: Jordan derivations on rings. Proc. Amer. Math. Soc. 53 (1975), 321-324. MR 0399182 | Zbl 0327.16020 [5] Herstein I.N.: Jordan derivations of prime rings. Proc. Amer. Math. Soc. 8 (1957), 1104-1110. MR 0095864 [6] Vukman J.: An identity related to centralizers in semiprime rings. Comment. Math. Univ. Carolinae 40 (1999), 447-456. MR 1732490 | Zbl 1014.16021 [7] Zalar B.: On centralizers of semiprime rings. Comment. Math. Univ. Carolinae 32 (1991), 609-614. MR 1159807 | Zbl 0746.16011
• ### Bowditch's JSJ tree and the quasi-isometry classification of certain Coxeter groups, with an appendix written jointly with Christopher Cashen(1402.6224) Oct. 20, 2017 math.GT, math.GR Bowditch's JSJ tree for splittings over 2-ended subgroups is a quasi-isometry invariant for 1-ended hyperbolic groups which are not cocompact Fuchsian. Our main result gives an explicit, computable "visual" construction of this tree for certain hyperbolic right-angled Coxeter groups. As an application of our construction we identify a large class of such groups for which the JSJ tree, and hence the visual boundary, is a complete quasi-isometry invariant, and thus the quasi-isometry problem is decidable. We also give a direct proof of the fact that among the Coxeter groups we consider, the cocompact Fuchsian groups form a rigid quasi-isometry class. In an appendix, written jointly with Christopher Cashen, we show that the JSJ tree is not a complete quasi-isometry invariant for the entire class of Coxeter groups we consider. • ### Commensurability for certain right-angled Coxeter groups and geometric amalgams of free groups(1610.06245) Oct. 5, 2017 math.GT, math.GR We give explicit necessary and sufficient conditions for the abstract commensurability of certain families of 1-ended, hyperbolic groups, namely right-angled Coxeter groups defined by generalized theta-graphs and cycles of generalized theta-graphs, and geometric amalgams of free groups whose JSJ graphs are trees of diameter at most 4. We also show that if a geometric amalgam of free groups has JSJ graph a tree, then it is commensurable to a right-angled Coxeter group, and give an example of a geometric amalgam of free groups which is not quasi-isometric (hence not commensurable) to any group which is finitely generated by torsion elements. Our proofs involve a new geometric realization of the right-angled Coxeter groups we consider, such that covers corresponding to torsion-free, finite-index subgroups are surface amalgams. • ### Dimensions of affine Deligne-Lusztig varieties: a new approach via labeled folded alcove walks and root operators(1504.07076) Nov. 30, 2016 math.CO, math.AG, math.RT, math.GR Let G be a reductive group over the field F=k((t)), where k is an algebraic closure of a finite field, and let W be the (extended) affine Weyl group of G. The associated affine Deligne-Lusztig varieties $X_x(b)$, which are indexed by elements b in G(F) and x in W, were introduced by Rapoport. Basic questions about the varieties $X_x(b)$ which have remained largely open include when they are nonempty, and if nonempty, their dimension. We use techniques inspired by geometric group theory and representation theory to address these questions in the case that b is a pure translation, and so prove much of a sharpened version of Conjecture 9.5.1 of G\"ortz, Haines, Kottwitz, and Reuman. Our approach is constructive and type-free, sheds new light on the reasons for existing results in the case that b is basic, and reveals new patterns. Since we work only in the standard apartment of the building for G(F), our results also hold in the p-adic context, where we formulate a definition of the dimension of a p-adic Deligne-Lusztig set. We present two immediate consequences of our main results, to class polynomials of affine Hecke algebras and to affine reflection length. • ### Palindromic automorphisms of right-angled Artin groups(1510.03939) Oct. 
28, 2016 math.GT, math.GR We introduce the palindromic automorphism group and the palindromic Torelli group of a right-angled Artin group A_G. The palindromic automorphism group Pi A_G is related to the principal congruence subgroups of GL(n,Z) and to the hyperelliptic mapping class group of an oriented surface, and sits inside the centraliser of a certain hyperelliptic involution in Aut(A_G). We obtain finite generating sets for Pi A_G and for this centraliser, and determine precisely when these two groups coincide. We also find generators for the palindromic Torelli group. • ### Maximal torsion-free subgroups of certain lattices of hyperbolic buildings and Davis complexes(1607.01678) July 6, 2016 math.GR We give an explicit construction of a maximal torsion-free finite-index subgroup of a certain type of Coxeter group. The subgroup is constructed as the fundamental group of a finite and non-positively curved polygonal complex. First we consider the special case where the universal cover of this polygonal complex is a hyperbolic building, and we construct finite-index embeddings of the fundamental group into certain cocompact lattices of the building. We show that in this special case the fundamental group is an amalgam of surface groups over free groups. We then consider the general case, and construct a finite-index embedding of the fundamental group into the Coxeter group whose Davis complex is the universal cover of the polygonal complex. All of the groups which we embed have minimal index among torsion-free subgroups, and therefore are maximal among torsion-free subgroups. • ### C*-algebras associated to graphs of groups(1602.01919) Feb. 5, 2016 math.OA, math.GR To a large class of graphs of groups we associate a C*-algebra universal for generators and relations. We show that this C*-algebra is stably isomorphic to the crossed product induced from the action of the fundamental group of the graph of groups on the boundary of its Bass-Serre tree. We characterise when this action is minimal, and find a sufficient condition under which it is locally contractive. In the case of generalised Baumslag-Solitar graphs of groups (graphs of groups in which every group is infinite cyclic) we also characterise topological freeness of this action. We are then able to establish a dichotomy for simple C*-algebras associated to generalised Baumslag-Solitar graphs of groups: they are either a Kirchberg algebra, or a stable Bunce-Deddens algebra. • ### Infinite reduced words and the Tits boundary of a Coxeter group(1301.0873) Sept. 18, 2014 math.CO, math.GT, math.RT, math.GR Let (W,S) be a finite rank Coxeter system with W infinite. We prove that the limit weak order on the blocks of infinite reduced words of W is encoded by the topology of the Tits boundary of the Davis complex X of W. We consider many special cases, including W word hyperbolic, and X with isolated flats. We establish that when W is word hyperbolic, the limit weak order is the disjoint union of weak orders of finite Coxeter groups. We also establish, for each boundary point \xi, a natural order-preserving correspondence between infinite reduced words which "point towards" \xi, and elements of the reflection subgroup of W which fixes \xi. • ### Divergence in right-angled Coxeter groups(1211.4565) June 18, 2013 math.GT, math.GR Let W be a 2-dimensional right-angled Coxeter group. We characterise such W with linear and quadratic divergence, and construct right-angled Coxeter groups with divergence polynomial of arbitrary degree. 
Our proofs use the structure of walls in the Davis complex. • ### Characterising star-transitive and st(edge)-transitive graphs(1301.1775) Jan. 9, 2013 math.CO, math.GR Recent work of Lazarovich provides necessary and sufficient conditions on a graph L for there to exist a unique simply-connected (k,L)-complex. The two conditions are symmetry properties of the graph, namely star-transitivity and st(edge)-transitivity. In this paper we investigate star-transitive and st(edge)-transitive graphs by studying the structure of the vertex and edge stabilisers of such graphs. We also provide new examples of graphs that are both star-transitive and st(edge)-transitive. • ### Cocompact lattices in complete Kac-Moody groups with Weyl group right-angled or a free product of spherical special subgroups(1203.2680) Sept. 3, 2012 math.GR Let G be a complete Kac-Moody group of rank n \geq 2 over the finite field of order q, with Weyl group W and building \Delta. We first show that if W is right-angled, then for all q \neq 1 mod 4 the group G admits a cocompact lattice \Gamma which acts transitively on the chambers of \Delta. We also obtain a cocompact lattice for q =1 mod 4 in the case that \Delta is Bourdon's building. As a corollary of our constructions, for certain right-angled W and certain q, the lattice \Gamma has a surface subgroup. We also show that if W is a free product of spherical special subgroups, then for all q, the group G admits a cocompact lattice \Gamma with \Gamma a finitely generated free group. Our proofs use generalisations of our results in rank 2 concerning the action of certain finite subgroups of G on \Delta, together with covering theory for complexes of groups. • ### Cocompact lattices on \tilde{A}_n buildings(1206.5356) June 23, 2012 math.GR Let K be the field of formal Laurent series over the finite field of order q. We construct cocompact lattices \Gamma'_0 < \Gamma_0 in the group G = PGL_d(K) which are type-preserving and act transitively on the set of vertices of each type in the building associated to G. The stabiliser of each vertex in \Gamma'_0 is a Singer cycle and the stabiliser of each vertex in \Gamma_0 is isomorphic to the normaliser of a Singer cycle in PGL_d(q). We then show that the intersections of \Gamma'_0 and \Gamma_0 with PSL_d(K) are lattices in PSL_d(K), and identify the pairs (d,q) such that the entire lattice \Gamma'_0 or \Gamma_0 is contained in PSL_d(K). Finally we discuss minimality of covolumes of cocompact lattices in SL_3(K). Our proofs combine a construction of Cartwright and Steger with results about Singer cycles and their normalisers, and geometric arguments. • ### Density of commensurators for uniform lattices of right-angled buildings(0812.2280) May 25, 2012 math.GR Let G be the automorphism group of a regular right-angled building X. The "standard uniform lattice" \Gamma_0 in G is a canonical graph product of finite groups, which acts discretely on X with quotient a chamber. We prove that the commensurator of \Gamma_0 is dense in G. This result was also obtained by Haglund. For our proof, we develop carefully a technique of "unfoldings" of complexes of groups. We use unfoldings to construct a sequence of uniform lattices \Gamma_n in G, each commensurable to \Gamma_0, and then apply the theory of group actions on complexes of groups to the sequence \Gamma_n. As further applications of unfoldings, we determine exactly when the group G is nondiscrete, and we prove that G acts strongly transitively on X. 
• ### Lattices in hyperbolic buildings(1204.0287) April 2, 2012 math.GT, math.DS, math.GR This survey is a brief introduction to the theory of hyperbolic buildings and their lattices, with a focus on recent results. • ### Cocompact lattices of minimal covolume in rank 2 Kac-Moody groups, Part II(1005.5702) June 9, 2011 math.GR Withdrawn. • ### Lattices in complete rank 2 Kac-Moody groups(0907.1350) June 9, 2011 math.GR Let \Lambda be a minimal Kac-Moody group of rank 2 defined over the finite field F_q, where q = p^a with p prime. Let G be the topological Kac-Moody group obtained by completing \Lambda. An example is G=SL_2(K), where K is the field of formal Laurent series over F_q. The group G acts on its Bruhat-Tits building X, a tree, with quotient a single edge. We construct new examples of cocompact lattices in G, many of them edge-transitive. We then show that if cocompact lattices in G do not contain p-elements, the lattices we construct are the only edge-transitive lattices in G, and that our constructions include the cocompact lattice of minimal covolume in G. We also observe that, with an additional assumption on p-elements in G, the arguments of Lubotzky for the case G = SL_2(K) may be generalised to show that there is a positive lower bound on the covolumes of all lattices in G, and that this minimum is realised by a non-cocompact lattice, a maximal parabolic subgroup of Lambda. • ### Existence, covolumes and infinite generation of lattices for Davis complexes(0807.3312) March 21, 2011 math.GR Let $\Sigma$ be the Davis complex for a Coxeter system (W,S). The automorphism group G of $\Sigma$ is naturally a locally compact group, and a simple combinatorial condition due to Haglund--Paulin determines when G is nondiscrete. The Coxeter group W may be regarded as a uniform lattice in G. We show that many such G also admit a nonuniform lattice $\Gamma$, and an infinite family of uniform lattices with covolumes converging to that of $\Gamma$. It follows that the set of covolumes of lattices in G is nondiscrete. We also show that the nonuniform lattice $\Gamma$ is not finitely generated. Examples of $\Sigma$ to which our results apply include buildings and non-buildings, and many complexes of dimension greater than 2. To prove these results, we introduce a new tool, that of "group actions on complexes of groups", and use this to construct our lattices as fundamental groups of complexes of groups with universal cover $\Sigma$. • ### Surface quotients of hyperbolic buildings(1007.5140) Feb. 15, 2011 math.GT, math.GR Let I(p,v) be Bourdon's building, the unique simply-connected 2-complex such that all 2-cells are regular right-angled hyperbolic p-gons and the link at each vertex is the complete bipartite graph K(v,v). We investigate and mostly determine the set of triples (p,v,g) for which there exists a uniform lattice {\Gamma} in Aut(I(p,v)) such that {\Gamma}\I(p,v) is a compact orientable surface of genus g. Surprisingly, the existence of {\Gamma} depends upon the value of v. The remaining cases lead to open questions in tessellations of surfaces and in number theory. Our construction of {\Gamma}, together with a theorem of Haglund, implies that for p>=6, every uniform lattice in Aut(I) contains a surface subgroup. We use elementary group theory, combinatorics, algebraic topology, and number theory. • ### Infinite generation of non-cocompact lattices on right-angled buildings(1009.4235) Jan. 
25, 2011 math.GR Let \Gamma be a non-cocompact lattice on a locally finite regular right-angled building X. We prove that if \Gamma has a strict fundamental domain then \Gamma is not finitely generated. We use the separation properties of subcomplexes of X called tree-walls. • ### Finite generation of lattices on products of trees(1005.1238) Aug. 15, 2010 math.GR We prove that an irreducible lattice acting on a product of two or more locally finite, biregular trees is finitely generated. • ### Lattices acting on right-angled buildings(math/0508385) April 21, 2009 math.GR Let X be a right-angled building. We show that the lattices in Aut(X) share many properties with tree lattices. For example, we characterise the set of covolumes of uniform and of nonuniform lattices in Aut(X), and show that the group Aut(X) admits an infinite ascending tower of uniform and of nonuniform lattices. These results are proved by constructing a functor from graphs of groups to complexes of groups. • ### Problems on automorphism groups of nonpositively curved polyhedral complexes and their lattices(0803.2484) June 18, 2008 math.GR The goal of this paper is to present a number of problems about automorphism groups of nonpositively curved polyhedral complexes and their lattices, meant to highlight possible directions for future research. • ### Hyperbolic Geometry and Distance Functions on Discrete Groups(0712.4294) Dec. 27, 2007 math.GR, math.HO Chapter 1 is a short history of non-Euclidean geometry, which synthesises my readings of mostly secondary sources. Chapter 2 presents each of the main models of hyperbolic geometry, and describes the tesselation of the upper half-plane induced by the action of $PSL(2,\mathbb{Z})$. Chapter 3 gives background on symmetric spaces and word metrics. Chapter 4 then contains a careful proof of the following theorem of Lubotzky--Mozes--Raghunathan: the word metric on $PSL(2,\mathbb{Z})$ is not Lipschitz equivalent to the metric induced by its action on the associated symmetric space (the upper half-plane), but for $n \geq 3$, these two metrics on $PSL(n,\mathbb{Z})$ are Lipschitz equivalent. • ### Covering theory for complexes of groups(math/0605303) Oct. 4, 2007 math.GR We develop an explicit covering theory for complexes of groups, parallel to that developed for graphs of groups by Bass. Given a covering of developable complexes of groups, we construct the induced monomorphism of fundamental groups and isometry of universal covers. We characterize faithful complexes of groups and prove a conjugacy theorem for groups acting freely on polyhedral complexes. We also define an equivalence relation on coverings of complexes of groups, which allows us to construct a bijection between such equivalence classes, and subgroups or overgroups of a fixed lattice $\Gamma$ in the automorphism group of a locally finite polyhedral complex $X$. • ### A construction of lattices for certain hyperbolic buildings(math/0607430) Dec. 4, 2006 math.GR We construct a nonuniform lattice and an infinite family of uniform lattices in the automorphism group of a hyperbolic building with all links a fixed finite building of rank 2 associated to a Chevalley group. We use complexes of groups and basic facts about spherical buildings. • ### Covolumes of uniform lattices acting on polyhedral complexes(math/0502258) Aug. 20, 2005 math.GR Let X be a polyhedral complex with finitely many isometry classes of links. We establish a restriction on the covolumes of uniform lattices acting on X. 
When X is two-dimensional and has all links isometric to either a complete bipartite graph or the building for a Chevalley group of rank 2 over a field of prime order, we obtain further restrictions on covolumes.
# GCD and Prime Numbers Proof

I was wondering how to prove the following statement: For a prime number $p$ and integer $n$, prove that $$p = \prod_{k=0}^{p-1} \gcd(n+k,p).$$ I think it just comes down to showing that one of the $\gcd$s is $p$ but I am not sure how such a proof would proceed.

• And the reason is that otherwise p would not be a prime? – Tomas Smith Nov 2 '15 at 18:19
• Well, it should scream at you like a hammer on an ingrown thumbnail, that if it is true and p is prime, then exactly one of the gcd(n + k,p) = p and the rest of the gcd(n+k, p) = 1, otherwise p isn't prime. – fleablood Nov 2 '15 at 18:51
• gcd(x, p) = 1 or p because gcd(x,p) | p and only 1 and p divide p. So gcd(x,p) = p if p|x and gcd(x,p) = 1 if p does not divide x. – fleablood Nov 2 '15 at 18:54

$\gcd(x, y) \mid y$. So if $p$ is prime then $\gcd(x, p) \mid p$. So $\gcd(x, p) = 1$ or $p$. So $\gcd(x, p) = p$ if $p \mid x$. And $\gcd(x, p) = 1$ if $p$ does not divide $x$. Let $n = mp - i$; $0 \leq i < p$. Then $p \mid n + i$ but $p$ does not divide any $n + k$ where $k \ne i$ and $0 \leq k < p$. So $$\prod_{k = 0}^{p - 1} \gcd(n + k, p) = \prod_{k = 0}^{p - 1} \{p \textrm{ if } k = i;\ 1 \textrm{ if } k \ne i \} = 1\cdot 1 \cdots p \cdots 1\cdot 1 = p.$$

Exactly one of $n,n+1,n+2,\ldots,n+(p-1)$ is divisible by $p$. Therefore exactly one of $\gcd(n,p),\gcd(n+1,p),\gcd(n+2,p),\ldots,\gcd(n+(p-1),p)$ is equal to $p$ and all the others are equal to $1$. More generally, exactly one of $n,n+1,n+2,\ldots,n+(k-1)$ is divisible by $k$, which is because clearly $$\{n\bmod k,n+1\bmod k,\ldots,n+(k-1)\bmod k\}=\{0,1,2,\ldots,k-1\}$$
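Not a substitute for the proof, but the identity is easy to sanity-check numerically; the snippet below is an illustrative sketch (the sampled values of $p$ and $n$ are arbitrary).

```python
# Numerical sanity check of p = prod_{k=0}^{p-1} gcd(n+k, p) for a few cases.
from math import gcd, prod  # prod requires Python 3.8+

for p in [2, 3, 5, 7, 11, 13]:
    for n in [0, 4, 100, 2023]:
        assert prod(gcd(n + k, p) for k in range(p)) == p
print("identity holds for all sampled (p, n)")
```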
# Cheap flights from New Delhi to Tel Aviv

## Flight information for New Delhi to Tel Aviv (DEL to TLV)

##### Find info about flight duration, direct flights, and airports for your flight from New Delhi to Tel Aviv

Direct flights: None. There are no direct flights from New Delhi to Tel Aviv. There are no popular flight routes from New Delhi to Tel Aviv.

Airports in Tel Aviv: 2 airports. There are 2 airports near Tel Aviv: Tel Aviv Sde Dov (SDV) and Tel Aviv Ben Gurion Intl (TLV).

## How to get the cheapest flight ticket from New Delhi to Tel Aviv

#### What is the cheapest month to fly from New Delhi to Tel Aviv?

October. The cheapest time of year to fly to Tel Aviv from New Delhi is October. The most expensive is July.

#### How far in advance should you book New Delhi to Tel Aviv flights?

40 days before. The cheapest time to buy a flight from New Delhi to Tel Aviv is approximately 40 days before departure.

#### How far is New Delhi Indira Gandhi Intl to Tel Aviv Ben Gurion Intl by plane?

There are 4059.7 km between New Delhi Indira Gandhi Intl and Tel Aviv Ben Gurion Intl.
# Inverting Op Amp question

I'm having trouble with this particular inverting op amp question. I know that there are 2 nodes and I must apply KCL at the nodes. What I tried was $\frac{0-V_i}{49k\Omega}+\frac{0-V_o}{79k\Omega}=0$ and I'm not sure about the second KCL equation, and I'm pretty sure that R3 isn't parallel with R4. The answer is -68.8

• $V_-$ is virtual ground, so $R_2$ and $R_4$ are in parallel. With that knowledge you can simplify $R_3$ – jippie Apr 13 '13 at 7:51

There is an easier way to think about this problem, assuming ideal components. First, since the noninverting pin is grounded, the inverting pin is at virtual ground. This means: $V_{R1}=V_{in}$. $\therefore I_{R1}=I_{R2\parallel R4}=I_{R3}$ Calculate the voltage drop across $R_{2\parallel 4}$ in series with $R_3$, and you'll arrive at $V_o$ from there.

• Possible correction: R2 and R4 are in parallel to ground. This combo is in series with R3. – helloworld922 Apr 13 '13 at 6:47
• That's right, not sure how I missed that when I specifically pointed out virtual ground. – Matt Young Apr 13 '13 at 6:50
• i figured the equation is: -Vin (R2/R1) (1 + (R3/R2) + (R3/R4)) = Vout – George Randall Apr 13 '13 at 7:19

FWIW, here's another method; form the Thevenin equivalent circuit looking into R2 from the inverting input. The equivalent circuit is, by inspection: $V_{TH} = V_{OUT}\dfrac{R_4}{R_3+R_4}$ and $R_{TH} = R_2 + R_3\|R_4$. Now, there's just one node to consider. The KCL equation for the remaining node is, by inspection: $\dfrac{V_{IN}}{R_1} + \dfrac{V_{TH}}{R_{TH}} = 0$

A Thevenin equivalent circuit comes in handy for solving this T-type negative feedback network. At the node between R4 and R2 you can apply a Thevenin equivalent such that $V_{thevenin} = V_{out}\,R_4/(R_3+R_4)$ and $R_{equivalent2} = R_3 \parallel R_4$. Now you can solve it like a normal inverting op amp configuration.

KCL at $V_-$ (the inverting input, which sits at virtual ground for an ideal op amp, so $V_1 = 0$): $$\frac{V_1 - V_{in}}{R_1} + \frac{V_1 - V_2}{R_2} = 0$$ KCL at $V_2$ (just above R4): $$\frac{V_2-V_1}{R_2} + \frac{V_2-0}{R_4} + \frac{V_2-V_{out}}{R_3} = 0$$ $V_{out}$ should equal: $$V_{out}=-V_{in}\,\frac{R_2}{R_1}\left(1 + \frac{R_3}{R_2} + \frac{R_3}{R_4}\right)$$
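To tie the two node equations above to the closed-form gain quoted in the comments, here is a small symbolic sketch using sympy. It is illustrative only; the resistor names follow the answers above and the ideal-op-amp condition (inverting input at virtual ground, $V_1 = 0$) is applied directly.

```python
# Sketch: solve the two KCL equations symbolically with V1 = 0 (virtual ground)
# and recover Vout/Vin. Component values are not needed for the symbolic result.
import sympy as sp

Vin, Vout, V2, R1, R2, R3, R4 = sp.symbols('Vin Vout V2 R1 R2 R3 R4')

eq1 = sp.Eq((0 - Vin) / R1 + (0 - V2) / R2, 0)            # KCL at the inverting input
eq2 = sp.Eq((V2 - 0) / R2 + V2 / R4 + (V2 - Vout) / R3, 0) # KCL at the node above R4

sol = sp.solve([eq1, eq2], [V2, Vout], dict=True)[0]
gain = sp.simplify(sol[Vout] / Vin)
print(gain)
# Check against the formula from the comments: difference simplifies to 0.
print(sp.simplify(gain + (R2 / R1) * (1 + R3 / R2 + R3 / R4)))
```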
# Magnetic Dipoles & eq's

1. Aug 19, 2004

### pervect

Staff Emeritus
There seems to be some interest in magnetic dipoles, such as spinning electrons, and current loops. So I thought I would start a thread and present some of the relevant equations that describe the forces and fields generated by magnetic dipoles. These equations are very similar to those for electric dipoles, BTW. A current loop with an area A and carrying a current i has a magnetic dipole moment of $$\mu = i A$$. The dipole moment is sometimes expressed as a vector $$\vec{\mu}$$ in which case the vector is perpendicular to the area A. Some useful properties of the dipole moment are given below

Torque generated by an external field $$\vec{\mu} \times \vec{B}$$

Energy in an external field $$-\vec{\mu} \cdot \vec{B}$$

Field from dipole at distant points along axis |B| = $$\frac {\mu_0}{2 \pi} \frac {\mu}{r^3}$$

Field from dipole at distant points along bisector |B| = $$\frac {\mu_0}{4 \pi} \frac {\mu}{r^3}$$

Field from dipole, vector form $$\vec{B} = \frac {\mu_0 \mu}{4 \pi r^3} (2 \cos(\theta)\, \hat{r} + \sin(\theta)\, \hat{\theta})$$

Net force on dipole from a constant magnetic field: zero

Net force on a dipole from a varying magnetic field $$\nabla (\vec{\mu} \cdot \vec{B})$$

Note that the force between two dipoles will drop off with the 4th power of the distance - as the field generated by a dipole is proportional to 1/r^3, the gradient of the field is proportional to 1/r^4, and the force will be the dipole moment multiplied by the field gradient.

Last edited: Aug 19, 2004

2. Aug 19, 2004

### krab

"Constant" or "varying" usually means w.r.t. time. I think here you mean w.r.t. space, so common usage is "uniform" or "non-uniform".

3. Aug 20, 2004

### pervect

Staff Emeritus
Yes, that's what I mean. To develop a net force, one needs the field to be different at the two ends of the dipole, which means that the field must be varying in space.
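As a quick illustration of the last point (the 1/r^4 fall-off of the force between two dipoles), here is a small numerical sketch using the on-axis field formula from the first post. The dipole moments and distances are arbitrary example values.

```python
# Sketch (SI units): on-axis dipole field from the formula above, and the force
# on a second aligned dipole via F = m2 * d|B|/dr (numerical gradient).
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def b_axial(m, r):
    """|B| on the dipole axis at distance r: mu0/(2*pi) * m / r^3."""
    return MU0 / (2 * np.pi) * m / r**3

def force_aligned_dipoles(m1, m2, r, dr=1e-6):
    """Force on dipole m2 sitting on the axis of m1, from the field gradient."""
    return m2 * (b_axial(m1, r + dr) - b_axial(m1, r - dr)) / (2 * dr)

m1 = m2 = 0.1  # dipole moments, A*m^2
for r in (0.1, 0.2, 0.4):
    # doubling r reduces the magnitude by a factor of ~16, i.e. ~1/r^4
    print(r, force_aligned_dipoles(m1, m2, r))
```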
## Tuesday, March 22, 2011 • Fixed a bug in the conic optimizer that in special cases made MOSEK report a nonoptimal solution. • Fixed several bugs in the mixed-integer optimizer. • Downgraded Flexlm to 11.9.0 on the UNIX platforms. • Fixed a couple of bugs in the graph-partitioning-based ordering algorithm (employed in the interior-point optimizers). • Improved the network structure detection capabilities. • Fixed a bug in the basis identification procedure. This bug occurs very rarely.
METHOD OF LANCZOS

One common method of solving linear inversion problems of the form $\mathbf{G}\mathbf{m} = \mathbf{d}$ is Lanczos's method (Lanczos, 1950; Golub and Van Loan, 1983; van der Sluis and van der Vorst, 1987). This method introduces a sequence of orthonormal vectors through a process of tridiagonalization. Applying this method to the normal equations $\mathbf{G}^T\mathbf{G}\,\mathbf{m} = \mathbf{G}^T\mathbf{d}$, Lanczos's method is a projection procedure equivalent to the following:

$$\mathbf{v}^{(1)}(\mathbf{v}^{(1)})^T\,\mathbf{G}^T\mathbf{d} = \mathbf{G}^T\mathbf{d}, \tag{lanczos1}$$

$$\left[\mathbf{v}^{(1)}(\mathbf{v}^{(1)})^T + \mathbf{v}^{(2)}(\mathbf{v}^{(2)})^T\right]\mathbf{G}^T\mathbf{G}\,\mathbf{v}^{(1)} = \mathbf{G}^T\mathbf{G}\,\mathbf{v}^{(1)}, \tag{lanczos2}$$

and, for $k \ge 2$,

$$\left[\mathbf{v}^{(k-1)}(\mathbf{v}^{(k-1)})^T + \mathbf{v}^{(k)}(\mathbf{v}^{(k)})^T + \mathbf{v}^{(k+1)}(\mathbf{v}^{(k+1)})^T\right]\mathbf{G}^T\mathbf{G}\,\mathbf{v}^{(k)} = \mathbf{G}^T\mathbf{G}\,\mathbf{v}^{(k)}. \tag{lanczos3}$$

It is now clear that (lanczos1) defines $\mathbf{v}^{(1)}$ as the unit vector in the direction of $\mathbf{G}^T\mathbf{d}$, and $\mathbf{v}^{(1)}$ can have no terms from the right null space of $\mathbf{G}$. Equation (lanczos2) defines $\mathbf{v}^{(2)}$ as the unit vector found by removing the component of $\mathbf{G}^T\mathbf{G}\,\mathbf{v}^{(1)}$ in the direction of $\mathbf{v}^{(1)}$ and then normalizing. Equation (lanczos3) determines $\mathbf{v}^{(k+1)}$ as the unit vector along the component of $\mathbf{G}^T\mathbf{G}\,\mathbf{v}^{(k)}$ that is orthogonal to both $\mathbf{v}^{(k-1)}$ and $\mathbf{v}^{(k)}$. By construction,

$$(\mathbf{v}^{(i)})^T\mathbf{v}^{(j)} = \delta_{ij},$$

so the vectors are orthonormal. Defining the constants

$$N_1 = |\mathbf{G}^T\mathbf{d}| = (\mathbf{v}^{(1)})^T\mathbf{G}^T\mathbf{d}, \tag{d1}$$

$$D_k = (\mathbf{v}^{(k)})^T\mathbf{G}^T\mathbf{G}\,\mathbf{v}^{(k)} \quad\text{for}\quad k = 1,2,\ldots,$$

and

$$N_{k+1} = (\mathbf{v}^{(k+1)})^T\mathbf{G}^T\mathbf{G}\,\mathbf{v}^{(k)} \quad\text{for}\quad k = 1,2,\ldots,$$

it is then clear that the equations (lanczos1)-(lanczos3) determine a tridiagonal system of the form

$$\mathbf{G}^T\mathbf{G}\,\mathbf{V}_k = \mathbf{V}_k\,\mathbf{T}_k + \begin{pmatrix}\mathbf{0} & \cdots & \mathbf{0} & N_{k+1}\,\mathbf{v}^{(k+1)}\end{pmatrix} \quad\text{for}\quad 2 \le k \le r,$$

where the orthogonal matrix composed of the resulting orthonormal vectors is

$$\mathbf{V}_k = \begin{pmatrix}\mathbf{v}^{(1)} & \mathbf{v}^{(2)} & \mathbf{v}^{(3)} & \cdots & \mathbf{v}^{(k)}\end{pmatrix}.$$

The process stops when $k = r$ (the rank of the matrix) because then $N_{r+1} = 0$, or is numerically negligible. It follows that this tridiagonalization process results in the identity

$$\mathbf{G}^T\mathbf{G} = \mathbf{V}_r\,\mathbf{T}_r\,\mathbf{V}_r^T,$$

where the tridiagonal matrix of coefficients is defined by

$$\mathbf{T}_k = \begin{pmatrix} D_1 & N_2 & & & \\ N_2 & D_2 & N_3 & & \\ & N_3 & D_3 & N_4 & \\ & & \ddots & \ddots & \ddots \\ & & & N_k & D_k \end{pmatrix} \quad\text{for}\quad 2 \le k \le r.$$

Since $\mathbf{T}_r$ is invertible (by the definition of the rank $r$ of $\mathbf{G}$),

$$(\mathbf{G}^T\mathbf{G})^{\dagger} = \mathbf{V}_r(\mathbf{T}_r)^{-1}\mathbf{V}_r^T. \tag{Adagger}$$

The solution to the least-squares inversion problem may therefore be written as

$$\mathbf{m} = \mathbf{V}_r(\mathbf{T}_r)^{-1}\mathbf{V}_r^T\,\mathbf{G}^T\mathbf{d} = N_1\,\mathbf{V}_r(\mathbf{T}_r)^{-1}\begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix},$$

where I used (lanczos1), (d1), and the fact that

$$\mathbf{V}_r^T\,\mathbf{v}^{(1)} = \begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix}.$$

It follows that only the first column of the inverse of $\mathbf{T}_r$ is needed to solve this inversion problem. Since Lanczos's method directly produces a sequence of orthonormal vectors in the model space, it is straightforward to see that the model resolution matrix for this method is given by

$$\mathbf{R}_{\rm model} = (\mathbf{G}^T\mathbf{G})^{\dagger}\,\mathbf{G}^T\mathbf{G} = \mathbf{V}_r\mathbf{V}_r^T = \sum_{k=1}^{r}\mathbf{v}^{(k)}(\mathbf{v}^{(k)})^T, \tag{modelreslanczos}$$

which is also clearly symmetric. It is however more difficult to compute the data resolution matrix. Using the fact that

$$\mathbf{R}_{\rm data} = \mathbf{G}\,\mathbf{G}^{\dagger} = \mathbf{G}\,(\mathbf{G}^T\mathbf{G})^{\dagger}\,\mathbf{G}^T$$

together with (Adagger), I find that

$$\mathbf{R}_{\rm data} = \mathbf{G}\,\mathbf{V}_r(\mathbf{T}_r)^{-1}\mathbf{V}_r^T\,\mathbf{G}^T. \tag{datareslanczos}$$

It is clear that both of the resolution matrices are symmetric when the full Lanczos inverse has been computed. If the process is terminated early so that $k < r$ by setting the constant $N_{k+1}$ equal to zero, then a Lanczos approximate inverse is given by

$$\mathbf{G}_k^{\dagger} = \mathbf{V}_k(\mathbf{T}_k)^{-1}\mathbf{V}_k^T\,\mathbf{G}^T,$$

so that an approximate solution $\mathbf{m}_k = \mathbf{G}_k^{\dagger}\mathbf{d}$ is defined for all $k$, but in general $\mathbf{m}_k \ne \mathbf{m}$ if $k < r$. The effective resolution matrices are given by $\mathbf{G}_k^{\dagger}\mathbf{G}$ and $\mathbf{G}\,\mathbf{G}_k^{\dagger}$, or are found by replacing $\mathbf{V}_r$ and $\mathbf{T}_r$ in (modelreslanczos) and (datareslanczos) by $\mathbf{V}_k$ and $\mathbf{T}_k$, respectively. Clearly, the effective resolutions are also symmetric. An alternative method of computing the resolution matrices involves noting that the vector sequence $\{\mathbf{v}^{(k)}\}$ may also be used as the starting set of vectors for the method of conjugate directions.
Then, I can produce a set of conjugate vectors from this orthogonal set and easily compute both the model resolution matrix and the data resolution matrix if desired. This alternative requires additional work, however, to produce the sequence of conjugate vectors and it also generally produces a model resolution that is not symmetric, and therefore difficult to interpret. Thus, the Lanczos procedure appears to be superior to conjugate directions/gradients from this point of view. The main disadvantage of the Lanczos method is that the amount of storage increases with the iteration number k, since all the vectors must be stored until the final calculation of the solution. In contrast, conjugate gradients requires a fixed amount of storage, but an easily interpretable model resolution must be sacrificed to gain this advantage.
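The recursion and the resolution formulas above translate directly into a few lines of linear algebra. The following is a minimal numerical sketch, not code from the original report; the symbols G and d, the function name, and the random test problem are chosen here purely for illustration.

```python
# Sketch: Lanczos tridiagonalization of G^T G started from G^T d, plus the
# solution m = N_1 V_r T_r^{-1} e_1 and the model resolution V_r V_r^T.
import numpy as np

def lanczos_normal_equations(G, d, k):
    """Run k steps of the recursion; return V (columns v^(1)..v^(k)), T_k, and N_1."""
    A = G.T @ G
    b = G.T @ d
    V = np.zeros((b.size, k))
    D = np.zeros(k)          # diagonal entries D_j
    N = np.zeros(k + 1)      # N[0] plays the role of N_1 = |G^T d|
    N[0] = np.linalg.norm(b)
    V[:, 0] = b / N[0]
    for j in range(k):
        w = A @ V[:, j]
        D[j] = V[:, j] @ w
        w = w - D[j] * V[:, j]
        if j > 0:
            w = w - N[j] * V[:, j - 1]
        N[j + 1] = np.linalg.norm(w)
        if j + 1 < k and N[j + 1] > 1e-12:  # N_{r+1} = 0 signals the rank has been reached
            V[:, j + 1] = w / N[j + 1]
    T = np.diag(D) + np.diag(N[1:k], 1) + np.diag(N[1:k], -1)
    return V, T, N[0]

# Usage on a small full-rank random problem.
rng = np.random.default_rng(0)
G = rng.standard_normal((8, 5))
d = rng.standard_normal(8)
V, T, N1 = lanczos_normal_equations(G, d, k=5)

e1 = np.zeros(5); e1[0] = 1.0
m = N1 * V @ np.linalg.solve(T, e1)   # only the first column of T^{-1} is needed
R_model = V @ V.T                      # model resolution; ~identity at full rank
print(np.allclose(m, np.linalg.lstsq(G, d, rcond=None)[0]))  # True
```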
# Despite IPOs, Next-Gen Biofuels Still Creeping Forward in 2011 Despite a series of successful biofuel IPOs and more biofuel IPOs in the pipeline, along with U.S. government support and attention from incumbent oil makers, the production of next-gen biofuels, in any kind of volumes that would make a dent in the transportation sector, seems to be creeping very slowly forward. I know these things don’t happen all that quickly, but they also seem to happen more slowly than some of these companies, government groups and investors initially anticipated (also see my Next-Gen Biofuels: Where Are They Now article from earlier this year). Amyris (s AMRS) and Gevo (s gevo), which both went public within the past year and a half, reported first quarter financials on Friday, and both are still in the early stages of production for biofuels. Amyris, which is a genetic engineering company that got its start developing bugs to turn sugar into anti-malarial drugs, reported first quarter revenue of $37.18 million, up from$13.66 million for the quarter the year earlier. Amyris’ net loss for the quarter was $33.15 million, up significantly from$16.34 million for the year prior quarter. Amyris’ financials could look like this for some time until it produces large volumes of biofuels at its planned plant with Brazilian sugarcane producer Grupo São Martinho, intended to open in the second quarter of 2012. The company makes a good chunk of its revenues by selling other company’s ethanol, and as of the end of 2010, Amyris had accumulated a deficit of $202.3 million. At the same time, Amyris announced last week that it had opened up its first industrial scale facility to turn sugarcane syrup into Biofene, a form of the industrial chemical Farnesene, which is a fragrant hydrocarbon that’s used to make cosmetics, lubricants and other materials. Farnesene now goes for about$1,000 a gallon, which makes it a lot more profitable than a gallon of fuel substitute, though for a much smaller market. A good deal of the next-gen biofuel makers are now turning to this strategy of creating non-biofuel products, like food additives and cosmetics. Algae fuel maker Solazyme — which filed to go public in March — was one of the first to start to produce these non-biofuel goods. Back in 2009 when I visited Solazyme’s factory, the team showed me some of their prototype algae milk, oils and lotions. In March, Solazyme announced a large non-biofuel deal to produce up to 60 million gallons of algae-based insulation fluid for transformers for Dow Chemical (s dow). Solazyme has also long said that it won’t be commercializing its biofuel until the 2012/2013 time frame or later. But biofuels in volumes to rival oil for transportation? Not so much — from any companies. The Environmental Protection Agency scaled back its estimates for how much cellulosic ethanol could be produced in 2010 (originally it was 100 million, but basically it turned out to be zero), and projected that for 2011, five companies will be able to produce about 6 million cellulosic ethanol-equivalent gallons. Those companies include Range Fuels, DuPont Danisco, Fiberight, KL Energy, and KiOR. Will the EPA even be able to make that 6 million forecast for cellulosic ethanol production in 2011? Well, one company on that list seemed to struggle as soon as the calendar flipped over to 2011. Range Fuels reportedly plans to shut down its plant in Georgia after making just one batch of cellulosic ethanol, laid off a bunch of workers and is trying to raise money. 
Range Fuels company spokesman Patrick Wright told us in January the company planned to still meet the EPA’s production goal. Another firm on the EPA’s 2011 projection list is KiOR, which is planning a $100 million IPO this year and seeking a $1 billion federal loan guarantee. KiOR has never reported any revenues (which is unusual for an S-1) and plans to produce its biofuel product at a commercial production facility in the second half of 2012. To date, KiOR has produced over 32,000 gallons of renewable crude, according to its S-1. The EPA will be updating the latest projection figures for cellulosic ethanol by the spring, an EPA spokesperson told me on Friday. The EPA maintained late last year that many more companies, including 20 plants, could produce potentially 300 million gallons of cellulosic ethanol in 2012. MIT Tech Review published an interesting article on Friday looking at whether or not unprofitable biofuel companies like Gevo, Amyris, and soon, KiOR, should be going public. I guess it’s up to Wall Street to read the S-1 very clearly. A prominent CEO of a biodiesel company once told me, several years after he left the company, that he would never do a biofuel startup again, because it was just too capital-intensive and took too long to scale.
# Ethics Case 9-11 Overstatement of ending inventory Danville Bottlers is a wholesale beverage company. Danville uses the FIFO inventory method to determine the cost of its ending inventory. Ending inventory quantities are determined by a physical count. For the fiscal year-end June 30, 2011, ending inventory was originally determined to be $3,265,000. However, on July 17, 2011, John Howard, the company's controller, discovered an error in the ending inventory count. He determined that the correct ending inventory amount should be $2,600,000. Danville is a privately owned corporation with significant financing provided by a local bank. The bank requires annual audited financial statements as a condition of the loan. By July 17, the auditors had completed their review of the financial statements which are scheduled to be issued on July 25. They did not discover the inventory error. John's first reaction was to communicate his finding to the auditors and to revise the financial statements before they are issued. However, he knows that his and his fellow workers' profit-sharing plans are based on annual pretax earnings and that if he revises the statements, everyone's profit-sharing bonus will be significantly reduced. Required: 1. Why will bonuses be negatively affected? What is the effect on pretax earnings? 2. If the error is not corrected in the current year and is discovered by the auditors during the following year's audit, how will it be reported in the company's financial statements? 3. Discuss the ethical dilemma John Howard faces.
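For requirement 1, the arithmetic implied by the figures in the case is worth spelling out; a quick illustrative check (not part of the original problem statement):

```python
# Effect of the counting error on pretax earnings (figures from the case).
reported_ending_inventory = 3_265_000
correct_ending_inventory = 2_600_000
overstatement = reported_ending_inventory - correct_ending_inventory
print(overstatement)  # 665000

# Because cost of goods sold = beginning inventory + purchases - ending
# inventory, an ending inventory overstated by $665,000 understates cost of
# goods sold, and therefore overstates pretax earnings (and the profit-sharing
# pool tied to them), by that same $665,000.
```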
# Energy usage in different reference frames Imagine a moving object at constant speed (like a car). This object is then accelerated for a brief moment. In different reference frames (at rest and moving along with the object), the variation of the car's kinetic energy is not the same. My question is the following: Suppose I have a battery that's holding a certain amount of energy. Connecting it to the motor, I discharge it completely to power up the wheels and increase the car's speed. If I used the battery's energy to increase the car's kinetic energy, how can I explain the kinetic energy difference between the frames now? I exchanged the same amount of the battery's energy in both reference frames (or did I? This is the important part), so why did the kinetic energy of the car increase comparatively more in the rest frame? A closed system cannot speed itself up; that is the momentum conservation law, which is also the key to your problem. As far as I can see you are implicitly supposing the following three equalities to hold $$E_i+A=E_f\\p_i=p_f\\m_i=m_f$$ where the subscripts $i$ and $f$ stand for the initial (before acceleration) and final (after acceleration) states respectively and $A$ is the energy stored in your battery. However, all three cannot hold simultaneously: if $p_i=p_f$ and $m_i=m_f$ then from $E=\frac{p^2}{2m}$ one obtains $E_i=E_f$. Thus, in order to increase the energy of the system you need to relax one of these conditions. Let us suppose that $m_i=m_f$ but $p_i\neq p_f$, i.e. the momentum is not conserved. For example, our 'car' interacts with the road via a friction force $F_{fr}$. Suppose that the speed of the car was increased with constant acceleration $a$ from the initial value $v$ to the final value $v+\Delta v$. The work done by the friction force on the car is $$W_{fr}=F_{fr}\int_0^{\Delta v/a}v(t)\,dt=F_{fr}\int_0^{\Delta v/a}(v+at)\,dt=F_{fr}\,\frac{v\Delta v+\Delta v^2/2}{a}$$ Since the acceleration is caused only by that friction force we also have $ma=F_{fr}$ and therefore $$W_{fr}=m\left(v\Delta v+\tfrac{1}{2}\Delta v^2\right)$$ Now consider the reference frame moving with a constant speed $v$. In this reference frame the car has traveled less, and so less work is done on it by the friction force: $$W'_{fr}=F_{fr}\int_0^{\Delta v/a}v'(t)\,dt=F_{fr}\int_0^{\Delta v/a}at\,dt=F_{fr}\,\frac{\Delta v^2}{2a}=\frac{m\Delta v^2}{2}$$ You can see that the difference between these two works, $W_{fr}-W'_{fr}=mv\Delta v$, is exactly the difference between the variations of kinetic energy calculated in these two frames. Similarly, one can keep the condition $p_i=p_f$ but relax the condition $m_i=m_f$, thus considering the case of a jet engine. If done accurately, the calculations in this case also yield perfect conservation of energy. • Then, to solve the problem of the car running off batteries, I need to take into account other factors, such as energy dissipation? Thinking about it another way: if I took the road into account (acceleration of the floor), could the calculations work out? Sep 24 '14 at 22:37 • @André Pereira I'm not sure I got your question right but still I'll try to answer. The work done by the friction force actually goes into the Earth's kinetic energy, accelerating it a little bit :) The amount of kinetic energy that the Earth will acquire depends on the reference frame. Surely this difference exactly matches the difference for the kinetic energy of the car. Sep 25 '14 at 0:29
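A quick numerical check of the bookkeeping above may help; the mass, speeds and acceleration below are made-up illustrative values, not part of the original question:

```python
# Car pushed by a constant road force F = m*a from speed v to v + dv.
m, v, dv, a = 1000.0, 20.0, 5.0, 2.0      # kg, m/s, m/s, m/s^2 (assumed values)
F = m * a
T = dv / a                                 # duration of the acceleration

# Work done by the road force in the ground frame and in the frame moving at v.
W_ground = F * (v * T + 0.5 * a * T**2)    # F times the integral of (v + a t) dt
W_moving = F * (0.5 * a * T**2)            # F times the integral of (a t) dt

dKE_ground = 0.5 * m * ((v + dv)**2 - v**2)
dKE_moving = 0.5 * m * dv**2

print(W_ground, dKE_ground)                # 112500.0 112500.0
print(W_moving, dKE_moving)                # 12500.0 12500.0
print(W_ground - W_moving, m * v * dv)     # 100000.0 100000.0
```

In each frame the work equals that frame's change in kinetic energy, and the difference between the two works is exactly m*v*dv, as claimed in the answer.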
# Does infinite time = time with no end = never? Say an object is to travel from point A to point B, a finite distance of 2 meters. Say the object travels at 1 m/s. After 1 meter the speed of the object is halved. After another half of the previous distance the speed is halved again, and so on. So each term in the infinite series 1 + 1/2 + 1/4 + 1/8 + ... represents the distance in meters the object travels before the speed is halved. Now apparently the object does eventually reach point B, but it takes an infinite amount of time to do so. Yet infinity has no end. So if the object does ever reach point B, wouldn't that mark the end of the infinite length of time that it took to reach it, and wouldn't that contradict the definition of infinity? Yet if we say it never reaches point B, which is to say that there is no point in time at which it reaches point B because that point in time would mark an end to infinity, then isn't that consistent with what we mean by infinity? - This is not the right place for this type of question. Try asking on physics.stackexchange.com –  Anonymous Sep 18 '13 at 19:48 @Anonymous: I'm not sure it would really fit there, either. It's almost a philosophical question, really. –  Cameron Buie Sep 18 '13 at 19:49 The short answer is that if it takes an infinite amount of time to reach point B, then the object never reaches it. Of course, in the real world, no matter how precise your measurement is, there will be some time after which the object is closer to B than your instrument can distinguish. –  Javier Badia Sep 18 '13 at 19:49 Assuming that distance is quantized (that is, there is a least possible positive distance, counterintuitive as that seems), then at some point, the object will actually stop before getting to point $B$. Javier's point is also well-taken. –  Cameron Buie Sep 18 '13 at 19:52 The key to cracking Zeno's paradox is the realization that an infinite series can have a finite sum. You mention an example of such a series: $$\sum\limits_{n=0}^\infty \frac{1}{2^n} = 1 +\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots = 2$$ That yields the 2 meters from your example. You can divide that number infinitely, as Zeno did, but if you add those divisions back up, you still get 2: a finite number. Because it's a finite distance, it can be crossed in finite time. You still have to cross each partial distance, but each of these distances is also finite (in fact, you can subdivide each of them the same way you divided up the original distance), so they can all be crossed in finite time too. If you move at the same speed through each one, the crossing times form the very same sort of infinite series as the distances do. In fact, you don't even have to move at the same speed through each one; it just makes the math easier if you do. - There is perhaps the bigger question of how close the object would have to be to B to be considered at B, since this series is essentially the Dichotomy from Zeno's paradoxes; that would be my suggestion for a place to look for an answer. For example, how large is the object, i.e. how much space does it physically occupy? Zeno's Dichotomy is pretty much what you have stated here, though you do have to concede that the object has infinitesimal size, or else it will be close enough to count as at B after crossing finitely many of the distances. - Assume the object has length x and the distance between point A and B is (1/1 + 1/2 + 1/4 + ...) - x, or that the object is simply a geometric point itself. –  RussW Sep 18 '13 at 20:27
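A small numerical illustration of the specific scheme in the question (my own check): segment n has length (1/2)^n metres and is traversed at (1/2)^n m/s, so every segment takes exactly 1 second. The distance series converges while the elapsed time does not.

```python
# Partial sums for the halving-speed scheme described in the question.
distance = 0.0
elapsed = 0.0
for n in range(50):
    length = 0.5 ** n       # metres travelled before the next halving
    speed = 0.5 ** n        # speed on that segment, m/s
    distance += length
    elapsed += length / speed

print(distance)  # 1.9999999999999998 -- approaches the 2 m total
print(elapsed)   # 50.0 -- grows by 1 s per segment, without bound
```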
# Extensions of Carathéodory's theorem We know Carathéodory's theorem, which concerns convex subsets of $\mathbb{R}^d$. My question is, how far can we extend it? Is it true for, say, any convex subset of a Banach space, or for convex subsets of a real manifold? I believe the answers are negative. However, I want to know whether there is a result similar to Carathéodory's (i.e. an upper bound with respect to the dimension of the space/manifold). I am not sure whether it should be asked here. I have already asked it on math.stackexchange without getting any reply. The only comment was regarding Choquet theory, but I am yet to get anything related to my question. I am new to this subject, and possibly overlooked the required section. Thanks in advance for any help/suggestion/reference which can be (relatively) easily understood. Feel free to ask (and also edit) if you want more clarification. - Perhaps we need a definition of "convex" in a manifold. –  Gerald Edgar Apr 14 '13 at 16:44 Roughly speaking, Carathéodory's theorem states that the convex hull of a set $X$ is the union of the simplices whose vertices lie in $X$. To obtain an extension in this sense you need to generalize the notion of simplex. This is what Choquet did: encyclopediaofmath.org/index.php/Choquet_simplex That explains the relation of Choquet theory to your question. –  alvarezpaiva Apr 14 '13 at 17:31 That is, in a $d$-dimensional affine subspace, everything in the theorem for $\mathbb R^d$ still works, for the reasons cited. –  Gerald Edgar Apr 14 '13 at 16:43
# Solving for a single element of a solution of a linear system I wish to solve a linear system $$A x =b$$ in which $$A$$ is dense but not too large, say no larger than $$10\times10$$. However, I am not interested in the full solution vector $$x = [x_0, x_1, \dots]$$, rather I only care about accurately determining $$x_0$$. I know that Cramer's rule can compute $$x_0$$ directly, but this requires the evaluation of two determinants. It is my understanding that calculating these determinants is not typically faster than solving the full linear system, and is also more susceptible to catastrophic cancellation. Are there algorithms for determining $$x_0$$ alone whose expected computational complexity beats Gaussian elimination or LU factorization? Does the answer change if I am willing to accept an approximation for $$x_0$$, say from an iterative approach? Edit: I'll provide some additional context, which might help stimulate some ideas. In my application, I have a sequence of linear systems $$A^{(n)} x^{(n)} = b^{(n)}$$, with $$n=1,2,\dots$$ being the dimension of the system. Concretely, $$A^{(1)} x^{(1)} = b^{(1)} \to a_{00} x^{(1)}_0 = b_0 \\ A^{(2)} x^{(2)} = b^{(2)} \to \begin{bmatrix} a_{00} & a_{01} \\ a_{10} & a_{11} \end{bmatrix} \begin{bmatrix} x^{(2)}_0 \\ x^{(2)}_1 \end{bmatrix} = \begin{bmatrix} b_0 \\ b_1 \end{bmatrix}$$ and so on, with each linear system in the sequence simply appending another row and column to $$A$$ and appending an element to $$b$$, leaving the lower-order elements intact. As this sequence progresses, the value of $$x_0^{(n)}$$ should converge toward some limiting value $$x^{(\infty)}_0$$. So if it is possible to use the results of lower-order calculations $$x^{(1)},\dots,x^{(n-1)}$$ to compute $$x_0^{(n)}$$ in better than $$O(n^3)$$ time, that would be a useful answer as well. • I think if the matrix is sparse, there are some tricks that can be used to lower the computational cost, but not the complexity. But if it is dense, I don't know any methods which can achieve what you want. Sep 18 at 6:34 • There is an $O(n^3)$ implementation of Cramer's rule, although I don't know if it would work in your case:. Ken Habgood; Itamar Arel (2012). "A condensation-based application of Cramerʼs rule for solving large-scale linear systems". Journal of Discrete Algorithms. 10: 98–109. doi:10.1016/j.jda.2011.06.007. Sep 18 at 13:34 • Apparently we need to find $x_0$ for many different cases. Is there anything in common for matrices A and vectors b for those cases? Or those are completely unrelated? I am wondering if it is possible to find $x_0$ for one given A,b and then to use that information for updated A and/or b. Sep 18 at 14:07 • Is $A$ special in any way? Symmetric, positive, non-negative, Hermitian, &c? Sep 18 at 15:03 • @MaximUmansky Very sharp observation. I've expanded the question to include more context about the problem which may help stimulate ideas along the lines of what you're thinking. Sep 19 at 2:30 You can update most matrix factorizations in $$O(n^2)$$ after adding a column and/or a row to the matrix (or, in general, after a rank-1 update). In real life, this is usually done with QR and/or Cholesky rather than other factorizations, for stability reasons; look for qrinsert in your language of choice; e.g. Python Matlab. 
So in your iteration when you enlarge the system at each step you can solve the full linear systems while paying $$O(n^2)$$ per step, by relying on factorization updates: at each step, you update the factorization by adding a row ($$O(n^2)$$), then adding a column ($$O(n^2)$$), then you use the factorization to solve the linear system ($$O(n^2)$$).
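A minimal sketch of that update-and-resolve loop in Python, using SciPy's QR insertion routine. The random test matrices are placeholders, and the sketch assumes `qr_insert` accepts appending at the end (insertion index equal to the current dimension); the loop simply prints the leading component x_0 of each successive system.

```python
import numpy as np
from scipy.linalg import qr, qr_insert, solve_triangular

# Hypothetical growing family of systems A^(n) x^(n) = b^(n) (random data).
rng = np.random.default_rng(1)
n_max = 10
A_full = rng.standard_normal((n_max, n_max))
b_full = rng.standard_normal(n_max)

# Factor the 1x1 system once, then grow it one row and one column at a time.
Q, R = qr(A_full[:1, :1])
for n in range(1, n_max):
    Q, R = qr_insert(Q, R, A_full[n, :n], n, which='row')       # O(n^2) update
    Q, R = qr_insert(Q, R, A_full[:n + 1, n], n, which='col')   # O(n^2) update
    x = solve_triangular(R, Q.T @ b_full[:n + 1])               # O(n^2) solve
    print(n + 1, x[0])   # x_0 of the current (n+1) x (n+1) system
```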
# Tag Info 33 This is inevitably going to be an unsatisfactory answer because your question is vastly more complicated than you (probably) realise. I'll attempt an answer in general terms, but you have to appreciate this is a pale shadow of the physics that describes this area. Anyhow, Einstein was the first to spot that energy and mass were equivalent, and you've no ... 33 When lasers cut something, they're only cutting in the sense that they're making atoms be not as attracted as they once were to each other. When you get down to the nitty-gritty details, it is not really the same as mechanical cutting. Remember that lasers shoot photons, and when photons hit atoms, they excite electrons. If you excite these electrons ... 18 I am assuming that by "energy" you mean photons. So you want to transform protons into photons. It is not possible. It would violate several conservation laws - mainly the charge conservation (protons are positively charged), but also baryon number conservation. The antiparticle is necessary to cancel these quantum charges to make the transition possible. 17 You've probably heard of Einstein's famous equation: $$e = mc^2$$ This states that mass and energy are equivalent, and indeed the LHC turns energy into matter every day. So to find the mass equivalent to an electron volt just convert eV to Joules and divide by $c^2$. 1 electron volt = $1.60217646 \times 10^{-19}$ joules, so 125 GeV is: $$125 \times ... 17 Yes, the total mass of a battery increases when the battery is charged and decreases when it is discharged. The difference boils to Einstein's E=mc^2 that follows from his special theory of relativity. Energy is equivalent to mass and c^2, the squared speed of light, is the conversion factor. I would omit the scenario I. If the lithium is leaking from ... 16 Because time is accumulating, to calculate the time lapse, you integrate. The elementary time interval transforms like mass. The difference is that the total time lapse is done by "summing" over all elementary intervals. For the mass, you don't do this. For mass:$$m=\frac {m_0} {\sqrt{1-\frac{v^2}{c^2}}}$$For time:$$dt=\frac {dt_0} ... 16 Firstly, mass is energy. There is no distinction at all. Not all of the "mass" of an atom comes from protons/neutrons/electrons, for example, there is a chunk that comes from potential energy — this chunk actually dominates over the mass of the constituents. However, we can reword your question to be "What stops nucleons from turning into other forms ... 16 The definition of an antiparticle is dependent on having the opposite quantum numbers of the particle so that they can annihilate, i.e. the sum of the conserved quantum numbers are zero. Thus the answer by @mpv is adequate. The implication of your question is then: is baryon number conservation a strict law or an emergent law that may be violated at some ... 14 To answer the question simply, $E=mc^2$. Energy is a manifestation of mass, and mass is a manifestation of energy. In a fusion or fission process, the total "energy" of the system remains constant, it just changes shape. By "energy" I mean the totality of the already present energy, and the bound energy of the mass that takes part in the reaction. 13 Cutting is a process when you deliver energy to break chemical bonds in material that you cut. When you use a saw, you deliver mechanical (kinetic) energy that converts into kinetic energy of particles of the thing you cut, so they can get out of the thing. 
Laser is just another way to deliver such energy, since the a photon has enough energy to break some ... 12 You can find the shortest and easiest derivation of this result in the paper where it was released by Einstein himself (what better reference can you find?) in 1905. It is not the main paper of Special Relativity, but a short document he added shortly afterwards. A. Einstein,Ist die Trägheit eines Körpers von seinem Energieinhalt Abhängig?, Annalen der ... 12 The answer is the there is some reduction in mass whenever energy is released, whether in nuclear fission or burning of coal or whatever. However, the amount of mass lost is very small, even compared to the masses of the constituent particles. A good overview is given in the Wikipedia article on mass excess. Basically, the mass of a nucleus will in general ... 10 The conversion between mass and energy isn't even really a conversion. It's more that mass (or "mass energy") is a name for some amount of an object's energy. But the same energy that you call the mass can actually be a different type of energy, if you look closer. For example, we say that a proton has a specific amount of mass, about $2\times 10^{-27}\text{ ... 10$E_0 = m_0 c^2$is only the equation for the "rest energy" of a particle/object. The full equation for the kinetic energy of a moving particle is actually: $$E-E_0 = \gamma m_0c^2 - m_0c^2,$$ where$\gamma$is defined as$\gamma = \frac{1}{\sqrt{1 - (v/c)^2}}, $where$v$is the relative velocity of the particle. An "intuitive" answer to the question can ... 10 Take a nucleus of U-235 and determine its mass. Induce it to fission by firing a neutron at it. When it does so, collect all the pieces (except the extra neutron) and determine their total mass. You will find that all the pieces weigh just a hair less than original nucleus. The difference is the "binding energy", also previously known as the "packing ... 9 If I'm reading rightly, I think your main question is: Why does only a small percentage of rest mass turn into energy [even for fusion]? It's because the universe is very strict about a certain small set of conservation rules, and certain combinations of these rules make ordinary matter extremely stable. Exactly why these rules are so strictly observed ... 9 The reason$c$is important is not because it is the speed of light. It's important because it is a universal conversion factor between time and distance. If you have a certain amount of time$t$, you can calculate the corresponding amount of distance by multiplying it by$c$. Note: I'm not talking about the distance any particular object travels in the ... 9 As noted by someone else, energy can be "converted" into mass e.g. via pair production. However, there is another example of this that you may be interested in: The mass of the matter you come into contact with on an everyday basis is almost entirely from protons and neutrons, which are roughly 2000x more massive than electrons. The proton, for example, is ... 9 Yes, everything generates a gravitational field, whether it is massive or massless like a photon. The source of the gravitational field is an object called the stress-energy tensor. This is normally written as a 4 x 4 symmetric matrix, and the top left entry is the energy density. Note that mass does not appear at all. We convert mass to energy by ... 9 Every relationship between mass and energy will contain two factors of velocity, for dimensional consistency. 
In special relativity we have the more exact relationship $$E^2 - p^2c^2 = m^2c^4$$ where the momentum$p$is $$p = \frac{mv}{\sqrt{1-v^2/c^2}}.$$ You can do a little algebra to show that the total energy is always $$E = \frac{ ... 9 In short, no, it is not a coincidence, they are related. Namely, you may derive the kinetic energy as the first order approximation to the relativistic energy. We have,$$ E_0 = mc^2 $$as you say correctly. Then$$ E = \gamma m c^2 = \left( 1 - \frac{v^2}{c^2}\right)^{-\frac{1}{2}} m c^2 $$or using a binomial expansion$$ E \simeq \left( 1 + ... 8 This is actually a more complex question than you might think, because the distinction between mass and energy kind of disappears once you start talking about small particles. So what is mass exactly? There are two common definitions: The quantity that determines an object's resistance to a change in motion, the$m$in$\sum F = ma$The quantity that ... 8 In the early days of special relativity it was noted that the mass of an object appeared to increase as the speed of the object approached the speed of light. It was common to see the notation$m_0$used for the rest mass and$m$for the relativistic mass. In this sense the equation$E = mc^2$is always true. However the concept of relativistic mass is ... 8 Energy and matter are not the same. Matter is a type of thing, whereas energy is a property of a thing, like velocity or volume. So your premise is flawed. In particular: there's no such thing as "a solid state of energy" - hopefully it makes sense that a property of something does not have states energy is not represented by waves, though it is a property ... 8 The annihilation produces gamma photons, whose total energy sums up to the total energy$E_0=\sqrt{p^2 \,c^2 + m^2\,c^4}$formerly contained in the matter / antimatter kinetic energy (the$p\,c$term) and that "frozen" in rest mass (the$m\,c^2$term). So energy is conserved. As for gravity, the Einstein field equations "can't tell the difference" between ... 8 Think of the laser process as being similar to melting a substance through a change in state. An analogy might be putting a hot wire on top of an ice cube. It makes a 'slice' by heat, not by cutting. It turns the solid state ice into water and gas which doesn't hold together the same anymore. 7 Strictly speaking it's a unit of energy. But using$m=\frac{E}{c^2}$, you can convert energy into mass. Operating, we get$1{\rm\,eV}/c^2 =1.78\cdot 10^{-36}\rm{\,kg}$. (The$c^2$is usually ommited.) 7 Starting with your given equation, we add$p^2 c^2$to both sides to get $$E^2=m^2 c^4 + p^2 c^2$$ now using the definition of relativistic momentum$p=\gamma m v$we substitute that in above to get $$E^2 = m^2 c^4 +(\gamma m v)^2 c^2=m^2 c^4 +\gamma^2 m^2 v^2 c^2$$ Now, factoring out a common$m^2 c^4$from both terms on the RHS in anticipation of the ... 7 As you may know, photons do not have mass. Relating relativistic momentum and relativistic energy, we get:$E^2 = p^2c^2+(mc^2)^2$. where$E$is energy,$p$is momentum,$m$is mass and$c$is the speed of light. As mass is zero,$E=pc$. Now, we know that$E=hf\$. Then we get the momentum for photon. Note that there is a term called effective inertial ... Only top voted, non community-wiki answers of a minimum length are eligible
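Several of the excerpts above quote the low-speed expansion of the relativistic energy. That expansion can be checked symbolically with a computer algebra system; a small illustrative sketch (my own, not taken from any of the quoted answers):

```python
import sympy as sp

v, c, m = sp.symbols('v c m', positive=True)
E = m * c**2 / sp.sqrt(1 - v**2 / c**2)    # total energy gamma * m * c^2

# Expanding about v = 0 recovers the rest energy plus the Newtonian kinetic
# energy, with the first relativistic correction appearing at order v^4:
#   m*c**2 + m*v**2/2 + 3*m*v**4/(8*c**2) + ...
print(sp.series(E, v, 0, 5))
```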
# Significant figures with pgfplots Is there a possibility with pgfplots to print numbers with no decimals for the nodes? By this, I mean printing '2' instead of '2.0e0'. In my case, I want to remove the decimal part and add the percent sign, which would give '2%' instead of '2.0e0'. -- Furthermore, how can I put the text node '1%' on the top of the corresponding bar? (if the bar is negative, it won't be on the top anymore) -- Here is a sample code and a picture which I want to improve. \documentclass{standalone} \usepackage{filecontents,pgfplots,pgfplotstable} \begin{filecontents}{data.dat} id a b name 0 1.1 20 Name1 1 2.2 -10 Name2 2 3.3 15 Name3 \end{filecontents} \pgfplotstableread{data.dat}\donnees \begin{document} \begin{tikzpicture} \begin{axis}[ybar,xtick={0,1,2},xticklabels={a,b,c},point meta=explicit symbolic,nodes near coords=$\pgfplotspointmeta\%$] \addplot table [x={id}, y={b},meta expr=\thisrow{id}] {\donnees}; \end{axis} \end{tikzpicture} \end{document} You can use nodes near coords={\pgfmathprintnumber[fixed, precision=0]{\pgfplotspointmeta}\%} to format the numbers using PGF's very flexible number formatting macro. Note that you have to use point meta=explicit instead of point meta=explicit symbolic if your meta data is numeric, otherwise the number parser will fail. \documentclass[border=5mm]{standalone} \usepackage{filecontents,pgfplots,pgfplotstable} \begin{filecontents}{data.dat} id a b name 0 1.1 20 Name1 1 2.2 10 Name2 2 3.3 15 Name3 \end{filecontents} \pgfplotstableread{data.dat}\donnees \begin{document} \begin{tikzpicture} \begin{axis}[ ybar, xtick={0,1,2}, xticklabels={a,b,c}, point meta=explicit, nodes near coords={\pgfmathprintnumber[fixed, precision=0]{\pgfplotspointmeta}\%} ] \addplot table [x={id}, y={b},meta expr=\thisrow{id}] {\donnees}; \end{axis} \end{tikzpicture} \end{document} • I get '0%' for each bar, is there an error in the code ? – Laurent Dudok de Wit Sep 6 '13 at 20:38 • @user81566: I get the result shown in the screenshot when I compile my code. What version of PGFPlots are you using? – Jake Sep 6 '13 at 20:46 • I have the version 1.7, which should be the latest version according to Miktex. Am I right ? – Laurent Dudok de Wit Sep 6 '13 at 20:58 • @user81566: The current version is 1.8, but I wouldn't have thought that there was a change between 1.7 and 1.8 that would affect this. When you copy my code into a new document, you get 0%? Or only when you try to adapt this for your own code? – Jake Sep 6 '13 at 21:09 • Yes but I get 0,1,2 percent somehow. Not 500,1000 as you have. Oops picture is changed nevermind :) – percusse Sep 8 '13 at 12:43
# What are the three options farmers (as price takers) have to manage price risk?

###### Question: What are the three options farmers (as price takers) have to manage price risk?
- Contracts, Negotiation, Hedging
- Contracts, Hedging, Insurance
- Contracts, Branding, Bargaining

Look at the graph below. Which of these farmers is the most risk averse?
[Figure: expected return versus risk, showing a curve for Farmer A (upper) and Farmer B (lower) and three points labelled Option 1, Option 2 and Option 3.]
- Farmer A as, for the same risk, he needs a higher expected return than Farmer B
- Farmer B as, for the same risk, he needs a higher expected return than Farmer A

Look at the graph below. What is the decision of Farmer A regarding option 2?
[Same figure as above.]
- He does not take it as the expected return is lower than his own minimum return for that risk
- He takes it as its expected return is higher than option 1

Look at the graph below. What do you think about option 3?
[Same figure as above.]
- All farmers will take option 3
- Only Farmer A will take option 3
- Only Farmer B will take option 3
## An extremum problem for polynomials. (English) Zbl 0097.04902 ### Keywords: approximation and series expansion
### Section 4-3 : Inverse Laplace Transforms Finding the Laplace transform of a function is not terribly difficult if we’ve got a table of transforms in front of us to use as we saw in the last section. What we would like to do now is go the other way. We are going to be given a transform, $$F(s)$$, and ask what function (or functions) did we have originally. As you will see this can be a more complicated and lengthy process than taking transforms. In these cases we say that we are finding the Inverse Laplace Transform of $$F(s)$$ and use the following notation. $f\left( t \right) = {\mathcal{L}^{\, - 1}}\left\{ {F\left( s \right)} \right\}$ As with Laplace transforms, we’ve got the following fact to help us take the inverse transform. #### Fact Given the two Laplace transforms $$F(s)$$ and $$G(s)$$ then ${\mathcal{L}^{\, - 1}}\left\{ {aF\left( s \right) + bG\left( s \right)} \right\} = a{\mathcal{L}^{\, - 1}}\left\{ {F\left( s \right)} \right\} + b{\mathcal{L}^{\, - 1}}\left\{ {G\left( s \right)} \right\}$ for any constants $$a$$ and $$b$$. So, we take the inverse transform of the individual transforms, put any constants back in and then add or subtract the results back up. Let’s take a look at a couple of fairly simple inverse transforms. Example 1 Find the inverse transform of each of the following. 1. $$\displaystyle F\left( s \right) = \frac{6}{s} - \frac{1}{{s - 8}} + \frac{4}{{s - 3}}$$ 2. $$\displaystyle H\left( s \right) = \frac{{19}}{{s + 2}} - \frac{1}{{3s - 5}} + \frac{7}{{{s^5}}}$$ 3. $$\displaystyle F\left( s \right) = \frac{{6s}}{{{s^2} + 25}} + \frac{3}{{{s^2} + 25}}$$ 4. $$\displaystyle G\left( s \right) = \frac{8}{{3{s^2} + 12}} + \frac{3}{{{s^2} - 49}}$$ Show All Solutions Hide All Solutions Show Discussion We’ve always felt that the key to doing inverse transforms is to look at the denominator and try to identify what you’ve got based on that. If there is only one entry in the table that has that particular denominator, the next step is to make sure the numerator is correctly set up for the inverse transform process. If it isn’t, correct it (this is always easy to do) and then take the inverse transform. If there is more than one entry in the table that has a particular denominator, then the numerators of each will be different, so go up to the numerator and see which one you’ve got. If you need to, correct the numerator to get it into the correct form and then take the inverse transform. So, with this advice in mind let’s see if we can take some inverse transforms.
a $$\displaystyle F\left( s \right) = \frac{6}{s} - \frac{1}{{s - 8}} + \frac{4}{{s - 3}}$$ Show Solution From the denominator of the first term it looks like the first term is just a constant. The correct numerator for this term is a “1” so we’ll just factor the 6 out before taking the inverse transform. The second term appears to be an exponential with $$a = 8$$ and the numerator is exactly what it needs to be. The third term also appears to be an exponential, only this time $$a = 3$$ and we’ll need to factor the 4 out before taking the inverse transforms. So, with a little more detail than we’ll usually put into these, \begin{align*}F\left( s \right) & = 6\,\,\frac{1}{s} - \frac{1}{{s - 8}} + 4\frac{1}{{s - 3}}\\ & f\left( t \right) = 6\left( 1 \right) - {{\bf{e}}^{8t}} + 4\left( {{{\bf{e}}^{3t}}} \right)\\ & = 6 - {{\bf{e}}^{8t}} + 4{{\bf{e}}^{3t}}\end{align*} b $$\displaystyle H\left( s \right) = \frac{{19}}{{s + 2}} - \frac{1}{{3s - 5}} + \frac{7}{{{s^5}}}$$ Show Solution The first term in this case looks like an exponential with $$a = - 2$$ and we’ll need to factor out the 19. Be careful with negative signs in these problems, it’s very easy to lose track of them. The second term almost looks like an exponential, except that it’s got a $$3s$$ instead of just an $$s$$ in the denominator. It is an exponential, but in this case, we’ll need to factor a 3 out of the denominator before taking the inverse transform. The denominator of the third term appears to be #3 in the table with $$n = 4$$. The numerator however, is not correct for this. There is currently a 7 in the numerator and we need a $$4! = 24$$ in the numerator. This is very easy to fix. Whenever a numerator is off by a multiplicative constant, as in this case, all we need to do is put the constant that we need in the numerator. We will just need to remember to take it back out by dividing by the same constant. So, let’s first rewrite the transform. \begin{align*}H\left( s \right) & = \frac{{19}}{{s - \left( { - 2} \right)}} - \frac{1}{{3\left( {s - \frac{5}{3}} \right)}} + \frac{{7\frac{{4!}}{{4!}}}}{{{s^{4 + 1}}}}\\ & = 19\frac{1}{{s - \left( { - 2} \right)}} - \frac{1}{3}\frac{1}{{s - \frac{5}{3}}} + \frac{7}{{4!}}\frac{{4!}}{{{s^{4 + 1}}}}\end{align*} So, what did we do here? We factored the 19 out of the first term. We factored the 3 out of the denominator of the second term since it can’t be there for the inverse transform and in the third term we factored everything out of the numerator except the 4! since that is the portion that we need in the numerator for the inverse transform process. Let’s now take the inverse transform. $h\left( t \right) = 19{{\bf{e}}^{ - 2t}} - \frac{1}{3}{{\bf{e}}^{\frac{{5t}}{3}}} + \frac{7}{{24}}{t^4}$ c $$\displaystyle F\left( s \right) = \frac{{6s}}{{{s^2} + 25}} + \frac{3}{{{s^2} + 25}}$$ Show Solution In this part we’ve got the same denominator in both terms and our table tells us that we’ve either got #7 or #8. The numerators will tell us which we’ve actually got. The first one has an $$s$$ in the numerator and so this means that the first term must be #8 and we’ll need to factor the 6 out of the numerator in this case. The second term has only a constant in the numerator and so this term must be #7, however, in order for this to be exactly #7 we’ll need to multiply/divide a 5 in the numerator to get it correct for the table. 
The transform becomes, \begin{align*}F\left( s \right) & = 6\frac{s}{{{s^2} + {{\left( 5 \right)}^2}}} + \frac{{3\frac{5}{5}}}{{{s^2} + {{\left( 5 \right)}^2}}}\\ & = 6\frac{s}{{{s^2} + {{\left( 5 \right)}^2}}} + \frac{3}{5}\frac{5}{{{s^2} + {{\left( 5 \right)}^2}}}\end{align*} Taking the inverse transform gives, $f\left( t \right) = 6\cos \left( {5t} \right) + \frac{3}{5}\sin \left( {5t} \right)$ d $$\displaystyle G\left( s \right) = \frac{8}{{3{s^2} + 12}} + \frac{3}{{{s^2} - 49}}$$ Show Solution In this case the first term will be a sine once we factor a 3 out of the denominator, while the second term appears to be a hyperbolic sine (#17). Again, be careful with the difference between these two. Both of the terms will also need to have their numerators fixed up. Here is the transform once we’re done rewriting it. \begin{align*}G\left( s \right) & = \frac{1}{3}\frac{8}{{{s^2} + 4}} + \frac{3}{{{s^2} - 49}}\\ & = \frac{1}{3}\frac{{\left( 4 \right)\left( 2 \right)}}{{{s^2} + {{\left( 2 \right)}^2}}} + \frac{{3\frac{7}{7}}}{{{s^2} - {{\left( 7 \right)}^2}}}\end{align*} Notice that in the first term we took advantage of the fact that we could get the 2 in the numerator that we needed by factoring the 8. The inverse transform is then, $g\left( t \right) = \frac{4}{3}\sin \left( {2t} \right) + \frac{3}{7}\sinh \left( {7t} \right)$ So, probably the best way to identify the transform is by looking at the denominator. If there is more than one possibility use the numerator to identify the correct one. Fix up the numerator if needed to get it into the form needed for the inverse transform process. Finally, take the inverse transform. Let’s do some slightly harder problems. These are a little more involved than the first set. Example 2 Find the inverse transform of each of the following. 1. $$\displaystyle F\left( s \right) = \frac{{6s - 5}}{{{s^2} + 7}}$$ 2. $$\displaystyle F\left( s \right) = \frac{{1 - 3s}}{{{s^2} + 8s + 21}}$$ 3. $$\displaystyle G\left( s \right) = \frac{{3s - 2}}{{2{s^2} - 6s - 2}}$$ 4. $$\displaystyle H\left( s \right) = \frac{{s + 7}}{{{s^2} - 3s - 10}}$$ Show All Solutions Hide All Solutions a $$\displaystyle F\left( s \right) = \frac{{6s - 5}}{{{s^2} + 7}}$$ Show Solution From the denominator of this one it appears that it is either a sine or a cosine. However, the numerator doesn’t match up to either of these in the table. A cosine wants just an $$s$$ in the numerator with at most a multiplicative constant, while a sine wants only a constant and no $$s$$ in the numerator. We’ve got both in the numerator. This is easy to fix however. We will just split up the transform into two terms and then do inverse transforms. \begin{align*}F\left( s \right) & = \frac{{6s}}{{{s^2} + 7}} - \frac{{5\frac{{\sqrt 7 }}{{\sqrt 7 }}}}{{{s^2} + 7}}\\ f\left( t \right) & = 6\cos \left( {\sqrt 7 t} \right) - \frac{5}{{\sqrt 7 }}\sin \left( {\sqrt 7 t} \right)\end{align*} Do not get too used to always getting the perfect squares in sines and cosines that we saw in the first set of examples. More often than not (at least in my class) they won’t be perfect squares! b $$\displaystyle F\left( s \right) = \frac{{1 - 3s}}{{{s^2} + 8s + 21}}$$ Show Solution In this case there are no denominators in our table that look like this. We can however make the denominator look like one of the denominators in the table by completing the square on the denominator. So, let’s do that first. 
\begin{align*}{s^2} + 8s + 21 & = {s^2} + 8s + 16 - 16 + 21\\ & = {s^2} + 8s + 16 + 5\\ & = {\left( {s + 4} \right)^2} + 5\end{align*} Recall that in completing the square you take half the coefficient of the $$s$$, square this, and then add and subtract the result to the polynomial. After doing this the first three terms should factor as a perfect square. So, the transform can be written as the following. $F\left( s \right) = \frac{{1 - 3s}}{{{{\left( {s + 4} \right)}^2} + 5}}$ Okay, with this rewrite it looks like we’ve got #19 and/or #20’s from our table of transforms. However, note that in order for it to be a #19 we want just a constant in the numerator and in order to be a #20 we need an $$s – a$$ in the numerator. We’ve got neither of these, so we’ll have to correct the numerator to get it into proper form. In correcting the numerator always get the $$s – a$$ first. This is the important part. We will also need to be careful of the 3 that sits in front of the $$s$$. One way to take care of this is to break the term into two pieces, factor the 3 out of the second and then fix up the numerator of this term. This will work; however, it will put three terms into our answer and there are really only two terms. So, we will leave the transform as a single term and correct it as follows, \begin{align*}F\left( s \right) & = \frac{{1 - 3\left( {s + 4 - 4} \right)}}{{{{\left( {s + 4} \right)}^2} + 5}}\\ & = \frac{{1 - 3\left( {s + 4} \right) + 12}}{{{{\left( {s + 4} \right)}^2} + 5}}\\ & = \frac{{ - 3\left( {s + 4} \right) + 13}}{{{{\left( {s + 4} \right)}^2} + 5}}\end{align*} We needed an $$s + 4$$ in the numerator, so we put that in. We just needed to make sure and take the 4 back out by subtracting it back out. Also, because of the 3 multiplying the $$s$$ we needed to do all this inside a set of parenthesis. Then we partially multiplied the 3 through the second term and combined the constants. With the transform in this form, we can break it up into two transforms each of which are in the tables and so we can do inverse transforms on them, \begin{align*}F\left( s \right) & = - 3\frac{{s + 4}}{{{{\left( {s + 4} \right)}^2} + 5}} + \frac{{13\frac{{\sqrt 5 }}{{\sqrt 5 }}}}{{{{\left( {s + 4} \right)}^2} + 5}}\\ f\left( t \right) & = - 3{{\bf{e}}^{ - 4t}}\cos \left( {\sqrt 5 t} \right) + \frac{{13}}{{\sqrt 5 }}{{\bf{e}}^{ - 4t}}\sin \left( {\sqrt 5 t} \right)\end{align*} c $$\displaystyle G\left( s \right) = \frac{{3s - 2}}{{2{s^2} - 6s - 2}}$$ Show Solution This one is similar to the last one. We just need to be careful with the completing the square however. The first thing that we should do is factor a 2 out of the denominator, then complete the square. Remember that when completing the square a coefficient of 1 on the $$s^{2}$$ term is needed! So, here’s the work for this transform. \begin{align*}G\left( s \right) & = \frac{{3s - 2}}{{2\left( {{s^2} - 3s - 1} \right)}}\\ & = \frac{1}{2}\frac{{3s - 2}}{{{s^2} - 3s + \frac{9}{4} - \frac{9}{4} - 1}}\\ & = \frac{1}{2}\frac{{3s - 2}}{{{{\left( {s - \frac{3}{2}} \right)}^2} - \frac{{13}}{4}}}\end{align*} So, it looks like we’ve got #21 and #22 with a corrected numerator. Here’s the work for that and the inverse transform. 
\begin{align*}G\left( s \right) & = \frac{1}{2}\frac{{3\left( {s - \frac{3}{2} + \frac{3}{2}} \right) - 2}}{{{{\left( {s - \frac{3}{2}} \right)}^2} - \frac{{13}}{4}}}\\ & = \frac{1}{2}\frac{{3\left( {s - \frac{3}{2}} \right) + \frac{5}{2}}}{{{{\left( {s - \frac{3}{2}} \right)}^2} - \frac{{13}}{4}}}\\ & = \frac{1}{2}\left( {\frac{{3\left( {s - \frac{3}{2}} \right)}}{{{{\left( {s - \frac{3}{2}} \right)}^2} - \frac{{13}}{4}}} + \frac{{\frac{5}{2}\frac{{\sqrt {13} }}{{\sqrt {13} }}}}{{{{\left( {s - \frac{3}{2}} \right)}^2} - \frac{{13}}{4}}}} \right)\\ g\left( t \right) & = \frac{1}{2}\left( {3{{\bf{e}}^{\frac{{3\,t}}{2}}}\cosh \left( {\frac{{\sqrt {13} }}{2}t} \right) + \frac{5}{{\sqrt {13} }}{{\bf{e}}^{\frac{{3\,t}}{2}}}\sinh \left( {\frac{{\sqrt {13} }}{2}t} \right)} \right)\end{align*} In correcting the numerator of the second term, notice that I only put in the square root since we already had the “over 2” part of the fraction that we needed in the numerator. d $$\displaystyle H\left( s \right) = \frac{{s + 7}}{{{s^2} - 3s - 10}}$$ Show Solution This one appears to be similar to the previous two, but it actually isn’t. The denominators in the previous two couldn’t be easily factored. In this case the denominator does factor and so we need to deal with it differently. Here is the transform with the factored denominator. $H\left( s \right) = \frac{{s + 7}}{{\left( {s + 2} \right)\left( {s - 5} \right)}}$ The denominator of this transform seems to suggest that we’ve got a couple of exponentials, however in order to be exponentials there can only be a single term in the denominator and no $$s$$’s in the numerator. To fix this we will need to do partial fractions on this transform. In this case the partial fraction decomposition will be $H\left( s \right) = \frac{A}{{s + 2}} + \frac{B}{{s - 5}}$ Don’t remember how to do partial fractions? In this example we’ll show you one way of getting the values of the constants and after this example we’ll review how to get the correct form of the partial fraction decomposition. Okay, so let’s get the constants. There is a method for finding the constants that will always work, however it can lead to more work than is sometimes required. Eventually, we will need that method, however in this case there is an easier way to find the constants. Regardless of the method used, the first step is to actually add the two terms back up. This gives the following. $\frac{{s + 7}}{{\left( {s + 2} \right)\left( {s - 5} \right)}} = \frac{{A\left( {s - 5} \right) + B\left( {s + 2} \right)}}{{\left( {s + 2} \right)\left( {s - 5} \right)}}$ Now, this needs to be true for any $$s$$ that we should choose to put in. So, since the denominators are the same we just need to get the numerators equal. Therefore, set the numerators equal. $s + 7 = A\left( {s - 5} \right) + B\left( {s + 2} \right)$ Again, this must be true for ANY value of $$s$$ that we want to put in. So, let’s take advantage of that. If it must be true for any value of $$s$$ then it must be true for $$s = - 2$$, to pick a value at random. In this case we get, $5 = A\left( { - 7} \right) + B\left( 0 \right)\hspace{0.25in} \Rightarrow \hspace{0.25in} A = - \frac{5}{7}$ We found $$A$$ by appropriately picking $$s$$. We can $$B$$ in the same way if we chose $$s = 5$$. $12 = A\left( 0 \right) + B\left( 7 \right)\hspace{0.25in} \Rightarrow \hspace{0.25in}B = \frac{{12}}{7}$ This will not always work, but when it does it will usually simplify the work considerably. 
So, with these constants the transform becomes, $H\left( s \right) = \frac{{ - \frac{5}{7}}}{{s + 2}} + \frac{{\frac{{12}}{7}}}{{s - 5}}$ We can now easily do the inverse transform to get, $h\left( t \right) = - \frac{5}{7}{{\bf{e}}^{ - 2t}} + \frac{{12}}{7}{{\bf{e}}^{5t}}$ The last part of this example needed partial fractions to get the inverse transform. When we finally get back to differential equations and we start using Laplace transforms to solve them, you will quickly come to understand that partial fractions are a fact of life in these problems. Almost every problem will require partial fractions to one degree or another. Note that we could have done the last part of this example as we had done the previous two parts. If we had we would have gotten hyperbolic functions. However, recalling the definition of the hyperbolic functions we could have written the result in the form we got from the way we worked our problem. However, most students have a better feel for exponentials than they do for hyperbolic functions and so it’s usually best to just use partial fractions and get the answer in terms of exponentials. It may be a little more work, but it will give a nicer (and easier to work with) form of the answer. Be warned that in my class I’ve got a rule that if the denominator can be factored with integer coefficients then it must be. So, let’s remind you how to get the correct partial fraction decomposition. The first step is to factor the denominator as much as possible. Then for each term in the denominator we will use the following table to get a term or terms for our partial fraction decomposition. Factor in denominator Term in partial fraction decomposition $$ax + b$$ $$\displaystyle \frac{A}{{ax + b}}$$ $${\left( {ax + b} \right)^k}$$ $$\displaystyle \frac{{{A_1}}}{{ax + b}} + \frac{{{A_2}}}{{{{\left( {ax + b} \right)}^2}}} + \cdots + \frac{{{A_k}}}{{{{\left( {ax + b} \right)}^k}}}$$ $$a{x^2} + bx + c$$ $$\displaystyle \frac{{Ax + B}}{{a{x^2} + bx + c}}$$ $${\left( {a{x^2} + bx + c} \right)^k}$$ $$\displaystyle \frac{{{A_1}x + {B_1}}}{{a{x^2} + bx + c}} + \frac{{{A_2}x + {B_2}}}{{{{\left( {a{x^2} + bx + c} \right)}^2}}} + \cdots + \frac{{{A_k}x + {B_k}}}{{{{\left( {a{x^2} + bx + c} \right)}^k}}}$$ Notice that the first and third cases are really special cases of the second and fourth cases respectively. So, let’s do a couple more examples to remind you how to do partial fractions. Example 3 Find the inverse transform of each of the following. 1. $$\displaystyle G\left( s \right) = \frac{{86s - 78}}{{\left( {s + 3} \right)\left( {s - 4} \right)\left( {5s - 1} \right)}}$$ 2. $$\displaystyle F\left( s \right) = \frac{{2 - 5s}}{{\left( {s - 6} \right)\left( {{s^2} + 11} \right)}}$$ 3. $$\displaystyle G\left( s \right) = \frac{{25}}{{{s^3}\left( {{s^2} + 4s + 5} \right)}}$$ Show All Solutions Hide All Solutions a $$\displaystyle G\left( s \right) = \frac{{86s - 78}}{{\left( {s + 3} \right)\left( {s - 4} \right)\left( {5s - 1} \right)}}$$ Show Solution Here’s the partial fraction decomposition for this part. $G\left( s \right) = \frac{A}{{s + 3}} + \frac{B}{{s - 4}} + \frac{C}{{5s - 1}}$ Now, this time we won’t go into quite the detail as we did in the last example. We are after the numerator of the partial fraction decomposition and this is usually easy enough to do in our heads. Therefore, we will go straight to setting numerators equal. 
$86s - 78 = A\left( {s - 4} \right)\left( {5s - 1} \right) + B\left( {s + 3} \right)\left( {5s - 1} \right) + C\left( {s + 3} \right)\left( {s - 4} \right)$ As with the last example, we can easily get the constants by correctly picking values of $$s$$. \begin{align*} & s = - 3 & - 336 & = A\left( { - 7} \right)\left( { - 16} \right) & \Rightarrow \hspace{0.25in}A & = - 3\\ & s = \frac{1}{5}& - \frac{{304}}{5} & = C\left( {\frac{{16}}{5}} \right)\left( { - \frac{{19}}{5}} \right) & \Rightarrow \hspace{0.25in}C & = 5\\ & s = 4 & 266 & = B\left( 7 \right)\left( {19} \right) & \Rightarrow \hspace{0.25in}B & = 2\end{align*} So, the partial fraction decomposition for this transform is, $G\left( s \right) = - \frac{3}{{s + 3}} + \frac{2}{{s - 4}} + \frac{5}{{5s - 1}}$ Now, in order to actually take the inverse transform we will need to factor a 5 out of the denominator of the last term. The corrected transform as well as its inverse transform is. \begin{align*}G\left( s \right) & = - \frac{3}{{s + 3}} + \frac{2}{{s - 4}} + \frac{1}{{s - \frac{1}{5}}}\\ g\left( t \right) & = - 3{{\bf{e}}^{ - 3t}} + 2{{\bf{e}}^{4t}} + {{\bf{e}}^{\frac{t}{5}}}\end{align*} b $$\displaystyle F\left( s \right) = \frac{{2 - 5s}}{{\left( {s - 6} \right)\left( {{s^2} + 11} \right)}}$$ Show Solution So, for the first time we’ve got a quadratic in the denominator. Here’s the decomposition for this part. $F\left( s \right) = \frac{A}{{s - 6}} + \frac{{Bs + C}}{{{s^2} + 11}}$ Setting numerators equal gives, $2 - 5s = A\left( {{s^2} + 11} \right) + \left( {Bs + C} \right)\left( {s - 6} \right)$ Okay, in this case we could use $$s = 6$$ to quickly find $$A$$, but that’s all it would give. In this case we will need to go the “long” way around to getting the constants. Note that this way will always work but is sometimes more work than is required. The “long” way is to completely multiply out the right side and collect like terms. \begin{align*}2 - 5s & = A\left( {{s^2} + 11} \right) + \left( {Bs + C} \right)\left( {s - 6} \right)\\ & = A{s^2} + 11A + B{s^2} - 6Bs + Cs - 6C\\ & = \left( {A + B} \right){s^2} + \left( { - 6B + C} \right)s + 11A - 6C\end{align*} In order for these two to be equal the coefficients of the $$s^{2}$$, $$s$$ and the constants must all be equal. So, setting coefficients equal gives the following system of equations that can be solved. \left. {\begin{aligned} & {s^2}: & A + B & = 0\\ & {s^1}: & - 6B + C & = - 5\\ & {s^0}: & 11A - 6C & = 2\end{aligned}} \right\}\hspace{0.25in} \Rightarrow \hspace{0.25in}A = - \frac{{28}}{{47}},\,\,\,\,B = \frac{{28}}{{47}},\,\,\,\,\,C = - \frac{{67}}{{47}} Notice that we used $$s^{0}$$ to denote the constants. This is habit on my part and isn’t really required, it’s just what I’m used to doing. Also, the coefficients are fairly messy fractions in this case. Get used to that. They will often be like this when we get back into solving differential equations. There is a way to make our life a little easier as well with this. Since all of the fractions have a denominator of 47 we’ll factor that out as we plug them back into the decomposition. This will make dealing with them much easier. The partial fraction decomposition is then, \begin{align*}F\left( s \right) & = \frac{1}{{47}}\left( { - \frac{{28}}{{s - 6}} + \frac{{28s - 67}}{{{s^2} + 11}}} \right)\\ & = \frac{1}{{47}}\left( { - \frac{{28}}{{s - 6}} + \frac{{28s}}{{{s^2} + 11}} - \frac{{67\frac{{\sqrt {11} }}{{\sqrt {11} }}}}{{{s^2} + 11}}} \right)\end{align*} The inverse transform is then. 
$f\left( t \right) = \frac{1}{{47}}\left( { - 28{{\bf{e}}^{6t}} + 28\cos \left( {\sqrt {11} t} \right) - \frac{{67}}{{\sqrt {11} }}\sin \left( {\sqrt {11} t} \right)} \right)$ c $$\displaystyle G\left( s \right) = \frac{{25}}{{{s^3}\left( {{s^2} + 4s + 5} \right)}}$$ Show Solution With this last part do not get excited about the $$s^{3}$$. We can think of this term as ${s^3} = {\left( {s - 0} \right)^3}$ and it becomes a linear term to a power. So, the partial fraction decomposition is $G\left( s \right) = \frac{A}{s} + \frac{B}{{{s^2}}} + \frac{C}{{{s^3}}} + \frac{{Ds + E}}{{{s^2} + 4s + 5}}$ Setting numerators equal and multiplying out gives. \begin{align*}25 & = A{s^2}\left( {{s^2} + 4s + 5} \right) + Bs\left( {{s^2} + 4s + 5} \right) + C\left( {{s^2} + 4s + 5} \right) + \left( {Ds + E} \right){s^3}\\ & = \left( {A + D} \right){s^4} + \left( {4A + B + E} \right){s^3} + \left( {5A + 4B + C} \right){s^2} + \left( {5B + 4C} \right)s + 5C\end{align*} Setting coefficients equal gives the following system. \left. {\begin{aligned}& {s^4}: & A + D & = 0\\ & {s^3}: & 4A + B + E & = 0\\ & {s^2}: & 5A + 4B + C & = 0\\ & {s^1}: & 5B + 4C & = 0\\ & {s^0}: & 5C & = 25\end{aligned}} \right\}\hspace{0.25in} \Rightarrow \hspace{0.25in}A = \frac{{11}}{5},B = - 4,C = 5,D = - \frac{{11}}{5},E = - \frac{{24}}{5} This system looks messy, but it’s easier to solve than it might look. First, we get $$C$$ for free from the last equation. We can then use the fourth equation to find $$B$$. The third equation will then give $$A$$, etc. When plugging into the decomposition we’ll get everything with a denominator of 5, then factor that out as we did in the previous part in order to make things easier to deal with. $G\left( s \right) = \frac{1}{5}\left( {\frac{{11}}{s} - \frac{{20}}{{{s^2}}} + \frac{{25}}{{{s^3}}} - \frac{{11s + 24}}{{{s^2} + 4s + 5}}} \right)$ Note that we also factored a minus sign out of the last two terms. To complete this part we’ll need to complete the square on the later term and fix up a couple of numerators. Here’s that work. \begin{align*}G\left( s \right) & = \frac{1}{5}\left( {\frac{{11}}{s} - \frac{{20}}{{{s^2}}} + \frac{{25}}{{{s^3}}} - \frac{{11s + 24}}{{{s^2} + 4s + 5}}} \right)\\ & = \frac{1}{5}\left( {\frac{{11}}{s} - \frac{{20}}{{{s^2}}} + \frac{{25}}{{{s^3}}} - \frac{{11\left( {s + 2 - 2} \right) + 24}}{{{{\left( {s + 2} \right)}^2} + 1}}} \right)\\ & = \frac{1}{5}\left( {\frac{{11}}{s} - \frac{{20}}{{{s^2}}} + \frac{{25\frac{{2!}}{{2!}}}}{{{s^3}}} - \frac{{11\left( {s + 2} \right)}}{{{{\left( {s + 2} \right)}^2} + 1}} - \frac{2}{{{{\left( {s + 2} \right)}^2} + 1}}} \right)\end{align*} The inverse transform is then. $g\left( t \right) = \frac{1}{5}\left( {11 - 20t + \frac{{25}}{2}{t^2} - 11{{\bf{e}}^{ - 2t}}\cos \left( t \right) - 2{{\bf{e}}^{ - 2t}}\sin \left( t \right)} \right)$ So, one final time. Partial fractions are a fact of life when using Laplace transforms to solve differential equations. Make sure that you can deal with them.
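If you have access to a computer algebra system, results like these are easy to double-check. The snippet below is a quick SymPy sanity check of Example 3(a); it is an illustration added here, not part of the original notes, and the printed forms may differ cosmetically from the hand-worked answers.

```python
# Sanity check of Example 3(a) with SymPy (illustrative, not part of the original notes).
import sympy as sp

s, t = sp.symbols('s t', positive=True)
G = (86*s - 78) / ((s + 3) * (s - 4) * (5*s - 1))

# Partial fraction decomposition: expect -3/(s + 3) + 2/(s - 4) + 5/(5s - 1).
print(sp.apart(G, s))

# Inverse transform: expect -3e^(-3t) + 2e^(4t) + e^(t/5)
# (SymPy may include a Heaviside(t) factor, which equals 1 for t > 0).
print(sp.inverse_laplace_transform(G, s, t))
```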
## Latent Space Models for Neural Data Many scientific fields involve the study of network data, including social networks, networks in statistical physics, biological networks, and information networks (Goldenberg, Zheng, Fienberg, & Airoldi, 2010; Newman, 2010). What we can learn about nodes in a network from their connectivity patterns? We can begin to study this using a latent space model (Hoff, Raftery, & Handcock, 2002). Latent space models embed nodes in the network in a latent space, where the likelihood of forming an edge between two nodes depends on their distance in the latent space. We will analyze network data from neuroscience. An interactive version with Jupyter notebook is available here. ### Data The data comes from Mark Newman’s repository. It is a weighted, directed network representing the neural network of the nematode C. Elegans compiled by Watts & Strogatz (1998) using experimental data by White, Southgate, Thomson, & Brenner (1986). The neural network consists of around $$300$$ neurons. Each connection between neurons is associated with a weight (positive integer) capturing the strength of the connection. from observations import celegans x_train = celegans("~/data") ### Model What can we learn about the neurons from their connectivity patterns? Using a latent space model (Hoff et al., 2002), we will learn a latent embedding for each neuron to capture the similarities between them. Each neuron $$n$$ is a node in the network and is associated with a latent position $$z_n\in\mathbb{R}^K$$. We place a Gaussian prior on each of the latent positions. The log-odds of an edge between node $$i$$ and $$j$$ is proportional to the Euclidean distance between the latent representations of the nodes $$|z_i- z_j|$$. Here, we model the weights ($$Y_{ij}$$) of the edges with a Poisson likelihood. The rate is the reciprocal of the distance in latent space. The generative process is as follows: 1. For each node $$n=1,\ldots,N$$, \begin{aligned} z_n \sim N(0,I).\end{aligned} 2. For each edge $$(i,j)\in\{1,\ldots,N\}\times\{1,\ldots,N\}$$, \begin{aligned} Y_{ij} \sim \text{Poisson}\Bigg(\frac{1}{|z_i - z_j|}\Bigg).\end{aligned} In Edward, we write the model as follows. from edward.models import Normal, Poisson N = x_train.shape[0] # number of data points K = 3 # latent dimensionality z = Normal(loc=tf.zeros([N, K]), scale=tf.ones([N, K])) # Calculate N x N distance matrix. # 1. Create a vector, [||z_1||^2, ||z_2||^2, ..., ||z_N||^2], and tile # it to create N identical rows. xp = tf.tile(tf.reduce_sum(tf.pow(z, 2), 1, keep_dims=True), [1, N]) # 2. Create a N x N matrix where entry (i, j) is ||z_i||^2 + ||z_j||^2 # - 2 z_i^T z_j. xp = xp + tf.transpose(xp) - 2 * tf.matmul(z, z, transpose_b=True) # 3. Invert the pairwise distances and make rate along diagonals to # be close to zero. xp = 1.0 / tf.sqrt(xp + tf.diag(tf.zeros(N) + 1e3)) x = Poisson(rate=xp) ### Inference Maximum a posteriori (MAP) estimation is simple in Edward. Two lines are required: Instantiating inference and running it. inference = ed.MAP([z], data={x: x_train}) See this extended tutorial about MAP estimation in Edward. One could instead run variational inference. This requires specifying a variational model and instantiating KLqp. qz = Normal(loc=tf.get_variable("qz/loc", [N * K]), scale=tf.nn.softplus(tf.get_variable("qz/scale", [N * K]))) inference = ed.KLqp({z: qz}, data={x: x_train}) See this extended tutorial about variational inference in Edward. 
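As an aside (this check is not part of the Edward tutorial), the three-step distance computation in the model block above can be verified with a few lines of plain NumPy, using the identity ||z_i - z_j||^2 = ||z_i||^2 + ||z_j||^2 - 2 z_i^T z_j:

```python
# Small NumPy illustration (not part of the Edward tutorial) of the pairwise-distance
# trick used in the model block above.
import numpy as np

rng = np.random.default_rng(0)
N, K = 5, 3
z = rng.normal(size=(N, K))

sq_norms = np.sum(z**2, axis=1, keepdims=True)       # shape (N, 1)
sq_dists = sq_norms + sq_norms.T - 2.0 * z @ z.T     # entry (i, j) = ||z_i - z_j||^2
np.fill_diagonal(sq_dists, 1e3)                      # large diagonal value, as in the model
rate = 1.0 / np.sqrt(sq_dists)                       # Poisson rate: reciprocal of distance

# Cross-check the off-diagonal entries against a direct computation.
direct = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
mask = ~np.eye(N, dtype=bool)
assert np.allclose(np.sqrt(sq_dists)[mask], direct[mask])
```

The large value added on the diagonal plays the same role as `tf.diag(tf.zeros(N) + 1e3)` in the model code: it keeps the self-rate finite and close to zero.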
Finally, the following line runs the inference procedure for 2500 iterations. inference.run(n_iter=2500) ### Acknowledgments We thank Maja Rudolph for writing the initial version of this tutorial. ### References Goldenberg, A., Zheng, A. X., Fienberg, S. E., & Airoldi, E. M. (2010). A survey of statistical network models. Foundations and Trends in Machine Learning. Hoff, P. D., Raftery, A. E., & Handcock, M. S. (2002). Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460), 1090–1098. Newman, M. (2010). Networks: An introduction. Oxford University Press. Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of ‘small-world’networks. Nature, 393(6684), 440–442. White, J. G., Southgate, E., Thomson, J. N., & Brenner, S. (1986). The structure of the nervous system of the nematode caenorhabditis elegans. Philos Trans R Soc Lond B Biol Sci, 314(1165), 1–340.
Section 8.2 #11: $\displaystyle\int x e^{-4x} \mathrm{d}x$ Solution: Let $u=x$ and $\mathrm{d}v=e^{-4x}$. Then $\mathrm{d}u=\mathrm{d}x$ and $v = -\dfrac{1}{4} e^{-4x}$. Therefore using integration by parts we compute $$\begin{array}{ll} \displaystyle\int xe^{-4x} \mathrm{d}x &= -\dfrac{1}{4}xe^{-4x} + \dfrac{1}{4} \displaystyle\int e^{-4x} \mathrm{d}x \\ &= -\dfrac{1}{4}xe^{-4x} - \dfrac{1}{16} e^{-4x} + C. \end{array}$$ Section 8.2 #13: $\displaystyle\int x^3 e^x \mathrm{d}x$ Solution: Let $u_1=x^3$ and $\mathrm{d}v_1=e^x$. Then $\mathrm{d}u_1 = 3x^2 \mathrm{d}x$ and $v_1 = e^x$. Therefore one application of integration by parts yields $$(*) \hspace{35pt} \displaystyle\int x^3 e^x \mathrm{d}x = x^3 e^x - 3\displaystyle\int x^2 e^x \mathrm{d}x.$$ Now we will compute that integral. Let $u_2=x^2$ and $\mathrm{d}v_2=e^x$. Then $\mathrm{d}u_2 = 2x \mathrm{d}x$ and $v_2=e^x$ yielding $$(**) \hspace{35pt} \displaystyle\int x^2 e^x \mathrm{d}x = x^2 e^x - 2 \displaystyle\int x e^x \mathrm{d}x.$$ Now we will compute that integral. Let $u_3=x$ and $\mathrm{d}v_3=e^x$. Then $\mathrm{d}u_3 = \mathrm{d}x$ and $v_3=e^x$ yielding $$(***) \hspace{35pt} \displaystyle\int xe^x \mathrm{d}x = xe^x - \displaystyle\int e^x \mathrm{d}x = xe^x - e^x + C.$$ Combining $(*)$, $(**)$, and $(***)$ allows us to compute $$\begin{array}{ll} \displaystyle\int x^3 e^x \mathrm{d}x &= x^3e^x - 3 \left( \displaystyle\int x^2 e^x \mathrm{d}x \right) \\ &= x^3 e^x - 3 \left( x^2e^x - 2 \left( \displaystyle\int xe^x \mathrm{d}x \right) \right) \\ &= x^3 e^x - 3 \left( x^2 e^x - 2 \left( xe^x -e^x \right) \right) + C \\ &= x^3e^x - 3x^2e^x + 6xe^x - 6e^x + C. \end{array}$$ Section 8.2 #26: $\displaystyle\int x^2 \cos(x) \mathrm{d}x$ Solution: Let $u_1=x^2$ and $\mathrm{d}v_1=\cos(x)$. Then $\mathrm{d}u_1=2x\mathrm{d}x$ and $v_1=\sin(x)$ and so $$(*) \hspace{35pt} \displaystyle\int x^2 \cos(x) \mathrm{d}x = x^2 \sin(x) - 2\displaystyle\int x\sin(x).$$ Now we compute that integral. Let $u_2=x$ and $\mathrm{d}v_2=\sin(x)$. Then $\mathrm{d}u_2=\mathrm{d}x$ and $v_2=-\cos(x)$ and so $$(**) \hspace{35pt} \displaystyle\int x \sin(x) \mathrm{d}x = -x\cos(x) + \displaystyle\int \cos(x) \mathrm{d}x=-x\cos(x) + \sin(x) + C.$$ Combining $(*)$ and $(**)$ allows us to compute $$\begin{array}{ll} \displaystyle\int x^2 \cos(x) \mathrm{d}x &= x^2 \sin(x) - 2 \left( \displaystyle\int x \sin(x) \mathrm{d}x \right) \\ &= x^2 \sin(x) - 2 \left( -x \cos(x) + \sin(x) \right) + C \\ &= x^2\sin(x) - 2\sin(x) + 2x \cos(x) + C. \end{array}$$ Section 8.2 #45: $\displaystyle\int_0^1 e^x \sin(x) \mathrm{d}x$ Solution: First we will find the antiderivative of $e^x \sin(x)$. To do this, first let $u_1=e^x$ and $\mathrm{d}v_1=\sin(x)$. Then $\mathrm{d}u_1=e^x \mathrm{d}x$ and $v_1=-\cos(x)$ and so $$(*) \hspace{35pt} \displaystyle\int e^x \sin(x) \mathrm{d}x = -e^x \cos(x) + \displaystyle\int e^x \cos(x) \mathrm{d}x.$$ Now we compute that integral. Let $u_2=e^x$ and $\mathrm{d}v_2=\cos(x) \mathrm{d}x$. Then $\mathrm{d}u_2=e^x \mathrm{d}x$ and $v_2 = \sin(x)$ so we compute $$(**) \hspace{35pt} \displaystyle\int e^x \cos(x) \mathrm{d}x = e^x \sin(x) - \displaystyle\int e^x \sin(x) \mathrm{d}x.$$ Therefore we may use $(*)$ and $(**)$ to compute $$\begin{array}{ll} \displaystyle\int e^x \sin(x) \mathrm{d}x &= -e^x \cos(x) + \displaystyle\int e^x \cos(x) \mathrm{d}x \\ &= -e^x \cos(x) + e^x \sin(x) - \displaystyle\int e^x \sin(x) \mathrm{d}x. 
\end{array}$$ Therefore add $\displaystyle\int e^x \sin(x) \mathrm{d}x$ to both sides to get $$2 \displaystyle\int e^x \sin(x) \mathrm{d}x = e^x \sin(x) - e^x \cos(x).$$ Dividing by $2$ yields the antiderivative $$\displaystyle\int e^x \sin(x) \mathrm{d}x = \dfrac{e^x \sin(x) - e^x \cos(x)}{2}.$$ Now we apply this antiderivative to find the definite integral in question: compute $$\begin{array}{ll} \displaystyle\int_0^1 e^x \sin(x) \mathrm{d}x &= \left. \dfrac{e^x \sin(x) - e^x \cos(x)}{2} \right|_0^1 \\ &= \dfrac{e \sin(1) - e\cos(1)}{2} - \dfrac{0 -1}{2} \\ &=\dfrac{e\sin(1) -e \cos(1) + 1}{2}. \end{array}$$
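For readers who want an independent check, the boxed results above can be reproduced with a computer algebra system. The following SymPy snippet (added here for illustration, not part of the assigned solutions) verifies #13 and #45:

```python
# Cross-check of Section 8.2 #13 and #45 with SymPy (illustrative only).
import sympy as sp

x = sp.symbols('x')

# Section 8.2 #13: antiderivative of x^3 e^x
print(sp.expand(sp.integrate(x**3 * sp.exp(x), x)))
# -> x**3*exp(x) - 3*x**2*exp(x) + 6*x*exp(x) - 6*exp(x)

# Section 8.2 #45: definite integral of e^x sin(x) over [0, 1]
val = sp.integrate(sp.exp(x) * sp.sin(x), (x, 0, 1))
print(sp.simplify(val))
# -> agrees with (e*sin(1) - e*cos(1) + 1)/2 from the worked solution
```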
# Bug 3842 - latex is not working at all

Status: CLOSED DUPLICATE of bug 2727
Product: Red Hat Linux (Retired) · Component: tetex · Version: 6.0 · Platform: i386 Linux · Priority/Severity: high/high
Assignee: Jay Turner
Reported: 1999-06-30 21:50 UTC by dumas · Last modified: 2015-01-07 23:37 UTC · CC: 1 user (srevivo)
Doc Type: Bug Fix · Last Closed: 1999-06-30 22:40:18 UTC

dumas, 1999-06-30 21:50:50 UTC:
It looks like there is a problem with the TEXINPUTS path. The machine used is a regular update from 5.2 to 6.0. To reproduce, try the following code; letter.cls will never be found.

\documentclass[12pt]{letter}
\begin{document}
\begin{letter}
foo
\end{letter}
\end{document}

Jeff Johnson, 1999-06-30 22:40:59 UTC:
*** This bug has been marked as a duplicate of 2727 ***
# Downloadable Data Management (astropy.utils.data)¶ ## Introduction¶ A number of Astropy’s tools work with data sets that are either awkwardly large (e.g., solar_system_ephemeris) or regularly updated (e.g., IERS_B) or both (e.g., IERS_A). This kind of data - authoritative data made available on the Web, and possibly updated from time to time - is reasonably common in astronomy. The Astropy Project therefore provides some tools for working with such data. The primary tool for this is the astropy cache. This is a repository of downloaded data, indexed by the URL where it was obtained. The tool download_file and various other things built upon it can use this cache to request the contents of a URL, and (if they choose to use the cache) the data will only be downloaded if it is not already present in the cache. The tools can be instructed to obtain a new copy of data that is in the cache but has been updated online. The astropy cache is stored in a centralized place (on Linux machines by default it is \$HOME/.astropy/cache; see Configuration System (astropy.config) for more details). You can check its location on your machine: >>> import astropy.config.paths >>> astropy.config.paths.get_cache_dir() '/home/burnell/.astropy/cache' This centralization means that the cache is persistent and shared between all astropy runs in any virtualenv by one user on one machine (possibly more if your home directory is shared between multiple machines). This can dramatically accelerate astropy operations and reduce the load on servers, like those of the IERS, that were not designed for heavy Web traffic. If you find the cache has corrupted or outdated data in it, you can remove an entry or clear the whole thing with clear_download_cache. The files in the cache directory are named according to a cryptographic hash of their contents (currently MD5, so in principle malevolent entities can cause collisions, though the security risks this poses are marginal at most). Thus files with the same content share storage. The modification times on these files normally indicate when they were last downloaded from the Internet. ## Usage Within Astropy¶ For the most part, you can ignore the caching mechanism and rely on astropy to have the correct data when you need it. For example, precise time conversions and sky locations need measured tables of the Earth’s rotation from the IERS. The table IERS_Auto provides the infrastructure for many of these calculations. It makes available Earth rotation parameters, and if you request them for a time more recent than its tables cover, it will download updated tables from the IERS. So for example asking what time it is in UT1 (a timescale that reflects the irregularity of the Earth’s rotation) probably triggers a download of the IERS data: >>> from astropy.time import Time >>> Time.now().ut1 |============================================| 3.2M/3.2M (100.00%) 1s <Time object: scale='ut1' format='datetime' value=2019-09-22 08:39:03.812731> But running it a second time does not require any new download: >>> Time.now().ut1 <Time object: scale='ut1' format='datetime' value=2019-09-22 08:41:21.588836> Some data is also made available from the Astropy data server either for use within astropy or for your convenience. 
These are available more conveniently with the get_pkg_data_* functions:

>>> from astropy.utils.data import get_pkg_data_contents
>>> print(get_pkg_data_contents("coordinates/sites-un-ascii"))
# these are all mappings from the name in sites.json (which is ASCII-only) to the "true" unicode names
TUBITAK->TÜBİTAK

## Usage From Outside Astropy

Users of astropy can also make use of astropy's caching and downloading mechanism. In its simplest form, this amounts to using download_file with the cache=True argument to obtain their data, from the cache if the data is there:

>>> from astropy.utils.data import download_file
>>> from astropy.utils.iers import IERS_B_URL, IERS_B
>>> IERS_B.open(download_file(IERS_B_URL, cache=True))["year", "month", "day"][-3:]
<IERS_B length=3>
 year month  day
int64 int64 int64
----- ----- -----
 2019     8     4
 2019     8     5
 2019     8     6

If users want to update the cache to a newer version of the data (note that here the data was already up to date; users will have to decide for themselves when to obtain new versions), they can use the update_cache argument:

>>> IERS_B.open(download_file(IERS_B_URL,
...                           cache=True,
...                           update_cache=True)
...             )["year", "month", "day"][-3:]
|=========================================| 3.2M/3.2M (100.00%) 0s
<IERS_B length=3>
 year month  day
int64 int64 int64
----- ----- -----
 2019     8    18
 2019     8    19
 2019     8    20

If they are concerned that the primary source of the data may be overloaded or unavailable, they can use the sources argument to provide a list of sources to attempt downloading from, in order. This need not include the original source. Regardless, the data will be stored in the cache under the original URL requested:

>>> f = download_file("ftp://ssd.jpl.nasa.gov/pub/eph/planets/bsp/de405.bsp",
...                   cache=True,
...                   sources=['https://data.nanograv.org/static/data/ephem/de405.bsp',
...                            'ftp://ssd.jpl.nasa.gov/pub/eph/planets/bsp/de405.bsp'])
|========================================|  65M/ 65M (100.00%) 19s

## Cache Management

Because the cache is persistent, it is possible for it to become inconveniently large, or become filled with irrelevant data. While it is simply a directory on disk, each file is supposed to represent the contents of a URL, and many URLs do not make acceptable on-disk filenames (for example, containing troublesome characters like ":" and "~"). There is reason to worry that multiple astropy processes accessing the cache simultaneously might lead to cache corruption. The cache is therefore protected by a lock and indexed by a persistent dictionary mapping URLs to hashes of the file contents, while the file contents are stored in files named by their hashes. So access to the cache is more convenient with a few helpers provided by astropy.utils.data. If your cache starts behaving oddly you can use check_download_cache to examine your cache contents and raise an exception if it finds any anomalies. If a single file is undesired or damaged, it can be removed by calling clear_download_cache with an argument that is the URL it was obtained from, the filename of the downloaded file, or the hash of its contents. Should the cache ever become badly corrupted, clear_download_cache with no arguments will simply delete the whole directory, freeing the space and removing any inconsistent data. Of course, if you remove data using either of these tools, any processes currently using that data may be disrupted (or, under Windows, deleting the cache may not be possible until those processes terminate). So use clear_download_cache with care. To check the total space occupied by the cache, use cache_total_size.
The contents of the cache can be listed with get_cached_urls, and the presence of a particular URL in the cache can be tested with is_url_in_cache. More general manipulations can be carried out using cache_contents, which returns a dict mapping URLs to on-disk filenames of their contents. If you want to transfer the cache to another computer, or preserve its contents for later use, you can use the functions export_download_cache to produce a ZIP file listing some or all of the cache contents, and import_download_cache to load the astropy cache from such a ZIP file. ## Using Astropy With Limited or No Internet Access¶ You might want to use astropy on a telescope control machine behind a strict firewall. Or you might be running continuous integration (CI) on your astropy server and want to avoid hammering astronomy servers on every pull request for every architecture. Or you might not have access to US government or military web servers. Whichever is the case, you may need to avoid astropy needing data from the Internet. There is no simple and complete solution to this problem at the moment, but there are tools that can help. Exactly which external data your project depends on will depend on what parts of astropy you use and how. The most general solution is to use a computer that can access the Internet to run a version of your calculation that pulls in all of the data files you will require, including sufficiently up-to-date versions of files like the IERS data that update regularly. Then once the cache on this connected machine is loaded with everything necessary, transport the cache contents to your target machine by whatever means you have available, whether by copying via an intermediate machine, portable disk drive, or some other tool. The cache directory itself is somewhat portable between machines of the same UNIX flavour; this may be sufficient if you can persuade your CI system to cache the directory between runs. For greater portability, though, you can simply use export_download_cache and import_download_cache, which are portable and will allow adding files to an existing cache directory. If your application needs IERS data specifically, you can download the appropriate IERS table, covering the appropriate time span, by any means you find convenient. You can then load this file into your application and use the resulting table rather than IERS_Auto. In fact, the IERS B table is small enough that a version (not necessarily recent) is bundled with astropy as astropy.utils.iers.IERS_B_FILE. Using a specific non-automatic table also has the advantage of giving you control over exactly which version of the IERS data your application is using. See also Working offline. If your issue is with certain specific servers, even if they are the ones astropy normally uses, if you can anticipate exactly which files will be needed (or just pick up after astropy fails to obtain them) and make those files available somewhere else, you can request they be downloaded to the cache using download_file with the sources argument set to locations you know do work. You can also set sources to an empty list to ensure that download_file does not attempt to use the Internet at all. If you have a particular URL that is giving you trouble, you can download it using some other tool (e.g., wget), possibly on another machine, and then use import_file_to_cache.
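Putting the pieces above together, a typical cache-management session might look like the sketch below. It uses only the public helpers named in this document; the URL is a placeholder, so treat this as an illustrative outline rather than a runnable recipe for a specific dataset.

```python
# Sketch of the cache-management helpers described above (URL is a placeholder).
from astropy.utils.data import (download_file, get_cached_urls, is_url_in_cache,
                                cache_total_size, clear_download_cache)

url = "https://example.com/some/data/file.dat"   # illustrative URL, not a real data source

# Download once; later calls with cache=True reuse the cached copy.
path = download_file(url, cache=True)

print(is_url_in_cache(url))      # True once the download has succeeded
print(get_cached_urls())         # list of URLs currently in the cache
print(cache_total_size())        # total bytes used by the cache

# Remove just this entry, or call clear_download_cache() with no argument
# to delete the whole cache directory.
clear_download_cache(url)
```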
# Criteria for Aut(G) to be simple It is well known that the automorphisms of a group $G$ form a group under composition, and that the group of inner automorphisms $\phi (x)=gxg^{-1}$ forms a normal subgroup of $\mbox{Aut}(G)$. Thus, $\mbox{Aut}(G)$ is simple if and only if either $\mbox{Inn}(G)=\mbox{Aut}(G)$ or $\mbox{Inn}(G)$ is trivial. In the second case, since $G/Z(G)=\mbox{Inn}(G)$, $G$ must be abelian. My question is, when does $\mbox{Inn}(G)=\mbox{Aut}(G)$? Or, as it is unlikely that the general case is not fully understood, are there nice classes of groups for which there are a nice set of criteria for $\mbox{Inn}(G)=\mbox{Aut}(G)$. - There are some examples at en.wikipedia.org/wiki/Outer_automorphism_group . – darij grinberg Aug 20 '10 at 20:25 One more remark: "if and only if" is wrong. $G=S_n$ for $n\neq 6$ is not simple, yet Inn = Aut. – darij grinberg Aug 20 '10 at 20:39 Inn(G) = Aut(G) does not imply Aut(G) is simple. For instance G nonabelian of order 6 is not simple, but Inn(G) = Aut(G). If G is centerless, then Inn(G) = Aut(G) is called being a complete group. If Aut(G) is simple, then Inn(G) = Aut(G) is simple, so G/Z(G) is simple. Roughly speaking G is quasi-simple and G/Z(G) is simple and complete. Modulo some A5 x 2 silliness, this is more or less a classification of G with Aut(G) simple. – Jack Schmidt Aug 20 '10 at 20:42 @Jack: how do you know G/Z(G) is complete? – Steve D Aug 20 '10 at 21:01 @Steve D: In point of fact 2.Sz(8) has Sz(8) as its automorphism group, and Sz(8) has Sz(8):3 as its, so no G/Z(G) need not be complete. – Jack Schmidt Aug 20 '10 at 23:15 Here is an approximation of an answer to "For what finite groups is Aut(G) simple?" As Daniel Miller mentioned, Inn(G) is a normal subgroup of Aut(G), so for Aut(G) to be simple either Inn(G) = 1, in which case G is abelian, or Inn(G) = Aut(G) is simple. The former case should be somewhat easy to handle assuming G is finite. In the latter case, we have that G/Z(G) is simple. If G is also perfect, then G is called quasi-simple. Of course, G need not be perfect as G ≅ A5 × 2 shows. However, I believe this is the only obstruction, so ignoring a possible cyclic direct factor of order 2, G/Z(G) is simple, and G is quasi-simple. The finite quasi-simple groups and their automorphism groups are classified, but the classification is a bit long. For a fixed simple group, X = G/Z(G), there are only finitely many isomorphism classes of quasi-simple groups D such that D/Z(D) = X. In fact there is a unique largest one called the Schur cover, that I'll call D. If Z(D) is cyclic, then in fact Aut(G) = Aut(X) = Aut(D) does not pay any attention to the center. So all we need to do is find all X with Aut(X) = X [and each one works], and all X with Z(D) non-cyclic [and check which ones work]. 
Having done most, but not all, of that, I thought it might help to record the basic result: If G = H×T where T=1 if H is abelian and T is cyclic of order dividing 2 otherwise, and where H is on the following list, then Aut(G) is simple: • cyclic of order 3, 4, or 6 • elementary abelian of order 2n for n ≥ 3 • M11, 2.Sz(8), J1, 2.Sp(6,2), M23, M24, Ru, 2.Ru, Co3, Co2, Ly, Th, Fi23, Co1, 2.Co1, J4, B, 2.B, E7(2), M • Ω(2n+1,2) for n ≥ 3 • Sp(2n,2) for n ≥ 3 • E8(p) for any prime p • F4(p) for any prime p • G2(p) for any prime p ≥ 5 Additionally if Aut(G) is simple, then G = H×T as above, except possibly H/Z(H) is on the following list: • L3(4), U4(3), U6(2), 2E6(2) • Ω+(4n,q) for certain q These are groups with non-cyclic multiplier other than Sz(8) [definitely an example] and Ω+(8,2) [not an example]. The Ω+(4n,q) case should be mostly easy, as there are too many automorphisms to kill. The others would be easy in an ideal world, but as far as I know our computational knowledge of these groups is limited and/or flawed. Of course, I also need to check the abelian case carefully, but I think 3,4,6 and 2^n are the only abelian examples. It would make another good answer: For what torsion abelian groups G is Aut(G) simple? This would handle the abelian groups here, as well as some of the original poster's interest, without delving into the nastier aspects of abelian groups. - Torsion abelian groups split into p-components, so you're really asking about abelian p-groups (ignoring a C_2 factor). But if the group has exponent higher than 2, inversion is a central automorphism. So we really only care about elementary abelian 2-groups. In other words, you got all of them. – Steve D Aug 21 '10 at 12:42 Thanks! I had only been considering multiplication on one factor (a "diagonal" automorphism) and so missed the key property of central inversion. – Jack Schmidt Aug 21 '10 at 15:56 $M_{24}$ is also a sporadic group with trivial outer automorphism group, so it needs to be added (without any covers, since it has trivial Schur multiplier) to your third family of groups. – DavidLHarden Jun 17 '13 at 16:44 Obraztsov has shown that if $p$ is a sufficiently large prime, then there exists a finitely generated infinite simple complete group $G$, all of whose proper subgroups are cyclic of order $p$. In particular, $G$ is an example of a group such that $Aut(G)$ is an infinite simple group. The relevant reference is: V. N. OBRAZTSOV, `On infinite complete groups', Comm. Algebra 22 (1994) 5875--5887 - This is not an answer to your exact question (which I interpreted to be 'When does $\mathrm{Inn}(G)=\mathrm{Aut}(G)$?'---as pointed out in the comments, this is not the same as asking for $\mathrm{Aut}G$ to be simple), and is only really interesting if you care about examples where $G$ is infinite. If you do care about $G$ infinite, then a natural slight weakening is to ask for criteria for $\mathrm{Out}(G)$ to be finite. One such criterion is provided by Paulin's Theorem. Theorem. If $G$ is word-hyperbolic and $\mathrm{Out}(G)$ is infinite then $G$ splits (as an amalgamated free product or HNN extension) over a virtually cyclic subgroup. It is known that, using some suitable definition of 'randomly chosen', a randomly chosen finitely presented group is torsion-free, word-hyperbolic and does not split. So one can conclude that a 'randomly chosen' finitely presented group $G$ is of finite index in its automorphism group. - Just to clarify: in these examples, $G$ (and hence $\mathrm{Aut}(G)$), is never simple. 
– HJRW Aug 20 '10 at 20:57 Does anyone know of any infinite simple groups which are the automorphism groups of other groups? E.g., some $G$ with $\mathrm{Aut}(G)$ infinite and simple? - You should post this as a question, not as an answer here. – Arturo Magidin Aug 26 '10 at 20:39 I don't know about Aut(G), but I can do Out(G): Bumagin and Wise proved that every countable group arises as the outer automorphism group of some finitely generated group. – HJRW Aug 26 '10 at 21:56 Let $G$ be a non-abelian group and let $A$ be the set of all groups including $Z(G)$. For all $H$ in $A$, send $H$ to $Z(H)$ (notice that this is a map from $A$ to $A$). Notice that if $Z(H)=Z(G)$ for all $H$ in $A$, it causes a contradiction (easy to show). If $\mathrm{Inn}(G)=\mathrm{Aut}(G)$, then there is a unique proper group with $Z(H)=Z(G)$ in $A$. M.Y.K - I'm having a hard time reading this and figuring out exactly what you are saying. When you say "set of all groups", do you mean subgroups of $G$? Also, you may want to use dollar signs (like in LaTeX) around your math phrases. I've done it for your first sentence. – Karl Schwede Jul 11 '13 at 16:10 $A=\{H<G \mid Z(G)<H\}$, i.e., $A$ is the set of all subgroups of $G$ containing $Z(G)$. First notice that $Z(H)\in A$ for all $H$ in $A$, so let $f:A\to A$ be the map which sends $H$ to $Z(H)$. First one shows that if $f(H)=Z(G)$ for all $H$, it causes a contradiction, since the center of $G$ is properly contained in an abelian subgroup of $G$ if $G$ is non-abelian. Claim: if $\mathrm{Inn}(G)=\mathrm{Aut}(G)$ then $f(H)=Z(G)$ for only a unique element ($G$ and $Z(G)$ satisfy this trivially, I mean except for them). – mesel Jul 11 '13 at 22:44
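To make Jack Schmidt's first comment concrete (a worked example added here, not part of the original thread), take the smallest non-abelian case $G=S_3$:

$$Z(S_3)=1 \;\Longrightarrow\; \mathrm{Inn}(S_3)\cong S_3/Z(S_3)\cong S_3, \qquad |\mathrm{Aut}(S_3)|\le 3!=6,$$

since an automorphism of $S_3$ permutes the three transpositions and is determined by that permutation. Hence $\mathrm{Aut}(S_3)=\mathrm{Inn}(S_3)\cong S_3$, which is not simple, so $\mathrm{Inn}(G)=\mathrm{Aut}(G)$ does not force $\mathrm{Aut}(G)$ to be simple.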
# Game-theoretical control with continuous action sets Motivated by the recent applications of game-theoretical learning techniques to the design of distributed control systems, we study a class of control problems that can be formulated as potential games with continuous action sets, and we propose an actor-critic reinforcement learning algorithm that provably converges to equilibrium in this class of problems. The method employed is to analyse the learning process under study through a mean-field dynamical system that evolves in an infinite-dimensional function space (the space of probability distributions over the players' continuous controls). To do so, we extend the theory of finite-dimensional two-timescale stochastic approximation to an infinite-dimensional, Banach space setting, and we prove that the continuous dynamics of the process converge to equilibrium in the case of potential games. These results combine to give a provably-convergent learning algorithm in which players do not need to keep track of the controls selected by the other agents. ## Authors • 2 publications • 33 publications • 19 publications • ### Actor-Critic Provably Finds Nash Equilibria of Linear-Quadratic Mean-Field Games We study discrete-time mean-field Markov games with infinite numbers of ... 10/16/2019 ∙ by Zuyue Fu, et al. ∙ 0 • ### Learning with minimal information in continuous games We introduce a stochastic learning process called the dampened gradient ... 06/29/2018 ∙ by Sebastian Bervoets, et al. ∙ 0 • ### Q-Learning for Mean-Field Controls Multi-agent reinforcement learning (MARL) has been applied to many chall... 02/10/2020 ∙ by Haotian Gu, et al. ∙ 0 • ### Reinforcement Learning for Mean Field Games, with Applications to Economics Mean field games (MFG) and mean field control problems (MFC) are framewo... 06/25/2021 ∙ by Andrea Angiuli, et al. ∙ 0 • ### Nonsmooth Aggregative Games with Coupling Constraints and Infinitely Many Classes of Players After defining a pure-action profile in a nonatomic aggregative game, wh... 06/16/2018 ∙ by Paulin Jacquot, et al. ∙ 0 • ### Continuous Control for Searching and Planning with a Learned Model Decision-making agents with planning capabilities have achieved huge suc... 06/12/2020 ∙ by Xuxi Yang, et al. ∙ 0 • ### Learning Parametric Closed-Loop Policies for Markov Potential Games Multiagent systems where the agents interact among themselves and with a... 02/03/2018 ∙ by Sergio Valcarcel Macua, et al. ∙ 0 ##### This week in AI Get the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday. ## I Introduction There has been much recent activity in using techniques of learning in games to design distributed control systems. This research traverses from utility function design [1, 2, 3], through analysis of potential suboptimalities due to the use of distributed selfish controllers [4] to the design and analysis of game-theoretical learning algorithms with specific control-inspired objectives (reaching a global optimum, fast convergence, etc.) [5, 6]. In this context, considerable interest has arisen from the approach of [1, 2] in which the independent controls available to a system are distributed among a set of agents, henceforth called “players”. To complete the game-theoretical analogy, the controls available to a player are called “actions”, and each player is assigned a utility function which depends on the actions of all players (as does the global system-level utility). 
As such, a player’s utility in a particular play of the game could be set to be the global utility of the joint action selected by all players. However, a more learnable choice is the so-called Wonderful Life Utility (WLU) [1, 2], in which the utility of any particular player is given by how much better the system is doing as a result of that player’s action (compared to the situation where no other player changes their action but the focal player uses a baseline action instead). A fundamental result in this domain is that setting the players’ utilities using WLUs results in a potential game [7] (see Section II below). There are alternative methods for converting a system-level utility function into individual utilities, such as Shapley value utility [8]; however, most of these also boil down to a potential game (possibly in the extended sense of [3]) where the optimal system control is a Nash equilibrium of the game. Thus, by representing a control problem as a potential game, the controllers’ main objective amounts to reaching a Nash equilibrium of the resulting game. On the other hand, like much of the economic literature on learning in games [9, 10], the vast majority of this corpus of research has focused almost exclusively on situations where each player’s controls comprise a finite set. This allows results from the theory of learning in games to be applied directly, resulting in learning algorithms that converge to the set of equilibria – and hence system optima. However, the assumption of discrete action sets is frequently anomalous in control, engineering and economics: after all, prices are not discrete, and neither are the controls in a large number of engineering systems. For instance, in massively parallel grid computing networks (such as the Berkeley Open Infrastructure for Network Computing – BOINC) [11], the decision granularity of “bag-of-tasks” application scheduling gives rise to a potential game with continuous action sets [7]. A similar situation is encountered in the case of energy-efficient power control and power allocation in large wireless networks [12, 13]: mobile wireless users can transmit at different power levels (or split their power across different subcarriers [14]), and their throughput is a continuous function of their chosen transmit power profiles (which have to be optimized unilaterally and without recourse to user coordination or cooperation). Finally, decision-making in the emerging “smart grid” paradigm for power generation and management in electricity grids also revolves around continuous variables (such as the amount of power to generate, or when to power down during the day), leading again to game-theoretical model formulations with continuous action sets [15]. In this paper, we focus squarely on control problems (presented as potential games) with continuous action sets and we propose an actor-critic reinforcement learning algorithm that provably converges to equilibrium. To address this problem in an economic setting, very recent work by Perkins and Leslie [16] extended the theory of learning in games to zero-sum games with continuous action sets (see also [17, 18]); however, from a control-theoretical point of view, zero-sum games are of limited practical relevance because they only capture adversarial interactions between two players. 
Owing to this fundamental difference between zero-sum and potential games, the two-player analysis of [16] no longer applies to our case, so a completely different approach is required to obtain convergence in the context of many-player potential games. To accomplish this, our analysis relies on two theoretical contributions of independent interest. The first is the extension of stochastic approximation techniques for Banach spaces (otherwise known as “abstract stochastic approximation” [19, 20, 21, 22, 23, 24]) to the so-called “two-timescales” framework originally introduced in standard (finite-dimensional space) stochastic approximation by [25]. This allows us to consider interdependent strategies and value functions evolving as a stochastic process in a Banach space (the space of signed measures over the players’ continuous action sets and the space of continuous functions from action space to respectively, both endowed with appropriate norms). Our second contribution is the asymptotic analysis of the mean field dynamics of this process on the space of probability measures on the action space; our analysis reveals that the dynamics’ rest points in potential games are globally attracting, so, combined with our stochastic approximation results, we obtain the convergence of our actor-critic reinforcement learning algorithm to equilibrium. In Section II we introduce the framework and notation, and introduce our actor–critic learning algorithm. Following that, in Section III we introduce two-timescales stochastic approximation in Banach spaces, and prove our general result. Section IV applies the stochastic approximation theory to the actor–critic algorithm to show that it can be studied via a mean field dynamical system. Section V then analyses the convergence of the mean field dynamical system in potential games, a result which allows us to prove the convergence of the actor–critic process in this context. ## Ii Actor–critic learning with continuous action spaces Throughout this paper, we will focus on control problems presented as potential games with finitely many players and continuous action spaces. Such a game comprises a finite set of players labelled . For each there exists an action set which is a compact interval;111We are only making this assumption for convenience; our analysis carries through to higher-dimensional convex bodies with minimal hassle. when each player selects an action , this results in a joint action We will frequently use the notation to refer to the joint action in which Player uses action and all other players use the joint action . Each player is also associated with a bounded and continuous utility function . For the game to be a potential game, there must exist a potential function such that ui(ai,a−i)−ui(~ai,a−i)=ϕ(ai,a−i)−ϕ(~ai,a−i) for all , for all and for all , . Thus if any player changes their action while the others do not, the change in utility for the player that changes their action is equal to the change in value of the potential function of the game. Methods for constructing potential games from system utility functions [1, 2, 3] usually ensure that the potential corresponds to the system utility, so maximising the potential function corresponds to maximising the system utility. Game-theoretical analyses usually focus on mixed strategies where a player selects an action to play randomly. A mixed strategy for Player is defined to be a probability distribution over the action space . 
This is a simple concept when is finite, but for the continuous action spaces considered in this paper more care is required. Specifically, let be the Borel sigma-algebra on and let denote the set of all probability measures on . Throughout this article we endow with the weak topology, metrized by the bounded Lipschitz norm (see Section IV; also [26, 27, 16]). A mixed strategy is then an element ; for we have that is the probability that Player selects an action in the Borel set . Note that a mixed strategy under this definition need not admit a density with respect to Lebesgue measure, and in particular may contain an atom at a particular action . Returning to our game-theoretical considerations, we extend the definition of utilities to the space of mixed strategy profiles. In particular, let be a mixed strategy profile, and define ui(π––)=∫A1⋯∫ANui(a––)π1(da1)⋯πN(daN). As before we use the notation to refer to the mixed strategy profile in which Player uses and all other players use . In further abuse of notation, we write for the mixed strategy profile , where is the Dirac measure at (meaning that Player selects action with probability ). Hence is the utility to Player for selecting when all other players use strategy . A central concept in game theory is the best response correspondence of Player , i.e. the set of mixed strategies that maximise Player ’s utility given any particular opponent mixed strategy . A Nash equilibrium is a fixed point of this correspondence, in which all players are playing a best response to all other players. In a learning context however, the discontinuities that appear in best response correspondences can cause great difficulties [28]. We focus instead on a smoothing of the best response. For a fixed , the logit best response with noise level of Player to strategy is defined to be the mixed strategy such that Liη(π−i)(Bi)=∫Biexp{η−1ui(ai,π−i)}dai∫Aiexp{η−1ui(bi,π−i)}dbi (1) for each . In [18] it is shown that is absolutely continuous (with respect to Lebesgue measure), with density given by liη(π––−i)(ai)=exp{η−1ui(ai,π−i)}∫Aiexp{η−1ui(bi,π−i)}dbi. (2) To ease notation in what follows, we let The existence of fixed points of is shown in [18] and [16]; such a fixed point is a joint strategy such that for each , and so is a mixed strategy profile such that every player is playing a smooth best response to the strategies of the other players. Such profiles are called logit equilibria and the set of all such fixed points will be denoted by . A logit equilibrium is thus an approximation of a local maximizer of the potential function of the game in the sense that for small a logit equilibrium places most of the probability mass in areas where the joint action results in a high potential function value; in particular, logit equilibria approximate Nash equilibria when the noise level is sufficiently small.222We note here that the notion of a logit equilibrium is a special case of the more general concept of quantal response equilibrium introduced in [29]. Smooth best responses also play an important part in discrete action games, particularly when learning is considered. In this domain they were introduced in stochastic fictitious play by [30], and later studied by, among others, [31, 32, 33] to ensure the played mixed strategies in a fictitious play process converge to logit equilibrium. This is in contrast to classical fictitious play in which the beliefs of players converge, but the played strategies are (almost) always pure. 
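To get a feel for the logit best response (1)–(2), the following small NumPy sketch (an illustration added here, not taken from the paper) evaluates the density (2) on a discretized action interval for a hypothetical payoff function; for small noise levels the density concentrates near the payoff maximiser:

```python
# Numerical illustration of the logit best response density (2) on a discretised
# action set; the payoff function below is hypothetical, not from the paper.
import numpy as np

eta = 0.1                                   # noise level eta
a = np.linspace(0.0, 1.0, 401)              # discretised action set A_i = [0, 1]
da = a[1] - a[0]
u = -(a - 0.3)**2 + 0.1 * np.sin(8 * a)     # stand-in for u_i(a_i, pi_{-i}) as a function of a_i

w = np.exp((u - u.max()) / eta)             # subtracting the max does not change the density
density = w / (w.sum() * da)                # Riemann-sum normalisation of (2)

print(density.sum() * da)                   # ~1: it integrates to one
print(a[np.argmax(density)])                # mode sits near the maximiser of u for small eta
```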
The technique was also required by [34, 35, 36] to allow simple reinforcement learners to converge to logit equilibria: as discussed in [34], players whose strategies are a function of the expected value of their actions cannot converge to a Nash equilibrium because, at equilibrium, all actions in the support of the equilibrium mixed strategies will receive the same expected reward. Recently [18] developed the dynamical systems tools necessary to consider whether the smooth best response dynamics converge to logit equilibria in the infinite-dimensional setting. This was extended to learning systems in [16], where it was shown that stochastic fictitious play converges to logit equilibrium in two-player zero-sum games with compact continuous action sets. One of the main requirements for efficient learning in a control setting is that the full utility functions of the game need not be known in advance, and players may not be able to observe the actions of all other players. Using fictitious play (or, indeed, many of the other standard game-theoretical tools) does not satisfy this requirement because they assume full knowledge and observability of payoff functions and opponent actions. This is what motivates the simple reinforcement learning approaches discussed previously [34, 35, 36], and also the actor-critic reinforcement learning approach of [37], which we extend in this article to the continuous action space setting. The idea is to learn both a value function that estimates the function for the current value of , while also maintaining a separate mixed strategy . The critic, , informs the update of the actor, . In turn the observed utilities received by the actor, , inform the update of the critic . In the continuous action space setting of this paper, we implement the actor-critic algorithm as the following iterative process (for a pseudo-code implementation, see Algorithm 1): 1. At the -th stage of the process, each player selects an action by sampling from the distribution and uses to play the game. 2. Players update their critics using the update equation Qin+1=Qin+γn⋅(ui(⋅,a−in)−Qin) (3a) 3. Each player samples and updates their actor using the update equation πin+1=πin+αn⋅(δbin−πin). (3b) The algorithm above is the main focus of our paper, so some remarks are in order: ###### Remark 1. In (3a), it is assumed that a player can access , so they can calculate how much they would have received for each of their actions in response to the joint action that was selected by the other players. Even though this assumption restricts the applicability of our method somewhat, it is relatively harmless in many settings — for instance, in congestion games such estimates can be calculated simply by observing the utilization level of the system’s facilities. Note further that to implement this algorithm an individual need not actually observe the action profile , needing only the utility . This means that a player need know nothing at all about the players who don’t directly affect her utility function, which allows a degree of separation and modularisation in large systems, as demonstrated in [38]. ###### Remark 2. The logit response used to sample the used in (3b) is now parameterised by instead of . This is a trivial change in which we use in place of in (1), which represents the fact that now players select smooth best responses to their critic instead of directly to the estimated mixed strategy of the other players. ###### Remark 3. 
Also in (3b), the players update towards a sampled instead of toward the full function . This is so that the critic can be represented as a collection of weighted atoms, instead of as a complicated and continuous probability measure. Representing as a collection of atoms means that sampling is particularly easy. On the other hand, sampling could be extremely difficult for general . The gradual evolution of the however implies that a sequential Monte Carlo sampler [39] could be used to produce samples according to . The representation of is also potentially troublesome and we do not address it fully here. However one could assume that each can be represented as a finite linear combination of basis functions such as a spline, Fourier or wavelet basis. Another option would be to slowly increase the size of a Fourier or wavelet basis as gets large, resulting in vanishing bias terms which can be easily incorporated in the stochastic approximation framework. ###### Remark 4. Finally, we note that the updates (3a) and (3b) use different step size parameters and . This separation is what allows the algorithm to be a two-timescales procedure, and is discussed at the start of Section III. The remainder of this article works to prove the following theorem, while also providing several auxiliary results of independent interest along the way: ###### Theorem 1. In a continuous-action-set potential game with bounded Lipschitz rewards and isolated equilibrium components, the actor–critic algorithm (3) converges strongly to a component of the equilibrium set (a.s.). ###### Remark. We recall here that the notion of strong convergence of probability measures is defined by asking that for every measurable . As such, this notion of convergence is even stronger than the notion of “convergence in probability” (vague convergence) used in the central limit theorem and other weak-convergence results. ## Iii Two-timescales stochastic approximation in Banach spaces The analysis of systems such as Algorithm 1 is enabled by the use of two-timescales stochastic approximation techniques [25]. By allowing as , the system can be analysed as if the ‘fast’ update (3a), with higher learning parameter , has fully converged to the current value of the ‘slow’ system (3b), with lower learning parameter . Note that it is not the case that we have an outer and inner loop, in which (3a) is run to convergence for every update of (3b): both the actor and the critic are updated on every iteration. It is simply that the two-timescales technique allows us to analyse the system as if there were an inner loop. That being said, the results of [25] are only cast in the framework of finite-dimensional spaces. We have already observed that with continuous action spaces , the mixed strategies are probability measures in the space , and the critics are functions. Placing appropriate norms on these spaces results in Banach spaces, and in this section we combine the two-timescales results of [25] with the Banach space stochastic approximation framework of [16] to develop the tool necessary to analyse the recursion (3). To that end, consider the general two-timescales stochastic approximation system xn+1 =xn+αn+1[F(xn,yn)+Un+1+cn+1], (4a) yn+1 =yn+γn+1[G(xn,yn)+Vn+1+dn+1], (4b) where • and are sequences in the Banach spaces and respectively. • and are the learning rate sequences of the process. • and comprise the mean field of the process. • and are stochastic processes in and respectively. 
(For a detailed exposition of Banach-valued random variables, see [40].) • and are bias terms that converge almost surely to . We will study this system using the asymptotic pseudotrajectory approach of [41], which is already cast in the language of metric spaces; since Banach spaces are metric, the framework of [41] still applies to our scenario. This modernises the approach of [22] while also introducing the two-timescales technique to ‘abstract stochastic approximation’. To proceed, recall that a semiflow on a metric space, , is a continuous map , , such that, and for all . As in simple Euclidean spaces, well-posed differential equations on Banach spaces induce a semiflow [42]. A continuous function is an asymptotic pseudo-trajectory for if for any , limt→∞sup0≤s≤Td(z(t+s),Φs(x(t)))=0. Properties of asymptotic pseudo-trajectories are discussed in detail in [41]. We will prove that interpolations of the stochastic approximation process ( 4) result in asymptotic pseudotrajectories to flows induced by dynamical systems on and governed by and respectively. To do so, and to allow us to state necessary assumptions on the processes, we define timescales on which we will interpolate the stochastic approximation process. In particular, let (with ), and for let . Similarly let (with ), and for let . With these timescales we define interpolations of the stochastic approximation processes (4). On the slow () timescale we define a continuous-time interpolation of by letting ¯xα(ταn+s)=xn+sxn+1−xnαn+1 (5) for . On the fast () timescale we consider , and define the continuous time interpolation of by letting ¯zγ(τγn+s)=zn+szn+1−znγn+1 (6) for . Our assumptions, which are simple extensions to those of [25] and [41], can now be stated as follows: 1. Noise control. 1. For all , limn→∞supk∈{n+1,…,mα(ταn+T)}⎧⎨⎩∥∥ ∥∥k−1∑j=nαj+1Uj+1∥∥ ∥∥X⎫⎬⎭ =0, limn→∞supk∈{n+1,…,mγ(τγn+T)}⎧⎨⎩∥∥ ∥∥k−1∑j=nγj+1Vj+1∥∥ ∥∥Y⎫⎬⎭ =0. 2. and are bounded sequences such that and as . 2. Boundedness and continuity. 1. There exist compact sets and such that and for all . 2. and are bounded and uniformly continuous on . 3. Learning rates. 1. and with and as . 2. as . 4. Mean field behaviour. 1. For any fixed the differential equation dydt=G(~x,y) (7) has unique solution trajectories that remain in for any initial value . Furthermore the differential equation (7) has a unique globally attracting fixed point , and the function is Lipschitz continuous. 2. The differential equation dxdt=F(x,y∗(x)) (8) has unique solution trajectories that remain in for any initial value . Assumption A1 is the standard assumption for noise control in stochastic approximation. It has traditionally caused difficulty in abstract stochastic approximation, but recent solutions are discussed in the following paragraph. Assumption A2 is simply a boundedness and continuity assumption, but can cause difficulty with some norms in function spaces. Assumption A3 provides the two-timescales nature of the scheme, with both learning rate sequences converging to 0, but becoming much smaller than . Finally Assumption A4 provides both the existence of unique solutions of the relevant mean field differential equations, and the useful separation of timescales in continuous time which is directly analogous to Assumption (A1) of [25]. 
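For concreteness (an example added here, not taken from the paper), a standard pair of step-size sequences compatible with Assumption A3 is

$$\alpha_n = \frac{1}{n+1}, \qquad \gamma_n = \frac{1}{(n+1)^{2/3}},$$

since both sums diverge, both sequences tend to zero, $\alpha_n/\gamma_n=(n+1)^{-1/3}\to 0$, and in addition both sequences are square-summable (the extra condition imposed in Theorem 5 below). With this choice the variable $y_n$ updated in (4b) evolves on a faster timescale than $x_n$ in (4a).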
Note that we do not make the stronger assumption that there exists a unique globally asymptotically stable fixed point in the slow timescale dynamics (8) [25, Assumption A2]; this assumption is not necessary for the theory presented here, and would unnecessarily restrict the applicability of the results. Note that the noise assumption A1(a) has traditionally caused difficulty for stochastic approximation on Banach spaces: [23] considers the simple case where the stochastic terms are independent and identically distributed, whilst [22] prove a very weak convergence result for a particular process which again uses independent noise. However [16] provide criteria analogous to the martingale noise assumptions in which guarantee that the noise condition 1(a) holds in useful Banach spaces. In particular, if is a sequence of martingale differences in Banach space , then limn→∞supk∈{n+1,…,mα(ταn+T)}⎧⎨⎩∥∥ ∥∥k−1∑j=nαj+1Uj+1∥∥ ∥∥X⎫⎬⎭=0 with probability 1 if is: • the space of functions for , is deterministic with , is a martingale difference sequence with respect to some filtration , and (cf. the remark following Proposition A.1 of [16]); • the space of functions on bounded spaces (see [43]); or • the space of finite signed measures on a compact interval of with the bounded Lipschitz norm (see [26, 27, 16] or Section IV below) is deterministic with , where there exists a filtration such that is measurable with respect to , is a bounded absolutely continuous probability measure which is measurable with respect to and has density , and is sampled from the probability distribution (Proposition 3.6 of [16]); Clearly, if similar conditions also hold for then Assumption A1(a) holds. Our first lemma demonstrates that we can analyse the system as if the fast system is fully calibrated to the slow system . By this we mean that, for sufficiently large , is close to the value it would converge to if were fixed and allowed to fully converge. ###### Lemma 2. Under Assumptions A1–A4, ∥yn−y∗(xn)∥Y→0asn→∞. ###### Proof: Let , with the induced product norm from the topologies of and . Under this topology, is a Banach space, and is compact. The updates (4) can be expressed as zn+1=zn+γn+1[H(zn)+Wn+1+κn+1], (9) where is such that , for , and Wn =(αnγnUn,Vn), κn+1 =(αn+1γn+1[F(zn)+dn+1],en+1). Assumptions A1–A4 imply the assumptions of Theorem 3.3 of [16]. Most are direct translations, but the noise must be carefully considered. For any , any , and any , ∥∥ ∥∥k−1∑j=nγj+1(Wn+1+κn+1)∥∥ ∥∥Z ≤∥∥ ∥∥k−1∑j=nγj+1Wn+1∥∥ ∥∥Z+(supk′∈{n+1,…,k}∥κk′∥Z)k−1∑j=nγj+1 ≤∥∥ ∥∥k−1∑j=nγj+1Wn+1∥∥ ∥∥Z +(supk′∈{n+1,…,mγ(τγn+T)}∥κk′∥Z)mγ(τγn+T)−1∑j=nγj+1 ≤∥∥ ∥∥k−1∑j=nγj+1Wn+1∥∥ ∥∥Z+(supk′≥n+1∥κk′∥Z)T Since , the second term converges to 0 as . Hence, using assumption A1 to control the first term, limn→∞supk∈{n+1,…,mγ(τγn+T)}∥∥ ∥∥k−1∑j=nγj+1(Wn+1+κn+1)∥∥ ∥∥Z=0. Therefore , defined in (6), is an asymptotic pseudotrajectory of the flow defined by dzdt=H(z(t)). (10) Assumption A4(a) implies that is globally attracting for (10). Hence Theorem 6.10 of [41] gives that . The result follows by the continuity of assumed in A4(a). ∎ We use this fact to consider the evolution of on the slow timescale. ###### Theorem 3. Under Assumptions A1–A4, the interpolation , defined in (5), is an asymptotic pseudo-trajectory to the flow induced by the differential equation (8). ###### Proof: Rewrite (4a) as xn+1=xn+αn+1[F(xn,y∗(xn))+Un+1+~cn+1], (11) where . We will show that this is a well-behaved stochastic approximation process. 
In particular, we need to show that $\tilde{c}_{n+1}$ can be absorbed into the noise in such a way that the equivalent of Assumption A1 of [16] can be applied to (11). By Lemma 2 we have that $\|y_n - y^*(x_n)\|_Y \to 0$. Hence we can define

$$\delta_n = \inf\{\delta > 0 : \forall m \ge n,\ \|y_m - y^*(x_m)\|_Y < \delta\}$$

with $\delta_n \to 0$ as $n \to \infty$. By the uniform continuity of $F$, it follows that we can define a sequence $\varepsilon_n \to 0$ such that $\|F(x_m, y_m) - F(x_m, y^*(x_m))\|_X \le \varepsilon_n$ for all $m \ge n$. From this construction, for any $T > 0$ and for any $k \in \{n+1, \dots, m^\alpha(\tau^\alpha_n + T)\}$,

$$\Big\| \sum_{j=n}^{k-1} \alpha_{j+1}\big[F(x_j, y_j) - F(x_j, y^*(x_j))\big] \Big\|_X \le \sum_{j=n}^{k-1} \alpha_{j+1} \varepsilon_n \le T \varepsilon_n.$$

As in the proof of Lemma 2, similar arguments can be used for $(c_n)$ under assumption A1(b). Hence for all $T > 0$,

$$\lim_{n \to \infty} \sup_{k \in \{n+1, \dots, m^\alpha(\tau^\alpha_n + T)\}} \Big\{ \Big\| \sum_{j=n}^{k-1} \alpha_{j+1} \tilde{c}_{j+1} \Big\|_X \Big\} = 0.$$

Once again it is straightforward to show that, under (A1)–(A4), the slow timescale stochastic approximation (11) satisfies the assumptions of Theorem 3.3 of [16], and therefore $\bar{x}^\alpha$ is an asymptotic pseudo-trajectory to the flow induced by the differential equation (8). ∎

While [41] provides several results that can be combined with Theorem 3, we summarise the result used in this paper with the following corollary:

###### Corollary 4.

Suppose that Assumptions A1–A4 hold. Then $(x_n)$ converges to an internally chain transitive set of the flow induced by the mean field differential equation (8).

###### Proof:

This is an immediate consequence of Theorem 3 above and Theorem 5.7 of [41], where the definition of internally chain transitive sets can be found. ∎

## IV Stochastic approximation of the actor–critic algorithm

In this section we demonstrate that the actor–critic algorithm (3) can be analysed using the two-timescales stochastic approximation framework of Section III. Our first task is to define the Banach spaces in which the algorithm evolves.

Note that the set of probability distributions on an action space $A^i$ is a subset of the space of finite signed measures on $A^i$. To turn this space into a Banach space, the most convenient norm for our purposes is the bounded Lipschitz (BL) norm. (For a discussion regarding the appropriateness of this norm for game-theoretical considerations, see [26, 27, 18], and, for stochastic approximation, especially [16].) To define the BL norm, let

$$G_i = \Big\{ g : A^i \to \mathbb{R} \;:\; \sup_{a \in A^i} |g(a)| + \sup_{a, b \in A^i,\, a \ne b} \frac{|g(a) - g(b)|}{|a - b|} \le 1 \Big\}.$$

Then, for a finite signed measure $\mu^i$ on $A^i$ we define

$$\|\mu^i\|_{BL_i} = \sup_{g \in G_i} \Big| \int_{A^i} g \, d\mu^i \Big|.$$

The space of finite signed measures on $A^i$ with the norm $\|\cdot\|_{BL_i}$ is a Banach space [27], and convergence of a sequence of probability measures under $\|\cdot\|_{BL_i}$ corresponds to weak convergence of the measures [26]. Under the BL norm, the set $\Delta^i$ of probability measures on $A^i$ is a compact subset of this space (see Proposition 4.6 of [16]), allowing Assumption A2 to be easily verified. We consider mixed strategy profiles as existing in the subset $\Delta = \Delta^1 \times \dots \times \Delta^N$ of the product space of finite signed measures. We use the max norm to induce the product topology, so that if $\underline{\mu} = (\mu^1, \dots, \mu^N)$ we define

$$\|\underline{\mu}\|_{BL} = \max_{i = 1, \dots, N} \|\mu^i\|_{BL_i}. \tag{12}$$

Suppose also that utility functions are bounded and Lipschitz continuous. Since their domain is a bounded interval of $\mathbb{R}$, we can assume that the estimates $Q^i_n$ lie in a Banach space of real-valued functions on $A^i$ with a finite norm. Hence we consider the vectors $\underline{Q}_n = (Q^1_n, \dots, Q^N_n)$ as elements of the product Banach space $Y$, again with the max norm across players.

###### Theorem 5.

Consider the actor–critic algorithm (3). Suppose that for each $i$ the action space $A^i$ is a compact interval of $\mathbb{R}$, and the utility function $u^i$ is bounded and uniformly Lipschitz continuous. Suppose also that $(\alpha_n)$ and $(\gamma_n)$ are chosen to satisfy Assumption A3 as well as $\sum_n \alpha_n^2 < \infty$ and $\sum_n \gamma_n^2 < \infty$. Then, under the bounded Lipschitz norm, $\underline{\pi}_n$ converges with probability 1 to an internally chain transitive set of the flow defined by the $N$-player logit best response dynamics

$$\frac{d\underline{\pi}}{dt} = L_\eta(\underline{\pi}) - \underline{\pi}. \tag{13}$$

###### Proof:

We take the spaces $X$ and $Y$, and their norms, as above.
This allows a direct mapping of the actor–critic algorithm (3) to the stochastic approximation framework (4) by taking

$$x_n = \underline{\pi}_n, \qquad F(\underline{\pi}, \underline{Q}) = L_\eta(\underline{Q}) - \underline{\pi}, \qquad U_{n+1} = (\delta_{b^1_n}, \dots, \delta_{b^N_n}) - L_\eta(\underline{Q}_n), \qquad c_n = 0,$$

and

$$y_n = \underline{Q}_n, \qquad G(\underline{\pi}, \underline{Q}) = \big(G^1(\underline{\pi}, \underline{Q}), \dots, G^N(\underline{\pi}, \underline{Q})\big), \qquad G^i(\underline{\pi}, \underline{Q}) = u^i(\cdot, \pi^{-i}) - Q^i,$$

$$V_{n+1} = (V^1_{n+1}, \dots, V^N_{n+1}), \qquad V^i_{n+1} = u^i(\cdot, a^{-i}_n) - u^i(\cdot, \pi^{-i}_n), \qquad d_n = 0.$$

By Corollary 4 we therefore only need to verify Assumptions A1–A4.

A1: $U_{n+1}$ is of exactly the form studied by [16] and therefore Proposition 3.6 of that paper suffices to prove that the condition on the tail behaviour of the $U_n$ holds with probability 1. The $V^i_{n+1}$ are martingale difference sequences, since the conditional expectation of $u^i(\cdot, a^{-i}_n)$ given the past is $u^i(\cdot, \pi^{-i}_n)$, and the $u^i$ are bounded functions. Hence Proposition A.1 of [16] suffices to prove that the condition on the tail behaviour of the $V_n$ holds with probability 1 under the norm on $Y$. Since $(c_n)$ and $(d_n)$ are identically zero, we have shown that A1 holds.

A2: $\Delta$ is a compact subset of $X$ under the bounded Lipschitz norm, so taking $X_0 = \Delta$ suffices. Furthermore, with bounded continuous reward functions it follows that the $Q^i_n$ are uniformly bounded and equicontinuous and therefore remain in a compact set $Y_0 \subseteq Y$. $G$ is clearly uniformly continuous on the compact set $X_0 \times Y_0$. The continuity of $L_\eta$, and therefore of $F$, is shown in Lemma C.2 of [16].

A3: The learning rates are chosen to satisfy this assumption.

A4: For fixed $\underline{\tilde{\pi}}$, the differential equations $\dot{Q}^i = u^i(\cdot, \tilde{\pi}^{-i}) - Q^i$ converge exponentially quickly to $Q^i = u^i(\cdot, \tilde{\pi}^{-i})$. Furthermore $u^i(\cdot, \pi^{-i})$ is Lipschitz continuous in $\underline{\pi}$, so part (a) is satisfied. Equation (8) then becomes

$$\dot{\pi}^i = L^i_\eta\big(u^i(\cdot, \pi^{-i})\big) - \pi^i, \qquad \text{for } i = 1, \dots, N.$$

Since we re-wrote the logit best response to depend on the utility functions instead of directly on $\underline{\pi}$, we find that we have recovered the logit best response dynamics of [18] and [16], which those authors show to have unique solution trajectories. ∎

## V Convergence of the logit best response dynamics

We have shown in Theorem 5 that the actor–critic algorithm (3) results in joint strategies that converge to an internally chain transitive set of the flow defined by the logit best response dynamics (13) under the bounded Lipschitz norm. It is demonstrated in [16] that in two-player zero-sum continuous action games the set $LE$ of logit equilibria (the fixed points of the logit best response $L_\eta$) is a global attractor of the flow. Hence, by Corollary 5.4 of [41] we instantly obtain the result that any internally chain transitive set is contained in $LE$.

However two-player zero-sum games are not particularly relevant for control systems: multiplayer potential games are much more important. The logit best responses in a potential game are identical to the logit best responses in the identical interest game in which the potential function is the global utility function. Hence evolution of strategies under the logit best response dynamics in a potential game is identical to that in the identical interest game in which the potential acts as the global utility. We therefore carry out our convergence analysis for the logit best response dynamics (13) in $N$-player identical interest games with continuous action spaces. See [44] for related issues.

For the remainder of this section we work to prove the following theorem:

###### Theorem 6.

In a potential game with continuous bounded rewards, in which the connected components of the set $LE$ of logit equilibria of the game are isolated, any internally chain transitive set of the flow induced by the smooth best response dynamics (13) is contained in a connected component of $LE$.

Define

$$\Delta_D = \Big\{ \underline{\pi} \in \Delta : \forall i = 1, \dots, N,\ \pi^i \text{ is absolutely continuous with density } p^i \text{ such that } D^{-1} \le p^i(x^i) \le D \text{ for all } x^i \in A^i \text{ and } p^i \text{ is Lipschitz continuous with constant } D \Big\}.$$
Appendix C of [16] shows that if the utility functions are bounded and Lipschitz continuous then, for any $\eta > 0$, there exists a $D$ such that $L_\eta(\underline{\pi}) \in \Delta_D$ for all $\underline{\pi} \in \Delta$, and that $\Delta_D$ is forward invariant under the logit best response dynamics. For the remainder of this article, $D$ is taken to be sufficiently large for this to be the case.

Our method first demonstrates that the set $\Delta_D$ is globally attracting for the flow, so any internally chain transitive set of the flow is contained in $\Delta_D$. The nice properties of $\Delta_D$ then allow the use of a Lyapunov function argument to show that any internally chain transitive set in $\Delta_D$ is a connected set of logit equilibria.

###### Lemma 7.

Let $\mathcal{A}$ be an internally chain-transitive set. Then $\mathcal{A} \subseteq \Delta_D$.

###### Proof:

Consider the trajectory of (13) starting at an arbitrary $\underline{\pi}(0) \in \Delta$. We can write it as

$$\underline{\pi}(t) = e^{-t}\,\underline{\pi}(0) + \int_0^t e^{s-t} L_\eta(\underline{\pi}(s))\, ds.$$

Defining

$$\underline{\sigma}(t) = \frac{\int_0^t e^{s-t} L_\eta(\underline{\pi}(s))\, ds}{1 - e^{-t}},$$

it is immediate both that $\underline{\sigma}(t) \in \Delta_D$ and that

$$\|\underline{\pi}(t) - \underline{\sigma}(t)\|_{BL} < 2e^{-t}. \tag{14}$$

Thus $\underline{\pi}(t)$ approaches $\Delta_D$ at an exponential rate, uniformly in $\underline{\pi}(0)$. Hence $\Delta_D$ is uniformly globally attracting. We would like to invoke Corollary 5.4 of [41], but since $\Delta_D$ may not be invariant it is not an attractor in the terminology of [41] either. We therefore prove directly that $\mathcal{A} \subseteq \Delta_D$. Suppose not, so there exists a point and by the compactness of internally chain transitive sets there exists a such that . There exists a such that for the trajectory with ,
# Beating Brute Force Search for QBF Satisfiability, and Implications for Formula Size Lower Bounds

Rahul Santhanam

## Affiliation:

University of Edinburgh, School of Informatics, 10, Crichton Street, Edinburgh - EH8 9AB, United Kingdom

## Time:

Thursday, 17 April 2014, 16:00 to 17:00

• AG-80

Abstract: We give the first algorithms for the QBF Satisfiability problem beating brute force search. We show that the QBF satisfiability question for CNF instances with $n$ variables, $q$ quantifier blocks and size $poly(n)$ can be solved in time $2^{n-\Omega(n^{1/(q+1)})}$, and that the QBF satisfiability question for circuit instances with $n$ variables, $q$ quantifier blocks and size $poly(n)$ can be solved in time $2^{n-\Omega(q)}$. We also show that improvements on these algorithms would lead to super-polynomial formula size lower bounds for NEXP (this is joint work with Ryan Williams).
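To get a rough sense of scale for these bounds (this instantiation is my own illustration, not part of the abstract): for CNF instances with $q = 2$ quantifier blocks the stated running time specialises to

$$2^{\,n - \Omega(n^{1/(q+1)})} \;=\; 2^{\,n - \Omega(n^{1/3})},$$

that is, a saving of a factor $2^{\Omega(n^{1/3})}$ over the roughly $2^{n} \cdot poly(n)$ cost of enumerating all assignments, with the saving shrinking as the number of quantifier blocks grows; for circuit instances the saving is $2^{\Omega(q)}$, which instead grows with $q$.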
## Main Question or Discussion Point

I am a layman. I just think that if the speed of light is constant then all acceleration should be measured against it. It doesn't move. Everything moves through it. Is that how it works? This has been making me anxious.

Ibix

if the speed of light is constant then all acceleration should be measured against it.

I'm not sure what this is supposed to mean. Measuring an acceleration against a speed? It's like comparing the height of your house to the speed limit on the road outside.

It doesn't move. Everything moves through it. Is that how it works?

Again, I don't think this makes any sense. Light definitely moves.

jbriggs444
Homework Helper
2019 Award

I just think that if the speed of light is constant then all acceleration should be measured against it.

The constancy of the speed of light makes it useless as a tool for measuring the speed of anything else. No matter how fast the "anything else" is moving or how it accelerates, light will always be moving at light speed relative to it. If the result of a measurement is certain beforehand then taking the measurement yields zero information.

FactChecker
Gold Member

Since the speed of light is, itself, measured with respect to an inertial reference frame, position, velocity, and acceleration are easier to measure with respect to that reference frame.

How do we know that light is moving?

Mister T
Gold Member

I am a layman. I just think that if the speed of light is constant then all acceleration should be measured against it.

Start by understanding why people say that the speed of light is constant. What they mean is that if you see two objects race past you, and only one of them is a beam of light in a vacuum, then that beam of light will always be the faster of the two. This is true no matter how fast that other object moves! Moreover, to someone co-moving with that object, the beam of light will have the same speed as it does for you. That's what they mean by "constant" in this context.

There are people who do indeed measure all speeds relative to the speed of light in a vacuum. They assign it a value of "1" and all other speeds are less than one but greater than or equal to zero.

Start by understanding why people say that the speed of light is constant. What they mean is that if you see two objects race past you, and only one of them is a beam of light in a vacuum, then that beam of light will always be the faster of the two. This is true no matter how fast that other object moves! Moreover, to someone co-moving with that object, the beam of light will have the same speed as it does for you. That's what they mean by "constant" in this context.

There are people who do indeed measure all speeds relative to the speed of light in a vacuum. They assign it a value of "1" and all other speeds are less than one but greater than or equal to zero.

That makes a lot of sense to me. As though things are slowing down and becoming denser from 1?

Mister T
Gold Member

That makes a lot of sense to me. As though things are slowing down and becoming denser from 1?

Nothing is becoming denser. A car moving at a speed of 60 mi/h is not any less dense than it would be if it were moving at a speed of 30 mi/h.

Nothing is becoming denser. A car moving at a speed of 60 mi/h is not any less dense than it would be if it were moving at a speed of 30 mi/h.

Sorry, that was dumb. And its speed is measured in relation to an observer fixed to the earth.
I guess I just think it should be measured against the speed of light. That should be the standard. As though everything is moving through light. It would make things simpler.

PeterDonis
Mentor
2019 Award

I guess I just think it should be measured against the speed of light.

That is how we measure speeds. The speed of light is $1$. All other speeds are smaller. However, you can't measure things that aren't speeds against a speed. So your suggestion in the OP to measure accelerations, for example, against the speed of light, makes no sense. Nor would it make any sense to try to measure positions or times against the speed of light. I think you need to take a step back and consider all of this more carefully.

FactChecker
Gold Member

Sorry, that was dumb. And its speed is measured in relation to an observer fixed to the earth. I guess I just think it should be measured against the speed of light. That should be the standard. As though everything is moving through light. It would make things simpler.

There is a big difference between using the speed of light as a unit of measure of velocity versus using any light (going in ?? direction) as a reference frame. You need to be clear regarding what you are talking about. I doubt very much that it would make things simpler.

PeterDonis
Mentor
2019 Award

its speed is measured in relation to an observer fixed to the earth

That's because all speeds are relative. You can't just have a speed; it has to be a speed relative to something. Also, the something that speeds are relative to has to be something that can be at rest. Light can't be at rest, so it can't be a thing that is taken as "fixed" as a frame of reference. You need to consider this carefully as well.

Forgive my ignorance. I'm simply troubled by this mystery. As I understand it, it's not known why the speed of light is constant. I will think deeply about the questions you've posed, do some more reading, and reply. I think that space moves and light doesn't. Entertain it as a shift in perspective.

PeterDonis
Mentor
2019 Award

As I understand it, it's not known why the speed of light is constant.

While there is a sense in which this is true, it does not mean you can just speculate however you wish.

I think that space moves and light doesn't.

This doesn't make sense. And you should not be speculating about any of this at your current state of knowledge. You should be working to achieve a better understanding of what we do know first. And we know a lot.

Dale
Mentor

I think that space moves and light doesn't. Entertain it as a shift in perspective.

Good luck trying to make that into a self-consistent framework that can be used to make experimental predictions. Once you have done that, you should easily be able to publish it in a peer reviewed journal. Then we would be glad to discuss it here.

Dale
Mentor

I actually can speculate however I wish.

Yes, but not here.

Doubt and asking questions actually make a mind stronger (stranger as well).

I doubt that there is any scientific study which concludes that allowing students to pursue their own personal speculations is an effective way to teach relativity.

PeterDonis
Mentor
2019 Award

Perhaps it will protect you from the poison of arrogance which seems to have prevented you from considering my point.

I have considered your points and responded to them. The fact that the responses are apparently not what you wanted to hear does not make them invalid. But it does indicate that further discussion in this thread is pointless.
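A quick way to see the "speed of light equals 1" convention from the thread above in actual numbers is to divide any speed by c. This is a small illustrative script; the sample speeds are rough, assumed figures.

```python
# Express some everyday speeds as dimensionless fractions of the speed of light.
# The sample speeds are approximate and for illustration only.
c = 299_792_458.0               # speed of light in m/s (exact by definition)

speeds_m_per_s = {
    "car at 60 mi/h": 26.8,
    "jet airliner": 250.0,
    "ISS in orbit": 7_660.0,
}

for name, v in speeds_m_per_s.items():
    print(f"{name}: v/c = {v / c:.2e}")   # every value is far below 1
```

Every ordinary speed comes out many orders of magnitude below 1, which is why relativistic effects are invisible in everyday life.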
## Found 2,209 Documents (Results 1–100) 100 MathJax Full Text: ### Thermodynamic and mechanical problems of ice formations: numerical simulation results. (English)Zbl 07588306 MSC:  65Mxx 35Lxx 35Rxx Full Text: MSC:  92-XX Full Text: Full Text: Full Text: MSC:  65Mxx Full Text: ### Numerical approximation of the one-dimensional inverse Cauchy-Stefan problem using heat polynomials methods. (English)Zbl 07562932 MSC:  80A22 80A23 35C11 Full Text: Full Text: ### On the Stefan problem for a system of magnetohydrodynamic boundary layer equations with injection of a medium governed by the Ladyzhenskaya rheological law. (English. Russian original)Zbl 07557350 Dokl. Math. 105, No. 2, 102-105 (2022); translation from Dokl. Ross. Akad. Nauk, Mat. Inform. Protsessy Upr. 503, 59-63 (2022). MSC:  76Wxx 76Dxx 35Qxx Full Text: Full Text: ### On a two-dimensional boundary-value Stefan-type problem arising in cryosurgery. (English. Russian original)Zbl 07542510 J. Math. Sci., New York 260, No. 3, 294-299 (2022); translation from Itogi Nauki Tekh., Ser. Sovrem. Mat. Prilozh., Temat. Obz. 167, 21-26 (2019). MSC:  92C50 35Q91 80A22 Full Text: Full Text: ### A level-set based space-time finite element approach to the modelling of solidification and melting processes. (English)Zbl 07523810 MSC:  76Mxx 80Axx 76Dxx Full Text: Full Text: ### Regularity of the free boundary in the one-phase Stefan problem: a recent approach. (English)Zbl 1487.35460 “Bruno Pini” Mathematical Analysis Seminar 2021. Papers from the seminar, University of Bologna, Bologna, Italy, 2021. Bologna: Università di Bologna, Alma Mater Studiorum. 122-140 (2022). Full Text: Full Text: ### Heterogeneous Stefan problem and permafrost models with P0-P0 finite elements and fully implicit monolithic solver. (English)Zbl 07510645 MSC:  65Mxx 74-XX Full Text: ### The multi-dimensional stochastic Stefan financial model for a portfolio of assets. (English)Zbl 1483.91211 MSC:  91G10 91B70 60H30 Full Text: Full Text: ### A novel supermesh method for computing solutions to the multi-material Stefan problem with complex deforming interfaces and microstructure. (English)Zbl 07488729 MSC:  76Mxx 65Mxx 80Axx Full Text: ### A Fisher-KPP model with a nonlocal weighted free boundary: analysis of how habitat boundaries expand, balance or shrink. (English)Zbl 1485.92175 MSC:  92D40 35R35 35K57 Full Text: ### On the reformulation of the classical Stefan problem as a shape optimization problem. (English)Zbl 1481.49042 MSC:  49Q10 35K05 65K10 Full Text: ### A second order accurate fixed-grid method for multi-dimensional Stefan problem with moving phase change materials. (English)Zbl 07442819 MSC:  65Mxx 80Axx 35Rxx Full Text: ### Solutions of a non-classical Stefan problem with nonlinear thermal coefficients and a Robin boundary condition. (English)Zbl 1484.80006 MSC:  80A22 35R35 Full Text: ### Mathematical modeling of the melting process of silicate materials in a plasma reactor. (English. Russian original)Zbl 07514167 Russ. Phys. J. 64, No. 8, 1443-1450 (2021); translation from Izv. Vyssh. Uchebn. Zaved., Fiz. 64, No. 8, 57-64 (2021). Full Text: ### A stable and accurate scheme for solving the Stefan problem coupled with natural convection using the immersed boundary smooth extension method. (English)Zbl 07511695 MSC:  65Mxx 80Axx 76Dxx Full Text: Full Text: Full Text: ### On a free boundary problem for a Maxwell fluid. 
(English)Zbl 07473631 MSC:  35K57 35K58 35K20 Full Text: Full Text: ### Determination of the locations of boreholes for temperature measurement during artificial ground freezing. (English. Russian original)Zbl 1483.74066 Mech. Solids 56, No. 7, 1340-1351 (2021); translation from Prikl. Mat. Mekh. 85, No. 2, 257-272 (2021). Full Text: Full Text: ### Computational implementation of a mixed-dimensional model of heat transfer in the soil-pipe system in cryolithic zone. (English. Russian original)Zbl 1480.80013 Comput. Math. Math. Phys. 61, No. 12, 2054-2067 (2021); translation from Zh. Vychisl. Mat. Mat. Fiz. 61, No. 12, 2060-2073 (2021). Full Text: Full Text: Full Text: ### On a free boundary problem for the relaxation transfer equation. (English. Russian original)Zbl 1482.35280 Theor. Math. Phys. 209, No. 1, 1473-1489 (2021); translation from Teor. Mat. Fiz. 209, No. 1, 184-202 (2021). Full Text: Full Text: ### Wavelet based numerical approach of non-classical moving boundary problem with convection effect and variable latent heat under the most generalized boundary conditions. (English)Zbl 07436753 Eur. J. Mech., B, Fluids 87, 1-11 (2021); corrigendum ibid. 90, 63 (2021). Full Text: ### An adaptive boundary algorithm for the reconstruction of boundary and initial data using the method of fundamental solutions for the inverse Cauchy-Stefan problem. (English)Zbl 1476.65274 MSC:  65M80 65M30 65M32 Full Text: Full Text: ### Boundary controllability of phase-transition region of a two-phase Stefan problem. (English)Zbl 1478.93048 MSC:  93B05 93C20 80A22 Full Text: Full Text: ### A short presentation of Emmanuele’s work. (English)Zbl 1473.35010 Vespri, Vincenzo (ed.) et al., Harnack inequalities and nonlinear operators. Proceedings of the INdAM conference to celebrate the 70th birthday of Emmanuele DiBenedetto. Cham: Springer. Springer INdAM Ser. 46, 29-41 (2021). Full Text: ### Approximate solution techniques for the sorption of a finite amount of swelling solvent in a glassy polymer. (English)Zbl 1481.80015 MSC:  80M99 82D60 Full Text: Full Text: Full Text: ### Two equivalent finite volume schemes for Stefan problem on boundary-fitted grids: front-tracking and front-fixing techniques. (English. Russian original)Zbl 07384134 Differ. Equ. 57, No. 7, 876-890 (2021); translation from Differ. Uravn. 57, No. 7, 907-921 (2021). Full Text: Full Text: Full Text: Full Text: ### A self-similar solution to time-fractional Stefan problem. (English)Zbl 1470.35107 MSC:  35C06 35R11 35R37 Full Text: ### Integral formulation for a Stefan problem with spherical symmetry. (English)Zbl 1464.80009 MSC:  80A22 45D05 35K05 Full Text: ### One-phase spherical Stefan problem with temperature dependent coefficients. (English)Zbl 1474.80006 MSC:  80A22 35K05 45D05 Full Text: Full Text: Full Text: Full Text: Full Text: ### Invading and receding sharp-fronted travelling waves. (English)Zbl 1460.92232 MSC:  92D40 35C07 35B20 Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: ### An implicit, sharp numerical treatment of viscous terms at arbitrarily shaped liquid-gas interfaces in evaporative flows. (English)Zbl 07506178 MSC:  76-XX 80-XX Full Text: ### Industrial dry spinning processes: algorithmic for a two-phase fiber model in airflows. (English)Zbl 1472.76117 MSC:  76T30 76M99 80A22 Full Text: Full Text: Full Text: Full Text: ### A freezing problem with varying thermal coefficients and convective boundary condition. 
(English)Zbl 1465.35418 MSC:  35R35 35K20 80A22 Full Text: ### The application of the homotopy analysis method for solving the two phase inverse Stefan problem with optimisation. (English)Zbl 1464.90117 MSC:  90C90 90C52 MSC:  00-XX Full Text: Full Text: Full Text: Full Text: ### A finite difference discretization method for heat and mass transfer with Robin boundary conditions on irregular domains. (English)Zbl 1453.65211 MSC:  65M06 35Q79 Full Text: ### Propagation, diffusion and free boundaries. (English)Zbl 1465.35004 Reviewer: Yingxin Guo (Qufu) Full Text: Full Text: ### Materials phase change PDE control & estimation. From additive manufacturing to polar ice. (English)Zbl 1473.82004 Systems & Control: Foundations & Applications. Cham: Birkhäuser (ISBN 978-3-030-58489-4/hbk; 978-3-030-58492-4/pbk; 978-3-030-58490-0/ebook). xiii, 352 p. (2020). Full Text: Full Text: Full Text: Full Text: ### Florin-type problem for the parabolic equation with power nonlinearity. (English. Russian original)Zbl 1448.35319 J. Math. Sci., New York 246, No. 3, 429-444 (2020); translation from Neliniĭni Kolyvannya 21, No. 4, 554-566 (2018). Full Text: ### A space-fractional Stefan problem. (English)Zbl 1456.35237 MSC:  35R37 35R11 Full Text: ### On a two-phase Stefan problem with convective boundary condition including a density jump at the free boundary. (English)Zbl 1446.35270 MSC:  35R35 80A22 35C05 Full Text: ### Forward invariance and Wong-Zakai approximation for stochastic moving boundary problems. (English)Zbl 1473.60098 MSC:  60H15 35R60 Full Text: Full Text: Full Text: ### Old problems, classical methods, new solutions. (English)Zbl 1447.91178 MSC:  91G20 60G40 35Q91 Full Text: Full Text: MSC:  65M99 Full Text: Full Text: Full Text: ### Modeling and simulation of keyhole-based welding as multi-domain problem using the extended finite element method. (English)Zbl 1481.74137 MSC:  74F05 74S05 Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: all top 5 all top 5 all top 5 all top 3 all top 3
# How to set up new types for pattern matching strings?

Consider the following toy example: I have a set of language sounds, which I partition into two exclusive subsets, consonants and vowels. I want to set up string patterns for e.g. StringMatchQ that may contain restrictions based on the sound specification. For the sake of simplicity, I used letters instead of sounds, as for now it does not matter.

(* Define domains *)
vowels = {"a", "e", "i", "o", "u"};
consonants = {"b", "c", "d", "f", "g"};

(* Define predicates *)
VowelQ[x_] := MemberQ[vowels, x];
ConsonantQ[x_] := MemberQ[consonants, x];

(* Use predicates in pattern matcher *)
StringReplace["badge",
 Shortest[pre__] ~~ c : __?ConsonantQ ~~ v_?VowelQ ~~ EndOfString :>
  pre <> "-[" <> c <> "]-[" <> v <> "]"]

"ba-[dg]-[e]"

My question is: How to set up domains that can be used like e.g. DigitCharacter in the pattern matcher, that is, how to define Consonant and Vowel in the following (putative) application to yield the same result as the above code?

StringReplace["badge",
 Shortest[pre__] ~~ c : Consonant .. ~~ v : Vowel ~~ EndOfString :>
  pre <> "-[" <> c <> "]-[" <> v <> "]"]

I can define new data structures, like Consonant["c"], that is displayed as "c" but is nevertheless interpreted as Consonant["c"], though I have the feeling that this is not the way to do it.

I suggest using lexical environments, in particular the function makeCustomEnvironment which I posted in this answer:

ClearAll[makeCustomEnvironment];
SetAttributes[makeCustomEnvironment, HoldAll];
makeCustomEnvironment[values : (_Symbol = _) ..] :=
 Function[code, With @@ Hold[{values}, code], HoldAll];

Now, define a custom environment:

env = makeCustomEnvironment[Consonant = _?ConsonantQ, Vowel = _?VowelQ]

And use it:

env@StringReplace["badge",
  Shortest[pre__] ~~ c : Consonant .. ~~ v : Vowel ~~ EndOfString :>
   pre <> "-[" <> c <> "]-[" <> v <> "]"]
(* ==> "ba-[dg]-[e]" *)

Note that the literals Consonant and Vowel don't acquire any global values, and have special meaning only for the code literally present inside the env environment. You can view this solution as a simple example of Mathematica meta-programming.

EDIT

Another alternative (which I like less, but still) is to use UpValues:

ClearAll[Consonant, Vowel]
Consonant /: f_[left___, Consonant, right___] := f[left, _?ConsonantQ, right];
Vowel /: f_[left___, Vowel, right___] := f[left, _?VowelQ, right];

In this case, you can run your code without any wrappers, but I personally would still go for local environments (note that I defined the above pretty carelessly for any f - for example, you won't be able to Clear Consonant or Vowel easily now; if you choose this method, you may want to narrow the set of f-s in the above).

And of course, you can just simply use assignments:

ClearAll[Consonant, Vowel]
Consonant = _?ConsonantQ;
Vowel = _?VowelQ;

but this will likely exclude the use of the symbols Consonant and Vowel in other capacities in some other pieces of your code, since they now have global values. This may also not work if they are used inside functions which hold their arguments, since they are replaced by their values as a result of their evaluation, in this method.
–  István Zachar Mar 6 '12 at 12:02 @Istvan The main drawback is that UpValues create global effects (although localized in certain sense, but still), so they may fire in some unforeseen cases. This will be less of an issue if you put your Consonant and Vowel into some package (namespace). But, you may have some unexpected behavior in your own code, like e.g. above, where you can't easily clear them. If you package this stuff with a lot more, you probably can use env (local environments) internally, hiding them behind some interface function, so the user won't see them (and I'd think this is cleaner than UpValues). –  Leonid Shifrin Mar 6 '12 at 12:12
# The Future Might Not Be So Great post by Jacy · 2022-06-30T13:01:21.617Z · EA · GW · 118 comments This is a link post for https://www.sentienceinstitute.org/blog/the-future-might-not-be-so-great ## Contents Summary Arguments on the Expected Value (EV) of Human Expansion Argument Name Description Arguments for Positive Expected Value (EV) Historical Progress Value Through Intent Value Through Evolution Convergence of Patiency and Agency Reasoned Cooperation Discoverable Moral Reality Arguments for Negative Expected Value (EV) Historical Harms Disvalue Through Intent Disvalue Through Evolution Divergence of Patiency and Agency Threats Arguments that May Increase or Decrease Expected Value (EV) Conceptual Utility Asymmetry Empirical Utility Asymmetry Complexity Asymmetry Procreation Asymmetry EV of the Counterfactual The Nature of Digital Minds, People, and Sentience Life Despite Suffering The Nature of Value Refinement Scaling of Value and Disvalue EV of Human Expansion after Near-Extinction or Other Events The Zero Point of Value Related Work Terminology What Does the EV Need to be to Prioritize Extinction Risks? Time-Sensitivity Biases Future Research on the EV of Human Expansion References None Many thanks for feedback and insight from Kelly Anthis, Tobias Baumann, Jan Brauner, Max Carpendale, Sasha Cooper, Sandro Del Rivo, Michael Dello-Iacovo, Michael Dickens, Anthony DiGiovanni, Marius Hobbhahn, Ali Ladak, Simon Knutsson, Greg Lewis, Kelly McNamara, John Mori, Thomas Moynihan, Caleb Ontiveros, Sean Richardson, Zachary Rudolph, Manny Rutinel, Stefan Schubert, Michael St. Jules, Nell Watson, Peter Wildeford, and Miranda Zhang. This essay is in part an early draft of an upcoming book chapter on the topic, and I will add the citation here when it is available. Our lives are not our own. From womb to tomb, we are bound to others, past and present. And by each crime and every kindness, we birth our future. ⸻ Cloud Atlas (2012) # Summary The prioritization of extinction risk reduction depends on an assumption that the expected value (EV)[1] of human survival and interstellar colonization is highly positive. In the feather-ruffling spirit of EA Criticism and Red Teaming [? · GW], this essay lays out many arguments for a positive EV and a negative EV. This matters because, insofar as the EV is lower than we previously believed, we should shift some longtermist [? · GW] resources away from the current focus on extinction risk reduction. Extinction risks are the most extreme category of population risks, which are risks to the number of individuals in the long-term future. We could shift resources towards the other type of long-term risk, quality risks, which are risks to the moral value of individuals in the long-term future, such as whether they experience suffering or happiness [? · GW].[2] Promising approaches to improve the quality of the long-term future include some forms of AI safety [? · GW], moral circle expansion [? · GW], cooperative [? · GW] game theory [? · GW], digital minds [? · GW], and global priorities [? · GW] research. There may be substantial overlap with extinction risk reduction approaches, but in this case and in general, much more research is needed. I think that the effective altruism (EA) emphasis on existential risk could be replaced by a mindset of existential pragmatism: Rather than ensuring humanity expands its reach throughout the universe, we must ensure that the universe will be better for it. 
I have spoken to many longtermist EAs about this crucial consideration [? · GW], and for most of them, that was their first time explicitly considering the EV of human expansion.[3] My sense is that many more are considering it now, and the community is growing more skeptical of highly positive EV as the correct estimate. I’m eager to hear more people’s thoughts on the all-things-considered estimate of EV, and I discuss the limited work done on this topic to date in the “Related Work” section. In the following table, I lay out the object-level arguments on the EV of human expansion, and the rest of the essay details meta-considerations (e.g., option value). The table also includes the strongest supporting arguments that increase the evidential weight of their corresponding argument and the strongest counterarguments that reduce the weight. The arguments are not mutually exclusive and are merely intended as broad categories that reflect the most common and compelling arguments for at least some people (not necessarily me) on this topic. For example, Historical Progress and Value Through Intent have been intertwined insofar as humans intentionally create progress, so users of this table should be mindful that they do not overcount (e.g., double count) the same evidence. I handle this in my own thinking by splitting an overlapping piece of evidence among its categories in proportion to a rough sense of fit in those categories.[4] In the associated spreadsheet, I list my own subjective evidential weight scores where positive numbers indicate evidence for +EV and negative numbers indicate evidence for -EV. It is helpful to think through these arguments with different assignment and aggregation methods, such as linear or logarithmic scaling. With different methodologies to aggregate my own estimates or those of others, the total estimate is highly negative around 30% of the time, weakly negative 40%, and weakly positive 30%. It is almost never highly positive. I encourage people to make their own estimates, and while I think such quantifications are usually better than intuitive gestalts, all such estimates should be taken with golf balls of salt.[5] This is an atypical structure for an argumentative essay—laying out all the arguments, for and against, instead of laying out arguments for my position and rebutting the objections—but I think that we should detach argumentation from evaluation. I’m not aiming for maximum persuasiveness. Indeed, the thrust of my critique is that EAs have failed to consider these arguments in such a systematic way, either neglecting the assumption entirely or selecting only a handful of the multitude of evidence and reason we have available. Overall, my current thinking (primarily an average of several aggregations of quantified estimates and Aumann updating on others’ views) is that the EV of human expansion is not highly positive. For this and other reasons [EA(p) · GW(p)], I prioritize improving the quality of the long-term future rather than increasing its expected population. # Related Work Some of our successors might live lives and create worlds that, though failing to justify past suffering, would give us all, including some of those who have suffered, reasons to be glad that the Universe exists. ⸻ Derek Parfit (2017) The field of existential risk has intellectual roots as deep as human history in notions of “apocalypse” such as the end of the Mayan calendar. 
Thomas Moynihan (2020) distinguishes apocalypse as having a sense to it or a justification, such as the actions of a supernatural deity, while “extinction” entails “the ending of sense” entirely. This notion of human extinction is traced back only to the Enlightenment beginning in the 1600s, and its most well-known articulation in the 21st century is under the category of existential risks (also known as x-risks), a term coined in 2002 by philosopher Nick Bostrom for risks “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” The most famous essay on existential risk is “Astronomical Waste” (Bostrom 2003), in which Bostrom argues that if humanity could colonize the Virgo supercluster, the massive concentration of galaxies that includes our own Milky Way and 47,000 of its neighbors, then we could sustain approximately 10^38 human beings, an intuitively inconceivably large number. Bostrom argues that the priority of utilitarians should be to reduce existential risk and ensure we seize this cosmic endowment, though the leap from the importance of the long-term future to existential risk reduction is contentious (e.g., Beckstead 2013b [? · GW]). The field of existential risk studies has risen at pace with the growth of effective altruism (EA), with a number of seminal works summarizing and advancing the field (Matheny 2007; Bostrom 2012; Beckstead 2013a; Bostrom 2013; 2014; Tegmark 2017; Russell 2019; Moynihan 2020; Ord 2020; MacAskill forthcoming). Among existential risks, EAs have largely focused on population risks (particularly extinction risks); the term “x-risk,” which canonically refers to existential risk, is often interpreted as extinction risk (see Aird 2020a [EA · GW]). A critical assumption underlying this focus has been that the expected value of humanity’s survival and interstellar colonization is very high.. This assumption largely goes unstated, but it was briefly acknowledged in Beckstead (2013a): Is the expected value of the future negative? Some serious people—including Parfit (2011, Volume 2, chapter 36), Williams (2006), and Schopenhauer (1942)—have wondered whether all of the suffering and injustice in the world outweigh all of the good that we've had. I tend to think that our history has been worth it, that human well-being has increased for centuries, and that the expected value of the future is positive. But this is an open question, and stronger arguments pointing in either direction would be welcome. Christiano (2013) asked, “Why might the future be good?” though, as I understood it, that essay did not mention the possibility of a negative future. I had also implicitly accepted the assumption of a good future until 2014, when I thought through the evidence and decided to prioritize moral circle expansion at the intersection of animal advocacy and longtermism (Anthis 2014). I brought it up on the old EA Forum in Anthis (2016a) [EA · GW], and West (2017) [EA · GW] detailed a version of the “Value Through Intent” argument. I also remember extensive Facebook threads around this time, though I do not have links to share. I finally wrote up my thoughts on the topic in detail in Anthis (2018b) [EA · GW] as part of a prioritization argument for moral circle expansion over decreasing extinction risk through AI alignment, and this essay is a follow-up to and refinement of those ideas. 
Later in 2018, Brauner and Grosse-Holz (2018) [EA · GW] published an EA Forum essay arguing that the expected value of extinction risk reduction is positive. In my opinion, it failed to consider many of the arguments on the topic, as discussed in EA Forum comments and a rebuttal, also on the EA Forum, DiGiovanni (2021) [EA · GW]. There is also a chapter in MacAskill (forthcoming) covering similar ground as Brauner and Grosse-Holz, with similar arguments missing, in my opinion. Overall, these writings primarily focus on three arguments: 1. the “Value Through Intent” or “will” argument [EA(p) · GW(p)], that insofar as humanity exerts its will, we tend to produce value rather than disvalue; 2. the likelihood that factory farming and wild animal suffering, the largest types of suffering today, will persist into the far future; and 3. axiological considerations, particularly the population ethics question of whether creating additional beings with positive welfare is morally good. This has been the main argument against increasing population from some negative utilitarians and other “suffering-focused” EAs, such as the Center on Long-Term Risk (CLR) [? · GW] and Center for Reducing Suffering (CRS) [? · GW], since Tomasik (2006). These are three important considerations, but they cover only a small portion of the total landscape of evidence and reason that we have available for estimating the EV of human expansion. For transparency, I should flag that at least some of the authors would disagree with me about this critique of their work. Overall, I think the arguments against a highly positive EV of human expansion have been the most important blindspot of the EA community to date. This is the only major dissenting opinion I have with the core of the EA memeplex. I would guess over 90% of longtermist EAs with whom I have raised this topic have never considered it before, despite acknowledging during our conversation that the expected value being highly positive is a crucial assumption for prioritizing extinction risk and that it is on shaky ground—if not deciding that it is altogether mistaken. (Of course, this is not meant as a representative sample of all longtermist EAs.) While examining this assumption and deciding that the far future is not highly positive would not completely overhaul longtermist EA priorities, I tentatively think that it would significantly change our focus. In particular, we should shift resources away from extinction risk and towards quality risks, including more global priorities research to better understand this and other crucial considerations. I would be eager for more discussion of this topic, and the sort of evidence I expect to most change my mind is the cooperative game theory research done by CLR [? · GW], the Center on Human-Compatible AI (CHAI) [? · GW], and others in AI safety; the moral circle expansion and digital minds research done by Sentience Institute (SI) [? · GW], Future of Humanity Institute (FHI) [? · GW], and others in longtermism and AI safety; and all sorts of exploration of concrete scenarios similar to The Age of Em (Hanson 2016) and AI takeoff “training stories” (Hubinger 2021) [AF · GW]. I expect fewer updates from more conceptual discourse like the works cited above on the EA Forum and this essay, but I still see them as valuable contributions. See further discussion in the “Future Research on the EV of Human Expansion” subsection below. 
## Terminology I separate the moral value of the long-term future into two factors: population, the number of individuals at each point in time, and quality, the moral value of each individual’s existence at each point in time. The moral value of the long-term future is thus the double sum of quality across individuals across time. Risks to the number of individuals (living sufficiently positive lives) are population risks, and risks to the quality of each individual life are quality risks. Extinction risks are a particular sort of population risk, those that would “annihilate Earth-originating intelligent life,” though I would also include threats towards populations of non-Earth-originating and non-intelligent (and perhaps even non-living) individuals who matter morally, and I get the sense that others have also favored this more inclusive definition. Non-existential population risks could be a permanent halving of the population or a delay of one-third the universe’s remaining lifetime in humanity’s interstellar expansion, though there is no consensus on where exactly the cutoff is between existential and non-existential, though there does seem to be consensus that extinction of humans (with no creation of post-humans, such as whole brain emulations [? · GW]) is existential. Quality risks are risks to the moral value of individuals who may exist in the long-term future. Existential quality risks are those that “permanently and drastically curtail its potential” moral value, such as all individuals being moved from positive to zero or positive to negative value. Non-existential quality risks may include one-tenth of the future population dropping from highly positive to barely positive quality, one-fourth of the future population dropping from barely positive to barely negative quality, and so on. Again, this may be better understood as a spectrum of existentiality, rather than two neatly separated categories, because it is unclear at what point potential is permanently and drastically curtailed. Quality risks include suffering risks (also known as_ s-risks_), “risks of events that bring about suffering in cosmically significant amounts” (Althaus and Gloor 2016; Tomasik 2011), which was noted as “weirdly sidelined” by total utilitarians in Rowe’s (2022) “Critiques of EA that I Want to Read.” [EA · GW] These categories are not meant to coincide with the existential risk taxonomies of Bostrom (2002) (bangs, crunches, shrieks, whimpers) or Bostrom (2013) (human extinction, permanent stagnation, flawed realization, subsequent ruination), in part because those are worded in terms of positive potential rather than an aggregation of positive and negative outcomes. However, one can reasonably view some of those categories (e.g., shrieks and failed realizations) as including some positive, zero, or negative quality trajectories because they have a failed realization of positive potential. Aird (2020b) [EA · GW] has some useful Venn diagrams of the overlaps of some long-term risks. The term “trajectory change” [? · GW] has variously been used as a category that, from my understanding, includes the mitigation or exacerbation of all of the risks above, such as Beckstead’s (2013a) definition of trajectory changes as actions that “slightly or significantly alter the world’s development trajectory.” # What Does the EV Need to be to Prioritize Extinction Risks? Explosive forces, energy, materials, machinery will be available upon a scale which can annihilate whole nations. 
Despotisms and tyrannies will be able to prescribe the lives and even the wishes of their subjects in a manner never known since time began. If to these tremendous and awful powers is added the pitiless sub-human wickedness which we now see embodied in one of the most powerful reigning governments, who shall say that the world itself will not be wrecked, or indeed that it ought not to be wrecked? There are nightmares of the future from which a fortunate collision with some wandering star, reducing the earth to incandescent gas, might be a merciful deliverance. ⸻ Winston Churchill (1931) Under the standard definition of utility, you should take actions with positive expected value (EV), not take actions with negative EV, and it doesn’t matter if you take actions with zero EV. However, prioritization is plausibly much more complicated than this. Is the EV of the action higher than counterfactual actions? Is EV the right approach for imperfect individual decision-makers? Is EV the right approach for a group of people working together? What is the track record for EV decision-making relative to other approaches? Etc. There are many different views that a reasonable person can come to on how best to navigate these conceptual and empirical questions, but I believe that the EV needs to be highly positive to prioritize extinction risks. As I discussed in Anthis (2018b) [EA · GW], I think an intuitive but mistaken argument on this topic is that if we are uncertain about the EV or expect it is close to zero, we should favor reducing extinction risk to preserve option value. Fortunately I have heard this argument much less frequently in recent years, but it is still in a drop-down section of 80,000 Hours’ “The Case for Reducing Existential Risks.” This reasoning seems mistaken for two reasons: First, option value is only good insofar as we have control over the exercising of future options or expect those who have control to exercise it well. In the course of human civilization, even the totality of the EA movement has relatively little control over humanity’s actions—though arguably a lot more than most measures would make it appear due to our strategic approach, particularly targeting high-leverage domains such as advanced AI—and it is unclear that EA will retain even this modest level of control. The argument that option value is good because our descendants will use it well is circular because the case against extinction risk reduction is primarily focused on humanity not using its options well (i.e., humanity not using its options well is both the premise and the conclusion). An argument that relies on the claim that is being contested is very limited. However, we have more control if one thinks extinction timelines are very short and, if one survives, they and their colleagues will have substantial control over humanity’s actions; we also may be optimistic about human action despite being pessimistic about the future if we think nonhuman forces such as aliens and evolution are the decisive drivers of long-term disvalue. Second, continued human existence very plausibly limits option value in similar ways to nonexistence. Whether we are in a time of perils or not, there is no easy “off switch” for which humanity can decide to let itself go extinct, especially with advanced technologies (e.g., spreading out through von Neumann probes). It is not as if we can or should reduce extinction risk in the 2020s then easily raise it in the 2030s based on further global priorities research. 
Still, there is a greater variety of non-extinct than extinct civilizations, so insofar as we want to preserve a wide future of possibilities, that is reason to favor extinction risk reduction. Instead of option value, the more important considerations to me are (i) that we have other promising options with high EV such that extinction risk reduction needs to be more positive than these other options in order to justify prioritization and (ii) that we should have some risk aversion and sandboxing of EV estimates such that we should sometimes treat close-to-zero values as zero. It’s also unclear how to weigh the totality of evidence here, but insofar as it is weak and speculative—as with most questions about the long-term future—one may pull their estimate towards a prior, though it is unclear what that prior should be. If one thinks zero is a particularly common answer in an appropriate reference class, that could be reasonable, but it depends on many factors beyond the scope of this essay. # Time-Sensitivity If we are allocating resources to both population and quality risks, one could argue that we should spend resources on population risks first because the quality of individual lives only matters insofar as those individuals exist. The opposite is true as well: For example, if a quality of zero were locked in for the long-term future, then increasing or decreasing the population would have no moral value or disvalue. Outcomes of exactly zero quality might seem less likely than outcomes of exactly zero population, though this depends on the “EV of the Counterfactual” (e.g., life originating on other planets) and is more contentious for close-to-zero quantities. As with option value, the future depends on the past, so for every year that passes, the future has fewer degrees of freedom. This is most apparent in the development of advanced AI, in which its development may hinge on early-stage choices, such as selecting training regimes that are more likely to lead to its alignment with its designers’ value or selecting those values with which to align the AI (i.e., value lock-in [? · GW]). In general, there are strong arguments for time-sensitivity for both types of trajectory change, especially with advanced technology—also life extension [? · GW] and von Neumann probes in particular. # Biases To our amazement we suddenly exist, after having for countless millennia not existed; in a short while we will again not exist, also for countless millennia. That cannot be right, says the heart. ⸻ Arthur Schopenhauer (1818, translation 2008) We could be biased towards optimism or pessimism. Among the demographics of EA, I think that we should probably be more worried about bias towards optimism. Extreme suffering, as described by Tomasik (2006), is a topic that people are very tempted to ignore, downplay, or rationalize (Cohen 2001). In general, the prospect of future dystopias is uncomfortable and unpleasant to think about. Most of us dread the possibility that our legacy in the universe could be a tragic one, and such a gloomy outlook does not resonate with favored trends of techno-optimism or the heroic notion of saving humanity from extinction. However, the sign of this bias can be flipped, such as in social groups where pessimism and doomsaying is in vogue. 
My experience is that people in EA and longtermism tend to be much more ready to dismiss pessimism and suffering-focused ethics than optimism and happiness-focused ethics, especially based on superficial claims that pessimism is driven by the personal dispositions and biases of its proponents. For a more detailed discussion on biases related to (not) prioritizing suffering, see Vinding (2020). Additionally, given the default approach to longtermism and existential risk is to reduce extinction risk, and there has already been over a decade of focus on that, we should be very concerned about status quo bias [? · GW] and the incentive structure of EA as it is today. This is one reason to encourage self-critique as individuals and as a community, such as with the Criticism and Red-Teaming Contest [? · GW]. That contest is one reason I wrote this essay, though I had already committed to writing a book chapter on this topic before the contest was announced. I think we should focus more on the object-level arguments than on biases, but given how our answer to this question hinges on our intuitive estimates of extremely complicated figures, bias is probably more important than normal. I further discussed the merits of considering bias and listed many possible biases towards both moral circle expansion and reducing extinction risk through AI alignment in Anthis (2018b) [EA · GW]. One conceptual challenge is that a tendency towards pessimism or optimism could either be accounted for as a bias that needs correction or as a fact about the relative magnitudes of value and disvalue. On one hand, we might say that the importance of disvalue in evolution (e.g. the constant danger of one misstep curtailing all future spread of one’s genes) has made us care more about suffering than we should. On the other hand, we might say that it is a fact about how disvalue tends to be more common, subjectively worse, or objectively worse in the universe. # Future Research on the EV of Human Expansion Because most events in the long-term future entail some sort of value or disvalue, most new information from longtermist research provides some evidence on the EV of human expansion. As stated above, I’m particularly excited about cooperative game theory research (e.g., CLR [? · GW], CHAI [? · GW]), moral circle expansion and digital minds research (e.g., SI [? · GW], FHI [? · GW]), and exploration of concrete trajectories (e.g., Hanson 2016; Hubinger 2021 [AF · GW]). I’m relatively less excited (though still excited!), on the margin, by entirely armchair taxonomization and argumentation like that in this essay. That includes research on axiological asymmetries, such as more debate on suffering-focused ethics [? · GW] or population ethics [? · GW], though these can be more useful for other topics and perhaps other people considering this question. My lack of enthusiasm is largely because in the past 8 years of having this view that the EV of human expansion is not highly positive, very little of the new evidence has come from armchair reasoning and argumentation, despite that being more common (although what sort of research is most common depends on where one draws the boundaries because, again, so much research has implications for EV). 
In general, this is such an encompassing, big-picture topic that empirical data is extremely limited relative to scope, and it seems necessary to rely on qualitative intuitions, quantitative intuitions, or back-of-the-envelope calculations a la Dickens’ (2016) “A Complete Quantitative Model for Cause Selection” [EA · GW] or Tarsney’s (2022) “The Epistemic Challenge to Longtermism.” I would like to see a more systematic survey of such intuitions, ideally from 5-30 people who have read through this essay and the “Related Work.” Ideally these would be stated as credible intervals or similar probability distributions, such that we can more easily quantify uncertainty in the overall estimate. As with all topics, I think we should Aumann update on each other’s views, a process in which I split the difference between my belief and someone else’s even if I do not know all the prior and posterior evidence on which they base their view. Of course, this is messy in the real world, for instance because we presumably should account not just for the few people with whom we happen to know their beliefs, but also for our expectations of the many people who also have a belief and even hypothetical people who could have a belief (e.g., unbiased versions of real-world people). It is also unclear whether normative (e.g., moral) views constitute the sort of belief that should be updated in this way, such as between people with fundamentally different value trade-offs between happiness and suffering.[7] There are cooperative reasons [? · GW] to deeply account for others’ views, and one may choose to account for moral uncertainty [? · GW].[6] In general, I would be very interested in a survey that just asks for numbers like those in the table above and allows us to aggregate those beliefs in a variety of ways; a more detailed case for how that aggregation should work is beyond the scope of this essay. If you are persuaded by the arguments that the expected value of human expansion is not highly positive or that we should prioritize the quality of the long-term future, promising approaches include research, field-building, and community-building, such as at the Center on Long-Term Risk [? · GW], Center for Reducing Suffering [? · GW], Future of Humanity Institute [? · GW], Global Catastrophic Risk Institute [? · GW], Legal Priorities Project [? · GW], and Open Philanthropy [? · GW], and Sentience Institute [? · GW], as well as working at other AI safety and EA organizations with an eye towards ensuring that, if we survive, the universe is better for it. Some of this work has substantial room for more funding, and related jobs can be found at these organizations’ websites and on the 80,000 Hours job board. # References Aird, Michael. 2020a. “Clarifying Existential Risks and Existential Catastrophes.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/skPFH8LxGdKQsTkJy/clarifying-existential-risks-and-existential-catastrophes [EA · GW]. ———. 2020b. “Venn Diagrams of Existential, Global, and Suffering Catastrophes.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering [EA · GW]. Alighieri, Dante. 1307. Convivo. https://www.loebclassics.com/view/marcus_tullius_cicero-de_finibus_bonorum_et_malorum/1914/pb_LCL040.41.xml. Althaus, David, and Tobias Baumann. 2020. “Reducing Long-Term Risks from Malevolent Actors.” Effective Altruism Forum. 
Althaus, David, and Lukas Gloor. 2016. “Reducing Risks of Astronomical Suffering: A Neglected Priority.” Center on Long-Term Risk. https://longtermrisk.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/.
Anthis, Jacy Reese. 2014. “How Do We Reliably Impact the Far Future?” The Best We Can. https://web.archive.org/web/20151106103159/http://thebestwecan.org/2014/07/20/how-do-we-reliably-impact-the-far-future/.
———. 2016a. “Some Considerations for Different Ways to Reduce X-Risk.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/NExT987oY5GbYkTiE/some-considerations-for-different-ways-to-reduce-x-risk [EA · GW].
———. 2016b. “Why Animals Matter for Effective Altruism.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/ch5fq73AFn2Q72AMQ/why-animals-matter-for-effective-altruism [EA · GW].
———. 2018a. The End of Animal Farming: How Scientists, Entrepreneurs, and Activists Are Building an Animal-Free Food System. Boston: Beacon Press.
———. 2018b. “Why I Prioritize Moral Circle Expansion Over Artificial Intelligence Alignment.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/BY8gXSpGijypbGitT/why-i-prioritize-moral-circle-expansion-over-artificial [EA · GW].
———. 2018c. “Animals and the Far Future.” EAGxAustralia. https://www.youtube.com/watch?v=NTV81NZSuKw.
———. 2022. “Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness.” In Biologically Inspired Cognitive Architectures 2021, edited by Valentin V. Klimov and David J. Kelley, 1032:20–41. Studies in Computational Intelligence. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-96993-6_3.
Anthis, Jacy Reese, and Eze Paez. 2021. “Moral Circle Expansion: A Promising Strategy to Impact the Far Future.” Futures 130: 102756. https://doi.org/10.1016/j.futures.2021.102756.
Askell, Amanda, Yuntao Bai, Anna Chen, et al. 2021. “A General Language Assistant as a Laboratory for Alignment.” arXiv. https://arxiv.org/abs/2112.00861.
Beckstead, Nick. 2013a. “On the Overwhelming Importance of Shaping the Far Future.” Rutgers University. https://doi.org/10.7282/T35M649T.
Benatar, David. 2006. Better Never to Have Been: The Harm of Coming into Existence. Oxford: Clarendon Press.
Bostrom, Nick. 2002. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards.” Journal of Evolution and Technology 9. https://ora.ox.ac.uk/objects/uuid:827452c3-fcba-41b8-86b0-407293e6617c.
———. 2003. “Astronomical Waste: The Opportunity Cost of Delayed Technological Development.” Utilitas 15 (3): 308–14. https://doi.org/10.1017/S0953820800004076.
———. 2009. “Moral Uncertainty – Towards a Solution?” Overcoming Bias. https://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html.
———. 2012. Global Catastrophic Risks. Repr. Oxford: Oxford University Press.
———. 2013. “Existential Risk Prevention as Global Priority.” Global Policy 4 (1): 15–31. https://doi.org/10.1111/1758-5899.12002.
———. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Bradbury, Ray. 1979. “Beyond 1984: The People Machines.” In Yestermorrow: Obvious Answers to Impossible Futures.
Brauner, Jan M., and Friederike M. Grosse-Holz. 2018. “The Expected Value of Extinction Risk Reduction Is Positive.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/NfkEqssr7qDazTquW/the-expected-value-of-extinction-risk-reduction-is-positive [EA · GW].
Christiano, Paul. 2013. “Why Might the Future Be Good?” Rational Altruist. https://rationalaltruist.com/2013/02/27/why-will-they-be-happy/.
Churchill, Winston. 1931. “Fifty Years Hence.” https://www.nationalchurchillmuseum.org/fifty-years-hence.html.
Cowen, Tyler. 2018. Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals. San Francisco: Stripe Press.
Crootof, Rebecca. 2019. “‘Cyborg Justice’ and the Risk of Technological-Legal Lock-In.” Columbia Law Review Forum 119: 233.
Deutsch, David. 2011. The Beginning of Infinity: Explanations That Transform the World. London: Allen Lane.
Dickens, Michael. 2016. “A Complete Quantitative Model for Cause Selection.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/fogJKYXvqzkr9KCud/a-complete-quantitative-model-for-cause-selection [EA · GW].
DiGiovanni, Anthony. 2021. “A Longtermist Critique of ‘The Expected Value of Extinction Risk Reduction Is Positive.’” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/RkPK8rWigSAybgGPe/a-longtermist-critique-of-the-expected-value-of-extinction-2 [EA · GW].
Gloor, Lukas. 2017. “Tranquilism.” Center on Long-Term Risk. https://longtermrisk.org/tranquilism/.
———. 2018. “Cause Prioritization for Downside-Focused Value Systems.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/225Aq4P4jFPoWBrb5/cause-prioritization-for-downside-focused-value-systems [EA · GW].
Greaves, Hilary, and Will MacAskill. 2017. “A Research Agenda for the Global Priorities Institute.” https://globalprioritiesinstitute.org/wp-content/uploads/GPI-Research-Agenda-December-2017.pdf.
Hanson, Robin. 2016. The Age of Em: Work, Love, and Life When Robots Rule the Earth. First Edition. Oxford: Oxford University Press.
Harris, Jamie. 2019. “How Tractable Is Changing the Course of History?” Sentience Institute. http://www.sentienceinstitute.org/blog/how-tractable-is-changing-the-course-of-history.
Hobbhahn, Marius, Eric Landgrebe, and Beth Barnes. “Reflection Mechanisms as an Alignment Target: A Survey.” LessWrong. https://www.lesswrong.com/posts/XyBWkoaqfnuEyNWXi/reflection-mechanisms-as-an-alignment-target-a-survey-1 [LW · GW].
Hubinger, Evan. 2021. “How Do We Become Confident in the Safety of a Machine Learning System?” AI Alignment Forum. https://www.alignmentforum.org/posts/FDJnZt8Ks2djouQTZ/how-do-we-become-confident-in-the-safety-of-a-machine [AF · GW].
Knutsson, Simon. 2017. “Reply to Shulman’s ‘Are Pain and Pleasure Equally Energy-Efficient?’” http://www.simonknutsson.com/reply-to-shulmans-are-pain-and-pleasure-equally-energy-efficient/.
MacAskill, William. Forthcoming (2022). What We Owe the Future: A Million-Year View. New York: Basic Books.
Matheny, Jason G. 2007. “Reducing the Risk of Human Extinction.” Risk Analysis 27 (5): 1335–44. https://doi.org/10.1111/j.1539-6924.2007.00960.x.
Moynihan, Thomas. 2020. X-Risk: How Humanity Discovered Its Own Extinction. Falmouth: Urbanomic.
Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. New York: Hachette Books.
Parfit, Derek. 2017. On What Matters: Volume Three. Oxford: Oxford University Press.
Pinker, Steven. 2012. The Better Angels of Our Nature: Why Violence Has Declined. New York: Penguin Books.
———. 2018. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. New York: Viking.
Plant, Michael. 2022. “Will Faster Economic Growth Make Us Happier? The Relevance of the Easterlin Paradox to Progress Studies.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/gCDsAj3K5gcZvGgbg/will-faster-economic-growth-make-us-happier-the-relevance-of [EA · GW].
Rowe, Abraham. 2022. “Critiques of EA That I Want to Read.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/n3WwTz4dbktYwNQ2j/critiques-of-ea-that-i-want-to-read [EA · GW].
Russell, Stuart J. 2019. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking.
Schopenhauer, Arthur. 2008 [1818]. The World as Will and Representation. New York: Routledge.
Shulman, Carl. 2012. “Are Pain and Pleasure Equally Energy-Efficient?” Reflective Disequilibrium. http://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html.
Smith, Tom W., Peter Marsden, Michael Hout, and Jibum Kim. 2022. “General Social Surveys, 1972-2022.” National Opinion Research Center. https://www.norc.org/PDFs/COVID Response Tracking Study/Historic Shift in Americans Happiness Amid Pandemic.pdf.
Tarsney, Christian. 2022. “The Epistemic Challenge to Longtermism.” Global Priorities Institute. https://globalprioritiesinstitute.org/wp-content/uploads/Tarsney-Epistemic-Challenge-to-Longtermism.pdf.
Tegmark, Max. 2017. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Alfred A. Knopf.
Tomasik, Brian. 2006. “On the Seriousness of Suffering.” Essays on Reducing Suffering. https://reducing-suffering.org/on-the-seriousness-of-suffering/.
———. 2011. “Risks of Astronomical Future Suffering.” Foundational Research Institute. https://foundational-research.org/risks-of-astronomical-future-suffering/.
———. 2013a. “The Future of Darwinism.” Essays on Reducing Suffering. https://reducing-suffering.org/the-future-of-darwinism/.
———. 2013b. “Values Spreading Is Often More Important than Extinction Risk.” Essays on Reducing Suffering. https://reducing-suffering.org/values-spreading-often-important-extinction-risk/.
———. 2014. “Why the Modesty Argument for Moral Realism Fails.” Essays on Reducing Suffering. https://reducing-suffering.org/why-the-modesty-argument-for-moral-realism-fails/.
———. 2015. “Artificial Intelligence and Its Implications for Future Suffering.” Center on Long-Term Risk. https://longtermrisk.org/artificial-intelligence-and-its-implications-for-future-suffering/.
———. 2017. “Will Future Civilization Eventually Achieve Goal Preservation?” Essays on Reducing Suffering. https://reducing-suffering.org/will-future-civilization-eventually-achieve-goal-preservation/.
Vinding, Magnus. 2020. Suffering-Focused Ethics: Defense and Implications. Ratio Ethica.
West, Ben. 2017. “An Argument for Why the Future May Be Good.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/kNKpyf4WWdKehgvRt/an-argument-for-why-the-future-may-be-good [EA · GW].
Wolf, Clark. 1997. “Person-Affecting Utilitarianism and Population Policy; or, Sissy Jupe’s Theory of Social Choice.” In Contingent Future Persons, eds. Nick Fotion and Jan C. Heller. Dordrecht: Springer. https://doi.org/10.1007/978-94-011-5566-3_9.
Yudkowsky, Eliezer. 2004. “Coherent Extrapolated Volition.” The Singularity Institute. https://intelligence.org/files/CEV.pdf.
———. 2007. “The Hidden Complexity of Wishes.” LessWrong. https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes [LW · GW].
1. ^ For the sake of brevity, while I have my own views of moral value and disvalue, I don’t tie this essay to any particular view (e.g., utilitarianism). For example, it can include subjective goods (valuable for a person) and objective goods (valuable regardless of people), and it can be understood as estimates or direct observation of realist good (stance-independent) or anti-realist good (stance-dependent). Some may also have moral aims aside from maximizing expected “value” per se, at least for certain senses of “expected” and “value.” There is a substantial philosophical literature on such topics that I will not wade into, and I believe such non-value-based arguments can be mapped onto value-based arguments with minimal loss (e.g., not having a duty to make happy people can be mapped onto there being no value in making happy people).
2. ^ Both population risks and quality risks can be existential risks [? · GW]—though longtermist EAs have usually defaulted to a focus on population risks, particularly extinction risks.
3. ^ For the sake of brevity, I analyze human survival and interstellar colonization together under the label “human expansion.” I gloss over possible futures in which humanity survives but does not colonize space [? · GW].
4. ^ For example, the portion of historical progress made through market mechanisms is split among Historical Progress insofar as this is a large historical trend, Value Through Intent insofar as humans intentionally progressed in this way, Value Through Evolution insofar as selection increased the prevalence of these mechanisms, and Reasoned Cooperation insofar as the intentional change was through reasoned cooperation. How is this splitting calculated? I punt to future work, but in general, I mean some sort of causal attribution measure. For example, if I grow an apple tree whose growth is caused by both rain and soil nutrients, then I would assign more causal force to rain if and only if reducing rain by one standard deviation would inhibit growth more than reducing soil nutrients by one standard deviation. Related measures include Shapley values and LIME.
5. ^ I do not provide specific explanations for the weights in the spreadsheet because they are meant as intuitive, subjective estimates of the linear weight of the argument as laid out in the description column. As discussed in the “Future Research on the EV of Human Expansion” subsection, unpacking these weights into probability distributions and back-of-the-envelope estimates is a promising direction for better estimating the EV of human expansion. The evaluations rely on a wide range of empirical, conceptual, and intuitive evidence. These numbers should be taken with many grains of salt, but as the “superforecasting” literature evidences, it can be useful to quantify seemingly hard-to-quantify questions. The weights in this table are meant to be linear, and the linear sum is -7. There are many approaches we could take to aggregating such evidence, reasoning, and intuitions; we could avoid quantification entirely and take the gestalt of these arguments. If the listed weights are instead read as base-2 logarithms of evidential weight (e.g., 0 stays 0, 1 becomes 2^1 = 2, and 10 becomes 2^10 = 1024), reflecting the prior that EA arguments tend to vary in weight by doubling rather than by linear scaling, then the mean is -410. Again, these are just two of the many possible ways to aggregate arguments on this topic.
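As a rough illustration of these two aggregation schemes, here is a minimal Python sketch. The weights below are hypothetical placeholders rather than the actual values from the table, and the treatment of negative weights under the doubling scheme (mirroring the positive case with a negative sign) is my own assumption, not something specified in the essay.

```python
# Minimal sketch of two ways to aggregate signed argument weights.
# The weights here are hypothetical placeholders, not the actual table values.
weights = [3, -2, 1, -4, 2, -3]

# Scheme 1: linear aggregation, where weights simply add.
linear_total = sum(weights)

# Scheme 2: read each listed weight as a base-2 logarithm of evidential weight,
# so 0 stays 0, 1 becomes 2**1 = 2, and 10 becomes 2**10 = 1024. How negative
# weights are handled is an assumption: mirror the positive case with a sign.
def doubled(w):
    if w == 0:
        return 0
    sign = 1 if w > 0 else -1
    return sign * 2 ** abs(w)

log_scale_total = sum(doubled(w) for w in weights)
mean_log_scale = log_scale_total / len(weights)

print(linear_total, log_scale_total, mean_log_scale)
```

Either aggregate could then be normalized or weighted differently; the point is only to make concrete how much the choice of scheme can change the bottom line.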
Also, for methodological clarity at the risk of droning, I assign weights consistently across arguments (e.g., 2 arguments of weight +2 are the same evidential weight as 4 arguments of weight +1), though other assignment methods are reasonable, and again, other divisions of the arguments (i.e., other numbers of rows in the table) are reasonable and would make no difference in my own additive total, though they could change the logarithmic total and some other aggregations.
6. ^ While this is a very contentious view among some in EA, I should note that I’m not persuaded by, and I don’t account for, moral uncertainty [? · GW] because I don’t think a “Discoverable Moral Reality” is plausible, and I doubt I would be persuaded to act in accordance with it if it did exist (e.g., to cause suffering if suffering were stance-independently good)—though it is unclear what it would even mean for a vague, stance-independent phenomenon to exist (Anthis 2022). Moreover, I’m not compelled by arguments to account for any sort of anti-realist moral uncertainty, views which are arguably better not even described as “uncertainty” (e.g., weighting my future self’s morals, such as after a personality-altering brain injury or taking rationality- and intelligence-increasing nootropics; across different moral frameworks, such as in Bostrom’s (2009) “moral parliament”). Of course, I still account for moral cooperation [? · GW] and standard empirical uncertainty [? · GW].
7. ^ There is much more to say about how Aumann’s Agreement Theorem obtains in the real world than what I have room for here. For example, Andrew Critch states [LW · GW] that the “common priors” assumption “seems extremely unrealistic for the real world.” I’m not sure if I disagree with this, but when I describe Aumann updating, I’m not referring to a specific prior-to-posterior Bayesian update; I’m referring to the equal treatment of all the evidence going into my belief with all the evidence going into my interlocutor’s belief. If nothing else, this can be viewed as an aggregation of evidence in which each agent is still left with aggregating their evidence and prior, but I don’t like approaching such questions with a bright line between prior and posterior except in a specific prior-to-posterior Bayesian update (e.g., you believe the sky is blue but then walk outside one day and see it looks red; how should this change your belief?).

comment by John G. Halstead (Halstead) · 2022-07-01T08:34:44.790Z · EA(p) · GW(p)
My impression was that due to multiple accusations of sexual harassment, Jacy Reese Anthis was stepping back from the community. When and why did this stop? He was expelled from Brown University in 2012 for sexual harassment (as discussed here). And he admitted to several instances of sexual harassment (as discussed here [EA · GW]). He also lied on his website about being a founder of effective altruism.

comment by Julia_Wise · 2022-07-04T19:59:29.934Z · EA(p) · GW(p)
Some notes from CEA:
• Several people have asked me recently whether Jacy is allowed to post on the Forum. He was never banned from the Forum, although CEA told him he would not be allowed in certain CEA-supported events and spaces.
• Three years ago, CEA thought a lot about how to cut ties with a person while not totally losing the positive impact they can have. Our take was that it’s still good to be able to read and benefit from someone’s research, even if not interacting with them in other ways.
• Someone's presence on the Forum or in most community spaces doesn’t mean they’ve been particularly vetted.
• This kind of situation is especially difficult when the full information can’t be public. I’ve heard both from people worried that EA spaces are too unwilling to ban people who make the culture worse, and from people worried that EA spaces are too willing to ban people without good enough reasons or evidence. These are both important concerns.
• We’re trying to balance fairness, safety, transparency, and practical considerations. We won’t always get that balance right. You can always pass on feedback to me at [email protected], to my manager Nicole at [email protected], or via our anonymous contact form.
Replies from: MichaelStJules, Guy Raveh
comment by MichaelStJules · 2022-07-04T20:23:08.032Z · EA(p) · GW(p)
Is there more information you can share without risking the anonymity of the complainants or victims? E.g.,
1. How many complainants/witnesses were there?
2. How many separate concerning instances were there?
3. Did all of the complaints concern behaviour through text/messaging (or calls), or were some in person, too?
4. Was the issue that he made inappropriate initial advances, or that he continued to make advances after the individuals showed no interest in the initial advance? Both? Or something else?
Replies from: Julia_Wise
comment by Julia_Wise · 2022-07-05T13:47:05.482Z · EA(p) · GW(p)
I can understand why people want more info. Jacy and I agreed three years ago what each of us would say publicly about this, and I think it would be difficult and not particularly helpful to revisit the specifics now. If anyone is making a decision where more info would be helpful, for example you’re deciding whether to have him at an event or you’re running a community space and want to think about good policies in general, please feel free to contact me and I’ll do what I can to help you make a good decision.
Replies from: anonymous_ea, Guy Raveh
comment by anonymous_ea · 2022-07-05T16:48:09.065Z · EA(p) · GW(p)
For convenience, this is CEA's statement from three years ago [EA · GW]:
> We approached Jacy about our concerns about his behavior after receiving reports from several parties about concerns over several time periods, and we discussed this public statement with him. We have not been able to discuss details of most of these concerns in order to protect the confidentiality of the people who raised them, but we find the reports credible and concerning. It’s very important to CEA that EA be a community where people are treated with fairness and respect. If you’ve experienced problems in the EA community, we want to help. Julia Wise serves as a contact person [EA · GW] for the community, and you can always bring concerns to her confidentially.
By my reading, the information about the reports contained in this is:
• CEA received reports from several parties about concerns over Jacy's behavior over several time periods
• CEA found the reports 'credible and concerning'
• CEA cannot discuss details of most of these concerns because the people who raised them want to protect their confidentiality
• It also implies that Jacy did not treat people with fairness and respect in the reported incidents
• 'It’s very important to CEA that EA be a community where people are treated with fairness and respect' - why say this unless it's applicable to this case?
Julia also said [EA(p) · GW(p)] in a comment at the time that the reports were from members of the animal advocacy and EA communities, and CEA decided to approach Jacy primarily because of these rather than the Brown case:
> The accusation of sexual misconduct at Brown is one of the things that worried us at CEA. But we approached Jacy primarily out of concern about other more recent reports from members of the animal advocacy and EA communities.
comment by Guy Raveh · 2022-07-05T18:00:05.120Z · EA(p) · GW(p)
Thanks for engaging in this discussion, Julia. I'm writing replies that are a bit harsh, but I recognize that I'm likely missing some information about these things, which may even be public and I just don't know where to look for it yet.
> Jacy and I agreed three years ago what each of us would say publicly about this, and I think it would be difficult and not particularly helpful to revisit the specifics now.
However, this sounds... not good, as if the decision on current action is based on Jacy's interests and on honoring a deal with him. I could think of a few possible good reasons why more information would be bad, e.g. that the victims prefer nothing more is said, or that it would harm CEA's ability to act in future cases. But readers can only speculate on what the real reason is and whether they agree with it. Both here and regarding what I asked in my other comment [EA(p) · GW(p)], the reasoning is very opaque. This is a problem, because it means there's no way to scrutinize the decisions, or to know what to expect from the current situation. This is not only important for community organizers, but also for ordinary members of the community. For example, it's not clear to me if CEA has relevant written-out policies regarding this, and what they are. Or who can check if they're followed, and how.
Replies from: Khorton
comment by Kirsten (Khorton) · 2022-07-05T19:01:32.685Z · EA(p) · GW(p)
I would expect CEA's trustees to be scrutinizing how decisions like this are made.
Replies from: Guy Raveh
comment by Guy Raveh · 2022-07-05T19:39:15.056Z · EA(p) · GW(p)
I have a general objection to this, but I want to avoid getting entirely off topic. So I'll just say, this seems to me to only shift the problem further away from the people affected.
comment by Guy Raveh · 2022-07-04T21:45:09.674Z · EA(p) · GW(p)
> Three years ago, CEA thought a lot about how to cut ties with a person while not totally losing the positive impact they can have. Our take was that it’s still good to be able to read and benefit from someone’s research, even if not interacting with them in other ways.
For example, does being able to read their research have to mean giving them a stage that will help them get a higher status in the community? How did you balance the possible positive impact of that person with the negative impact that having him around might have on his victims (or their work, or on whether they even then choose to leave the forum themselves)?
comment by Kirsten (Khorton) · 2022-07-01T10:05:10.974Z · EA(p) · GW(p)
I've also been surprised to see Jacy engaging publicly with the EA community again recently, without any public communication about what's changed.
comment by Lizka · 2022-07-04T20:02:57.953Z · EA(p) · GW(p)
A comment from the moderation team: This topic is extremely difficult to discuss publicly in a productive way. First, a lot of information isn’t available to everyone — and can’t be made available — so there’s a lot of guesswork involved.
Second, there are a number of reasons to be very careful; we want community spaces to be safe for everyone, and we want to make sure that issues with safety can be brought up, but we also require a high level of civility on this Forum. We ask you to keep this in mind if you decide to contribute to this thread. If you’re not sure that you will contribute something useful, you might want to refrain from engaging. Also, please note that you can get in touch with the Community Health team at CEA if you’d like to bring up a specific concern in a less public way.
comment by BrownHairedEevee (evelynciara) · 2022-07-03T05:53:05.546Z · EA(p) · GW(p)
I downvoted this comment. While I think this discussion is important to have, I do not think that a post about longtermism should be turned into a referendum on Jacy's conduct. I think it would be better to have this discussion on a separate post or the open thread [EA · GW].
comment by Jeff Kaufman (Jeff_Kaufman) · 2022-07-03T11:58:12.354Z · EA(p) · GW(p)
We don't have any centralized or formal way of kicking people out of EA. Instead, the closest we have, in cases where someone has done things that are especially egregious, is making sure that everyone who interacts with them is aware. Summarizing the situation in the comments here, on Jacy's first EA forum post in 3 years (Apology, 2019-03 [EA · GW]), accomplishes that much more than posting in the open thread. This is a threaded discussion, so other aspects of the post are still open to anyone interested. Personally, I don't think Jacy should be in the EA movement and won't be engaging in any of the threads below.
Replies from: [email protected]
comment by Fai ([email protected]) · 2022-07-03T16:37:23.272Z · EA(p) · GW(p)
But what about the impact on the topic itself? Having the discussion heavily directed to a largely irrelevant topic, and affecting its down/upvoting situation, doesn't do the original topic justice. And this topic could potentially be very important for the long-term future.
Replies from: Jeff_Kaufman, Guy Raveh
comment by Jeff Kaufman (Jeff_Kaufman) · 2022-07-03T17:06:10.526Z · EA(p) · GW(p)
I think that's a strong reason for people other than Jacy to work on this topic.
Replies from: [email protected]
comment by Fai ([email protected]) · 2022-07-03T17:16:16.383Z · EA(p) · GW(p)
> I think that's a strong reason for people other than Jacy to work on this topic.
Watching the dynamic here I suspect this might likely be true. But I would still like to point out that there should be a norm about how these situations should be handled. This likely won't be the last EA forum post that goes this way. To be honest I am deeply disappointed and very worried that this post has gone this way. I admit that I might be feeling so because I am very sympathetic to the key views described in this post. But I think one might be able to imagine how they would feel if certain monumental posts that are crucial to the causes/worldviews they care dearly about went this way.
comment by Guy Raveh · 2022-07-03T17:15:56.434Z · EA(p) · GW(p)
> Having the discussion heavily directed to a largely irrelevant topic
I think this topic is more relevant than the original one. Ideas, however important to the long-term future, can surface more than once. The stability of the community is also important for the long-term future, but it's probably easier to break it than to bury an idea.
> affecting its down/upvoting situation
I haven't voted on the post either way despite agreeing that the writer should probably not be here.
I don't know about anyone else, but I suspect the average person here is even less prone than me to downvote for reasons unrelated to content.
Replies from: [email protected]
comment by Fai ([email protected]) · 2022-07-03T20:15:09.894Z · EA(p) · GW(p)
> I think this topic is more relevant than the original one.
Relevant with respect to what? For me, the most sensible standard to use here seems to be "whether it is relevant to the original topic of the post (the thesis being brought up, or its antithesis)". Yes, the topic of personal behavior is relevant to EA's stability and therefore how much good we can do, or even the long-term future. But considering that there are other ways of letting people know what is being communicated here, such as starting a new post, I don't think we should use this criterion of relevance.
> Ideas, however important to the long-term future, can surface more than once.
That's true, logically speaking. But that's also logically true for EA/EA-like communities. In other words, it's always "possible" that if this EA breaks, there "could be" another similar one that will be formed again. But I am guessing not many people would like to take the bet based on the "come again argument". Then what is our reason for being willing to take a similar bet with this potentially important - I believe crucial - topic (or just any topic)? And again, the fact that there are other ways to bring up the topic of personal behavior makes it even less reasonable to use this argument as a justification here. In other words, there seem to be way better alternatives to "reduce X-risk to EA" than commenting patterns like what's happening here, which might risk "forcing a topic away from the surface". And we cannot say that if something "can surface more than once", then we should expect it to also "surface before it is too late", or "surface with the same influence". Timing matters, and so do the "comment sections" of all historical discussions on a topic. There are also some even more "down-to-earth" issues, such as future writers on this topic experiencing difficulties of many sorts. For example, seeing that this post went this way, should the writer of the next similar post (TBH, I have long thought of writing a similar post to this) just pretend that this post doesn't exist? This seems to be bad intellectual practice. But if they do cite this post, readers will see the comment section here, and one might worry that readers will be affected. More specifically, what if Jacy got this post exactly spot on? Should people who hold exactly the same view just pretend this post doesn't exist and post almost exactly the same thing?
> I haven't voted on the post either way despite agreeing that the writer should probably not be here.
I am glad you tried to be fair to the topic. But I'd just like to point out that "not voting either way" isn't absolute proof that you haven't been affected - you could have voted positively if not for the extra discussion.
> I don't know about anyone else, but I suspect the average person here is even less prone than me to downvote for reasons unrelated to content.
I have to say I am much more pessimistic than you on this. I think it's psychologically quite natural that with such comments in the comment section, one might find it hard to concentrate through such a long piece, especially if one takes a stance against the writer's behavior.
I am mindful of the fact that I am contributing to what I am suspecting to be bad practice here, so I am not going to comment on this direction further than this.
Replies from: Guy Raveh
comment by Guy Raveh · 2022-07-03T20:59:18.560Z · EA(p) · GW(p)
Thanks for the detailed reply. I think you raised good points and I'll only comment on some of them. Mainly, I think raising the issue somewhere else wouldn't be nearly as effective, both in terms of directly engaging Jacy and of making his readers aware.
> I am glad you tried to be fair to the topic. But I'd just like to point out that "not voting either way" isn't absolute proof that you haven't been affected - you could have voted positively if not for the extra discussion.
I noticed the post well before John made his comment. I didn't read it thoroughly or vote then, so I haven't changed my decision - but yes, I guess I'd be very reluctant to upvote now. So my analysis of myself wasn't entirely right.
> I am mindful of the fact that I am contributing to what I am suspecting to be bad practice here, so I am not going to comment on this direction further than this.
Hmm. Should I have not replied then? ... I considered it, but eventually decided some parts of the reply were important enough.
comment by John G. Halstead (Halstead) · 2022-07-03T13:44:02.514Z · EA(p) · GW(p)
I think it is a good place to have the discussion. Apparently someone who has been the subject of numerous sexual harassment allegations throughout his life is turning up at EA events again. This is very concerning.
Replies from: [email protected]
comment by Fai ([email protected]) · 2022-07-03T20:22:58.311Z · EA(p) · GW(p)
But wouldn't a new post on this topic serve the same purpose of expressing and discussing this concern, without having the effect of affecting this topic?
comment by DonyChristie · 2022-07-01T22:40:26.927Z · EA(p) · GW(p)
I recommend a mediator be hired to work with Jacy and whichever stakeholders are relevant (speaking broadly). This will be more productive than a he-said she-said forum discussion that is very emotionally toxic for many bystanders.
comment by Guy Raveh · 2022-07-02T16:46:21.734Z · EA(p) · GW(p)
Who do you think the relevant stakeholders are? It seems to me that "having a safe community" is something that's relevant to the entire community. I don't think long, toxic argument threads are necessary as a decision seems to have been made 3 years ago. The only question is what's changed. So I'm hoping we see some comment from CEA staff on the matter.
comment by John G. Halstead (Halstead) · 2022-07-03T13:37:58.347Z · EA(p) · GW(p)
I imagine Jacy turning up to EA events is more toxic for the women that Jacy has harassed and for the women he might harass in the future. There is no indication that he has learned his lesson. He is totally incapable of taking moral responsibility for anything. This is not he-said she-said. I have only stated known facts so far and I am surprised to see people dispute them. The guy has been kicked out of university for sexual misconduct and banned from EA events for sexual misconduct. He should not be welcome in the community.
Replies from: Davidmanheim
comment by Davidmanheim · 2022-07-04T14:33:52.357Z · EA(p) · GW(p)
I'm confused that you seem to claim strong evidence on the basis of a variety of things that seem like weak evidence to me.
While I am sure details should not be provided, can you clarify whether you have non-public information about what happened post 2016 that contradicts what Kelly and Jacy have said publicly about it?
comment by Guy Raveh · 2022-07-01T10:26:42.143Z · EA(p) · GW(p)
Thanks for writing this. As everyone here knows, there has been an influx of people into EA and the forum in the last couple years, and it seems probable that most of the people here (including me) wouldn't have known about this if not for this reminder.
Replies from: Yitz
comment by Yitz · 2022-07-05T23:19:14.935Z · EA(p) · GW(p)
I was personally unaware of the situation until reading this comment thread, so can confirm.
comment by John G. Halstead (Halstead) · 2022-07-01T09:57:37.212Z · EA(p) · GW(p)
Jacy Reese claims that the allegations discussed in the Forum post centre on 'clumsy online flirting'. We don't really know what the allegations are, but CEA:
• Severed ties with the Sentience Institute
• Stopped being their fiscal sponsor
• Banned Jacy from all of their events
• Made him write an apology post
We have zero reason to believe Jacy about the substance of the allegations, given his documented history of lying and incentives to lie in the case.
Replies from: Harrison D
comment by Harrison Durland (Harrison D) · 2022-07-01T15:21:47.277Z · EA(p) · GW(p)
I don’t think (or, you have not convinced me that) it’s appropriate to use CEA’s actions as strong evidence against Jacy. There are many obvious pragmatic justifications to do so that are only slightly related to the factual basis of the allegations—i.e., even if the allegations are unsubstantiated, the safest option for a large organization like CEA would be to cut ties with him regardless. Furthermore, saying someone has “incentives to lie” about their own defense also feels inappropriate (with some exceptions/caveats), since that basically applies to almost every situation where someone has been accused. The main thing that you mentioned which seems relevant is his “documented history of lying,” which (I say this in a neutral rather than accusatory way) I haven’t yet seen documentation of. Ultimately, these accusations are concerning, but I’m also quite concerned by the idea of throwing around seemingly dubious arguments in service of vilifying someone.
comment by John G. Halstead (Halstead) · 2022-07-01T16:16:45.655Z · EA(p) · GW(p)
It is bizarre to say that the aforementioned evidence is not strong evidence against Jacy. He was thrown out of university for sexual misconduct. CEA then completely disassociated itself from him because of sexual misconduct several years later. Multiple people at multiple different times in his life have accused him of sexual misconduct. I think we are agreed that he has incentives to lie. He has also shown that he is a liar.
comment by John G. Halstead (Halstead) · 2022-07-01T16:12:56.091Z · EA(p) · GW(p)
On his history of lying: https://nonprofitchronicles.com/2019/04/02/the-peculiar-metoo-story-of-animal-activist-jacy-reese/
Replies from: Harrison D
comment by Harrison Durland (Harrison D) · 2022-07-01T17:14:27.122Z · EA(p) · GW(p)
Please provide specific quotes; I spent a few minutes reading the first part of that without seeing what you were referring to.
Replies from: Harrison D
comment by Harrison Durland (Harrison D) · 2022-07-01T17:25:30.075Z · EA(p) · GW(p)
If you’re referring to the same point about his claim to be a cofounder, I did just see that.
However, unless I see some additional and/or more-egregious quotes from Jacy, I have a fairly negative evaluation of your accusation. Perhaps his claim was a bit exaggerative combined with being easily misinterpreted, but it seems he has walked it back. Ultimately, this really does not qualify in my mind as “a history of lying.”
comment by John G. Halstead (Halstead) · 2022-07-03T13:42:58.315Z · EA(p) · GW(p)
You could also read the entirety of the research he produced for ACE, which it would be fair to describe as 'comprised entirely of bullshit'. To stress, it is completely ludicrous for him to claim that he is a co-founder of effective altruism, unless he interpreted the claim to be true of, like, Sasha Cooper or Pablo Stafforini. They would never say that they are founders of effective altruism because it is not true and they are not sociopaths (like Jacy is).
Replies from: Lizka
comment by Lizka · 2022-07-04T20:04:30.116Z · EA(p) · GW(p)
comment by John G. Halstead (Halstead) · 2022-07-03T13:39:16.919Z · EA(p) · GW(p)
I'm not vilifying the guy. His actions have done that for him and I have just described his actions.
comment by John G. Halstead (Halstead) · 2022-07-01T16:13:37.128Z · EA(p) · GW(p)
On his history of lying: https://nonprofitchronicles.com/2019/04/02/the-peculiar-metoo-story-of-animal-activist-jacy-reese/
comment by sapphire (deluks917) · 2022-07-04T20:06:54.245Z · EA(p) · GW(p)
In most cases where I am actually familiar with the facts, CEA has behaved very poorly. They have both been way too harsh on good actors and failed to take sufficient action against bad actors (ex Kathy Forth). They did handle some very obvious cases reasonably though (Diego). I don't claim I would do a way better job but I don't trust CEA to make these judgments.
comment by Timothy Chan · 2022-07-01T15:57:39.425Z · EA(p) · GW(p)
Could you
1. Quote where in the linked text or elsewhere 'he admitted to several instances of sexual harassment'?
2. As someone asked in another comment, 'provide links or specific quotes regarding his claim of being a founder of EA?'
comment by John G. Halstead (Halstead) · 2022-07-01T16:10:57.370Z · EA(p) · GW(p)
1 - CEA says that the complaints relate to inappropriate behaviour in the sexual realm which they found 'credible and concerning' and which he pretends to apologise for in the apology post, presumably to avoid a legal battle
2 - https://nonprofitchronicles.com/2019/04/02/the-peculiar-metoo-story-of-animal-activist-jacy-reese/
comment by Timothy Chan · 2022-07-01T16:39:18.980Z · EA(p) · GW(p)
> 1 - CEA says that the complaints relate to inappropriate behaviour in the sexual realm which they found 'credible and concerning' and which he pretends to apologise for in the apology post, presumably to avoid a legal battle
I still don't see where 'he admitted to several instances of sexual harassment', as you've claimed.
comment by John G. Halstead (Halstead) · 2022-07-01T16:56:03.226Z · EA(p) · GW(p)
The post is called 'apology'; an apology usually means you are admitting to wrongdoing. In this case, the wrongdoing was relating to sexual conduct. What do you think it was an apology for?
Replies from: Timothy Chan
comment by Timothy Chan · 2022-07-01T22:04:07.811Z · EA(p) · GW(p)
Then I think it'd be more accurate if you write 'he admitted to several instances of what I consider to be sexual harassment'. At the moment, your claim that 'he admitted to several instances of sexual harassment' seems very misleading.
You haven't provided evidence that supports the claim that he confessed to committing such crimes. EDIT: I'm approaching this issue with much less lived experience than some of the other commenters here. There appear to be more individuals than just John who are confident in the allegations, so perhaps 'what [John considers] to be sexual harassment' is not enough, and instead 'what [X, Y and so on...] consider to be sexual harassment' is better. (From what I can tell, the apology [EA · GW] post also features some comments that push back on that confidence, to varying extents, and it may be worth mentioning that too. I'm not following this issue extensively and I don't know if there have been any updates since that post.) I still think John's comment, as it stands ('he admitted to several instances of sexual harassment'), is misleading and harmful to community norms. I think people should point out bad epistemics despite possible social pressures to do otherwise.
Replies from: JamesOz
comment by James Ozden (JamesOz) · 2022-07-02T10:58:46.299Z · EA(p) · GW(p)
> Then I think it'd be more accurate if you write 'he admitted to several instances of what I consider to be sexual harassment'.
I'm slightly confused about this. Do you believe that Jacy did commit several instances of sexual harassment but he just hasn't admitted it? Or do you not believe Jacy has committed sexual harassment at all?
If the latter: These instances aren't what John considers sexual harassment; they're what several women (over 5 at least from my reading of the Apology post [EA · GW]) consider to be sexual harassment. If this reasonably large number of women didn't think it was sexual harassment, they wouldn't have complained to CEA or others within the community. Therefore I think we can be somewhat confident that Jacy has made sexual advances that a non-negligible number of women consider to be sexual harassment. Subsequently, as John stated, he made an apology post saying sorry for these instances of sexual harassment (of course he would never put it like that, but just because you don't specifically say "sorry for sexual harassment" it doesn't mean it doesn't happen). Basically, we have several independent pieces of evidence of Jacy being involved in sexual harassment (reports from 5+ women, being distanced from CEA, being expelled from Brown University) with the only piece of evidence pointing against this being Jacy's own comments, which are of course biased. Given this, I think a claim that Jacy hasn't been involved in sexual harassment seems wrong.
If the former: I think this is quite pedantic and it's irrelevant whether Jacy admits to the bad behaviour, if we have enough evidence to be confident it happened.
Replies from: Timothy Chan
comment by Timothy Chan · 2022-07-02T14:35:34.912Z · EA(p) · GW(p)
Honestly, I'm new - I only just became aware of all this. I think I haven't had enough time to make a judgment myself. But what I do know is that John's initial claim that Jacy 'admitted to several instances of sexual harassment' seemed misleading, and I decided to point that out because there was a lack of people who did so, which seems harmful to community norms.
comment by James Ozden (JamesOz) · 2022-07-03T19:48:24.447Z · EA(p) · GW(p)
First, thanks for being honest and saying you're not particularly well-informed on this; that definitely helped me approach your comment with less judgement. I've also seen that you edited your initial comment, so thanks for that.
I do, however, still have some concerns about the comments you made. I agree that the claim "Jacy admitted several instances of sexual harassment" isn't very easily verifiable (e.g. the discussion on what the "apology" is for). However, I think that this is largely irrelevant and begins a semantic discussion that is totally missing the original point, and generally missing the forest for the trees. The main point John was making is that Jacy has been accused and punished (maybe not the right word?) for several instances of sexual harassment over his career. In my opinion it is almost totally irrelevant whether Jacy himself admits this, as I (and I think many others) think there is very reasonable evidence to believe these instances of sexual harassment happened. Launching into a semantics discussion about whether Jacy admitted it seems to detract from the key point in unhelpful ways, although I agree that John's comment might have been better if he had totally excluded the line "Jacy admitted several instances of sexual harassment". Again, I agree there are some epistemic benefits to calling out statements that don't seem correct, but I think there are also some large downsides to the way you did this in this instance. [edited last sentence for clarity]
My issue with your suggested rephrasing,
> Then I think it'd be more accurate if you write 'he admitted to several instances of what I consider to be sexual harassment'.
is that it brings about an element of questioning of exactly how much Jacy's acts constituted 'sexual harassment'. Women are often accused of making things up, exaggerating claims or otherwise reporting "locker room banter" or "harmless jokes" as sexual harassment, and I felt your comments were adding to this. This feels particularly worrying within the EA movement, which is already only 29% female [EA · GW], as it could show women that EA spaces are not safe for women due to lack of care around sexual harassment issues. For context, I personally feel strongly about this as I've heard from several close women friends of mine who have attended EA events or otherwise met male EAs who have been misogynistic towards them, in ways that have deterred them from becoming more involved in EA. In short, I think EA spaces are already challenging for women to feel comfortable in, without us making comments that seem to trivialise issues of sexual harassment. I think the fact that you also said "I think I haven't had enough time to make a judgment myself." adds to this. I don't think it requires a huge amount of effort to update towards 'Jacy very likely committed instances of sexual harassment' based on several independent reports of sexual harassment, expulsion from university, his apology, etc. To not update towards this after even a short consideration again implies to me that you're doubting whether true sexual harassment even occurred (e.g. ignoring reports from several (over 5?) women for comments by 1 man), which would add to the notion of EA spaces not being safe for women. Sorry for the slight rant but these are issues that my friends have been affected by within EA spaces, and something I feel strongly about.
Replies from: Khorton, Timothy Chan
comment by Kirsten (Khorton) · 2022-07-03T20:04:01.800Z · EA(p) · GW(p)
I agree with this comment. I find the implication that Jacy's views deserve equal or greater weight than the testimony of multiple women troubling.
Replies from: JamesOz
comment by James Ozden (JamesOz) · 2022-07-03T22:57:08.401Z · EA(p) · GW(p)
I'm confused why this is getting downvoted, can someone explain?
comment by Timothy Chan · 2022-07-04T19:09:02.824Z · EA(p) · GW(p)
Thank you for being more charitable after reading my comment, and for your effort in a detailed response.
> Again, I agree there are some epistemic benefits to calling out statements that don't seem correct, but I think there are also some large downsides to this in this case.
I think I still prefer to challenge a claim that quite blatantly (but probably unintentionally) misleads people into thinking that someone confessed to committing a crime, a claim placed in the highest upvoted comment on a post receiving a lot of attention. I think we should be suspicious of thinking ‘let bad arguments persist because criticizing them would be bad’. I lean towards disagreeing with your claim that it’s net negative overall to point out that inaccuracy, but caveat that I’m not certain of how confident I should be in that position. One reason I think there are positives is that there are indeed cases in which allegations don’t hold up, and innocent people get hurt (note I’m not saying that this necessarily applies to this case, and from what I can tell it seems to constitute a low percentage of cases). It makes sense to consider the interests of those accused but innocent, in addition to the interests of sexual harassment victims and potential victims. I think ensuring we aren’t overzealous requires us to uphold certain norms, even when it’s challenging to do so socially. For context, I’m not in the Anglosphere at the moment - but I do see some trends there involving strong emotions and accompanying criticisms that do worry me, and I don’t think this community should be overly concerned with potential criticism so as to not speak up to uphold those norms. I had to make several comments following up on the misleading statement because John didn’t deliver on the statement, nor take note and rephrase his writing to be less misleading. Unfortunately, he still hasn’t done so.
> is that it brings about an element of questioning
On how I’ve phrased a possible rephrasing (and the updated possible rephrasing in the edited part of the comment) of John’s statement, to reduce the misleadingness, I wasn’t as aware as you were of your concerns and didn’t know it had risks of making people feel questioned/not taken seriously when I wrote that. Your concerns make sense and I’ll keep them in mind. But I also haven’t made up my mind on the extent to which it’s important to be mindful of how I should present what I consider truthful statements (i.e. we are the ones deciding on what to make of the available evidence - so we are in fact the ones who 'consider' whether it constitutes sexual harassment) - in order to reduce the risk of such feelings.
> I think the fact that you also said "I think I haven't had enough time to make a judgment myself." adds to this... To not update towards this after even a short consideration...
I think we have different understandings of the term ‘judgment’ here. In this quite serious context (which sometimes involves the law), I take ‘judgment’ to mean much more (as in 'pass judgment') than updating beliefs. I didn’t say that the evidence didn’t update my views (actually I think it would be absurd if it didn’t), nor did I imply that the views of one accused ‘deserve equal or greater weight’ than the testimony of multiple accusers (as Khorton wrote).
That multiple people have made complaints should indeed update us towards thinking that sexual harassment happened. But again, I take ‘judgment’ to mean much more than ‘updating’. When I said that “I think I haven't had enough time to make a judgment myself”, I meant there wasn’t enough time to make a solid conclusion about these especially troubling allegations (edit: time isn't the only thing you need - it also depends on whether there's sufficient information to analyze). This might not be the approach some people take, but there are huge personal costs at stake for the parties involved, and I don’t want to condemn anyone so quickly. Also, realize that I wrote that “I think I haven't had enough time to make a judgment myself” within one day of learning of the allegations. I think it's reasonable to be cautious of confidence. Unfortunately, I won’t be able to comment much more. I’m a slow writer and I’m exhausted from having to follow up so much. I only wanted to make that point about John’s comment and get him to follow better practices - but that’s been unsuccessful. I hope our future interactions could be under better circumstances.
Replies from: JamesOz
comment by James Ozden (JamesOz) · 2022-07-05T14:30:07.628Z · EA(p) · GW(p)
Thanks for the reply, Timothy, and I totally appreciate you choosing to not engage again as this can be quite time and energy consuming. There's one thing I wasn't clear enough about in my original comment, which I've now edited, and which might mean we're not as misaligned as one might think! Namely, I didn't say (or even necessarily think) that your comment on the truthfulness of John's claim was net negative, as you suggest. I've edited the original comment, but in practice what I meant was: I think there are better ways of doing so, without questioning the sexual harassment claims actually made by the women affected in these incidents. So overall I agree it's important to point out claims that are untruthful, but I also think you did this in a way that a) cast doubt on the actual sexual harassment, which IMO seems very likely so it is insensitive to suggest otherwise, and b) is damaging to the EA community as a safe place for women. For reference, this is what I updated my sentence to in the previous comment:
> Again, I agree there are some epistemic benefits to calling out statements that don't seem correct, but I think there are also some large downsides to the way you did this in this instance. [edited last sentence only]
comment by John G. Halstead (Halstead) · 2022-07-03T14:39:15.011Z · EA(p) · GW(p)
How would you have described, in plain English, the apology post? I think it is important to read between the lines of his apology post. Having received numerous complaints, CEA made Jacy write an apology post. He claims not to know what the complaints are about, but tries to give the impression that it is because of saying stuff like "hey cutie" to people. As mentioned below, there must be an awful lot of inept flirting in the community given the social awkwardness of EAs and the gender skew of the community. Despite that, to my knowledge Jacy is the only person who has ever been banned from all EA events for sexual misconduct. This suggests that the allegations are probably worse than Jacy suggests. Note that CEA cannot reveal the nature of the accusations in order to protect the identity of the complainants. We then also learn that he was thrown out of university for sexual misconduct in 2012. This was before the start of MeToo.
Someone at Brown at this time told me that no-one was expelled by Brown for sexual misconduct during the whole time they were there. This suggests that the allegations were bad.
comment by John G. Halstead (Halstead) · 2022-07-01T16:19:26.299Z · EA(p) · GW(p)
How else would you define the apology post other than an apology for sexual harassment? I would have thought the debate would be about an appropriate time for him to rejoin the community, not about whether he actually committed sexual harassment. Or whether he was unfortunate enough for multiple women to independently accuse him of sexual harassment throughout his life.
comment by Jacy · 2022-07-01T16:01:22.680Z · EA(p) · GW(p)
- I’ve never harassed anyone, and I’ve never stated or implied that I have. I have apologized for making some people uncomfortable with “coming on too strong” in my online romantic advances. As I've said before in that Apology [EA · GW], I never intended to cause any discomfort, and I’m sorry that I did so. There have, to my knowledge, been no concerns about my behavior since I was made aware of these concerns in mid-2018.
- I didn’t lie on my website. I had (in a few places) described myself as a “co-founder” of EA [Edit: Just for clarity, I think this was only on my website for a few weeks? I think I mentioned it and was called it a few times over the years too, such as when being introduced for a lecture. I co-founded the first dedicated student group network, helped set up and moderate the first social media discussion groups, and was one of the first volunteers at ACE as a college student. I always favored a broader-base view of how EA emerged than what many perceived at the time (e.g., more like the founders of a social movement than of a company). Nobody had pushed back against "co-founder" until 2019, and I stopped using the term as soon as there was any pushback.], as I think many who worked to build EA from 2008-2012 could be reasonably described. I’ve stopped using the term because of all the confusion, which I describe a bit in “Some Early History of Effective Altruism.”
- Regarding SI, we were already moving on from CEA’s fiscal sponsorship and donation platform once we got our 501c3 certification in February 2019, so “stopped” and “severed ties” seem misleading.
- CEA did not make me write an apology. We agreed on both that apology document and me not attending CEA events as being the right response to these concerns. I had already written several apologies that were sent privately to various parties without any involvement from CEA.
- There was no discussion of my future posting on the EA Forum, nor to my knowledge any concerns about my behavior on this or other forums.
Otherwise, I have said my piece in the two articles you link, and I don’t plan to leave any more comments in this thread. I appreciate everyone’s thoughtful consideration.
comment by Kirsten (Khorton) · 2022-07-02T09:18:42.507Z · EA(p) · GW(p)
Hi Jacy, you said in your apology "I am also stepping back from the EA community more generally, as I have been planning to since last year in order to focus on my research." I haven't seen you around since then, so was surprised to see you attend an EA university retreat* and start posting more about EA. Would you describe yourself as stepping back into the EA community now?
Replies from: Jacy comment by Jacy · 2022-07-02T20:04:41.349Z · EA(p) · GW(p) Hi Khorton, I wouldn't describe it as stepping back into the community, and I don't plan on doing that, regardless of this issue, unless you consider occasional posts and presentations or socializing with my EA friends as such. This post on the EV of the future was just particularly suited for the EA Forum (e.g., previous posts on it), and it's been 3 years since I published that public apology and have done everything asked of me by the concerned parties (around 4 years since I was made aware of the concerns, and I know of no concerns about my behavior since then). I'm not planning to comment more here. This is in my opinion a terrible place to have these conversations, as Dony pointed out as well. comment by John G. Halstead (Halstead) · 2022-07-03T13:50:40.435Z · EA(p) · GW(p) It's a comment that is typical of Jacy - he cannot help but dissemble. "I am also stepping back from the EA community more generally, as I have been planning to since last year in order to focus on my research." It makes it sound like he was going to step back anyway even while he was touting himself as an EA co-founder and was about to promote his book! In fact, if you read between the lines, CEA severed ties between him and the community. He then pretends that he was going to do this anyway. The whole apology is completely pathetic. comment by John G. Halstead (Halstead) · 2022-07-03T15:02:27.493Z · EA(p) · GW(p) Why should we believe that you have in fact changed? You were kicked out of Brown for sexual misconduct. You claim to believe that the allegations at that time were false. Instead of being extra-careful in your sexual conduct following this, at least five women complain to CEA about your sexual misconduct, and CEA calls the complaints 'credible and concerning'. There is zero reason to think you have changed. Plus, you're a documented liar, so we should have no reason to believe you. comment by John G. Halstead (Halstead) · 2022-07-01T17:04:19.795Z · EA(p) · GW(p) • Were you expelled from Brown for sexual harassment? Or was that also for clumsy online flirting? • You did lie on your website. It is false that you are a co-founder of effective altruism. There is not a single person in the world who thinks that is true, and you only said it to further your career. That you can't even acknowledge that that was a lie speaks volumes. • Perhaps CEA can clarify whether there was any connection between the allegations and CEA severing ties with SI. • Were the allegations reported to the Sentience Institute before CEA? Why did you not write a public apology before CEA approached you with the allegations? You agreeing with CEA to being banned from EA events and you being banned from EA events are the same thing. • The issue is how long you should 'step away' from the community for. Replies from: Owen_Cotton-Barratt comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-01T21:29:09.712Z · EA(p) · GW(p) I wouldn't have described Jacy as a co-founder of effective altruism and don't like him having had it on his website, but it definitely doesn't seem like a lie to me (I kind of dislike the term "co-founder of EA" because of how ambiguous it is). Anyway, I think calling it a lie is roughly as egregious a stretch of the truth as Jacy's claim to be a co-founder (if less objectionable since it reads less like motivated delusion). In both cases I'm like "seems wrong to me, but if you squint you can see where it's coming from".
comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-01T22:37:44.067Z · EA(p) · GW(p) [meta for onlookers: I'm investing more energy into holding John to high standards here than Jacy because I'm more convinced that John is a good faith actor and I care about his standards being high. I don't know where Jacy is on the spectrum from "kind of bad judgement but nothing terrible" to "outright bad actor", but I get a bad smell from the way he seems to consistently try to present things in a way that puts him in a relatively positive light and ignores hard questions, so absent further evidence I'm just not very interested in engaging] comment by John G. Halstead (Halstead) · 2022-07-03T14:30:20.516Z · EA(p) · GW(p) "I don't know where Jacy is on the spectrum from "kind of bad judgement but nothing terrible" to "outright bad actor"." I don't understand this and claims like it. To recap, he was thrown out of university in 2012 for sexual misconduct. Someone who was at Brown around this time told me that no-one else was expelled from Brown for sexual misconduct the entire time they were there. This suggests that his actions were very bad. Despite being expelled from Brown, at least five women in the EA community then complain to CEA because of his sexual misconduct. CEA thinks these actions are bad enough to ban him from all EA events and dissociate from him completely. Despite Jacy giving the impression that this was due to clumsy flirting, I strongly doubt that this is true. Clumsy flirting must happen a fair amount in this community given the social awkwardness of EAs, but few people are expelled from the community as a result. This again suggests that the allegations against Jacy are very bad. This should update us towards the view that the Brown allegations were also true (noting that Jacy denies that they are true). In your view he also makes statements that are gross exaggerations/delusional in order to further his career (though I mustn't say that he lied). I think we have enough evidence for the 'bad actor' categorisation. comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-03T16:44:18.941Z · EA(p) · GW(p) It's from "man, things in the world are typically complicated, and I haven't spent time digging into this, but although the surface-level facts look bad I'm aware that selective quoting of facts can give a misleading impression". I'm not trying to talk you out of the bad actor categorization, just saying that I haven't personally thought it through / investigated enough that I'm confident in that label. (But people shouldn't update on my epistemic state! It might well be I'd agree with you if I spent an hour on it; I just don't care enough to want to spend that hour.) comment by John G. Halstead (Halstead) · 2022-07-03T14:35:54.820Z · EA(p) · GW(p) Here is an interesting post on the strength of the evidence provided by multiple independent accusations of sexual misconduct throughout one's life. http://afro-optimist.blogspot.com/2018/09/why-you-should-probably-believe-ford.html comment by John G. Halstead (Halstead) · 2022-07-03T14:37:38.829Z · EA(p) · GW(p) Isn't the upshot of this that you want to be more critical of good faith actors than bad faith actors? That seems wrong to me. Replies from: Owen_Cotton-Barratt comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-03T16:37:26.742Z · EA(p) · GW(p) Yes, I personally want to do that, because I want to spend time engaging with good faith actors and having them in gated spaces I frequent.
In general I have a strong perfectionist streak, which I channel only to try to improve things which are good enough to seem worth the investment of effort to improve further. This is just one case of that. (Criticizing is not itself something that comes with direct negative effects. Of course I'd rather place larger sanctions on bad faith actors than good faith actors, but I don't think criticizing should be understood as a form of sanctioning.) comment by throwaway01 · 2022-07-01T23:47:56.698Z · EA(p) · GW(p) Is Jacy's comment above where he seemed to present things in a way that puts him in a relatively positive light and ignores hard questions? Or the Apology post? I don't really see how you're getting that smell. John wrote a very negative comment, whether or not you think that negativity was justified, so it makes sense for Jacy to reply by pointing out inaccuracies that would make him seem more positive. I think it would take an extremely unusual person to engage in a discussion like this that isn't steering in a more positive direction towards them. I also just took the questions he "ignored" as being ones where he doesn't see them as inaccurate. This is all not even mentioning how absolutely miserable and tired Jacy must be to go through this time and time again, again regardless of what you think of him as a person... comment by John G. Halstead (Halstead) · 2022-07-03T14:02:25.860Z · EA(p) · GW(p) In my opinion, this is a bizarre comment. You seem to have more sympathy with Jacy, who has been accused of sexual harassment at least six times in his life, for having to defend himself, than with e.g. the people who are reading this whom he has harassed, or the people who are worried that he might harass them in the future as he tries to rejoin the community. comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-02T00:03:22.554Z · EA(p) · GW(p) Actually, no, I got reasonably good vibes from the comment above. I read it as a bit defensive but it's a fair point that that's quite natural if he's being attacked. I remember feeling bad about the vibes of the Apology post but I haven't gone back and reread it lately. (It's also a few years old, so he may be a meaningfully different person now.) Replies from: Owen_Cotton-Barratt comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-02T00:11:29.084Z · EA(p) · GW(p) I actually didn't mean for any of my comments here to get into attacks on or defence of Jacy. I don't think I have great evidence and don't think I'm a very good person to listen to on this! I just wanted to come and clarify that my criticism of John was supposed to be just that, and not have people read into it a defence of Jacy. (I take it that the bar for deciding personally to disengage is lower than for e.g. recommending others do that. I don't make any recommendations for others. Maybe I'll engage with Jacy later; I do feel happier about recent than old evidence, but it hasn't yet moved me to particularly wanting to engage.) comment by John G. Halstead (Halstead) · 2022-07-02T07:11:30.936Z · EA(p) · GW(p) So, are you saying it is an honest mistake but not a lie? His argument for being a co-founder seems to be that he was involved in the utilitarian forum Felicifia in 2008. He didn't even found it. I know several other people who founded or were involved in that forum and none of them has ever claimed to be a founder of effective altruism on that basis.
Jacy is the only person to do that and it is clear he does it in order to advance his claim to be a public intellectual because it suggests to the outside world that he was as influential as Will MacAskill, Toby Ord, Elie Hassenfeld, and Holden Karnofsky, which he wasn't and he knows he wasn't. The dissembling in the post is typical of him. He never takes responsibility for anything unless forced to do so. Replies from: Owen_Cotton-Barratt comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-02T08:00:44.300Z · EA(p) · GW(p) I'm saying it's a gross exaggeration not a lie. I can imagine someone disinterested saying "ok but can we present a democratic vision of EA where we talk about the hundred founders?" and then looking for people who put energy early into building up the thing, and Jacy would be on that list. (I think this is pretty bad, but that outright lying is worse, and I want to protect language to talk about that.) comment by Lukas_Gloor · 2022-07-02T12:57:06.933Z · EA(p) · GW(p) I want to flag that something like "same intention as outright lying, but doing it in a way to maximize plausible deniability" would be just as bad as outright lying. (It is basically "outright lying" but in a not stupid fashion.) However, the problem is that sometimes people exaggerate or get things wrong for more innocuous reasons like exaggerated or hyperbolic speech or having an inflated sense of one's importance in what's happening. Those cases are indeed different and deserve to be treated very different from lying (since we'd expect people to self-correct when they get the feedback, and avoid mistakes in the future). So, I agree with the point about protecting language. I don't agree with the implicit message "it's never as bad as outright lying when there's an almost-defensible interpretation somewhere." I think protecting the language is important for reasons of legibility and epistemic transparency, not so much because the moral distinction is always clean-cut. Replies from: Owen_Cotton-Barratt comment by James Ozden (JamesOz) · 2022-07-02T10:41:56.356Z · EA(p) · GW(p) This feels off to me. It seems like Jacy deliberately misled people to think that he was a co-founder of EA, to likely further his own career. This feels like a core element of lying, to deceive people for personal gain, which I think is the main reason one would claim they're the co-founder of EA when almost no one else would say this about them. Sure I think it can also be called "gross exaggeration" but where do you think the line is between "gross exaggeration" and "lying"? For me, lying means you say something that isn't true (in the eyes of most people) for significant personal gain (i.e. status) whereas gross exaggeration is a smaller embellishment and/or isn't done for large personal gain. comment by John G. Halstead (Halstead) · 2022-07-03T14:10:06.248Z · EA(p) · GW(p) You are taking charitable interpretations to an absolute limit here. You seem to be saying "maybe Jacy was endorsing a highly expansive conception of 'founding' which implies that EA has hundreds of founders'". This is indeed a logical possibility. But I think the correct credence to have in this possibility is ~0.  Instead, we should have ~1 credence in  the following "he said it knowing it is not true in order to further his career". And by 'founding' he meant, "I'm in the same bracket as Will MacAskill". Otherwise, why put it on your website and in your bio? 
Replies from: Owen_Cotton-Barratt comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-03T16:40:17.725Z · EA(p) · GW(p) I don't think it's like "Jacy had an interpretation in mind and then chose statements". I think it's more like "Jacy wanted to say things that made himself look impressive, then with motivated reasoning talked himself into thinking it was reasonable to call himself a founder of EA, because that sounded cool". (Within this there's a spectrum of more and less blameworthy versions, as well as the possibility of the straight-out lying version. My best guess is towards the blameworthy end of the not-lying versions, but I don't really know.) comment by John G. Halstead (Halstead) · 2022-07-03T14:34:50.524Z · EA(p) · GW(p) So rather than a lie, you think it might be a motivated delusion. Motivated delusions are obviously false. But then at the end you say it is not obviously false. This is inconsistent. Replies from: Owen_Cotton-Barratt comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-10T14:32:03.052Z · EA(p) · GW(p) True/false isn't a dichotomy. The statement here was obviously a stretch / not entirely true. I'd guess it had hundreds of thousands of microlies ( https://forum.effectivealtruism.org/posts/SGFRneArKi93qbrRG/truthful-ai?commentId=KdG4kZEu9GA4324AE [EA(p) · GW(p)] ) But I think it's important to reserve terms like "lie" for "completely false", because otherwise you lose the ability to police that boundary (and it's important to police it, even if I also want higher standards enforced around many spaces I interact with). comment by Harrison Durland (Harrison D) · 2022-07-01T15:11:01.739Z · EA(p) · GW(p) Could you provide links or specific quotes regarding his claim of being a founder of EA? Perhaps unlikely, but maybe through web archive? comment by Kirsten (Khorton) · 2022-07-01T15:29:10.095Z · EA(p) · GW(p) It's briefly referenced in this recent post, though I don't think this is what John was talking about. https://jacyanthis.com/some-early-history-of-effective-altruism Replies from: RyanCarey comment by John G. Halstead (Halstead) · 2022-07-01T16:13:16.437Z · EA(p) · GW(p) https://nonprofitchronicles.com/2019/04/02/the-peculiar-metoo-story-of-animal-activist-jacy-reese/ comment by anonymous_ea · 2022-07-02T21:12:49.203Z · EA(p) · GW(p) From [EA(p) · GW(p)] Jacy: this was only on my website for a few weeks at most... I believe I also casually used the term elsewhere, and it was sometimes used by people in my bio description when introducing me as a speaker. comment by Oliver Sourbut · 2022-06-30T15:42:00.545Z · EA(p) · GW(p) My experience is different, with maybe 70% of AI x-risk researchers I've discussed with being somewhat au fait with the notion that we might not know the sign of future value conditional on survival. But I agree that it seems people (myself included) have a tendency to slide off this consideration or hope to defer its resolution to future generations, and my sample size is quite small (a dozen maybe) and quite correlated. For what it's worth, I recall this question being explicitly posed in at least a few of the EA in-depth fellowship curricula I've consumed or commented on, though I don't recall specifics and when I checked EA Cambridge's most recent curriculum I couldn't find it. 
Replies from: Ben_West, Jacy comment by Ben_West · 2022-07-06T02:09:43.383Z · EA(p) · GW(p) My anecdata is also that most people have thought about it somewhat, and "maybe it's okay if everyone dies" is one of the more common initial responses I've heard to existential risk. But I agree with OP that I more regularly hear "people are worried about negative outcomes just because they themselves are depressed" than "people assume positive outcomes just because they themselves are manic" (or some other cognitive bias). comment by Jacy · 2022-07-07T13:10:58.455Z · EA(p) · GW(p) This is helpful data. Two important axes of variation here are: - Time, where this has fortunately become more frequently discussed in recent years - Involvement, where I speak a lot with artificial intelligence and machine learning researchers who work on AI safety but not global priorities research; often their motivation was just reading something like Life 3.0. I think these people tend to have thought through crucial considerations less than, say, people on this forum. comment by DonyChristie · 2022-06-30T18:53:02.269Z · EA(p) · GW(p) I like "quality risks" (q-risks?) and think this is more broadly appealing to people who don't want to think about suffering-reduction as the dominant guiding frame for whatever reason. Moral trade can be done with people concerned with other qualities, such as worries about global totalitarianism due to reasons independent of suffering such as freedom and diversity. It's also relatively more neglected than the standard extinction risks, which I am worried we are collectively Goodharting on as our focus (and to a lesser extent, focus on classical suffering risks may fall into this as well). For instance, nuclear war or climate change are blatant and obvious scary problems that memetically propagate well, whereas there may be many q-risks to future value that are more subtle and yet to be evinced. Tangentially, this gets into a broader crux I am confused by: should we work on obvious things or nonobvious things? I am disposed towards the latter. comment by Mauricio · 2022-06-30T21:25:30.067Z · EA(p) · GW(p) Thanks for the thoughtful post! I agree this is a very important question, I sympathize [EA · GW] with the view that people overweight some arguments for historical optimism, and I'm mostly on board with the list of considerations. Still, I think your associated EV calculation has significant weaknesses, and correcting for these seems to produce much more optimistic results. • You put the most weight on historical harms, and you also put a lot of weight on empirical utility asymmetry. But arguably, the future will be deeply different from the past (including through reduced influence of biological evolution), so simple extrapolation from the past or present should not receive very high weight. (For the same reason, we should also downweight historical progress.) • Arguably, historical harms have occurred largely through the divergence of agency and patiency, so counting both is mostly double-counting. (Similarly, historical progress has largely occurred through the other mechanisms that are already covered.) So we should further downweight these. • I don't see why we should put negative weight on "The Nature of Digital Minds, People, and Sentience."
• Reasoned cooperation should arguably receive significantly more weight as an argument for optimism; moral trade should allow altruists to substantially mitigate suffering and increase well-being, especially as human tools for shaping the world and efficiently coordinating continue to improve. • Editing the calculation to account for all of the above (and ignoring my other, more minor quibbles), we reach a fairly optimistic result (especially if we're looking at the more relevant logarithmic sum). (Edited to account for how downweighting history should mean downweighting historical progress, not just downweighting historical harms, and for phrasing tweaks.) Replies from: Jacy comment by Jacy · 2022-07-01T00:10:29.618Z · EA(p) · GW(p) It's great to know where your specific weights differ! I agree that each of the arguments you put forth are important. Some specifics: • I agree that differences in the future (especially the weird possibilities like digital minds and acausal trade) is a big reason to discount historical evidence. Also, by these lights, some historical evidence (e.g., relations across huge gulfs of understanding and ability like from humans to insects) seems a lot more important than others (e.g., the fact that animal muscle and fat happens to be an evolutionarily advantageous food source). • I'm not sure if I'd agree that historical harms have occurred largely through divergence; there are many historical counterfactuals that could have prevented many harms: the nonexistence of humans, an expansion of the moral circle, better cooperation, discovery of a moral reality, etc.. In many cases, a positive leap in any of these would have prevented the atrocity.  What makes divergence more important? I would make the case based on something like "maximum value impact from one standard deviation change" or "number of cases where harm seemed likely but this factor prevented it." You could write an EA Forum post going into more detail on that. I would be especially excited for you to go through specific historical events and do some reading to estimate the role of (small changes in) each of these forces. • As I mention in the post, reasons to put negative weight on DMPS include the vulnerability of digital minds to intrusion, copying, etc., the likelihood of their instrumental usefulness in various interstellar projects, and the possibility of many nested minds who may be ignored or neglected. • I agree moral trade is an important mechanism of reasoned cooperation. I'm really glad you put your own numbers in the spreadsheet! That's super useful. The ease of flipping the estimates from negative to positive and positive to negative is one reason I only make the conclusion "not highly positive" or "close to zero" and not going with the mean estimate from myself and others (which would probably be best described as moderately negative, e.g., the average at an EA meetup where I presented this work was around -10). I think your analysis is on the right track to getting us better answers to these crucial questions :) Replies from: Mauricio, Jamie_Harris comment by Mauricio · 2022-07-01T00:38:57.939Z · EA(p) · GW(p) Thanks! Responding on the points where we may have different intuitions: • Regarding your second bullet point, I agree there are a bunch of things that we can imagine having gone differently historically, where each would have been enough to make things go better. 
These other factors are all already accounted for, so putting the weight on historical harms/progress again still seems to be double-counting (even if which thing it's double-counting isn't well-defined). • Regarding your third bullet point, thanks for flagging those points - I don't think I buy that any of them are reasons for negative weight. • Intrusions could be harmful, but there could also be positive analogues. • Duplication, instrumental usefulness, and nested minds are just reasons to think there might be more of these minds, so these considerations only seem net negative if we already have other reasons to assume these minds' well-being would be net negative (we may have such reasons, but I think these are already covered by other factors, so counting them here seems like double-counting) • (As long as we're speculating about nested minds: should we expect them to be especially vulnerable because others wouldn't recognize them as minds? I'm skeptical; it seems odd to assume we'll be at that level of scientific progress without having learned how experiences work.) • On interpretation of the spreadsheet: • I think (as you might agree) that results should be taken as suggestive but far from definitive. Adding things up fails to capture many important dynamics of how these things work (e.g., cooperation might not just create good things but also separately counteract bad things). • Still, insofar as we're looking at these results, I think we should mostly look at the logarithmic sum (because some dynamics of the future could easily be far more important than others). • As I suggested, I have a few smaller quibbles, so these aren't quite my numbers (although these quibbles don't really matter if we're looking at the logarithmic sum). Replies from: Jacy comment by Jacy · 2022-07-07T13:21:02.783Z · EA(p) · GW(p) Thanks for going into the methodological details here. I think we view "double-counting" differently, or I may not be sufficiently clear in how I handle it. If we take a particular war as a piece of evidence, which we think fits into both "Historical Harms" and "Disvalue Through Intent," and it is overall -8 evidence on the EV of the far future, but it seems 75% explained through "Historical Harms" and 25% explained through "Disvalue Through Intent," then I would put -6 weight on the former and -2 weight on the latter. I agree this isn't very precise, and I'd love future work to go into more analytical detail (though as I say in the post, I expect more knowledge per effort from empirical research). I also think we view "reasons for negative weight" differently. To me, the existence of analogues to intrusion does not make intrusion a non-reason. It just means we should also weigh those analogues. Perhaps they are equally likely and equal in absolute value if they obtain, in which case they would cancel, but usually there is some asymmetry. Similarly, duplication and nesting are factors that are more negative than positive to me, such as because we may discount and neglect the interests of these minds because they are more different and more separated from the mainstream (e.g., the nested minds are probably not out in society campaigning for their own interests because they would need to do so through the nest mind—I think you allude to this, but I wouldn't dismiss it merely because we'll learn how experiences work, such as because we have very good neuroscientific and behavioral evidence of animal consciousness in 2022 but still exploit animals). 
Your points on interaction effects and nonlinear variation are well-taken and good things to account for in future analyses. In a back-of-the-envelope estimate, I think we should just assign values numerically and remember to feel free to widely vary those numbers, but of course there are hard-to-account-for biases in such assignment, and I think the work of GJP, QURI, etc. can lead to better estimation methods. Replies from: Mauricio comment by Mauricio · 2022-07-08T01:15:41.126Z · EA(p) · GW(p) I think we're on a similar page regarding double-counting--the approach you describe seems like roughly what I was going for. (My last comment was admittedly phrased in an overly all-or-nothing way, but I think the numbers I attached suggest that I wasn't totally eliminating the weight on history.) On whether we see "reasons for negative weight" differently, I think that might be semantic--I had in mind the net weight, as you suggest (I was claiming this net weight was 0). The suggestion that digital minds might be affected just by their being different is a good point that I hadn't been thinking about. (I could imagine some people speculating that this won't be much of a problem because influential minds will also eventually tend to be digital.) I tentatively think that does justify a mildly negative weight on digital minds, with the other factors you mention seeming to be fully accounted for in other weights. comment by Jamie_Harris · 2022-07-16T09:02:21.743Z · EA(p) · GW(p) I also put my intuitive scores into a copy of your spreadsheet. In my head, I've tended to simplify the picture into essentially the "Value Through Intent" argument vs the "Historical Harms" argument, since these seem liked the strongest arguments in either direction to me. In that framing, I lean towards the future being weakly positive. But this post is a helpful reminder that there are various other arguments pointing in either direction (which, in my case, overall push me towards a less optimistic view). My overall view still seems pretty close to zero at the moment though. Also interesting how wildly different each of our scores are. Partly I think this might be because I was quite confused/worried about double-counting. Also maybe just not fully grasping some of the points listed in the post. comment by Oliver Sourbut · 2022-06-30T15:46:13.918Z · EA(p) · GW(p) I've considered a possible pithy framing of the Life Despite Suffering question as a grim orthogonality thesis (though I'm not sure how useful it is): We sometimes point to the substantial majority's revealed preference for staying alive as evidence of a 'life worth living'. But perhaps 'staying-aliveness' and 'moral patient value' can vary more independently than that claim assumes. This is the grim orthogonality thesis. An existence proof for the 'high staying-aliveness x low moral patient value' quadrant is the complex of torturer+torturee, which quite clearly can reveal a preference for staying alive, while quite plausibly being net negative value. Can we rescue the correlation of revealed 'staying-aliveness' preference with 'life worth livingness'? We can maybe reason about value from the origin of moral patients we see, without having a physical theory of value. All the patients we see at present are presumably products of natural selection. Let's also assume for now that patienthood comes from consciousness. 
Two obvious but countervailing observations • to the extent that conscious content is upstream of behaviour but downstream of genetic content, natural selection will operate on conscious content to produce behaviour which is fitness-correlated • if positive conscious content produces attractive behaviour (and vice versa), we might anticipate that an organism 'doing well' according to suitable fitness-correlates would be experiencing positive conscious content • this seems maybe true of humans? • to the extent that behaviour is downstream of non-conscious control processes, natural selection will operate on non-conscious control processes to produce behaviour which is fitness-correlated • we can not rule out experiences 'not worth living' which nevertheless produce net revealed staying-aliveness preference, if the behaviour is sufficiently under non-conscious control, or if the selection for behaviour downstream of negative conscious experience is weak • weak selection is especially likely in novel out-of-distribution situations • in general, organisms which reveal preferences for not staying alive will never be ceteris paribus fitter (though there are special cases of course) For non-naturally-selected moral patients, I think even the above bets are basically off. comment by Metztli · 2022-06-30T19:38:30.999Z · EA(p) · GW(p) comment by abukeki · 2022-06-30T14:34:08.281Z · EA(p) · GW(p) Look into suffering-focused AI safety which I think is extremely important and neglected (and s-risks [? · GW]). Replies from: Mauricio comment by Mauricio · 2022-07-01T01:16:47.634Z · EA(p) · GW(p) More specifically, I think there's a good case to be made* that most of the expected disvalue of suffering risks comes from cooperation failures, so I'd especially encourage people who are interested in suffering risks and AI to look into cooperative AI and cooperation on AI. (These are areas mentioned in the paper you cite and in related writing.) (*Large-scale efforts to create disvalue seem like they would be much more harmful than smaller-scale or unintentional actions, especially as technology advances. And the most plausible reason I've heard for why such efforts might happen is that: various actors might commit to creating disvalue under certain conditions, as a way to coerce other agents, and they would then carry out these threats if the conditions come about. This would leave everyone worse off than they could have been, so it is a sort of cooperation failure. Sadism seems like less of a big deal in expectation, because many agents have incentives to engage in coercion, while relatively few agents are sadists.) (More closely related to my own interest in them, cooperation failures also seem like one of the main types of things that may prevent humanity from creating thriving futures, so this seems like an area that people with a wide range of perspectives on the value of the future can work together on :) comment by seanrson ([email protected]) · 2022-06-30T14:01:45.510Z · EA(p) · GW(p) I think considerations like these are important to challenge the recent emphasis on grounding x-risk (really, extinction risk) in near-term rather than long-term concerns. That perspective seems to assume that the EV of human expansion is pretty much settled, so we don’t have to engage too deeply with more fundamental issues in prioritization, and we can instead just focus on marketing. 
I’d like to see more written directly comparing the tractability and neglectedness of population risk reduction and quality risk reduction. I wonder if you’ve perhaps overstated things in claiming that a lower EV for human expansion suggests shifting resources to long-term quality risks rather than, say, factory farming. It seems like this claim requires a more detailed comparison between possible interventions. comment by sapphire (deluks917) · 2022-07-05T17:11:39.012Z · EA(p) · GW(p) I find the following simple argument disturbing: P1 - Currently, and historically, low-power beings (animals, children, old or dying people) are treated very cruelly if treating them cruelly benefits the powerful even in minor ways. Weak benefits for the powerful empirically justify cruelty at scale. P2 - There is no good reason to be sure the powerful won't have even minor reasons to be cruel to the powerless (ex: suffering sub-routines, human CEV might include spreading Earth-like life widely or respect for tradition) P3 - Inequality between agents is likely to become much more extreme as AI develops P4 - The scale of potential suffering will increase by many orders of magnitude C1 - We are fucked? Personal Note - There is also no reason to assume that I or my loved ones will remain relatively powerful beings C2 - I'm really fucked! Replies from: Davidmanheim comment by Davidmanheim · 2022-07-06T17:11:12.025Z · EA(p) · GW(p) Currently, and historically, low-power beings (animals, children, old or dying people) are treated very cruelly if treating them cruelly benefits the powerful even in minor ways. This is true, but far less true recently than in the past, and far less true in the near past than in the far past. That trajectory seems between somewhat promising and incredibly good - we don't have certainty, but I think the best guess is that in fact, it's true that the arc of history bends towards justice. comment by Charlie_Guthmann (Charles_Guthmann) · 2022-06-30T17:57:38.584Z · EA(p) · GW(p) Thanks for the post. I've also been surprised how little this is discussed, even though the value of x-risk reduction is almost totally conditional on the answer to this question (the EV of the future conditional on human/progeny survival). Here are my big points to bring up re this issue, though some might be slight rephrasings of yours. 1. Interpersonal comparisons of utility canonically [EA · GW] have two parts - a definition of utility, by which every sentient being is measured. Then, to compare and sum utility, one must pick (subjective) weights for each sentient being, scale their utility by the weights, and add everything up (u_1 x_1 + ... + u_n x_n). If we don't agree on the weights, it's possible that one person may think the future is in expectation positive while another thinks it will be negative, even with perfect information about what the future will look like. It could be even harder to agree on the weights of sentient beings when we don't even know what agents are going to be alive. We have obvious candidates for general rules about how to weight utility (brain size, pain receptors, etc.) but who knows how our conceptions of these things will change. 2. Basically repeating your last point in the chart but it's really important so I'll reiterate. Like everything else normative, there is no objective "0" line, no non-arbitrary point at which life is worth living. It is a decision we have to make. Moreover, I don't see any agreement in this community on the specific point where life is worth living.
It is pretty obvious that disagreement about this could flip the sign of the EV of the future. 3. "Alien Counterfactuals". I actually mentioned this in a comment to a previous post [EA · GW] where someone said we should mostly just call longtermism x-risk (extremely wrong in my opinion). First, for simplicity, let's just assume humans become grabby. [? · GW] If we become grabby, a question of specific interest to us should be, what characteristics do our society and species have relative to other grabby societies/species? Are we going to be better or worse gatekeepers of the future than the other gatekeepers of the future? I'm pretty sure we should take the prior that we display the mean characteristics of a grabby civilization (interested in hearing if others disagree). If this is the case, then, assuming (again for simplicity) that our lightcone will be populated by aliens whether or not we specifically become grabby, x-risk reduction could be argued to have exactly 0 expected value, as we have no reason to believe that we are going to do a better job with the future than aliens. Evidence updating against the prior would probably take the form of arguments about why our specific evolutionary or economic history was a weird way to become grabby, not an easy task. Of course even with all the simplifying assumptions I've made, it's not so simple. Even if we have the mean characteristics of all the other grabby civilizations, adding more civilizations to the mix can change the game theory of space wars and governance. Still, it's not clear if more or fewer players is better. I talked to a few people in EA about 'alien counterfactuals', and they all seemed to dismiss the argument, thinking that humans are better than "rolling the dice" on a new grabby civilization. No one provided arguments that were super convincing though. The most convincing counter-argument I heard was that it is very unlikely that grabby aliens will actually end up existing in our lightcone, subverting the whole argument. AI makes this argument significantly more confusing but it's not worth getting into without further ironing out of the initial arguments. 4. And then this is sort of the whole point of your post but I will reiterate - predicting the future is extremely difficult. We should have very little confidence in what it will be like. Predicting whether the future will be good or bad (given that we have already ironed out the normative considerations, which we haven't) is probably easier than predicting the future but still seems really difficult. The burden of evidence is on us to prove the future will be good, not on other people to prove it will be bad. After all, we are pumping huge amounts of money into creating impact which is completely conditional on this information. I've found posts like this one to be the only type of thing that even feels tractable, and if that is the level of specificity we are at, it truly does feel like we have been p. wagered on this issue. Posts like this one [?
· GW] that you mentioned ultimately don't have nearly enough firepower to serve as anything more than an exploration of what a full argument would look like. comment by Ben_West · 2022-07-06T02:21:56.736Z · EA(p) · GW(p) The thing I have most changed my mind about since writing the post of mine you cite is adjacent to the "disvalue through evolution" category: basically, I've become more worried that disvalue is instrumentally useful. E.g. maybe the most efficient paperclip maximizer is one that's really sad about the lack of paperclips. There's some old writing on this by Carl Shulman and Brian Tomasik; I would be excited for someone to do a more thorough write up/literature review for the red teaming contest (or just in general). comment by MikeJohnson · 2022-07-05T11:27:57.209Z · EA(p) · GW(p) As a small comment, I believe discussions of consciousness and moral value tend to downplay the possibility that most consciousness may arise outside of what we consider the biological ecosystem. It feels a bit silly to ask “what does it feel like to be a black hole, or a quasar, or the Big Bang,” but I believe a proper theory of consciousness should have answers to these questions. We don’t have that proper theory. But I think we can all agree that these megaphenomena involve a great deal of matter/negentropy and plausibly some interesting self-organized microstructure- though that’s purely conjecture. If we’re charting out EV, let’s keep the truly big numbers in mind (even if we don’t know how to count them yet). Replies from: Guy Raveh comment by Oliver Sourbut · 2022-06-30T15:13:46.219Z · EA(p) · GW(p) Typo hint: "10<sup>38</sup>" hasn't rendered how you hoped. You can use <dollar>10^{38}<dollar> which renders as Replies from: [email protected], Oliver Sourbut, Jacy comment by Fai ([email protected]) · 2022-07-01T17:25:14.219Z · EA(p) · GW(p) Maybe another typo? : "Bostrom argues that if humanizes could colonize the Virgo supercluster", should that be "humanity" or "humans"? Replies from: Jacy comment by Jacy · 2022-07-01T17:49:10.119Z · EA(p) · GW(p) Good catch! comment by Oliver Sourbut · 2022-07-04T08:39:27.694Z · EA(p) · GW(p) It looks like I got at least one downvote on this comment. Should I be providing tips of this kind in a different way? comment by Jacy · 2022-06-30T15:20:21.409Z · EA(p) · GW(p) Whoops! Thanks! comment by Harrison Durland (Harrison D) · 2022-07-01T15:24:06.933Z · EA(p) · GW(p) I think that this large argument / counterargument table is a great example of how using a platform like Kialo to better structure debates could be valuable. comment by ElliotJDavies · 2022-07-01T10:31:39.468Z · EA(p) · GW(p) I think you have undervalued optionality value. Using Ctrl + F I have tried to find and summarise your claims against optionality value: • EA only has a modest amount of "control" [ I assuming control = optionality ] •  EA won't retain much "control" over the future • The argument for option value is based on circular logic •  Counterpoint, short x-risk timelines would be good from the POV of someone making an optionality value argument • Counterpoint, optionality would be more important if alien's exist and propagate negative value •  humans existing limits option value similar [question, by similar do you mean equal to?] to that of non-existence • We can't raise x-risk after we've lowered it Without having thought about this for very long, I think the argument against optionality needs to be really really strong. 
Since you essentially need to demonstrate that we have equal or better decision-making abilities right now than at any point in the future. One of the reasons optionality seems like an exceptionally good argument is that uncertainty exists both inside and outside EV models (i.e. you can model EV, and include some uncertainty, but then you need to account for uncertainty around the entire EV model because you've likely made a ton of assumptions during the process). And it's extremely likely this uncertainty would remain constant over time. One way we try to improve our models of the world is by making predictions and seeing if we were correct. The two reasons we do this are: making predictions is hard (so it's a test for a model that's hard to pass), and we have more information in the future. The argument against optionality seems borderline tautological, because you essentially have to round all optionality value to 0, meaning the value of making predictions (and all of science, philosophy, etc.) is also 0. I am basically making a fanatical argument here for optionality, whereby the only consideration that trumps it is opportunity cost. comment by Bridges · 2022-06-30T17:57:46.838Z · EA(p) · GW(p) Thanks for doing this work but I don't have the patience to read it entirely. What is it you found exactly? Please put it at the top of the summary Replies from: MichaelStJules comment by MichaelStJules · 2022-06-30T18:03:16.310Z · EA(p) · GW(p) In the associated spreadsheet, I list my own subjective evidential weight scores where positive numbers indicate evidence for +EV and negative numbers indicate evidence for -EV. It is helpful to think through these arguments with different assignment and aggregation methods, such as linear or logarithmic scaling. With different methodologies to aggregate my own estimates or those of others, the total estimate is highly negative around 30% of the time, weakly negative 40%, and weakly positive 30%. It is almost never highly positive. I encourage people to make their own estimates, and while I think such quantifications are usually better than intuitive gestalts, all such estimates should be taken with golf balls of salt.[5] [EA(p) · GW(p)] Replies from: Bridges comment by Bridges · 2022-07-06T01:52:57.515Z · EA(p) · GW(p) Yeah, I still don't know what this means. What is the granny version pitch? comment by Ben_West · 2022-07-01T04:23:38.147Z · EA(p) · GW(p) Minor technical comment: the links to subsections in the topmost table link to the Google docs version of the article, and I think it would be slightly nicer if they linked to the forum post version. Replies from: Jacy comment by Jacy · 2022-07-01T04:59:27.061Z · EA(p) · GW(p) Thanks! Fixed, I think. comment by Noah Scales · 2022-06-30T21:23:02.304Z · EA(p) · GW(p) You wrote "There is a substantial philosophical literature on such topics that I will not wade into, and I believe such non-value-based arguments can be mapped onto value-based arguments with minimal loss (e.g., not having a duty to make happy people can be mapped onto there being no value in making happy people)." Duty to accomplish X implies much more than an assessment of the value of X. To lack the (moral, legal, or ethical) obligation to bring about a state of affairs does not imply a sense that the state of affairs has no value to you or others.
# Query language

Query languages are used to query database systems in information technology; every system uses a query language with precisely defined syntax. For work with language corpora, the query language is used for inputting queries into corpus managers, concordance programs etc. Even here the individual languages usually differ, despite the fact that they are all based on regular expressions, which they then expand on and adapt to fit their individual needs.

# Query language used in ČNK

The query language used in the ČNK corpora operating on the corpus manager Manatee is called CQL (corpus query language) and is in fact a modified version of the original CQL created for the corpus manager CWB. Its cornerstone is a query for a single position (word) in the corpus:

[attribute="value"]

where the attribute is positional (word, lemma, tag etc.) and the value is the search term itself, or a pattern specified with the help of regular expressions. The query can also include limitations on structural attributes (sentence, doc, opus), where it is also possible to specify other values (e.g. for opuses it is the publication year, genre, author etc.). Limitations for structural attributes are, unlike those for positional attributes, written in angle brackets (e.g. <s id="10"/>); see a more detailed and complete description of CQL.

CQL is a formal language which has a precise (and finite) definition. CQL supports some elements of traditional regular languages 1), but it also supports expanded, specifically corpus-related commands such as within, meet, union or containing, which work with the structure of the corpus.

A simultaneous query for more than one position (i.e. a word sequence or wider context) is formed simply by the concatenation of the individual queries for each successive position. E.g. the query [lemma="have"][][lemma="heart"] searches for all occurrences of the lemmas have and heart, in between which there is one position (i.e. word or punctuation).

The following example of the Manatee corpus manager's query language will find all instances of constructions of the type „neither woman nor man“, „neither man nor beast“ etc. occurring in the corpus within one sentence (structure <s/>, see structural attributes):

[lemma="neither"] [tag="N.*"] []{0,1} [lemma="nor"] [tag="N.*"] within <s/>

Each position in the sequence is represented by one pair of square brackets, possibly accompanied by a quantifier in curly brackets. The first position represents all words lemmatized as “neither”, the second position represents all nouns (word forms containing a morphological tag beginning with the letter “N”, followed by an arbitrary sequence of arbitrary characters), the third position is occupied by any one word (or none), the fourth position is limited to the lemma “nor”, and the fifth position once again contains the morphological tag for nouns. The directive “within” limits the entire query to the scope of one structural attribute “<s/>” (i.e. one sentence). It is also possible to use the directive containing for this particular purpose.

For work with a corpus manager it is advisable to know the query language used and the possibilities it offers. Although some user interfaces make it possible to input queries without knowledge of the specific query language, the possibilities of working with such an interface tend to be somewhat limited.
This is a result of the effort to make the interface user-friendly and as comprehensible as possible, which is always achieved at the expense of the possibilities and combinations available to the user.

1) E.g. quantifiers, round brackets and logical operators.
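As a supplementary sketch of the structural-attribute restrictions described above (illustrative only: the structure name opus and the attribute author are used here as examples, and the structures, attributes and values actually available depend on the corpus being queried), a query of the following form would find the lemma have followed within at most two positions by the lemma heart, restricted to texts whose opus metadata give Dickens as the author:

[lemma="have"] []{0,2} [lemma="heart"] within <opus author="Dickens" />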
# The solar test of the equivalence principle

Title: The solar test of the equivalence principle
Author: Anderson, John; Gross, Mark; Nordtvedt, Kenneth; Turyshev, Slava
Abstract: The Earth, Mars, Sun, Jupiter system allows for a sensitive test of the strong equivalence principle (SEP) which is qualitatively different from that provided by Lunar Laser Ranging. Using analytic and numerical methods we demonstrate that Earth-Mars ranging can provide a useful estimate of the SEP parameter $\eta$. Two estimates of the predicted accuracy are derived and quoted, one based on conventional covariance analysis, and another (called a "modified worst case" analysis) which assumes that systematic errors dominate the experiment. If future Mars missions provide ranging measurements with an accuracy of $\sigma$ meters, after ten years of ranging the expected accuracy for the SEP parameter $\eta$ will be of order $(1-12)\times 10^{-4}\sigma$. These ranging measurements will also provide the most accurate determination of the mass of Jupiter, independent of the SEP effect test. (Refer to PDF file for exact formulas.)
Description: Also archived at: arXiv:gr-qc/9510029 v1 16 Oct 1995
Record URI: http://hdl.handle.net/1850/2114
Date: 1996-03-01

## Files in this item

JAndersonArticle03-01-1996.pdf (130.1 Kb, PDF)
# Windows Forms Add Configuration Element

The <add> element adds a predefined key that specifies whether your Windows Forms app supports features added to Windows Forms apps in the .NET Framework 4.7 or later.

## Syntax

    <System.Windows.Forms.ApplicationConfigurationSection>
      <add key="key-name" value="key-value" />
    </System.Windows.Forms.ApplicationConfigurationSection>

## Attributes and elements

The following sections describe attributes, child elements, and parent elements.

### Attributes

| Attribute | Description |
|-----------|-------------|
| key | Required attribute. A predefined key name that corresponds to a particular Windows Forms customizable feature. |
| value | Required attribute. The value to assign to key. |

### key attribute names and associated values

| key name | Values | Description |
|----------|--------|-------------|
| "AnchorLayout.DisableSinglePassControlScaling" | "true" \| "false" | Indicates whether anchored controls are scaled in a single pass. "true" to disable single-pass scaling; otherwise, "false". See the "Single-pass scaling" section in the Remarks for more information. |
| "DpiAwareness" | "PerMonitorV2" \| "false" | Indicates whether an application is DPI-aware. Set the key to "PerMonitorV2" to support DPI awareness; otherwise, set it to "false". DPI awareness is an opt-in feature; to take advantage of Windows Forms' high DPI support, you should set its value to "PerMonitorV2". See the Remarks section for more information. |
| "CheckedListBox.DisableHighDpiImprovements" | "true" \| "false" | Indicates whether the CheckedListBox control takes advantage of the scaling and layout improvements introduced in the .NET Framework 4.7. "true" to opt out of the scaling and layout improvements; otherwise, "false". |
| "DataGridView.DisableHighDpiImprovements" | "true" \| "false" | Indicates whether the DataGridView control takes advantage of the scaling and layout improvements introduced in the .NET Framework 4.7. "true" to opt out of DPI awareness; otherwise, "false". |
| "DisableDpiChangedMessageHandling" | "true" \| "false" | "true" to opt out of receiving messages related to DPI scaling changes; otherwise, "false". See the Remarks section for more information. |
| "EnableWindowsFormsHighDpiAutoResizing" | "true" \| "false" | Indicates whether a Windows Forms application is automatically resized due to DPI scaling changes. "true" to enable automatic resizing; otherwise, "false". |
| "Form.DisableSinglePassControlScaling" | "true" \| "false" | Indicates whether the Form is scaled in a single pass. "true" to disable single-pass scaling; otherwise, "false". See the "Single-pass scaling" section in the Remarks for more information. |
| "MonthCalendar.DisableSinglePassControlScaling" | "true" \| "false" | Indicates whether the MonthCalendar control is scaled in a single pass. "true" to disable single-pass scaling; otherwise, "false". See the "Single-pass scaling" section in the Remarks for more information. |
| "Toolstrip.DisableHighDpiImprovements" | "true" \| "false" | Indicates whether the ToolStrip control takes advantage of the scaling and layout improvements introduced in the .NET Framework 4.7. "true" to opt out of DPI awareness; otherwise, "false". |

### Child elements

None.

### Parent elements

| Element | Description |
|---------|-------------|
| <System.Windows.Forms.ApplicationConfigurationSection> | Configures support for new Windows Forms application features. |

## Remarks

Starting with .NET Framework 4.7, the <System.Windows.Forms.ApplicationConfigurationSection> element allows you to configure Windows Forms applications to take advantage of features added in recent releases of the .NET Framework. The <System.Windows.Forms.ApplicationConfigurationSection> element allows you to add one or more child <add> elements, each of which defines a specific configuration setting.
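For example, a minimal application configuration file that opts a Windows Forms app into per-monitor DPI awareness might look like the following (a sketch only; it assumes, as is typical for these settings, that the section is placed directly under the root <configuration> element of the app.config file, and it uses only the DpiAwareness key described above):

    <configuration>
      <System.Windows.Forms.ApplicationConfigurationSection>
        <!-- Opt the app into Windows Forms high DPI support -->
        <add key="DpiAwareness" value="PerMonitorV2" />
      </System.Windows.Forms.ApplicationConfigurationSection>
    </configuration>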
For an overview of Windows Forms High DPI support, see High DPI Support in Windows Forms.

### DpiAwareness

Windows Forms apps that run under Windows versions starting with Windows 10 Creators Edition and target versions of the .NET Framework starting with the .NET Framework 4.7 can be configured to take advantage of high DPI improvements introduced in .NET Framework 4.7. These include:

• Support for dynamic DPI scenarios in which the user changes the DPI or scale factor after a Windows Forms application has been launched.
• Improvements in the scaling and layout of a number of Windows Forms controls, such as the MonthCalendar control and the CheckedListBox control.

High DPI awareness is an opt-in feature; by default, the value of DpiAwareness is false. You can opt into Windows Forms' support for DPI awareness by setting the value of this key to PerMonitorV2 in the application configuration file. If DPI awareness is enabled, all individual DPI features are also enabled. These include:

• DPI changed messages, which are controlled by the DisableDpiChangedMessageHandling key.
• Dynamic DPI support, which is controlled by the EnableWindowsFormsHighDpiAutoResizing key.
• Single-pass control scaling, which is controlled by the Form.DisableSinglePassControlScaling key for individual Form controls, by the AnchorLayout.DisableSinglePassControlScaling key for anchored controls, and by the MonthCalendar.DisableSinglePassControlScaling key for the MonthCalendar control.
• High DPI scaling and layout improvements, which are controlled by the CheckedListBox.DisableHighDpiImprovements key for the CheckedListBox control, by the DataGridView.DisableHighDpiImprovements key for the DataGridView control, and by the Toolstrip.DisableHighDpiImprovements key for the ToolStrip control.

The single default opt-in setting provided by setting DpiAwareness to PerMonitorV2 is generally adequate for new Windows Forms applications. However, you can then opt out of individual high DPI improvements by adding the corresponding key to the application configuration file. For example, to take advantage of all the new DPI features except for dynamic DPI support, you would add the following to your application configuration file:

    <System.Windows.Forms.ApplicationConfigurationSection>
      <add key="DpiAwareness" value="PerMonitorV2" />
      <!-- Disable dynamic DPI support -->
      <add key="EnableWindowsFormsHighDpiAutoResizing" value="false" />
    </System.Windows.Forms.ApplicationConfigurationSection>

Typically, you opt out of a particular feature because you've chosen to handle it programmatically. For more information on taking advantage of High DPI support in Windows Forms applications, see High DPI Support in Windows Forms.

### DisableDpiChangedMessageHandling

Starting with .NET Framework 4.7, Windows Forms controls raise a number of events related to changes in DPI scaling. These include the DpiChangedAfterParent, DpiChangedBeforeParent, and DpiChanged events. The value of the DisableDpiChangedMessageHandling key determines whether these events are raised in a Windows Forms application.

### Single-pass scaling

Single or multi-pass scaling influences the perceived responsiveness of the user interface and the visual appearance of user interface elements as they are scaled. Starting with .NET Framework 4.7, Windows Forms uses single-pass scaling. In previous versions of the .NET Framework, scaling was performed through multiple passes, which caused some controls to be scaled more than was necessary. Single-pass scaling should only be disabled if your app depends on the old behavior.
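As another sketch built only from the keys listed above (not an excerpt from the original documentation), an app that depends on the old multi-pass scaling behavior for its forms could keep per-monitor DPI awareness enabled while opting out of single-pass scaling of Form controls:

    <System.Windows.Forms.ApplicationConfigurationSection>
      <add key="DpiAwareness" value="PerMonitorV2" />
      <!-- Keep the old multi-pass scaling behavior for Form controls -->
      <add key="Form.DisableSinglePassControlScaling" value="true" />
    </System.Windows.Forms.ApplicationConfigurationSection>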