text
stringlengths 104
605k
|
---|
## Algebra, Combinatorics and Geometry Seminar
### This Weeks Lecture
April 18, 2013
12:00pm
427 Thackeray Hall
Prof. Greg Constantine, Univ. of Pittsburgh
" One error for the price of existence"
Abstract: Existence of optimal nonlinear codes, such as Hadamard codes, is difficult to establish. I show existence of codes that correct just one less error than the (possibly nonexistent) optimal codes, irrespective of dimension.
### Fall 2012/Spring 2013 Schedule
September 20 and 27, 2012
12:00pm
427 Thackeray Hall
Sevak Mkrtchyan, Carnegie Mellon University
"Asymptotic representation theory of symmetric groups"
Abstract: We will study local and global statistical properties of Young diagrams with respect to a Plancherel-type family of measures called Schur-Weyl measures and use the results to answer a question from asymptotic representation theory. More precisely, we will solve a variational problem to prove a limit-shape result for random Young diagrams with respect to the Schur-Weyl measures and apply the results to obtain logarithmic, order-sharp bounds for the dimensions of certain representations of finite symmetric groups. By studying the local fluctuations of the underlying point processes via the saddle point method we will prove that the Schur-Weyl measures have the asymptotic equipartition property. We will mention connections to combinatorics and random matrix theory.
October 18, 2012
12:00pm
427 Thackeray Hall
Prof. Armin Gholampour, Univ. of Maryland
"Donaldson-Thomas invariants of 2-dimensional sheaves and modular forms"
Abstract: We define the Donaldson-Thomas invariants associated to the moduli space of stable 2-dimensional sheaves on a smooth threefold X. If X is a smooth K3 fibration over a curve, we express the DT invariants of X in terms of the Euler characteristics of the moduli spaces of stable torsion free sheaves on a K3 surface and the Noether-Lefschetz numbers of the fibration. From this we conclude that the generating functions of the DT invariants of X are modular. We extend this to the case that the K3 fibration has finitely many fibers with nodal singularities. Finally, we sketch a method to compute the DT invariants of the Calabi-Yau complete intersections such as Fermat quintic in P^4.
October 25, 2012
12:00pm
427 Thackeray Hall
Prof. Howard Garland, Yale University
"Eisenstein series on loop groups"
Abstract: We will discuss the existence and meromorphic continuation of Eisenstein series on loop groups and the the possible application of this theory to automorphic L-functions associated to cusp forms on finite-dimensional groups.
November 1 and 8, 2012, 2012
12:00pm
427 Thackeray Hall
Dr. Rina Anno, University of Pittsburgh
"Fourier-Mukai transforms for enhanced triangulated categories"
Abstract: In algebraic geometry, the study of derived categories of sheaves is stumbled by the fact that the category of functors between triangulated categories is not triangulated. The usual way to treat this is to only consider Fourier-Mukai (integral) functors, replacing the category of functors by the category of their Fourier-Mukai kernels. The problem however is that this correspondence between kernels and transforms is neither full nor faithful. I will talk about a different idea: consider enhancement of derived categories of sheaves by DG categories of modules instead, and instead of Fourier-Mukai kernels get a DG category of bimodules that behaves much better.
November 15, 2012
12:00pm
427 Thackeray Hall
Dr. Chirs Manon, George Mason University
"The combinatorial commutative algebra of conformal blocks"
Abstract: Toric degenerations of schemes are a way to replace geometric or algebraic questions with questions about polyhedral geometry. In this talk we discuss how the combinatorics of objects from mathematical physics, the conformal blocks, can be used to construct flat degenerations of the Cox ring of the moduli of quasi-parabolic principal bundles on an $n-$marked curve of genus $g.$ We will discuss when these degenerations are toric, and how the resulting combinatorial pictures can be used to prove structural theorems about this ring.
November 29, 2012
12:00pm
427 Thackeray Hall
Alexei Davydov, Ohio University
"Structure of braided tensor categories"
Abstract: A certain equivalence relation (Witt equivalence) allows one to organise braided tensor categories into a manageable set of classes. For example under this equivalence symmetric tensor categories form just two classes (Tannakian and super-Tannakian). In the fusion braided case the equivalence classes form a group (a generalisation of the Witt group). The structure of this group is related to a conjecture of Moore and Seiberg about chiral algebras of rational conformal field theories.
December 6, 2012
12:00pm
427 Thackeray Hall
Prof. Greg Constantine, University of Pittsburgh
"Design pairs"
Abstract: The general theme is that of constructing symmetric designs. It is often possible to accomplish this by working with two combinatorial structures, neither of which is a 2-design, but which act jointly to allow the construction of a 2-design.
Such design pairs offer an expression of certain integers that are equal to 3 mod 4 as a sum of two square moduli of cyclotomic integers. It is on occasion also possible to use the Seidel-Goethals' method to construct Hadamard matrices, with such design pairs playing a central role.
February 28, 2012
12:00pm
427 Thackeray Hall
Prof. Bogdan Ion, University of Pittsburgh
"BGG reciprocity for current algebras
Abstract: Current algebras are special maximal parabolic subalgebras of affine Lie algebras. In the case of untwisted affine Lie algebras they are isomorphic to the tensor product of a finite dimensional simple Lie algebra and the ring pf polynomials in one variable. The category of finite dimensional representations of current algebras is not semisimple. Recently Chari and collaborators have conjectured a version of the BGG reciprocity in this context, which connects simple finite dimensional representations, their projective covers, and standard modules. I will present a proof of this conjecture.
March 21, 2012
12:00pm
427 Thackeray Hall
Mr. Takuya Murata, University of Pittsburgh
"Asymptotic of multiplicities of the reductive group action: torus case"
Abstract: The Okounkov body of a projective variety is a compact convex set that encodes geometric invariants of the variety; e.g., the degree (i.e., the 1st Chern class of O(1)) of the variety. In the talk, we are interested in the case when there is an action of a complex connected reductive group on the variety. Okounkov studied the asymptotic of the multiplicity m_{k, k \lambda} as k goes infinity. We study that of m_{k, lambda}. The latter case goes back to Howe (finite group) and Brion (reductive group). The current result I and Kaveh have is when the group is a torus. The proof of the general case is also in preparation. In the talk, the symplectic approach (Riemann-Roch) may also be mentioned.
March 28, 2013
12:00pm
427 Thackeray Hall
Prof. Kuimars Kaveh, Univ. of Pittsburgh
"Geometric inequalities for multiplicities of ideals"
Abstract:
April 11, 2013
12:00pm
427 Thackeray Hall
Dr. Rina Anno, Univ. of Pittsburgh
"Braiding conditions for spherical twists"
Abstract:
A spherical functor between two triangulated categories is a functor with left and right adjoint for which cones of the four possible adjunction and coadjunction maps
are autoequivalences of categories. These autoequivalences are called spherical twists (or cotwists). In a number of situations, certain spherical twists generate a weak braid group or affine braid group action on a category. We are looking for a simplest possible condition on two spherical functors that ensures that their twists commute, or satisfy the braid relation. For the twists that are induced by a single object (or rather a functor from D^b(pt) to D^b(X)), these conditions are that Ext^*(E_1,E_2)=0 for commutation, and Ext^*(E_1,E_2) is 1-dimensional in degree one, for braiding. We seek to establish similar cohomologic criteria for spherical twists induced by certain fibrations, but first we need an abstract criterion in a verifiable form.
April 18, 2013
12:00pm
427 Thackeray Hall
Prof. Greg Constantine, Univ. of Pittsburgh
" One error for the price of existence"
Abstract: Existence of optimal nonlinear codes, such as Hadamard codes, is difficult to establish. I show existence of codes that correct just one less error than the (possibly nonexistent) optimal codes, irrespective of dimension.
### Fall 2011 Schedule
September 15, 2011
12:00pm
703 Thackeray Hall
Prof. Thomas Hales, Univ. of Pittsburgh
"Mathematics in the Age of the Turing Machine"
Abstract: Next year we celebrate the centennial of Alan Turing's birth. This will be a talk for a general audience about some of the ways that computers shape mathematical research. I will give examples both of "computer proofs" that make computation part of the proof and of "formal proofs" that use computers to check the logical reasoning behind proofs.
September 29, 2011
12:00pm
703 Thackeray Hall
Prof. Urs Schreiber, Utrecht University
"Differential characters from higher Lie integration"
Abstract: The process of integrating a Lie algebra to a Lie group can be generalized to give a canonical way of integrating an L-infinity algebra to a higher stack. This extends to L-infinity cocycles. I discuss how, in joint work with Fiorenza, Sati, and Stasheff, we used this to construct smooth differential cocycle-refinements of the first and second fractional Pontrjagin class, or of the second and fourth Chern classes in the complex case. These lead to differential and twisted refinements of higher notions of Spin structures, known as string-structures and fivebrane-structures, and are motivated by an obstruction problem in the quantization of string theory.
October 13 and 20, 2011
12:00pm
703 Thackeray Hall
Prof. Hiham Sati, Univ. of Pittsburgh
"Topological modular forms"
Abstract: Topological modular forms (TMF) is a generalized cohomology theory characterized by the fact that its coefficient ring is essentially the graded ring of integral modular forms. I will explain what "essentially" means, why elliptic curves and their moduli stacks appear, and what they have to do with topology. I will also explain how this theory can be thought of as interpolating between number theory and homotopy theory. Generalizations and recent applications will be presented as time permits.
October 27 and November 3, 2011
12:00pm
703 Thackeray Hall
Prof. Alexander Borisov, Univ. of Pittsburgh
"Geometric approach to the two-dimensional Jacobian Conjecture"
Abstract: I will describe my approach to the two-dimensional Jacobian Conjecture using some ideas of birational algebraic geometry. The talk will include recent progress using determinants of weighted trees and applications of this approach to maps of small degree.
November 10 and 17, 2011
12:00pm
703 Thackeray Hall
Prof. Kiumars Kaveh, Univ. of Pittsburgh
"Toric degenerations, integrable systems and Okounkov bodies"
Abstract: A (completely) integrable system is a Hamiltonian system which admits a maximal number of "first integrals" (also called "conservation laws"). Integrable systems are abundant in physics and mathematics and are very well-studied. In this talk we make a connection between integrable systems and algebraic geometry, discussing a general method for constructing integrable systems on a large class of varieties. This relies on methods from algebra, namely degenerating a given variety to a "toric variety". Many well-known examples of integrable systems, e.g. Guillemin-Sternberg integrable system on the flag variety, fit into this picture.
December 1, 2011
12:00pm
703 Thackeray Hall
Prof. Jason DeBlois, Univ. of Pittsburgh
"Hyperbolic disk packings and the topology of moduli space"
Abstract: Each 2-cell of the Delaunay tessellation determined by a set of points in the hyperbolic plane is 'cyclic': its vertex set lies on a circle. Call a 2-cell 'centered' if its interior contains the center of this circle. There is a sense in which centered polygons behave better than those which are cyclic but not centered. I will make this precise, then show that the set of non-centered 2-cells has a nice underlying structure that one can use to control their pathology. As an application I will describe a sort of finite version of Mumford's compactness criterion for finding compact subsets of the genus-g moduli space.
January 26, 2012
12:00pm
427 Thackeray Hall
Chris Kapulkin, Univ. of Pittsburgh
"An introduction to polynomial functors"
Abstract: This talk is meant to be an introduction to the theory of polynomial functors and their applications. Building on set-theoretic intuitions I will introduce the notion of a polynomial functor on a (slice of) locally cartesian closed category and show some of its basic properties. Later, I will present several applications with a special emphasis on homotopy theory and higher category theory.
February 2, 2012
12:00pm
427 Thackeray Hall
Prof Alexander Borisov, Univ. of Pittsburgh
"Determinants of weighted trees and applications to plane compactifications"
Abstract: We derive "local" formulas for determinant matrices associated to weighted trees. Main motivation and applications come from the graphs of rational curves obtained by successive blowups "at infinity" of the projective plane. In particular, we define two integer invariants of these curves and interpret them as functions on the appropriate Zariski-Riemann space of valuations. We explain how these invariants naturally appear in the two-dimensional Jacobian conjecture and prove that if their values are fixed, the corresponding valuations form finitely many families, modulo polynomial automorphisms.
February 9, 2012
12:00pm
427 Thackeray Hall
Prof Gregory Constantine, Univ. of Pittsburgh
"Four Squares Of Sums Of Sets Of Cosines"
Abstract
March 29, 2012
12:00pm
427 Thackeray Hall
Prof. Kiumars Kaveh, Univ. of Pittsburgh
"Reciprocity Law on Algebraic Curves"
Abstract: I will talk about Galois theory of algebraic curves and Weil reciprocity.
The talk examines Galois theory and class field theory (nicely covered in Tom's course for number fields) for field of rational functions on an algebraic curve. I will give a simple proof of Weil reciprocity using Newton polygons. I will try to cover most of background material (no need to have attended Tom's course). Familiarity with Galois theory will be assumed.
April 5, 2012
12:00pm
427 Thackeray Hall
Prof. Ping Xu, Penn State University
"Geometry of Maurer-Cartan elements on complex manifolds"
Abstract: Maurer-Cartan elements on a complex manifold are extensions of holomorphic Poisson structures. We study the geometry of these structures, by investigating their cohomology and homology theory. In particular, we describe a duality on the homology groups, which generalizes the Serre duality of Dolbeault cohomology.
April 19, 2012
12:00pm
427 Thackeray Hall
Prof. Stephen DeBacker
"Unexpected twists"
Abstract: The conjectural Local Langlands Correspondence (LLC) states that the set of irreducible smooth discrete series representations of a p-adic group may be partitioned into finite sets, called L-packets, such that many wonderful properties hold. One of the expected properties states that an appropriate combination of characters of the representations in an L-packet will be stable (that is, as a function on the set of strongly regular semisimple rational elements, the combination should assume the same value at any two elements that are conjugate over the algebraic closure). We have found that, in a very natural setting, the “obvious" L-packet does not have this property. To overcome this difficulty, a certain twist must be added to the mix.
April 26, 2012
12:00pm
427 Thackeray Hall
Hoang Le Truong, Univ. of Pittsburgh and Hanoi Math Institute
"The equality (I^2 = q I) in sequentially cohen-macaulay rings"
Abstract
May 3, 2012
12:00pm
427 Thackeray Hall
Walter Freyn, University of Muenster
"From SL(2) and hyperbolic space to hyperbolic Kac-Moody algebras and their buildings"
Abstract: In this talk we start with with the finite dimensional Lie algebra sl(2) and hyperbolic space and describe how to construct from those elementary building blocks hyperbolic Kac-Moody algebras and their associated Kac-Moody groups. Associated to Kac-Moody algebras are twin buildings. We describe embeddings of hyperbolic twin buildings into the compact real forms of Kac-Moody algebras.
### Fall 2010/Spring 2011 Schedule
September 30, 2010
12:00pm
703 Thackeray Hall
Dr. Thomas Hales, Univ. of Pittsburgh
"The Fundamental Lemma for Beginners."
Abstract: At the International Congress of Mathematicians in India last month, Ngo Bao Chau was awarded a Fields medal for his proof of the "Fundamental Lemma." This talk is particularly intended for students and mathematicians who are not specialists in the theory of Automorphic Representions. I will describe the significance and some of the applications of the "Fundamental Lemma." I will explain why this problem turned out to be so difficult to solve and will give some of the key ideas that go into the proof.
October 21, 2010
12:00pm
703 Thackeray Hall
Dr. Thomas Hales, Univ. of Pittsburgh
"The fundamental lemma and the Hitchin fibration"
Abstract: The fundamental lemma is a collection of identities of integrals that comes up in the study of a trace formula. These identities have recently been proved by Ngo Bao Chau, for which he was awarded a Fields Medal earlier this year. At the heart of his proof is a beautiful interpretation of these integrals in terms of the cohomology of the Hitchin fibration. This talk will explain the geometry of the Hitchin fibration in relation to the fundamental lemma.
October 28, 2010
12:00pm
703 Thackeray Hall
Chris Kapulkin, Univ. of Pittsburgh
"From Atiyah to Lurie. 20 years of topological quantum field theory"
Abstract: I will define an n-dimensional TQFT as a symmetric monoidal functor from the category of n-cobordisms to the category of complex vector spaces and show some of its applications to algebraic topology and representation theory. This notion will be examine in case n=2---I will sketch the proof that the category of 2-dimensional TQFTs is equivalent to the category of commutative Frobenius algebras. In the last part of the talk, I will present some recent development: work of Jacob Lurie on Baez-Dolan Cobordism Hypothesis. Even though I will introduce such notions as monoidal structure or a symmetry, some familiarity with category theory will be assumed.
November 11, 2010
12:00pm
703 Thackeray Hall
Dr. Yimu Yin, Univ. of Pittsburgh
"Integration in algebraically closed valued fields with sections"
Abstract: I will describe how to construct Hrushovski-Kazhdan style motivic integration in certain expansions of ACVF(0, 0). Such an expansion is typically obtained by adding a full section from the RV-sort into the VF-sort and some (arbitrary) extra structure in the RV-sort. The construction of integration, that is, the inverse of the lifting map L, is rather straightforward. What is a bit surprising is that the kernel of L is still generated by one element, exactly as in the case of integration in ACVF(0, 0). I will also describe an application to zeta functions, showing that their rationality, shown by Denef and Pas in the 80s, is uniform.
November 18, December 2, 2010
12:00pm
Dr. Kiumars Kaveh, Univ. of Pittsburgh
"Convex polytopes, irreducible representations and flag varieties"
Abstract: I review some basic facts about the crystal bases for finite dimensional irreducible representations of a reductive group G. A remarkable property of a crystal basis (due to Littelmann and Bernstein-Zelevinsky) is that its elements can be naturally parametrized by the set of integral points in a convex polytope (a string polytope). I will then discuss a recent result that the Littelmann parametrization coincides with a geometric valuation on the field of rational functions of the flag variety of G. This valuation is constructed out of a sequence of Schubert varieties. This extends an earlier result of A. Okounkov for symplectic group and confirms the general philosophy that the string polytopes are analogues of Newton polytopes for toric varieties.
December 9, 2010
12:00pm
Dr. Bogdan Ion, Univ. of Pittsburgh
"Geometric Complexity Theory (after Mulmuley)"
Abstract: Geometric complexity theory (GCT) is an approach to the algebraic P vs NP problem laid out by Ketan Mulmuley and collaborators. I will give an overview of GCT and discuss the geometric and representation-theoretical conjectures on which everything ultimately relies.
February 3, 2011
12:00pm
703 Thackeray Hall
Chris Kapulkin
"Why do n-categories matter?"
Abstract: After its introduction in 1945 by Saunders MacLane and Samuel Eilenberg, the notion of category has been generalized in many different directions. One such direction is to consider so called higher dimensional categories which apart from the objects (0-cells) and morphisms (1-cells) also have morphisms between morphisms
(2-cells) and so on. The motivation to consider such structures comes from many different parts of mathematics, for example: topology, algebra, theoretical computer science, and mathematical physics. In this talk I will sketch those motivations and show how higher dimensional categories provide a universal framework to work with various algebraic, topological, and logical structures. Finally, I will try to sketch different approaches to the definition of higher dimensional category and discuss their advantages and disadvantages.
February 10 and 24, 2011
12:00pm
703 Thackeray Hall
Prof. Kiumars Kaveh
"Polytope algebra and cohomology rings of toric varieties"
Abstract: I will give a brief introduction to "toric varieties". Main example of a toric variety is the projective space. They are objects of much interest in algebraic geometry, combinatorics and topology. The geometry and topology of toric varieties are closely related to the geometry and combinatorics of convex polytopes. I will discuss a very nice description of cohomology ring of a smooth projective toric variety due to Khovanskii-Pukhlikov. This is related to the so-called polytope algebra associated to a convex polytope in R^n. If time permits I will mention a result of mine which describes the cohomology rings of flag variety and Grassmannian in a similar fashion. For the most part I just assume basic background in algebra, geometry and topology.
February 17, 2011
12:00pm
703 Thackeray Hall
Prof. Xander faber
"The Berkovich Ramification Locus for Rational Functions"
Abstract: Given a nonconstant holomorphic map f: X \to Y between compact Riemann surfaces, one of the first objects we learn to construct is its ramification divisor R_f, which describes the locus at which f fails to be locally injective. The divisor R_f is a finite formal linear combination of points of X that is combinatorially constrained by the Hurwitz formula.
Now let k be an algebraically closed field that is complete with respect to a nontrivial non-Archimedean absolute value. For example, k = C_p. Here the role of a Riemann surface is played by a projective Berkovich analytic curve. As these curves have many points that are not algebraic over k, some new (non-algebraic) ramification behavior appears for maps between them. For example, the ramification locus is no longer a divisor, but rather a closed analytic subspace. The goal of this talk is to introduce the Berkovich projective line and describe some of the interesting features of the ramification locus for self-maps f: P^1 \to P^1.
March 4, 2011
12:00pm
703 Thackeray Hall
Prof. Greg Constantine, Univ. of Pittsburgh
"Faithful characters of finite groups"
Abstract: Characters of the faithful representations of a finite group are examined, and a formula that their degrees verify is obtained. As a follow-up, a combinatorial construction of symmetric designs using generating functions and characters of Abelian groups is presented.
March 17, 2011
12:00pm
703 Thackeray Hall
Dr. Thomas Hales, Univ. of Pittsburgh
"The Use of the Fundamental Lemma"
Abstract:
In a series of lectures in Paris in 1980, Langlands conjectured the "fundamental lemma", a collection of identities of integrals associated with reductive groups over nonarchimedean local fields. These identities were proved by Ngo Bao Chau in a book published last year. The fundamental lemma has already had many applications to the theory of automorphic forms and number theory. This talk will give a survey of some theorems that rely on the fundamental lemma.
April 7, 2011
12:00pm
Prof. Greg Constantine, Univ. of Pittsburgh
"Constructions of symmetric designs"
Abstract: Methods of constructing symmetric designs, and outstanding open problems in this area are presented.
### Fall 2009/Spring 2010 Schedule
September 3, 2009
12:00pm
703 Thackeray Hall
Prof. Julia Gordon, University of British Columbia
"On motivic-ness of some positive-depth characters"
Abstract: This talk will be about trying to use motivic integration to study Harish-Chandra characters of p-adic groups. I will talk about linear functionals in the context of motivic integration, and will prove that Harish-Chandra characters of some positive-depth supercuspidal representations of p-adic groups are "constructible motivic exponential functions" (I will define all these terms).
October 1, 2009
12:00pm
703 Thackeray Hall
Prof. Bogdan Ion, University of Pittsburgh
"On PBW bases"
Abstract: Virtually all the proofs of Poincare-Birkhoff-Witt type theorems are of combinatorial nature reducing one way or another to the knowledge of generators and relations for the algebras in question. I will explain how to obtain PBW theorems for reasonably large classes of algebras without requiring any explicit information about generators or relations.
October 5, 2009
12:00pm
Kyungyong Lee, Purdue Univeristy
Title : q,t-Catalan numbers
Abstract : The q,t-Catalan numbers naturally occur in the study of Macdonald polynomials, which are an important family of multivariable orthogonal polynomials introduced by Macdonald with applications to a wide variety of subjects including Hilbert schemes, harmonic analysis, representation theory, mathematical physics, and algebraic combinatorics. Haiman and Garsia-Haglund proved that they are polynomials of q and t with nonnegative coefficients. We give simple upper bounds on coefficients in terms of partition numbers, and find all coefficients which achieve the bounds. Our main idea is to develop a nontrivial morphism from the space of alternating polynomials to partitions. This is a joint work with Li Li.
October 15, 2009
12:00pm
703 Thackeray Hall
Prof. Alexander Borisov, University of Pittsburgh
"A geometric approach to the two-dimensional Jacobian Conjecture"
Abstract: The Jacobian Conjecture of Keller states that any unramified polynomial map from the affine complex space to itself must be invertible. We study such maps in dimension two by compactifying and blowing up points to get a map from some rational surface to the projective plane. Using ideas of the Minimal Model Program, we obtain strong restrictions on the combinatorial structure of this rational surface. We exhibit a surface satisfying these restrictions and explain how it might lead to a counterexample to the Jacobian Conjecture.
October 22, 2009
12:00pm
703 Thackeray Hall
Prof. Jeffrey Wheeler, University of Pittsburgh
"A Proof the Erdos-Heilbronn Problem Using the Polynomial Method of Alon, Nathanson, and Ruzsa"
Abstract: In the early 1960's, Paul Erdos and Hans Heilbronn conjectured that for any two nonempty subsets A and B of Z/pZ the number of restricted sums (restricted in the sense that we require the elements to be distinct) of an element from A with an element from B is at least the smaller of p and |A|+|B|-3. This problem is related to independent results of Cauchy and Harold Davenport which established that there are at least the minimum of p and |A|+|B|-1 sums of the form a+b (with the restriction removed). One thing that makes the problem interesting is that the results of Cauchy and Davenport were immediately established whereas the conjecture of Erdos and Heilbronn was open for more than 30 years.
We present the proof of the conjecture due to Noga Alon, Melvyn Nathanson, and Emre Rusza. This technique is known as the Polynomial Method and is regarded by many as a powerful tool in the area of Additive Combinatorics.
October 29, 2009
12:00pm
703 Thackeray Hall
Truong Nguyen, Univeristy of Pittsburgh
"Counting Points on Elliptic Curves over Finite Fields"
November 5, 2009
12:00pm
703 Thackeray Hall
Petr Pancoska, Center for Clinical Pharmacology, Department of Medicine, University of Pittsburgh
"Entromics as the theoretical foundation of individual genomics: From gene sequencing to severity of cystic fibrosis using physics in graph theory."
Abstract: The goal of entromics is to derive a quantitative characterization of the energy cost for the assembly of the genome using the information about genome DNA sequence as the exclusive input. From this effort we derive a thermodynamic formula, which combines enthalpy term with a special (compensatory) entropy term that was not known before. We therefore benefit from the study of this novel entropy distribution along the genome, which leads us to the name "entromics".
Entromics uses Eulerian oriented multigraphs to describe DNA sequences. This enables recognizing sequences in the genome that are mutually homomorphic, while being dissimilar in all aspects considered by current biology. This opens a whole new dimension of genomics, discovering until now hidden, but biologically important relationships in the genome. The focus of the presentation will be to deriving physical and biological interpretation of the DNA homomorphism from selective combination of mathematical proposition results with basic physical principles. Examples of clinical applications of entromics will be also shown and related open mathematical problems will be presented for discussion.
November 12, 2009
12:00pm
703 Thackeray Hall
Tran Nam Trung, University of Pittsburgh
"Regularity index of Hilbert functions of powers of ideals"
Abstract: Let A be a Noetherian standard graded algebra over an Artinian ring A_0. For a finitely generated graded A-module M, there is a function H called the Hilbert function of M. It is well-known that there is a polynomial P with rational coefficients called the Hilbert polynomial of M such that agrees with the Hilbert function at all sufficiently large natural numbers m. The regularity index of the Hilbert function of M is defined by ri(M):= min {m_0 | H(m)=P(m) forall m >= m_0}. Let I be a homogeneous ideal of A. It is shown that the regularity index of the Hilbert function of I^n M is a linear function of n for all n large enough.
November 19, 2009
12:00pm
703 Thackeray Hall
Prof. Gregory Contantine, Univ. of Pittsburgh
"Combinatorics of the Bose-Mesner algebra"
Abstract: We describe the origins of the Bose-Mesner algebra, and its connections to optimal designs and codes through extreme spectral properties. The focus then shifts to the Johnson scheme by showing that extreme spectra lead to geometric symmetry. Within this context, a class of combinatorial problems, including flags of maximal length, shall be described.
January 7, 2010
3:00pm
704 Thackeray Hall
APPLICANT COLLOQUIUM
Prof. Tonghai Yang, Univ. of Wisconsin
"The Gross-Zagier Formula Revisited"
More>
February 25, 2010
12:00pm
703 Thackeray Hall
Prof. Bogdan Ion, Univ. of Pittsburgh
"Generalized exponents and the combinatorics of minimal expressions"
Abstract: I will describe a general scheme for computing generalized exponents (and in fact all q-multiplicities). The fact on which ultimately everything rests is an explicit formula for Fourier coefficients of the Cherednik kernel (a non-symmetric partition function). I will explain some of the details required to prove this formula and the combinatorics necessary to express the answer.
March 3, 2010
3:00pm
704 Thackeray Hall
APPLICANT COLLOQUIUM
Prof. Kiumars Kaveh, McMasters Univ.
"Convex Bodies, Algebraic Equations & Group Actions"
More>
March 4, 2010
3:00pm
704 Thackeray Hall
APPLICANT COLLOQUIUM
Prof. Jeehoon Park, McGill University
More>
March 18, 2010
12:00pm
Hoang Le Truong
”Hilbert Coefficients and Sequentially Cohen-Macaulay Modules”
More>
March 25, 2010
12:00pm
Soon-Yi Kang, Univ. of Pittsburgh
"Mock modularity that appears in q-hypergeometric series and traces of singular moduli"
Abstract: The theory of mock modular forms, which is an extension of the classical modular forms, has been rapidly developed in recent years. In this talk, we present two most famous examples of the mock modular forms along with their applications to partition and number theory. They are Ramanujan's mock theta functions and the generating series of the traces of singular moduli.
April 8, 2010
12:00pm
Tom Hales, Univ. of Pittsburgh
Abstract: This talk will survey some recent elementary results from the areas of packings and tilings.
April 15, 2010
12:00pm
Prof. Alexander Borisov, Univ. of Pittsburgh
"Lattice-free simplices"
Abstract: Consider an n-dimensional lattice Z^n inside the real space R^n.
A lattice-free simplex is a simplex with vertices in Z^n and no other points from Z^n inside or on the boundary. The ultimate goal is to classify these simplices up to the automorphisms of the lattice. In dimension two the answer is obvious, and in dimension three it is relatively easy and well-known. However even in dimension 4 the answer is not known. I will give a survey of old and recent results on this topic.
April 22, 2010
12:00pm
703 Thackeray Hall
Prof. Bogdan Ion
"Generalized exponents and the combinatorics of minimal expressions 3"
Abstract: I will describe a general scheme for computing generalized exponents (and in fact all q-multiplicities). The fact on which ultimately everything rests is an explicit formula for Fourier coefficients of the Cherednik kernel (a non-symmetric partition function). I will explain some of the details required to prove this formula and the combinatorics necessary to express the answer.
### Fall 2008/Spring 2009 Schedule
August 28, 2008
12:00pm
703 Thackeray Hall
Dr. Thomas Hales, Univ. of Pittsburgh
"The transfer principle for the fundamental lemma"
September 4, 2008
12:00pm
703 Thackeray Hall
Dr. Thomas Hales, Univ. of Pittsburgh
"What is the Fundamental Lemma?"
September 4, 2008
1:00pm
703 Thackeray Hall
Yimu Yin, Univ. of Pittsburgh
"Kazhdan Hrushovski Motivic Integration"
September 11, 2008
12:00pm
703 Thackaray Hall
Dr. Bogdan Ion, University of Pittsburgh
"The Fourier-Mukai transform"
Abstract: This is an expository talk on the definition and basic
properties of the Fourier-Mukai transform. Some applications (such as
Atiyah's classification of vector bundles over elliptic curves) will
also be discussed.
References:
1) Mukai. Duality between $D(X)$ and $D(\hat X)$ with its
application to Picard sheaves. Nagoya Math. J. (1981) vol. 81 pp.
153-175
2) Atiyah. Vector bundles over an elliptic curve. Proc. London Math.
Soc. (3) (1957) vol. 7 pp. 414-452
September 18, 2008
12:00pm
703 Thackeray Hall
Dr. Alexander Borisov, Univ. of Pittsburgh
"A geometric approach to the two-dimensional Jacobian Conjecture"
Abstract: We pursue the most natural (from the birational geometry viewpoint) approach to the classical two-dimensional Jacobian Conjecture. Starting with a possible counterexample, we resolve the singularities at infinity to get a map from a rational surface to the projective plane. A priori, one can say very little about the structure of the intersection graph of the blown-up curves. By careful analysis, we manage to put severe restrictions on this graph. As a corollary, we prove that all the images of these curves pass through a single point on the projective plane.
September 25, 2008
12:00pm
Jeffrey Wheeler
"The Erdos-Heilbronn Problem for Finite Groups"
Abstract: The Erdos-Heilbronn Conjecture states that for any two nonempty subsets A and B of Z/pZ we have |A \dot{+} B| \geq min { p, |A|+|B|-3 }, where A \dot{+} B is the set of sums a+b mod p with a \in A and b \in B and a \neq b. Dias da Silva and Hamidounne established the result for the case A = B in 1994 while Alon, Nathanson, and Ruzsa established the more general result in 1995. We further generalize this result and extend it from Z/pZ to arbitrary finite (including non-abelian) groups. This is a joint work with Paul Balister of the University of Memphis.
October 2, 2008
12:00pm
703 Thackeray Hall
Dr. Thomas Hales, Univ. of Pittsburgh
"Ngo's proof of the Fundamental Lemma (overview)"
Abstract: This talk will describe the general outline of Ngo's proof of the fundamental lemma. it will touch on some of the key structures in the proof: affine Springer fibers, the Hitchin fibration, the stabilization of the trace formula, and a key theorem on supports.
October 9, 2008
12:00pm
Dr. Greg Constanstine, Univ. of Pittsburgh
Abstract: Existence and and construction of Hadamard designs are examined from viewpoints of maximal cliques in association schemes, systems of distinct representatives, covering colored arborescences in complete graphs, and probability theory.
October 16, 2008
12:00pm
Dr. Bogdan Ion, Univ of Pittsburgh
"Affine Springer fibers (after Kazhdan, Lusztig, Bezrukavnikov)"
Abstract: This is an introduction to affine Springer fibers and their basic properties. We also give a proof of the dimension formula for fibers over regular semisimple elements, following Bezrukavnikov. References:[1] Kazhdan and Lusztig. Fixed point varieties on affine flag manifolds. Israel J. Math. (1988) vol. 62 (2) pp. 129-168 [2]
Bezrukavnikov. The dimension of the fixed point set on affine flag manifolds. Math. Res. Lett. (1996) vol. 3 (2) pp. 185-189
October 23, 2008
12:00pm
Tonghai Yang
"Arithmetic Intersection and the Non-abelian Chowla-Selberg formula"
Abstract: Let F=Q(sqrt D) be a real quadratic field. Let X be the Hilbert modular surface, viewed as an arithmetic 3-fold over integers. It has two families of naturally defined cycles, the arithmetic Hizebruch-Zagier divisors (dimension 2) T_m and arithmetic CM cycles CM(K) associated to a quartic CM number field K. They intersect properly when K is non-biquadratic. In this talk, we give an explicit formula for their intersections in terms of arithmetic on K. As an application, we explain how it implies the first non-abelian generalization of the celebrated Chowla-Selberg formula, a special case of the Colmez conjecture.
October 30, 2008
12:00pm
Dr. Bogdan Ion, Univ of Pittsburgh
"Affine Springer fibers II (after Kazhdan, Lusztig, Bezrukavnikov)"
Abstract: This is an introduction to affine Springer fibers and their basic properties. We also give a proof of the dimension formula for fibers over regular semisimple elements, following Bezrukavnikov. References:[1] Kazhdan and Lusztig. Fixed point varieties on affine flag manifolds. Israel J. Math. (1988) vol. 62 (2) pp. 129-168 [2]
Bezrukavnikov. The dimension of the fixed point set on affine flag manifolds. Math. Res. Lett. (1996) vol. 3 (2) pp. 185-189
November 6, 2008
12:00pm
Ruggero Gabbrielli, Centre for Orthopaedic Biomechanics, Department of Mechanical Engineering, University of Bath
"Periodic Space Partitions from a Pattern Forming Equation"
Abstract: A new counterexample to Kelvin’s Conjecture on minimal foams has been found. The conjecture stated that the minimal surface area partition of space into cells of equal volume was a tiling by truncated octahedra with slightly curved faces. Weaire and Phelan found a counterexample whose periodic unit includes two different tiles, a dodecahedron and a polyhedron with 14 faces. Successively, Sullivan showed the existence of a whole domain of partitions by polyhedra having only pentagonal and hexagonal faces that included the Phelan-Weaire structure..
Here is presented a new set of partitions with lower surface area than Kelvin's partition containing quadrilateral, pentagonal and hexagonal faces. These and other new partitions have been generated via the Voronoi diagram of spatially periodic sets of points obtained as local maxima of the stationary solution of the 3D Swift-Hohenberg partial differential equation in a triply periodic boundary, with pseudorandom initial conditions.
February 5, 2008
12:00pm
Peter Lumsdaine, Carnegie Mellon Univ.
"Higher Categories in Algebra"
Anstract: Higher categories have been studied since the 1970's in pure category theory, algebraic topology, and algebraic geometry. More recently, however, they have become of interest to a wider audience, as "categorification" techniques and results have emerged in a range of areas. I will give an introduction to higher categories, and a quick survey of some applications.
February 12, 2009
12:00pm
Dr. Greg Constantine, Univ. of Pittsburgh
"A construction of 2-designs of any block size with transitive automorphism groups"
Abstract: Large infinite families of nontrivial 2-designs are known to exist, yet a systematic listing by basic parameters, such as block size, was not known. I shall demonstrate how a nontrivial 2-design with automorphism group transitive on blocks can be constructed for any block size. These objects are, therefore, less sporadic than one may have thought.
February 19, 2008
12:00pm
Sophie Morel, Institute for Advanced Study
"On the cohomology of some non-compact Shimura varieties"
Abstract : In this talk, I will explain how the method originally developed by Ihara, Langlands and Kottwitz to compute the cohomology of a Shimura variety (use the Grothendieck-Lefschetz fixed point formula in positive characteristic to calculate the trace on the cohomology of a power of Frobenius at a good place times a Hecke operator trivial at that place, and then compare the result with Arthur's trace formula) applies to intersection cohomology of the Satake-Baily-Borel compactification of the Shimura varieties of unitary groups over Q and of the Siegel moduli varieties. I will also present applications to the calculation of the L-function of the intersection complex (for unitary groups and small-dimensional symplectic groups) and some applications involving base change from quasi-split unitary groups to general linear groups.
March 5 , 2009
1:00pm
703 Thackeray Hall
Dr. Yimu Yin, Univ of Pittsburgh
"Fourier Transform in Algebraically Closed Valued Fields"
February 26, March 19, 26 2008
1:00pm
Alexander Borisov, Univ of Pittsburgh
"An introduction to Higgs bundles"
Abstract: Higgs bundles on projective curves are in some sense natural generalizations of semistable holomorphic vector bundles. I will give some basic definitions and state some theorems regarding them.
Thurs April 2, 2009
1:00pm
703 Thackeray Hall
Dr. Thomas Hales, Univ. of Pittsburgh
"The Reinhardt Conjecture"
Abstract: In 1934, Reinhardt made a conjecture about the shape of a (centrally symmetric) disk in the plane with the property that its best possible packing in the plane is the worst. The conjecture is that an octagon with its edges clipped (called the "smoothed octagon") is the worst possible shape from the point of view of packings. This talk will describe some recent progress toward a solution to this conjecture.
April 9, 2009
1:00pm
703 Thackeray Hall
Dr. Thomas Hales, Univ. of Pittsburgh
"A progress report on the Flyspeck formal proof project"
Abstract: A few years ago the Flyspeck formal proof project was launched. The purpose of this project is to give a complete formal proof of the Kepler conjecture. The Kepler conjecture asserts that no packing of congruent balls in three dimensions can have density greater than the density of the familiar cannonball arrangement. A formal proof is a proof in which every logical step has been checked by a computer, based on the fundamental axioms of mathematics. Originally, this project was estimated to take 20 work years to complete. The project now appears to be over half-way complete. This talk will discuss some recent progress toward the completion of the Flyspeck project.
April 16, 2009
12:00pm
Thack 703
Florence Lecomte, University of Strasbourg
"Motives and Realizations"
Abstract: Without giving any construction, I will explain the main properties of Voevodsky's motives. With easy examples, I will show how they work and how you can realize them.
2007-2008 Schedule
|
# 2 Financial data and portfolio allocation
The main aim of this exercise is to familiarize yourself with the tidyverse. To download price data you can make use of the convenient tidyquantpackage. If you have trouble using tidyquant, check out the documentation. In case you did not install R and RStudio yet, you should follow the instructions provided on Absalon. Start the session by loading the tidyverse package as shown below.
# install.packages("tidyverse")
# install.packages("tidyquant")
library(tidyverse)
library(tidyquant)
### 2.1.1 Exercises
1. Download daily prices for one stock market ticker of your choice (e.g. AAPL) from Yahoo!Finance. Plot the time series of adjusted closing prices. To download the data you can use the command tq_get. If you do not know how to use it, make sure you read the help file by calling ?tq_get. I especially recommended to take a look in the examples section in the documentation.
2. Compute daily returns for the asset and visualize the distribution of daily returns in a histogram. Also, use geom_vline() to add a dashed red line that indicates the 5% quantile of the daily returns within the histogram.
3. Compute summary statistics (mean, standard deviation, minimum and maximum) for daily returns
4. Now the tidyverse magic starts: Take your code from before and generalize it such all the computations are performed for an arbitrary vector of tickers (e.g., ticker <- c("AAPL", "MMM", "BA")). Automate the download, the plot of the price time series and the table of summary statistics for this arbitrary number of assets.
5. Are days with high aggregate trading volume often followed by high aggregate trading volume days? Compute the aggregate daily trading volume (in USD) and find an appropriate visualization to analyze the question.
### 2.1.2 Solutions
prices <- tq_get("AAPL", get = "stock.prices")
prices %>% head() # How does the data look like?
## # A tibble: 6 x 8
## symbol date open high low close volume adjusted
## <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 AAPL 2011-01-03 11.6 11.8 11.6 11.8 445138400 10.1
## 2 AAPL 2011-01-04 11.9 11.9 11.7 11.8 309080800 10.2
## 3 AAPL 2011-01-05 11.8 11.9 11.8 11.9 255519600 10.3
## 4 AAPL 2011-01-06 12.0 12.0 11.9 11.9 300428800 10.2
## 5 AAPL 2011-01-07 11.9 12.0 11.9 12.0 311931200 10.3
## 6 AAPL 2011-01-10 12.1 12.3 12.0 12.2 448560000 10.5
tq_get downloads stock market data from Yahoo!Finance if you do not specify another data source. The function returns a tibble with 8 self-explaining columns, symbol, date, the market prices at the open, high, low and close, the daily volume (in number of shares) and the adjusted price in USD which factors in anything that might affect the stock price after the market closes, e.g. stock splits, repurchases and dividends.
ggplot(aes(x = date, y = adjusted)) +
geom_line() +
labs(x = "Date", y = "Price") +
scale_y_log10() # Change the y axis-scale
Figure 2.1 illustrates the time series of downloaded adjusted prices. Make sure you understand every single line of code! (What is the purpose of %>%? What are the arguments of aes()? Which alternative geoms could you use to visualize the time series? Hint: if you do not know the answers try to change the code to see what difference your intervention causes).
Next, we compute daily (log)-returns defined as $$\log(p_t) - \log(p_{t-1})$$ where $$p_t$$ is the adjusted day $$t$$ price.
returns <- prices %>%
return = log_price - lag(log_price)) %>% # Compute log returns
select(symbol, date, return)
## # A tibble: 6 x 3
## symbol date return
## <chr> <date> <dbl>
## 1 AAPL 2011-01-03 NA
## 2 AAPL 2011-01-04 0.00521
## 3 AAPL 2011-01-05 0.00815
## 4 AAPL 2011-01-06 -0.000809
## 5 AAPL 2011-01-07 0.00714
## 6 AAPL 2011-01-10 0.0187
The resulting tibble contains three columns, the last is the one with the daily log returns. Note that the first entry naturally contains the entry NA because there is no leading price. Note also, that the computations require that the time series is ordered by date - otherwise, lag would be meaningless. For the upcoming exercises we remove missing values as these would require careful treatment when computing, e.g., sample averages. In general, however, make sure you understand why NA values occur and if you can simply get rid of these observations.
returns <- returns %>% drop_na()
q05 <- quantile(returns %>% pull(return), 0.05) # Compute the 5 % quantile of the returns
returns %>% # create a histogram for daily returns
ggplot(aes(x = return)) +
geom_histogram(bins = 100) +
labs(x = "Return", y = "") +
geom_vline(aes(xintercept = q05),
color = "red",
linetype = "dashed")
Here, bins = 100 determines the width of the bins in the illustration. Also, make sure you understand how to use the geom geom_vline() to add a dotted red line that indicates the 5% quantile of the daily returns. Finally, I compute the summary statistics for the return time series.
returns %>%
summarise_at(vars(return),
list(daily_mean = mean,
daily_sd = sd,
daily_min = min,
daily_max = max))
## # A tibble: 1 x 4
## daily_mean daily_sd daily_min daily_max
## <dbl> <dbl> <dbl> <dbl>
## 1 0.000968 0.0180 -0.138 0.113
# Alternatively: compute summary statistics for each year
returns %>% group_by(year = year(date)) %>%
summarise_at(vars(return),
list(daily_mean = mean,
daily_sd = sd,
daily_min = min,
daily_max = max))
## # A tibble: 11 x 5
## year daily_mean daily_sd daily_min daily_max
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 2011 0.000821 0.0165 -0.0576 0.0572
## 2 2012 0.00113 0.0185 -0.0665 0.0850
## 3 2013 0.000308 0.0182 -0.132 0.0501
## 4 2014 0.00135 0.0136 -0.0833 0.0788
## 5 2015 -0.000121 0.0169 -0.0631 0.0558
## 6 2016 0.000467 0.0147 -0.0680 0.0629
## 7 2017 0.00157 0.0110 -0.0395 0.0592
## 8 2018 -0.000221 0.0181 -0.0686 0.0681
## 9 2019 0.00253 0.0166 -0.105 0.0661
## 10 2020 0.00237 0.0294 -0.138 0.113
## 11 2021 -0.000553 0.0192 -0.0426 0.0525
Now the tidyverse magic starts: Tidy data and a tidy workflow makes it extremely easy to generalize the computations from before to as many assets you like. The following code takes a vector of tickers, ticker <- c("AAPL", "MMM", "BA"), and automates the download as well as the plot of the price time series. At the end, we create the table of summary statistics for an arbitrary number of assets.
ticker <- c("AAPL", "MMM", "BA")
all_prices <- tq_get(ticker, get = "stock.prices") # Exactly the same code as in the first exercise
all_prices %>%
ggplot(aes(x = date,
color = symbol)) +
geom_line() +
labs(x = "Date", y = "Price") + scale_y_log10()
Do you note the tiny difference relative to the code we used before? tq_get(ticker) is able to return a tibble for several symbols as well. All we need to do to illustrate all tickers instead of only one is to include color = symbol in the ggplot aesthetics. That way, a separate line is generated for each ticker.
all_returns <- all_prices %>%
group_by(symbol) %>% # we perform the computations per symbol
select(symbol, date, return) %>%
drop_na()
all_returns %>%
group_by(symbol) %>%
summarise_at(vars(return),
list(daily_mean = mean, daily_sd = sd, daily_min = min, daily_max = max))
## # A tibble: 3 x 5
## symbol daily_mean daily_sd daily_min daily_max
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 BA 0.000561 0.0228 -0.272 0.218
## 2 MMM 0.000434 0.0137 -0.139 0.119
## 3 AAPL 0.000968 0.0180 -0.138 0.113
The same holds for returns as well. Before computing returns as before we use group_by(symbol) such that the mutate command is performed for each symbol individually. Exactly the same logic applies for the computation of summary statistics: group_by(symbol) is the key to aggregate the time series into ticker-specific variables of interest.
Lastly: Don’t try this if you are not prepared for substantial waiting times but you are now equipped with all tools to download price data for each ticker listed in the SP500 index with exactly the same number of lines of code. Just use ticker <- tq_index("SP500") which provides you with a tibble that contains each symbol that is (currently) part of the SP500.
Sometimes, aggregation across other variables than symbol makes sense as well, as e.g. in the last exercise: We take the downloaded tibble with prices and compute aggregate daily trading volume in USD. Recall, that the column volume is denoted in traded shares. I multiply the trading volume with the daily closing price to get a measure of the aggregate trading volume in USD. Scaling by 1e9 denotes trading volume in billion USD.
volume <- all_prices %>%
mutate(volume_usd = volume * close / 1e9) %>% # I denote trading volume in billion USD
group_by(date) %>%
summarise(volume = sum(volume_usd))
volume %>% # Plot the time series of aggregate trading volume
ggplot(aes(x = date, y = volume)) +
geom_line() +
labs(x = "Date", y = "Aggregate trading volume (BUSD)")
One way to illustrate the persistence of trading volume would be to plot volume on day $$t$$ against volume on day $$t-1$$ as in the example below:
all_prices %>%
group_by(symbol) %>%
mutate(volume = volume*close/1e9) %>%
ggplot(aes(x=lag(volume), y = volume, color = symbol)) +
geom_point() +
labs(x = "Lag Aggregate trading volume (B USD) ", y = "Aggregate trading volume (B USD)")
## Warning: Removed 1 rows containing missing values (geom_point).
Do you understand where the warning ## Warning: Removed 1 rows containing missing values (geom_point). comes from and what it means? Pure eye-balling reveals that days with high trading volume are often followed by similarly high trading volume days.
## 2.2 Compute and visualize the efficient frontier
This exercise is closely aligned to the slides on optimal portfolio choice and asks you to compute and visualize the efficient frontier for a number of stocks. The solution code replicates the figure used in the slides.
### 2.2.1 Exercises
• I prepared the dataset exercise_clean_crsp_returns.rds which contains monthly returns of all US-listed stocks that have been continuously traded during the last 50 years. The dataset is provided in Absalon. Download the file and read it in using read_rds(). How many distinct tickers are listed? For how many observations per year are returns recorded? (The code used to prepare the dataset is based on the cleaned CRSP data later in Exercise 2.5 and available at the end of the solution in 2.5).
• Compute the vector of sample average returns and the sample variance covariance matrix. Which ticker exhibited the highest (lowest) monthly return among all stocks? Are there any stocks that experience negative correlation?
• Compute the minimum variance portfolio weight as well as the expected return and volatility of this portfolio
• Compute the efficient portfolio weights which achieve 3 times the expected return of the minimum variance portfolio.
• Make use of the two mutual fund theorem and compute the expected return and volatility for a range of combinations of the minimum variance portfolio and the efficient portfolio.
• Plot the risk-return characteristics of the individual assets in a diagram (x-axis: $$\mu$$, y-axis: $$\sigma$$). Also, add the efficient frontier and the two efficient portfolios from question 3 into the illustration.
### 2.2.2 Solutions
library(tidyverse)
The provided file contains three columns. permno is the unique ticker identifier in the CRSP universe. date corresponds to the monthly timestamp and ret.adj are the adjusted monthly returns (for more specifications, take a look in the code which I used to prepare the file). I performed some cleaning steps beforehand so we are ready to go.
crsp %>% count(permno) # 78 unique permnos
## # A tibble: 78 x 2
## permno n
## <int> <int>
## 1 10145 840
## 2 10516 840
## 3 11308 840
## 4 11404 840
## 5 11674 840
## 6 11850 840
## 7 12036 840
## 8 12052 840
## 9 12060 840
## 10 12490 840
## # ... with 68 more rows
crsp %>%
count(year = lubridate::year(date)) # lubridate is a powerful tool to work with dates and time
## # A tibble: 70 x 2
## year n
## <dbl> <int>
## 1 1950 936
## 2 1951 936
## 3 1952 936
## 4 1953 936
## 5 1954 936
## 6 1955 936
## 7 1956 936
## 8 1957 936
## 9 1958 936
## 10 1959 936
## # ... with 60 more rows
For each of the 70 years of data we observe 936 month-ticker return observations.
Next I transform the file from being tidy into a $$(T \times N)$$ matrix with one column for each ticker to compute the covariance matrix $$\Sigma$$ and also the expected return vector $$\mu$$.
returns <- crsp %>% pivot_wider(names_from = permno, values_from = ret.adj) %>% select(-date)
sigma <- returns %>%
cov(use = "pairwise.complete.obs") # Compute return sample covariance matrix
N <- ncol(sigma)
mu <- returns %>%
colMeans() %>%
as.matrix()
returns %>%
summarise_all(max) %>%
which.max() # What is the maximum return?
## 21135
## 53
sigma[sigma<0] # Are there any negative entries in Sigma?
## numeric(0)
We see that permno 21135 achieved a 53 percent return within one month. Interestingly, there is no single negative entry in the estimated variance covariance matrix.
The procedure to compute minimum variance portfolio weights has been introduced in class already. The next step requires to compute the efficient portfolio weights for a given level of return. The naming convention is aligned with the lecture slides.
iota <- rep(1, N)
wmvp <- solve(sigma) %*% iota
wmvp <- wmvp/sum(wmvp)
c(t(wmvp)%*%mu, sqrt(t(wmvp)%*%sigma%*%wmvp))
## [1] 0.9237972 4.4960914
# Compute efficient portfolio weights for given level of expected return
mu_bar <- 3 * t(wmvp)%*%mu # some benchmark return
C <- as.numeric(t(iota)%*%solve(sigma)%*%iota)
D <- as.numeric(t(iota)%*%solve(sigma)%*%mu)
E <- as.numeric(t(mu)%*%solve(sigma)%*%mu)
lambda_tilde <- as.numeric(2*(mu_bar -D/C)/(E-D^2/C))
weff <- wmvp + lambda_tilde/2*(solve(sigma)%*%mu - D/C*solve(sigma)%*%iota)
# Merge the weight vectors together and illustrate the differences in portfolio weights
full_join(as_tibble(weff, rownames = "ticker") %>% rename("Efficient Portfolio" = "V1"),
as_tibble(wmvp, rownames = "ticker") %>% rename("Min. Variance Portfolio" = "V1")) %>%
left_join(as_tibble(mu, rownames = "ticker") %>% rename("mu" = "V1")) %>%
pivot_longer(-c(ticker, "mu"), names_to = "Portfolio") %>% ggplot(aes(x= mu, y = value, color = Portfolio)) +
geom_point() +
labs(x = "Expected return", y = "Portfolio weight")
## Warning: The x argument of as_tibble.matrix() must have unique column names if .name_repair is omitted as of tibble 2.0.0.
## Using compatibility .name_repair.
## Joining, by = "ticker"
## Joining, by = "ticker"
The figure above illustrates the two different wealth allocations. It is evident that in order to achieve higher expected returns, the efficient portfolio takes more aggressive positions in the individual assets. More specifically, when plotting the sample mean of the individual assets on the $$x$$-axis it becomes clear that the minimum variance portfolio does not take $$\hat\mu$$ into account while the efficient portfolio puts more weights on assets with high expected returns.
The two mutual fund theorem claims that as soon as we have two efficient portfolio (such as the minimum variance portfolio and the efficient portfolio for another required level of expected returns as of above), we can characterize the entire efficient frontier by combining these two portfolio. This is done in the code below. Familiarize yourself with the inner workings of the forloop!.
# Use the two mututal fund theorem
c <- seq(from = -0.4, to = 1.2, by = 0.01)
res <- tibble(c = c,
mu = NA,
sd = NA)
for(i in seq_along(c)){ # For loop
w <- (1-c[i])*wmvp + (c[i])*weff # Portfolio of mvp and efficient portfolio
res$mu[i] <- t(w) %*% mu # Portfolio expected return res$sd[i] <- sqrt(t(w) %*% sigma %*% w) # Portfolio volatility
}
Finally, it is easy to visualize everything within one, powerful figure using ggplot2.
# Visualize the efficient frontier
ggplot(res, aes(x = sd, y = mu)) +
geom_point() + # Plot all sd/mu portfolio combinations
geom_point(data = res %>% filter(c %in% c(0,1)),
color = "red",
size = 4) + # locate the mvp and efficient portfolio
geom_point(data = tibble(mu = mu, sd = sqrt(diag(sigma))),
aes(y = mu, x = sd), color = "blue", size = 1) + # locate the individual assets
theme_minimal() # make the plot a bit nicer
If you want to replicate the figure in the lecture notes, you have to download price data for all constituents of the Dow Jones index first. After computing the returns, you can follow up with the code chunks provided above.
library(tidyquant)
ticker <- tq_index("DOW") # Retrieve ALL tickers in Dow Jones index from Yahoo!Finance
index_prices <- ticker %>%
returns <- index_prices %>%
group_by(symbol) %>% # compute returns for each symbol
mutate_fun = to.monthly,
indexAt = "lastof") %>% # this cryptic line computes monthly returns based on (dividend) adjusted prices
mutate(return = 100*(log(adjusted) - log(lag(adjusted)))) %>% # compute log returns in percent
na.omit() # Compute (log) returns for each symbol (adjusted prices take dividend payments into account)
# We need: sample mean return mu, sample covariance matrix Sigma
sigma <- returns %>%
pivot_wider(names_from = symbol, values_from = return) %>% # reorder data to a T x N matrix
select(-date) %>%
cov(use = "pairwise.complete.obs") # Compute return sample covariance matrix
mu <- returns %>%
group_by(symbol) %>%
summarise(mean = mean(return)) %>%
pull(mean) # Compute sample average
## 2.3 Simple portfolio analysis
In this exercise you will familiarize yourself with functions, numerical optimization and loops. You will compute efficient portfolio weights (that is, portfolios with the lowest possible volatility for a given level of expected return). To make things more interesting, we will also look at some potential frictions such as short-selling constraints which prevent investors to choose the optimal portfolio. After that we implement back-testing procedures to analyze the performance of different allocation strategies.
### 2.3.1 Exercises
1. Below you find code to download and prepare return time-series for 6 different assets. Use this tibble and compute the sample covariance $$\hat{\Sigma}$$ of the returns based on the entire sample
2. Use $$\hat{\Sigma}$$ to compute the minimum variance portfolio weights. That is, given the number of assets $$N$$, choose a weight vector $$w$$ of length $$N$$ that minimizes the portfolio volatility $\arg\min_{w} w'\hat\Sigma w\text{, such that } w'\iota = 1$ where $$\iota$$ is a vector of ones. You can do this by deriving the closed form solution for this optimization problem.
3. Suppose now short-selling is prohibited. Compute the restricted minimum variance portfolio weights as a solution to the minimization problem $\arg\min_{w} w'\hat\Sigma w\text{, such that } w'\iota = 1 \text{ and } w_i\geq0 \forall i=1,\ldots,N.$ You can use the package quadprog for the purpose of numerical optimization for quadratic programming problems.
4. What is the average monthly (in-sample) return and standard deviation of the minimum variance portfolio, $$\text{Var}(r_p) =\text{Var}(w'r)$$?
5. Implement a simple backtesting strategy: Adjust your code from before and re-estimate the weights based on a rolling-window with length 100 days using a for loop. Before performing the out-of-sample exercise: What are your expectations? Which of the two strategies should deliver the lowest return volatility?
6. What is the out-of-sample portfolio return standard deviation of this portfolio? What about a portfolio with short-sale constraints? What about a naive portfolio that invests the same amount in every asset (thus, $$w_\text{Naive} = 1/N\iota$$)? Do the results reflect your initial thoughts? Why could there be deviations between the theoretically optimal portfolio and the best performing out-of-sample strategy?
### 2.3.2 Solutions
# Use the following lines of code as the underlying asset universe
library(tidyverse)
library(tidyquant)
ticker <- c("AAPL", "MMM", "AXP", "MSFT", "GS") # APPLE, 3M, Microsoft, Goldman Sachs
# and American Express
returns <- tq_get(ticker, get = "stock.prices", from = "2015-01-01") %>%
group_by(symbol) %>%
select(symbol, date, return) %>%
pivot_wider(names_from = symbol,
values_from = return) %>%
drop_na() %>%
select(-date) %>%
as.matrix()
## AAPL MMM AXP MSFT GS
## [1,] -2.857596e-02 -0.022810868 -0.02680185 -0.009238249 -0.03172061
## [2,] 9.431544e-05 -0.010720599 -0.02154240 -0.014786082 -0.02043667
## [3,] 1.392464e-02 0.007222566 0.02160511 0.012624946 0.01479266
## [4,] 3.770266e-02 0.023684663 0.01407560 0.028993874 0.01583961
## [5,] 1.071732e-03 -0.012359980 -0.01274760 -0.008440374 -0.01546565
## [6,] -2.494925e-02 -0.005459671 -0.01022680 -0.012581783 -0.01224467
The estimated variance-covariance matrix $$\hat\Sigma$$ is a symmetric $$N \times N$$ matrix with full rank. It can be easily shown that the minimum variance portfolio weights are of the form $$w\propto \hat\Sigma^{-1}\iota$$. Therefore, we compute the inverse of $$\hat\Sigma$$ (which is done in R with solve()) and normalize the weight vector $$w$$ such that it sums up to 1.
sigma <- cov(returns) # computes the sample variance covariance matrix. Questions? use ?cov()
sigma
## AAPL MMM AXP MSFT GS
## AAPL 0.0003501208 0.0001241169 0.0001523432 0.0002237342 0.0001792109
## MMM 0.0001241169 0.0002197898 0.0001556313 0.0001205125 0.0001617816
## AXP 0.0001523432 0.0001556313 0.0003997009 0.0001629901 0.0002720981
## MSFT 0.0002237342 0.0001205125 0.0001629901 0.0003023027 0.0001756623
## GS 0.0001792109 0.0001617816 0.0002720981 0.0001756623 0.0003636318
# Closed form solution for minimum variance portfolio weights
N <- length(ticker)
w <- solve(sigma) %*% rep(1, N) # %*% denotes matrix multiplication in R.
w <- w / sum(w) # Scale w such that it sums up to 1
The weights above are the minimum variance portfolio weights for an investment universe which consists of the chosen assets only.
To compute the optimal portfolio weights in the presence of, e.g., short-selling constraints, analytic solutions to the optimization problem above are not feasible anymore. However, note that we are faced with a quadratic optimization problem with one binding constraint ($$w'\iota = 1$$) and $$N$$ inequality constraints $$(w_i\geq 0)$$. Numerical optimization of such problems are well-established and we can rely on routines readily available in R, for instance with the package quadprog.
Make sure to familiarize yourself with the documentation of the packages main function ?solve.QP such that you understand the following notation. In general, quadratic programming problems are of the form $\arg\min_w -\mu'w + 1/2w'Dw \text{ such that } A'w \geq b_0$ where $$A$$ is a ($$\tilde N\times k$$) matrix with the first $$m$$ columns denoting equality constraints and the remaining $$k-m$$ columns denoting inequality constraints. To get started, the following code computes the minimum variance portfolio weights using numerical procedures instead of the analytic solution above.
# Numerical solution for the minimum variance portfolio problem
w_numerical <- solve.QP(Dmat = sigma,
dvec = rep(0, N), # for now we do not include mean returns thus \mu = 0
Amat = cbind(rep(1, N)), # A has one column which is a vector of ones
bvec = 1, # bvec is 1 and enforces the constraint that weights sum up to one
meq = 1) # there is one (out of one) equality constraint
# Check that w and w_numerical are the same (up to numerical instabilities)
cbind(w, w_numerical$solution) ## [,1] [,2] ## AAPL 0.13036800 0.13036800 ## MMM 0.54885951 0.54885951 ## AXP 0.08478301 0.08478301 ## MSFT 0.22394675 0.22394675 ## GS 0.01204273 0.01204273 We see that the optimizer backs out the correct portfolio weights (that is, they are the same as the ones from the analytic solution). Now we extend the code above to include short-sale constraints. # Make sure you understand what the matrix t(A) is doing: A <- cbind(1, diag(N)) t(A) ## [,1] [,2] [,3] [,4] [,5] ## [1,] 1 1 1 1 1 ## [2,] 1 0 0 0 0 ## [3,] 0 1 0 0 0 ## [4,] 0 0 1 0 0 ## [5,] 0 0 0 1 0 ## [6,] 0 0 0 0 1 # Now: Introduce inequality constraint: no element of w is allowed to be negative solve.QP(Dmat = sigma, dvec = rep(0, N), Amat = A, bvec = c(1, rep(0, N)), meq = 1) ##$solution
## [1] 0.13036800 0.54885951 0.08478301 0.22394675 0.01204273
##
## $value ## [1] 8.947308e-05 ## ##$unconstrained.solution
## [1] 0 0 0 0 0
##
## $iterations ## [1] 2 0 ## ##$Lagrangian
## [1] 0.0001789462 0.0000000000 0.0000000000 0.0000000000 0.0000000000
## [6] 0.0000000000
##
## $iact ## [1] 1 It seems like not much changed in the code: We added $$N$$ inequality constraints that make sure no weight is negative. This condition ensures no short-selling. The equality constraint makes sure that the weights sum up to one. The output chunk above also illustrates everything that is returned to you when calling the function solve.QP. Familiarize yourself with the documentation but what is crucial to understand is that you can call the vector that minimizes the objective function with w$solution after running w <- solve.QP(...) as shown above. Similar w$value returns the value of the function evaluated at the minimum. To be precise, the output of the function solve.QPis a list with 6 elements. List elements can be accessed with$. If in doubt, read Chapter 20.5 of R for Data Science. By changing A you can impose additional or other restrictions. For instance, how would you compute portfolio weights that minimize the portfolio volatility under the constraint that you cannot invest more than 30% of your wealth into any individual asset?
r_p <- returns %*% w # portfolio returns
list(sd = sd(r_p), mean = mean(r_p)) # Realized volatility of the portfolio
## $sd ## [1] 0.01337708 ## ##$mean
## [1] 0.0005556903
The code snippet above shows how to compute the realized portfolio return volatility and how to generate some summary statistics for these values which can be used for backtesting purposes. However, as $$\hat\Sigma$$ is computed using the entire history of the assets, we benefit from information that no real investor can actually use to base her decisions on. Instead, in the following we focus on out-of-sample computation of the portfolio weights based on a rolling window. Therefore, every month (day) we update the available set of information, recompute our concurrent estimate of $$\hat\Sigma_t$$ and rebalance our portfolio to hold the efficient minimum variance portfolio. The next day, we then compute the realized return of this portfolio. That way, we never use information which is only available in the future to choose the optimal portfolio.
# Small function that computes the minimum variance portfolio weight for a given matrix of returns
mvp_weights <- function(tmp){
sigma <- cov(tmp)
# Closed form solution
w <- solve(sigma) %*% rep(1, ncol(tmp))
w <- w / sum(w)
return(w)
}
# Small function that computes the no-short selling minimum variance portfolio weight
mvp_weights_ns <- function(tmp){
sigma <- cov(tmp)
w <- solve.QP(Dmat = sigma,
dvec = rep(0, N),
Amat = cbind(1, diag(N)),
bvec = c(1, rep(0, N)),
meq = 1)
return(w\$solution)
}
# Define out-of-sample periods
window_length <- 100
periods <- nrow(returns) - window_length # total number of out-of-sample periods
all_returns <- matrix(NA, nrow = periods, ncol = 3) # A matrix to collect all returns
colnames(all_returns) <- c("mvp", "mvp_ns", "naive") # we implement 3 strategies
for(i in 1:periods){ # Rolling window
return_window <- returns[i : (i + window_length - 1),] # the last X returns available up to date t
# The three portfolio strategies
mvp <- mvp_weights(return_window)
mvp_ns <- mvp_weights_ns(return_window)
mvp_naive <- rep(1/N, N) # Naive simply invests the same in each asset
# Store realized returns w%*%w_t+1 in the matrix all_returns
all_returns[i, 1] <- returns[i + window_length, ] %*% mvp # realized mvp return
all_returns[i, 2] <- returns[i + window_length, ] %*% mvp_ns # realized constrained mvp return
all_returns[i, 3] <- returns[i + window_length, ] %*% mvp_naive # realized naive return
}
The framework above allows you to add additional portfolio strategies and to compare their out-of-sample performance to the benchmarks above. How do the portfolio actually perform? To make the values easier to read, I convert everything to annualized measures (assuming 250 trading days).
all_returns <- all_returns %>% as_tibble()
all_returns %>%
pivot_longer(everything()) %>% # Tidy-up the tibble for easier summary statistics
group_by(name) %>%
summarise_at(vars(value),
list(Mean = ~250*mean(.), Volatility = ~sqrt(250)*sd(.), Sharpe = ~sqrt(250)*mean(.)/sd(.))) %>%
arrange(Volatility)
## # A tibble: 3 x 4
## name Mean Volatility Sharpe
## <chr> <dbl> <dbl> <dbl>
## 1 mvp_ns 0.131 0.213 0.614
## 2 mvp 0.124 0.217 0.571
## 3 naive 0.166 0.229 0.725
## 2.4 Equivalence between Certainty equivalent maximization and minimum variance optimization
Slide 42 (Parameter uncertainty) argues that an investor with a quadratic utility function with certainty equivalent $\max_w CE(w) = \omega'\mu - \frac{\gamma}{2} \omega'\Sigma \omega \text{ s.t. } \iota'\omega = 1$ faces an equivalent optimization problem to a framework where portfolio weights are chosen with the aim to minimize volatility given a pre-specified level or expected returns $\min_w \omega'\Sigma \omega \text{ s.t. } \omega'\mu = \bar\mu \text{ and } \iota'\omega = 1$ Note the differences: In the first case, the investor has a (known) risk aversion $$\gamma$$ which determines her optimal balance between risk ($$\omega'\Sigma\omega)$$ and return ($$\mu'\omega$$). In the second case, the investor has a target return she wants to achieve while minimizing the volatility. Intuitively, both approaches are closely connected if we consider that the risk aversion $$\gamma$$ determines the desirable return $$\bar\mu$$. More risk averse investors (higher $$\gamma$$) will chose a lower target return to keep their volatility level down. The efficient frontier then spans all possible portfolios depending on the risk aversion $$\gamma$$, starting from the minimum variance portfolio ($$\gamma = \infty$$).
### 2.4.1 Exercises
• Proof that there is an equivalence between the optimal portfolio weights in both cases.
### 2.4.2 Solution
In the following I solve for the optimal portfolio weights for a Certainty equivalent maximizing investor. The first order condition reads \begin{aligned} \mu - \lambda \iota &= \gamma \Sigma \omega \\ \Leftrightarrow \omega &= \frac{1}{\gamma}\Sigma^{-1}\left(\mu - \lambda\iota\right) \end{aligned} Next, we make use of the constraint $$\iota'\omega = 1$$. \begin{aligned} \iota'\omega &= 1 = \frac{1}{\gamma}\left(\iota'\Sigma^{-1}\mu - \lambda\iota'\Sigma^{-1}\iota\right)\\ \Rightarrow \lambda &= \frac{1}{\iota'\Sigma^{-1}\iota}\left(\iota'\Sigma^{-1}\mu - \gamma \right). \end{aligned} Plug-in the value of $$\lambda$$ reveals \begin{aligned} \omega &= \frac{1}{\gamma}\Sigma^{-1}\left(\mu - \frac{1}{\iota'\Sigma^{-1}\iota}\left(\iota'\Sigma^{-1}\mu - \gamma \right)\right) \\ \Rightarrow \omega &= \frac{\Sigma^{-1}\iota}{\iota'\Sigma^{-1}\iota} + \frac{1}{\gamma}\left(\Sigma^{-1} - \frac{\Sigma^{-1}\iota}{\iota'\Sigma^{-1}\iota}\iota'\Sigma^{-1}\right)\mu\\ &= \omega_\text{mvp} + \frac{1}{\gamma}\left(\Sigma^{-1}\mu - \frac{\iota'\Sigma^{-1}\mu}{\iota'\Sigma^{-1}\iota}\Sigma^{-1}\iota\right) \end{aligned} As shown in the slides, this corresponds to the efficient portfolio with desired return $$\bar r$$ such that(in the notation of the slides) $\frac{1}{\gamma} = \frac{\tilde\lambda}{2} = \frac{\bar\mu - D/C}{E - D^2/C}$ which implies that the desired return is just $\bar\mu = \frac{D}{C} + \frac{1}{\gamma}\left({E - D^2/C}\right)$ which is $$\frac{D}{C} = \mu'\omega_\text{mvp}$$ for $$\gamma\rightarrow \infty$$ as expected.
|
# ${{\boldsymbol H}^{0}}$ Production Cross Section in ${{\boldsymbol p}}{{\boldsymbol p}}$ Collisions at $\sqrt {s }$ = 13 TeV INSPIRE search
Assumes ${\mathit m}_{{{\mathit H}^{0}}}$ = 125 GeV
VALUE (pb) DOCUMENT ID TECN COMMENT
$\bf{ 59 \pm5}$ OUR AVERAGE
$61.1$ $\pm6.0$ $\pm3.7$ 1
2019 BA
CMS ${{\mathit p}}{{\mathit p}}$ , 13 TeV, ${{\mathit \gamma}}{{\mathit \gamma}}$ , ${{\mathit Z}}$ ${{\mathit Z}^{*}}$ $\rightarrow$ 4 ${{\mathit \ell}}$ (${{\mathit \ell}}$ = ${{\mathit e}}$ , ${{\mathit \mu}}$ )
$57.0$ ${}^{+6.0}_{-5.9}$ ${}^{+4.0}_{-3.3}$ 2
2018 CG
ATLS ${{\mathit p}}{{\mathit p}}$ , 13 TeV, ${{\mathit \gamma}}{{\mathit \gamma}}$ , ${{\mathit Z}}$ ${{\mathit Z}^{*}}$ $\rightarrow$ 4 ${{\mathit \ell}}$ (${{\mathit \ell}}$ = ${{\mathit e}}$ , ${{\mathit \mu}}$ )
• • • We do not use the following data for averages, fits, limits, etc. • • •
$47.9$ ${}^{+9.1}_{-8.6}$ 2
2018 CG
ATLS ${{\mathit p}}{{\mathit p}}$ , 13 TeV, ${{\mathit \gamma}}{{\mathit \gamma}}$
$68$ ${}^{+11}_{-10}$ 2
2018 CG
ATLS ${{\mathit p}}{{\mathit p}}$ , 13 TeV, ${{\mathit Z}}$ ${{\mathit Z}^{*}}$ $\rightarrow$ 4 ${{\mathit \ell}}$ (${{\mathit \ell}}$ = ${{\mathit e}}$ , ${{\mathit \mu}}$ )
$69$ ${}^{+10}_{-9}$ $\pm5$ 3
2017 CO
ATLS ${{\mathit p}}{{\mathit p}}$ , 13 TeV, ${{\mathit Z}}$ ${{\mathit Z}^{*}}$ $\rightarrow$ 4 ${{\mathit \ell}}$
1 SIRUNYAN 2019BA use 35.9 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV.
2 AABOUD 2018CG use 36.1 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV.
3 AABOUD 2017CO use 36.1 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV with ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit Z}}{{\mathit Z}^{*}}$ $\rightarrow$ 4 ${{\mathit \ell}}$ where ${{\mathit \ell}}$ = ${{\mathit e}}$ , ${{\mathit \mu}}$ for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125 GeV. Differential cross sections for the Higgs boson transverse momentum, Higgs boson rapidity, and other related quantities are measured as shown in their Figs. 8 and 9.
References:
SIRUNYAN 2019BA
PL B792 369 Measurement and interpretation of differential cross sections for Higgs boson production at $\sqrt{s} =$ 13 TeV
AABOUD 2018CG
PL B786 114 Combined measurement of differential and total cross sections in the $H \rightarrow \gamma \gamma$ and the $H \rightarrow ZZ^* \rightarrow 4\ell$ decay channels at $\sqrt{s} = 13$ TeV with the ATLAS detector
AABOUD 2017CO
JHEP 1710 132 Measurement of inclusive and differential cross sections in the $H \rightarrow ZZ^* \rightarrow 4\ell$ decay channel in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector
|
# How to diagonalize infinite symmetric banded matrices?
Given the tridiagonal symmetric infinite matrix of 0 and 1's
$$\left( \begin{matrix} 0&1&0&0&\ldots&0\\ 1&0&1&0&\ldots&0\\ 0&1&0&1&\ldots&0\\ \ldots&\ldots&\ldots&\ldots&\ldots&\ldots\\ 0&0&0&\ldots&0&1\\ 0&0&0&\ldots&1&0\\ \end{matrix}\right)$$
How do you go about solving for the largest eigenvalues/eigenvectors? From a physical perspective, this is analogous to coupled harmonic oscillators along a linear chain, thus I expect the eigenvectors to look like the fundmental modes of a string with both ends fixed (i.e. in the continuum limit with scaled coordinates they look something like $\sin(nx)$).
I can solve for any finite matrix, but I'd like to understand how to solve this problem when the number of rows $N \rightarrow \infty$.
-
They are in fact sines, and you can use this to explicitly write down both the eigenvectors and the eigenvalues for all $N$. – Qiaochu Yuan Feb 27 '12 at 17:02
@QiaochuYuan as I suspect, but the question is how do you determine this when faced with the problem for the first time? I like to learn the method so that I can solve other, more difficult problems of this type. – Hooked Feb 27 '12 at 17:06
I don't know a general method. This question is quite special. – Qiaochu Yuan Feb 27 '12 at 17:09
Well, I know a method for the eigenvalues, which is to write down a recurrence for the characteristic polynomial and compute the corresponding generating function or recognize the recurrence (it turns out to be a modified form of the Chebyshev polynomials of some kind or other). But I don't know a good reason to expect that the eigenvectors are as nice as they are other than the continuum limit or a direct computation. – Qiaochu Yuan Feb 27 '12 at 17:43
– Qiaochu Yuan Feb 27 '12 at 18:54
If $H$ is a Hilbert space with basis $\{e_n\}$ and $S$ is this shift operator given by $Se_n=e_{n+1}$, then your matrix is the operator $S+S^*$. This operator has no eigenvalues. Because if $x=\sum_n\alpha_ne_n$ is an eigenvector with eigenvalue $\lambda$, you would have $$(S+S^*)x=\lambda x,$$ which translates into $$\sum_n\lambda\alpha_n=\lambda x=(S+S^*)x=\sum_{n=1}^\infty\alpha_ne_{n+1}\ + \ \sum_{n=2}^\infty\alpha_ne_{n-1}=\alpha_2e_1+\sum_{n=2}^\infty(\alpha_{n-1}+\alpha_{n+1})e_n.$$ So the coefficients $\alpha_n$ have to satisfy the recursion $$\alpha_2=\lambda \alpha_1, \ \ \alpha_{n+1}=\lambda \alpha_n-\alpha_{n-1}.$$ We can always assume $\alpha_1=1$, and so we have $$\alpha_1=1,\ \alpha_2=\lambda,\ \alpha_3=\lambda^2-1,\ \alpha_4=\lambda^3-\lambda-1, \ \ldots$$ that is $$\alpha_{n+1}=\lambda^n-\sum_{j=0}^{n-2}\lambda^j=\lambda^n-\frac{1-\lambda^{n-1}}{1-\lambda}=\frac{\lambda^n-\lambda^{n+1}+\lambda^{n-1}-1}{1-\lambda}.$$ It is not hard to see that no choice of $\lambda$ will make the sequence $\{\alpha_n\}$ lie in $\ell^2(\mathbb{N})$, since we can never have even $\alpha_n\to0$.
(Since you are not putting your matrix in the context of operators on Hilbert spaces, one could argue that the above computation actually allows you to find eigenvectors without the $\ell^2$ restriction; but then any complex number would be an eigenvalue and in particular you cannot expect to diagonalize your matrix)
I'm not sure I completely understand the take-home from this answer. Are you saying that, in the limit of infinite $N$, there are no (or any) possible eigenpairs? For any finite matrix of this type they clearly exist. They approach a sine wave as I and Qiaochu have mentioned and the eigenvalues can be found explicitly. Does this have something to do with the convergence of the series of eigenvalues? – Hooked Feb 27 '12 at 18:44
@QiaochuYuan: I'm not sure what you mean by "distributional eigenvectors". As an operator algebraist, I'm well versed in the Spectral Theorem, spectral measures, etc. In infinite dimension you can certainly talk about the spectrum (it is indeed a key concept), but there is no obvious or meaningful way to attach a vector (an "eigenvector") to an arbitrary element of the spectrum. The best you can say is that any normal operator $T$ on a Hilbert space can be written as $\int_{\sigma(T)} \lambda dE(\lambda)$, where $E$ is a projection-valued measure on the spectrum $\sigma(T)$ of $T$. – Martin Argerami Feb 27 '12 at 20:29
|
# What happens when chemical equilibrium is restored after being disturbed?
I'm aware that, according to Le Chatelier's principle, a reaction will shift its equilibrium position in order to counteract the disturbance at equilibrium. However, I don't know what that means after equilibrium is re-established.
For example, take the generic reaction
$$\ce{a A + b B <=> cC}$$
At some point, equilibrium is disturbed, causing the forward reaction to be favored. Once equilibrium is re-established, how do the concentrations of the species compare to before equilibrium was disturbed?
• $$K_{eq} = \dfrac{[C]_0^c}{[A]_0^a[B]_0^b} = \dfrac{[C]_1^c}{[A]_1^a[B]_1^b}$$ – MaxW Feb 10 '19 at 23:29
Once equilibrium is re-established, how do the concentrations of the species compare to before equilibrium was disturbed?
Let's say it is a homogeneous equilibrium. You have to consider different cases:
1. The equilibrium was disturbed by changing one of the concentrations (removing or adding a substance). You can't get back to the same set of concentrations because there are too few or too many atoms.
2. The equilibrium was disturbed by adding solvent. Again, you can't get back to the same set of concentrations (all are lower than at equilibrium, and a net reaction in one or the other direction can't increase all of them at once).
3. You disturbed the equilibrium by changing the temperature. The equilibrium constant is at a different temperature will be different, so the old set of concentrations will not satisfy the new equilibrium constant.
4. You disturbed the equilibrium by intermittently changing the temperature (or - more elaborate - by attaching a power source to an electrochemical cell, and then replacing it by a wire shortcutting the cell). In this case, the concentrations change exclusively because of the chemical reaction (no "outside" disturbance), and they can return back to the original concentrations once the disturbance is removed.
In cases 1. and 2., however, the reaction quotient Q will go back to the original one (=K). In case 3., Q will be different.
Equilibrium for a reaction means that $$\sum \nu_i \mu_i = 0$$
where $$\nu_i$$ is the stoichiometric coefficient for species "i" and $$\mu_i$$ is the chemical potential, the full (total) chemical potential, of species "i".
So, the concentrations will change until this summation equals zero. It is that simple.
There is no need to try to explain this with equilibrium constants, but if you wish to, do make sure you do it with proper ratios of activity coefficients, there is little to be gained by approximating activity coefficients with concentrations, as is often done. It leads only to headaches later in life when you don't get thrown simple ivory tower mixtures :)
|
# Normal Distribution Support cal per minute how to find Z-score?
I know how to calculate question which are phrased like so
A study of data collected at a company manufacturing flashlight batteries shows that a batch of 8000 batteries have a mean life of 250 minutes with a standard deviation of 20 minutes. Assuming a Normal Distribution, estimate:
(i) How many batteries will fail before 220 minutes?
But I can not figure questions phrased like this:
Support call times at a technical support center are Normally distributed with a mean time of 8 minutes and 45 seconds and a standard deviation of 1 minute and 5 seconds. On a particular day, a total of 500 calls are taken at the centre. How many of these calls are likely to last more than 10 minutes
I dont understand how to find the z-score in this question as its to do with time?
To calculate the z-score you have to standardize the random variable. The support call time is distributed as $$T\sim\mathcal N\left(8.75, (1\frac1{12})^2 \right)$$
Reasoning: $$45$$ seconds are $$0.75$$ minutes. And 5 seconds are $$\frac1{12}$$ minutes.
Therefore $$Z=\frac{T-8.75}{1\frac1{12}}=\frac{T-8.75}{\frac{13}{12}}$$. Then it is asked for
$$P(T> 10)=1-P(T\leq 10)=1-\Phi\left(\frac{10-8.75}{\frac{13}{12}}\right)=1-\Phi\left(\frac{\frac54}{\frac{13}{12}}\right)=1-\Phi\left(\frac{15}{13}\right)$$
This is the probability that one arbitrary call last more than 10 minutes.
• So you add the 1 Minuit with the.05 seconds? I do not understand you are squaring – Sean Jan 10 at 13:12
• The squaring is because of the notation. A normal distributed variable x is distributed as $\mathcal N(\mu, \sigma^2)$. The variance is $\sigma^2$ and therefore the square root of it is the standard deviation $\sigma$. $\textbf{So you add the 1 Minuit with the.05 seconds?}$ Yes, that´s right. – callculus Jan 10 at 17:39
The Z score is how many standard deviations above the mean 10 minutes is. It is $$5/4$$ minutes more than the mean of eight minutes and forty five seconds, and the standard deviation is $$13/12$$ minutes, so the Z score is $$15/13$$.
|
Articles written in Journal of Chemical Sciences
• Effect of metal ion doping on the photocatalytic activity of aluminophosphates
The metal ions (Ti+4, Mg+2, Zn+2 and Co+2) have been substituted in place of Al$^{+3}$ in aluminophosphates (AlPOs). These compounds were used for the first time as possible photocatalysts for the degradation of organic dyes. Among the doped AlPOs, ZnAlPO-5, CoAlPO-5, MgAlPO-11, 18 and 36 did not show any photocatalytic activity. MgAlPO-5 showed photocatalytic activity and different loading of Mg (4, 8, 12 atom % of Mg) were investigated. The activity can be enhanced by the increasing of concentration of the doped metal ions. TiAlPO-5 (4, 8, 12 atom % of Ti) showed the highest photocatalytic activity among all the compounds and its activity was compared to that of Degussa P25 (TiO2). The activity of photocatalysts was correlated with the diffuse reflectance and photoluminescence spectra.
• Synthesis, structure and ionic conductivity in scheelite type Li0.5Ce$_{0.5−x}$Ln$_x$MoO4 ($x = 0$ and 0.25, Ln = Pr, Sm)
Scheelite type solid electrolytes, Li0.5Ce$_{0.5−x}$Ln$_x$MoO4 ($x = 0$ and 0.25, Ln = Pr, Sm) have been synthesized using a solid state method. Their structure and ionic conductivity (𝜎) were obtained by single crystal X-ray diffraction and ac-impedance spectroscopy, respectively. X-ray diffraction studies reveal a space group of $I4_1/a$ for Li0.5Ce$_{0.5−x}$Ln$_x$MoO4 ($x = 0$ and 0.25, Ln = Pr, Sm) scheelite compounds. The unsubstituted Li0.5Ce0.5MoO4 showed lithium ion conductivity $\sim 10^{−5}-10^{−3} \Omega^{−1}$cm-1 in the temperature range of 300-700°C ($\sigma = 2.5 \times 10^{−3} \Omega^{−1}$cm-1 at 700°C). The substituted compounds show lower conductivity compared to the unsubstituted compound, with the magnitude of ionic conductivity being two (in the high temperature regime) to one order (in the low temperature regime) lower than the unsubstituted compound. Since these scheelite type structures show significant conductivity, the series of compounds could serve in high temperature lithium battery operations.
• Ce0.98Pd0.02O$_{2-\delta}$: Recyclable, ligand free palladium(II) catalyst for Heck reaction
Palladium substituted in cerium dioxide in the form of a solid solution, Ce0.98 Pd0.02 O1.98 is a new heterogeneous catalyst which exhibits high activity and 100% trans-selectivity for the Heck reactions of aryl bromides including heteroaryls with olefins. The catalytic reactions work without any ligand. Nanocrystalline Ce0.98 Pd0.02 O1.98 is prepared by solution combustion method and Pd is in +2 state. The catalyst can be separated, recovered and reused without significant loss in activity.
• Photocatalytic properties of KBiO3 and LiBiO3 with tunnel structures
In the present study, KBiO3 is synthesized by a standard oxidation technique while LiBiO3 is prepared by hydrothermal method. The synthesized catalysts are characterized by X-ray diffraction (XRD), Scanning ElectronMicroscopy (SEM), BET surface area analysis and Diffuse Reflectance Spectroscopy (DRS). The XRD patterns suggest that KBiO3 crystallizes in the cubic structure while LiBiO3 crystallizes in orthorhombic structure and both of these adopt the tunnel structure. The SEM images reveal micron size polyhedral shaped KBiO3 particles and rod-like or prismatic shape particles for LiBiO3. The band gap is calculated from the diffuse reflectance spectrum and is found to be 2.1 eV and 1.8 eV for KBiO3 and LiBiO3, respectively. The band gap and the crystal structure data suggest that these materials can be used as photocatalysts. The photocatalytic activity of KBiO3 and LiBiO3 are evaluated for the degradation of anionic and cationic dyes, respectively, under UV and solar radiations.
• Transition metal oxide loaded MCM catalysts for photocatalytic degradation of dyes
Transition metal oxide (TiO2, Fe2O3, CoO) loaded MCM-41 and MCM-48 were synthesized by a two-step surfactant-based process. Nanoporous, high surface area compounds were obtained after calcination of the compounds. The catalysts were characterized by SEM, XRD, XPS, UV-vis and BET surface area analysis. The catalysts showed high activity for the photocatalytic degradation of both anionic and cationic dyes. The degradation of the dyes was described using Langmuir-Hinshelwood kinetics and the associated rate parameters were determined.
• Synthesis and characterization of nano silicon and titanium nitride powders using atmospheric microwave plasma technique
We have demonstrated a simple, scalable and inexpensive method based on microwave plasma for synthesizing 5 to 10 g/h of nanomaterials. Luminescent nano silicon particles were synthesized by homogenous nucleation of silicon vapour produced by the radial injection of silicon tetrachloride vapour and nano titanium nitride was synthesized by using liquid titanium tetrachloride as the precursor. The synthesized nano silicon and titanium nitride powders were characterized by XRD, XPS, TEM, SEM and BET. The characterization techniques indicated that the synthesized powders were indeed crystalline nanomaterials.
• # Journal of Chemical Sciences
Volume 132, 2019
All articles
Continuous Article Publishing mode
• # Editorial Note on Continuous Article Publication
Posted on July 25, 2019
|
MikeMirzayanov's blog
By MikeMirzayanov, 2 weeks ago, ,
Many thanks to problem authors — Tech Scouts instructors. Please, review the author's solutions. They are beautiful and short. Our community has many to learn from mathematicians!
•
• +49
•
» 2 weeks ago, # | +68 To me, the answer for K seems ambiguous as there can be considered to be 2018 sequences covering the whole circle. Hence, 2018*2019/2 is the answer that I was getting (even though I did not participate).
» 2 weeks ago, # | 0
» 2 weeks ago, # | 0 1164R — Divisible by 83 , Anser is 0, 0 mod 83 = ???? :)))
• » » 2 weeks ago, # ^ | 0 n is natural, 0 is x_0, 0 index is not natural
• » » » 2 weeks ago, # ^ | 0 ok . thanks :3 <3
» 2 weeks ago, # | ← Rev. 2 → +29 I think the answer to problem K should be $\frac{2018*2017}{2}+2018=2037171$The whole circle may be counted 2018 times (because a sequence is an enumerated collection of numbers and there are 2018 possible starting points)It can be seen that these 2018 sequences are pairwise distinct.
» 2 weeks ago, # | +1 Why areas are 3 in B? I used affine transformations in B, to solve
• » » 2 weeks ago, # ^ | +3 The area of triangle ABP will be equal to 3/(1+3) of the area of the triangle ABC as P divides AC as 3:1. The area of triangle AMP will be equal to 1/(1+3) of the area of the triangle ABP as M divides AB as 1:3. Therfore, area of triangle AMP will be equal to 3/4 * 1/4 = 3/16 of the area of ABC.
» 2 weeks ago, # | +8 Well, I for got the +1 in Problem K :'(
» 2 weeks ago, # | +18 Is there a way to submit my answers now ? I couldn't take part of the contest and I would like to submit it ... Thanks :3
• » » 2 weeks ago, # ^ | +20 You have already got all answers in this tutorial... Any reasons to submit?
• » » » 2 weeks ago, # ^ | -10 Just wanted to test ourselves, how good we are in maths, before seeing editorial.
• » » » » 2 weeks ago, # ^ | +1 Then write all answers on a paper, then compare it with the editorial.
• » » » » » 2 weeks ago, # ^ | -20 Yeah, but that doesn't still solved his problem, he still wanted to submit. Isn't it?
» 2 weeks ago, # | +2 In some of these problems, you could take one particular instance and work out the answer for that, since you know there is only 1 answer to the problem (because of how the system works).For example, for the 2018 integers written on a circle with sum 1. You can assume 2017 of them are positive and one of them is the negative sum of all the others +1. It's now easy to see that all sequences that do not contain the negative number are positive.
|
# Posts Tagged ‘ analytics ’
## integrating R with other systems
June 16, 2012
By
I just returned from the useR! 2012 conference for developers and users of R. One of the common themes to many of the presentations was integration of R-based statistical systems with other systems, be they other programming languages, web systems, or enterprise data systems. Some highlights for me were an update to Rserve that includes
## Will 2015 be the Beginning of the End for SAS and SPSS?
May 15, 2012
By
Learning to use a data analysis tool well takes significant effort, so people tend to continue using the tool they learned in college for much of their careers. As a result, the software used by professors and their students is … Continue reading →
## Survey of Data Science / Analytics / Big Data / Applied Stats / Machine Learning etc. Practitioners
May 10, 2012
By
As I’ve discussed here before, there is a debate raging (ok, maybe not raging) about terms such as “data science”, “analytics”, “data mining”, and “big data”. What do they mean, how do they overlap, and perhaps most importantly, who are the people who work in these fields? Along with two other DC-area Data Scientists, Marck
## Visualising Activity Around a Twitter Hashtag or Search Term Using R
February 6, 2012
By
I think one of valid criticisms around a lot of the visualisations I post here and on my various #f1datajunkie blogs is that I often don’t post any explanatory context around the visualisations. This is partly a result of the way I use my blog posts in a selfish way to document the evolution of
## Programmers Should Know R
August 6, 2011
By
Programmers should definitely know how to use R. I don’t mean they should switch from their current language to R, but they should think of R as a handy tool during development.Again and again I find myself working with Java code like the following. td.linenos { background-color: #f0f0f0; padding-right: 10px; } span.lineno { background-color: #f0f0f0; Related posts:
July 30, 2011
By
This past Friday, the web portal to the US Federal government, USA.gov, organized hackathons across the US for programmers and data scientists to work with and analyze the data from their link-shortening service. It turns out that if you shorten a web link with bit.ly, the shortened link looks like 1.usa.gov/V6NpL (that one goes to
## making meat shares more efficient with R and Symphony
May 9, 2011
By
$making meat shares more efficient with R and Symphony$
In my previous post, I motivated a web application that would allow small-scale sustainable meat producers to sell directly to consumers using a meat share approach, using constrained optimization techniques to maximize utility for everyone involved. In this post, I’ll walk through some R code that I wrote to demonstrate the technique on a small
## intuitive visualizations of categorization for non-technical audiences
April 25, 2011
By
For a project I’m working on at work, I’m building a predictive model that categorizes something (I can’t tell you what) into two bins. There is a default bin that 95% of the things belong to and a bin that the business cares a lot about, containing 5% of the things. Some readers may be
## Social Media Analytics Research Toolkit (SMART@znmeb) Is Moving Into Private Beta
March 31, 2010
By
Download "Getting Started with the Social Media Analytics Research Toolkit" (pdf, 1.25 megabytes) Download the Social Media Analytics Research Toolkit My Social Media Analytics Research Toolkit is about to move into private beta. What's in the release?...
|
# Find c > 0 such that the area of the region enclosed by the parabolas y = x^2 - c^2 and y = c^2 -...
## Question:
Find c > 0 such that the area of the region enclosed by the parabolas {eq}y = x^2 - c^2 {/eq} and {eq}y = c^2 - x^2 {/eq} is 210.
## Area bounded by curves:
To find the area bounded by two curves, first of all find the limits of integration by equating both equations, then integrate the function Integration. Area bounded by the two curves {eq}y=f(x) {/eq} and {eq}y=g(x) {/eq} is expressed by the formula {eq}A=\int_{a}^{b} [f(x)-g(x)]dx {/eq}, where a and b are limits of integration.
The given region are
{eq}y=x^2-c^2 {/eq} and {eq}y=c^2-x^2 {/eq}
Equate both the equations
Thus {eq}c^2-x^2=x^2-c^2 {/eq}
{eq}c^2=x^2 {/eq}
{eq}x=\pm c {/eq}
Thus limits of integration for x are from {eq}-c {/eq} to {eq}c {/eq}
Area between the curves
{eq}A=\int_{-c}^{c} [ (c^2-x^2)-(x^2-c^2) ]dx {/eq}
{eq}A=\int_{-c}^{c} (2c^2-2x^2)dx {/eq}
{eq}A=[ 2c^2x-\frac{2x^3}{3} ]_{-c}^{c} {/eq}
As area given is {eq}A=210 {/eq}
{eq}210=[ 2c^3+2c^3-\frac{2c^3}{3}-\frac{2c^3}{3} ] {/eq}
{eq}210= \frac{8c^3}{3} {/eq}
{eq}c^3=78.75 {/eq}
{eq}c= 4.22 {/eq}
|
# How to argue this consequence?
Suppose that $\Omega=\mathbf{R}^n_+$ and consider a function $0<u<\sup\limits_\Omega u=M<\infty$ such that: $$\Delta u+u-1=0 \ \ \text{in} \ \ \Omega,$$ $$u=0 \ \ \text{on} \ \ \partial\Omega.$$ If $u$ exists, then $M>1$.
I don't know to argue this. My idea is to try by contradiction. Suppose that $M\leq1$, so $$\Delta u=1-u\geq0,$$ that is, $u$ is a subharmonic function. If $u$ attains a maximum in the interior of $\Omega$, for the maximum principle, $u$ should be a constant function and this would be the contradiction. But I don't know how to prove that the maximum is attain in interior.
-
What is the purpose of the function, $f$? – djws Nov 20 '12 at 6:45
You don't need suposse that. I will delete this. Sorry. – José Carlos Nov 20 '12 at 6:58
Do you have any ideia how to argue this? – José Carlos Nov 20 '12 at 7:00
Please! Stop adding [Solved] to the title! – Asaf Karagila Nov 25 '12 at 22:21
This is not a full answer, but maybe can help someone to give the full answer. By using Op's idea, im supposing that $u\leq 1$.
Case $n=1$
As the OP pointed out, if the function $u$ attains its maximum, then its must be constant, so we can suppose that $u\neq 1$. But $u\neq 1$ implies that $u''(x)>0$, or equivalently, $u$ is strictly convex.
Because $u(0)=0$ and $u>0$ we can conclude that $u$ is unlimited, which is a absurd. This concludes the case $n=1$.
Case $n>1$
We have some issues that can not happen. For example:
1 - If $u$ attains a local maximum and a local minimum, this would implies that there is some point $x$ such that $\Delta u(x)=0$.
2 - $u(x)$ can not converges to $0$ as $x\rightarrow \infty$.
Maybe there is a straightforwaard argument, but i think that with 1 and 2 is possible to conclude that $u(x)=1$ for some point.
Edit: Case $n>1$ (complete)
By using some results of Berestycki, Caffarelli and Nirenberg (see referece above and the references therein) we can conclude that $u$ is symmetric i.e. $u=u(x_n)$. This implies in our case that $\displaystyle\frac{\partial^2 u}{\partial x_n^2}=\Delta u>0$. Now, with the help of the case $n=1$ we can conlude.
References:
H. Berestycki - L. Caffarelli - L. Nirenberg, Further qualitative properties for elliptic equations in unbounded domains, Annali della Scuola Normale Superiore di Pisa - Classe di Scienze (1997), Volume: 25, Issue: 1-2, Publisher: Scuola Normale Superiore, page 69-94
-
The article you mention proves that this problem has no solution for $n=2,3$. – Beni Bogosel Nov 25 '12 at 23:18
yes, i know. But before he proves it, he shows that if there exist a solution to the problem then $u> 2$. Here we just prove that $u>1$ and this is sufficient for what they want in the article. @BeniBogosel – Tomás Nov 26 '12 at 0:04
I haven't read the paper. Please, give a little help, because I haven't found where they proved $u>2$. What I found is "For this problem it is not difficult to verify that if there were a solution, then $M>2$". – vesszabo Nov 28 '12 at 16:38
Well haha @vesszabo, come to the party. They dont prove it and i dont know how to prove, nevertheless we have proved that $u>1$. Do you have any idea? – Tomás Nov 28 '12 at 16:40
Absolutely not :-( Luckily, as you said, $u>1$ is enough. Could it be a typo? However a direct proof without citing that article would be interesting. – vesszabo Nov 28 '12 at 16:56
Suppose that $u \in H_0^1(\Omega)$. Then we have $$-\int_\Omega \nabla u \nabla \phi +\int_\Omega u\phi =\int_\Omega \phi, \forall \phi \in C_c^\infty(\Omega).$$
We can find a sequence of smooth functions $\phi_n$ which converges to $u$ in $H_0^1(\Omega)$. Then we have $$-\int_\Omega |\nabla u|^2 +\int_\Omega u^2 =\int_\Omega u.$$
Suppose that $M \leq 1$. Then $u^2 \leq u$ everywhere, and therefore $$\int_\Omega |\nabla u|^2 =0$$ This implies that $u$ is constant, and therefore zero. Contradiction.
Maybe this can be adapted to be used without the assumption that $u \in H_0^1(\Omega)$
-
Interesting calculation Beni, but i think the idea is equivalently to the item 2 of case $n>1$. If $u\in H_0^1(\Omega)$ then $u=0$, because in this case $u$ must attain a maximum value in $\Omega$. What do you think? – Tomás Nov 25 '12 at 22:47
It's just an idea. I suppose that this assumption is quite restrictive... – Beni Bogosel Nov 25 '12 at 22:51
I have tried to show that $u\in H_0^1(\Omega)$ without result. Did you tried? – Tomás Nov 25 '12 at 22:55
I don't think it is possible... Note that if the second equality can be obtained locally, then the conclusion also follows. The problem is that if we try to prove the equality for $\omega \subset \subset \Omega$ then a boundary term appears, and we have no sign control on it. – Beni Bogosel Nov 25 '12 at 23:03
but $u>0$ and maybe we can use the fact that $H_{loc}^1(\Omega)$ ? – Tomás Nov 26 '12 at 10:27
|
Combinations - selecting 7 persons
1. Sep 3, 2010
rajatgl16
In how many ways 7 persons can be selected from 5 indian, 4 british and 2 chinise, if atleast 2 are to be selected from each country.
2. Sep 3, 2010
rpf_rr
Re: Combination
you have to choose two chinese forced, at least two british and two indians, and one between british or indian. so you have
1*$$\frac{4!}{2!2!}$$*$$\frac{4!}{2!2!}$$*(3+2)=180
the last term: 3 indians+2 british
Last edited: Sep 3, 2010
3. Sep 4, 2010
rajatgl16
Re: Combination
hey I have answer booklet (not solution). And in it answer given is "100".
I tried it as,
At least 2 persons have to be slected form each country so :
ways of selecting 2 persons from 5 indians is 5C2
ways of selecting 2 persons from 4 british is 4C2
ways of selecting 2 persons from 2 chinese is 2C2
Thus ways of slecting 6 persons form entire group is 5C2 * 4C2 * 2C2
Now 1 person has to be selected from remaing 3 Indians and 2 british and 0 chinese
possible way of selceting 1 person is 5C1
Thus final answer to select 7 persons is 5C2 * 4C2* 2C2 * 5C1=300 so its also wrong
4. Sep 4, 2010
rpf_rr
Re: Combination
for nCk you mean n!/(k!*(n-k)!) ? If yes we computed the same thing, or better, you are right, i've done an error in the third factor, it is actually 5C2=5!/(3!2!), for me it's 600, but i made the same reasoning as you did, and i think it's right, if we understand the problem correctly.
5. Sep 4, 2010
rajatgl16
Re: Combination
Then may be ans in my ans booklet is wrong.
6. Sep 4, 2010
Office_Shredder
Staff Emeritus
Re: Combination
You've overcounted some. Imagine the British people are labeled A,B,C and D.
Scenario 1: You pick two British, A and B. Then you pick two Indians. Then you pick your last person from the five remaining people and pick person C.
Now imagine instead you pick two British, A and C. You pick the same two Indians as before. You pick your last person from the five remaining people and the person is B.
In both situations you've picked the same set of people but you counted them separately
7. Sep 4, 2010
rpf_rr
Re: Combination
Officeshredder is right. There are two possible situations, you pick 2 chinese 3 british and 2 indians or you pick 2 chinese 2 british and 3 indians, so you have
2C2*4C3*5C2+2C2*4C2*5C3=100
8. Sep 4, 2010
rajatgl16
Re: Combination
Hmm. I was wrong. Thanks guys for helping me,
|
# zbMATH — the first resource for mathematics
## Liang, Dong
Compute Distance To:
Author ID: liang.dong Published as: Liang, D.; Liang, Dong External Links: ORCID
Documents Indexed: 153 Publications since 1989
all top 5
#### Co-Authors
12 single-authored 11 Wang, Nian 8 Rui, Hongxing 7 Wang, Hong 7 Wang, Wenqia 6 Ding, Dawei 6 Fu, Kai 6 Zhang, Zhiyue 5 Ewing, Richard Edward 5 Li, Wanshan 5 Lyons, Stephen L. 5 Qin, Guan 5 Wu, Jianhong 5 Yuan, Yirang 5 Zhang, Bo 5 Zhao, Weidong 5 Zhou, Zhongguo 4 Cui, Ming 4 Gao, Liping 4 Liu, Qiegen 4 Wang, Shanshan 3 Chen, Wenbin 3 Deng, Dingwen 3 Du, Chuanbin 3 Fan, Yizheng 3 Hou, Baohui 3 Li, Xingjie 3 Sun, Guanying 3 Wang, Bo 3 Wilhelm, Wilbert E. 3 Xie, Jianqiang 2 Cai, Nian 2 Cheng, Aijie 2 Cheng, Yu 2 Guo, Qiang 2 Hu, Gensheng 2 Lin, Yanping 2 Pan, Hongfei 2 Peng, Xi 2 Sun, Tongjun 2 Tang, Jun 2 Wang, Jialing 2 Wang, Yushun 2 Weng, Peixuan 2 Wong, Yau Shu 2 Xie, Weisi 2 Xu, Wenwen 2 Xu, Zongben 2 Yan, Jie 2 Yan, Jinliang 2 Yuan, Qiang 2 Zhang, Fan 2 Zhou, Meiju 1 Akhavan, Yousef 1 Bao, Wenxia 1 Cai, Chenguang 1 Chang, Yuchou 1 Chen, Junning 1 Chen, Michael J. 1 Chen, Qingqing 1 Chen, Yenwei 1 Ding, Lianghui 1 Ding, Yan 1 Dong, Pei 1 Du, Ning 1 Duan, Zhenjie 1 Fan, Min 1 Fan, Yuzheng 1 Feng, David Dagan 1 Feng, Xingdong 1 Gao, Fuzheng 1 Gao, Yulong 1 Gojović, Marija Živković 1 Gong, Sunling 1 Gu, Fenxia 1 Guo, Cunshan 1 Han, Xianhua 1 Hou, Thomas Yizhao 1 Hu, Hongjie 1 Huang, Huaxiong 1 Huang, Jie (Jenny) 1 Huang, Linsheng 1 Huo, Xiukun 1 Iwamoto, Yutaro 1 Ji, Zheng 1 Jiang, Yaolin 1 Jiang, Ziwei 1 Jin, Dequan 1 Jin, Zhen 1 Kandel, Hom N. 1 Kong, Nana 1 Li, Fangxia 1 Li, Feng 1 Li, Feng 1 Li, Shuangdong 1 Li, Wei 1 Li, Wenshu 1 Li, Xiaoyi 1 Li, XinDong 1 Li, Yonghai 1 Li, Yuanfu ...and 79 more Co-Authors
all top 5
#### Serials
11 Journal of Computational and Applied Mathematics 7 Journal of Computational Physics 6 Applied Numerical Mathematics 6 Communications in Computational Physics 5 Numerical Methods for Partial Differential Equations 5 Journal of Scientific Computing 4 Applied Mathematics and Computation 4 Numerical Mathematics 4 International Journal of Numerical Analysis and Modeling 3 Journal of Shandong University. Natural Science Edition 3 Applied Mathematics and Mechanics. (English Edition) 3 IEEE Transactions on Image Processing 3 Journal of University of Science and Technology of China 3 Systems Engineering and Electronics 3 Computational & Mathematical Methods in Medicine 2 Journal of Mathematical Analysis and Applications 2 Mathematical Methods in the Applied Sciences 2 International Journal for Numerical Methods in Engineering 2 Acta Mathematicae Applicatae Sinica 2 Acta Mathematicae Applicatae Sinica. English Series 2 Northeastern Mathematical Journal 2 International Journal of Computer Mathematics 2 SIAM Journal on Scientific Computing 2 Journal of Harbin Institute of Technology. New Series 1 Computer Methods in Applied Mechanics and Engineering 1 IMA Journal of Applied Mathematics 1 IMA Journal of Numerical Analysis 1 International Journal for Numerical Methods in Fluids 1 Chaos, Solitons and Fractals 1 IEEE Transactions on Automatic Control 1 Naval Research Logistics 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Numerische Mathematik 1 SIAM Journal on Numerical Analysis 1 Numerical Mathematics 1 Mathematica Numerica Sinica 1 Chinese Annals of Mathematics. Series A 1 Acta Automatica Sinica 1 Computers & Operations Research 1 Journal of Shanghai Jiaotong University (Chinese Edition) 1 Journal of Southwest Jiaotong University 1 Journal of Tsinghua University. Science and Technology 1 Science in China. Series A 1 Journal of Southeast University. English Edition 1 Numerical Algorithms 1 Linear Algebra and its Applications 1 Mathematical Programming. Series A. Series B 1 Journal of Nonlinear Science 1 Applied Mathematics. Series A (Chinese Edition) 1 Computational and Applied Mathematics 1 Discrete and Continuous Dynamical Systems 1 Mathematical Problems in Engineering 1 Differential Equations and Dynamical Systems 1 Nonlinear Dynamics 1 Soft Computing 1 Discrete Mathematics and Theoretical Computer Science. DMTCS 1 Far East Journal of Applied Mathematics 1 Communications in Nonlinear Science and Numerical Simulation 1 Computational Geosciences 1 Progress in Natural Science 1 The ANZIAM Journal 1 Nonlinear Analysis. Real World Applications 1 Journal of Systems Science and Complexity 1 Communications in Information and Systems 1 Communications in Mathematical Sciences 1 ANACM. Applied Numerical Analysis and Computational Mathematics 1 Journal of Hefei University of Technology. Natural Science 1 International Journal of Computational Methods 1 Mathematical Biosciences and Engineering 1 AJXJTU. Academic Journal of Xi’an Jiaotong University 1 Journal of Anhui University. Natural Science Edition 1 Journal of Jiangsu University of Science and Technology. Natural Science Edition 1 Applicable Analysis and Discrete Mathematics 1 SIAM Journal on Imaging Sciences 1 International Journal of Biomathematics 1 Communications in Theoretical Physics 1 Journal of Control Science and Engineering 1 Chinese Journal of Engineering Mathematics 1 Science China. Information Sciences 1 Journal of Agricultural, Biological, and Environmental Statistics 1 Numerical Algebra, Control and Optimization 1 International Journal of Numerical Analysis and Modeling. Series B 1 Journal of Applied Analysis and Computation
all top 5
#### Fields
83 Numerical analysis (65-XX) 45 Partial differential equations (35-XX) 40 Fluid mechanics (76-XX) 19 Biology and other natural sciences (92-XX) 17 Computer science (68-XX) 13 Optics, electromagnetic theory (78-XX) 11 Geophysics (86-XX) 9 Operations research, mathematical programming (90-XX) 8 Information and communication theory, circuits (94-XX) 7 Ordinary differential equations (34-XX) 7 Systems theory; control (93-XX) 6 Combinatorics (05-XX) 4 Dynamical systems and ergodic theory (37-XX) 3 Calculus of variations and optimal control; optimization (49-XX) 3 Statistics (62-XX) 3 Classical thermodynamics, heat transfer (80-XX) 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 1 Mathematical logic and foundations (03-XX) 1 Real functions (26-XX) 1 Difference and functional equations (39-XX) 1 Approximations and expansions (41-XX) 1 Integral equations (45-XX) 1 Mechanics of deformable solids (74-XX)
#### Citations contained in zbMATH Open
84 Publications have been cited 526 times in 359 Documents Cited by Year
An approximation to miscible fluid flows in porous media with point sources and sinks by an Eulerian-Lagrangian localized adjoint method and mixed finite element methods. Zbl 0988.76054
Wang, Hong; Liang, Dong; Ewing, Richard E.; Lyons, Stephen L.; Qin, Guan
2000
Travelling waves and numerical approximations in a reaction advection diffusion equation with nonlocal delayed effects. Zbl 1017.92024
Liang, D.; Wu, J.
2003
Energy-conserved splitting FDTD methods for Maxwell’s equations. Zbl 1185.78020
Chen, Wenbin; Li, Xingjie; Liang, Dong
2008
Population dynamic models with nonlocal delay on bounded domains and their numerical computations. Zbl 1231.35287
Liang, Dong; So, Joseph W.-H.; Zhang, Fan; Zou, Xingfu
2003
Energy-conserved splitting finite-difference time-domain methods for Maxwell’s equations in three dimensions. Zbl 1220.78116
Chen, Wenbin; Li, Xingjie; Liang, Dong
2010
The Laplacian spread of a tree. Zbl 1153.05323
Fan, Yi-Zheng; Xu, Jing; Wang, Yi; Liang, Dong
2008
The splitting finite-difference time-domain methods for Maxwell’s equations in two dimensions. Zbl 1122.78021
Gao, Liping; Zhang, Bo; Liang, Dong
2007
An efficient S-DDM iterative approach for compressible contamination fluid flows in porous media. Zbl 1305.76074
Du, Chuanbin; Liang, Dong
2010
A new energy-conserved S-FDTD scheme for Maxwell’s equations in metamaterials. Zbl 1277.78033
Li, Wanshan; Liang, Dong; Lin, Yanping
2013
The least eigenvalue of graphs with given connectivity. Zbl 1171.05365
Ye, Miao-Lin; Fan, Yi-Zheng; Liang, Dong
2009
An optimal weighted upwinding covolume method on non-standard grids for convection-diffusion problems in 2D. Zbl 1110.76321
Liang, Dong; Zhao, Weidong
2006
Modelling population growth with delayed nonlocal reaction on 2 dimensions. Zbl 1061.92048
Liang, Dong; Wu, Jianhong; Zhang, Fan
2005
An ELLAM-MFEM solution technique for compressible fluid flows in porous media with point sources and sinks. Zbl 0979.76051
Wang, Hong; Liang, Dong; Ewing, Richard E.; Lyons, Stephen L.; Qin, Guan
2000
A high-order upwind method for the convection-diffusion problem. Zbl 0897.76064
Liang, Dong; Zhao, Weidong
1997
The mass-preserving S-DDM scheme for two-dimensional parabolic equations. Zbl 1388.65073
Zhou, Zhongguo; Liang, Dong
2016
The efficient S-DDM scheme and its analysis for solving parabolic equations. Zbl 1349.76499
Liang, Dong; Du, Chuanbin
2014
An ELLAM approximation for highly compressible multicomponent flows in porous media. Zbl 1094.76555
Wang, H.; Liang, D.; Ewing, R. E.; Lyons, S. L.; Qin, G.
2002
The mass-preserving and modified-upwind splitting DDM scheme for time-dependent convection-diffusion equations. Zbl 1357.65141
Zhou, Zhongguo; Liang, Dong
2017
A kind of upwind schemes for convection diffusion equations. Zbl 0850.65169
Liang, Dong
1991
Computation of a moving drop/bubble on a solid surface using a front-tracking method. Zbl 1161.76541
Huang, Huaxiong; Liang, Dong; Wetton, Brian
2004
The spatial fourth-order energy-conserved S-FDTD scheme for Maxwell’s equations. Zbl 1349.78094
Liang, Dong; Yuan, Qiang
2013
Travelling wave solutions in a delayed predator-prey diffusion PDE system: point-to-periodic and point-to-point waves. Zbl 1250.35171
Liang, Dong; Weng, Peixuan; Wu, Jianhong
2012
Symmetric energy-conserved splitting FDTD scheme for the Maxwell’s equations. Zbl 1364.78035
Chen, Wenbin; Li, Xingjie; Liang, Dong
2009
An efficient second-order characteristic finite element method for nonlinear aerosol dynamic equations. Zbl 1176.76070
Liang, Dong; Wang, Wenqia; Cheng, Yu
2009
Asymptotic patterns of a structured population diffusing in a two-dimensional strip. Zbl 1152.35409
Weng, Peixuan; Liang, Dong; Wu, Jianhong
2008
An improved numerical simulator for different types of flows in porous media. Zbl 1079.76044
Wang, Hong; Liang, Dong; Ewing, Richard E.; Lyons, Stephen L.; Qin, Guan
2003
A multipoint flux mixed finite element method for the compressible Darcy-Forchheimer models. Zbl 1426.76319
Xu, Wenwen; Liang, Dong; Rui, Hongxing
2017
Pinning synchronization of fractional order complex-variable dynamical networks with time-varying coupling. Zbl 1380.93133
Ding, Dawei; Yan, Jie; Wang, Nian; Liang, Dong
2017
Hybrid control of Hopf bifurcation in a dual model of Internet congestion control system. Zbl 1306.93056
Ding, Da-Wei; Qin, Xue-Mei; Wang, Nian; Wu, Ting-Ting; Liang, Dong
2014
Augmented Lagrangian-based sparse representation method with dictionary updating for image deblurring. Zbl 1279.68332
Liu, Qiegen; Liang, Dong; Song, Ying; Luo, Jianhua; Zhu, Yuemin; Li, Wenshu
2013
Splitting finite difference methods on staggered grids for the three-dimensional time-dependent Maxwell equations. Zbl 1364.78036
Gao, Liping; Zhang, Bo; Liang, Dong
2008
A fractional step ELLAM approach to high-dimensional convection-diffusion problems with forward particle tracking. Zbl 1110.65091
Liang, Dong; Du, Chuanbin; Wang, Hong
2007
Error estimates for mixed finite element approximations of the viscoelasticity wave equation. Zbl 1071.65129
Gao, Liping; Liang, Dong; Zhang, Bo
2004
Characteristics-finite element methods for seawater intrusion. Numerical simulation and theoretical analysis. Zbl 0968.76567
Yuan, Yirang; Liang, Dong; Rui, Hongxing
1998
The finite difference scheme for nonlinear Schrödinger equations on unbounded domain by artificial boundary conditions. Zbl 1412.65094
Wang, Bo; Liang, Dong
2018
Adaptive synchronization of fractional order complex-variable dynamical networks via pinning control. Zbl 1377.34070
Ding, Da-Wei; Yan, Jie; Wang, Nian; Liang, Dong
2017
The conservative characteristic FD methods for atmospheric aerosol transport problems. Zbl 1349.76461
Fu, Kai; Liang, Dong
2016
Adaptive dictionary learning in sparse gradient domain for image recovery. Zbl 1373.94258
Liu, Qiegen; Wang, Shanshan; Ying, Leslie; Peng, Xi; Zhu, Yanjie; Liang, Dong
2013
Numerical analysis of graded mesh methods for a class of second kind integral equations on the real line. Zbl 1051.65131
Liang, Dong; Zhang, Bo
2004
Predicting the consequences of seawater intrusion and protection projects. Zbl 0988.76518
Yuan, Yirang; Liang, Dong; Rui, Hongxing
2001
An accurate approximation to compressible flow in porous media with wells. Zbl 1072.76571
Wang, Hong; Liang, Dong; Ewing, Richard E.; Lyons, Stephen L.; Qin, Guan
2000
Optimal weighted upwind finite volume method for convection-diffusion equations in 2D. Zbl 1418.65159
Gao, Yulong; Liang, Dong; Li, Yonghai
2019
The time fourth-order compact ADI methods for solving two-dimensional nonlinear wave equations. Zbl 1427.65157
Deng, Dingwen; Liang, Dong
2018
High-order energy-preserving schemes for the improved Boussinesq equation. Zbl 1407.76109
Yan, Jinliang; Zhang, Zhiyue; Zhao, Tengjin; Liang, Dong
2018
Multiscale analysis for convection-dominated transport equations. Zbl 1163.76048
Hou, Thomas Y.; Liang, Dong
2009
Decomposition schemes and acceleration techniques in application to production-assembly-distribution system design. Zbl 1278.90130
Liang, Dong; Wilhelm, Wilbert E.
2008
The characteristics-finite difference methods for sea water intrusion numerical simulation and optimal order $$\ell^2$$ error estimates. Zbl 0895.76060
Yuan, Yirang; Liang, Dong; Rui, Hongxing; Wang, Gaohong
1996
Upwind generalized difference schemes for convection-diffusion problems. Zbl 0727.65082
Liang, Dong
1990
The energy-preserving finite difference methods and their analyses for system of nonlinear wave equations in two dimensions. Zbl 1434.65111
Deng, Dingwen; Liang, Dong
2020
A new fourth-order energy dissipative difference method for high-dimensional nonlinear fractional generalized wave equations. Zbl 07264479
Xie, Jianqiang; Zhang, Zhiyue; Liang, Dong
2019
A conservative splitting difference scheme for the fractional-in-space Boussinesq equation. Zbl 1444.35132
Xie, Jianqiang; Zhang, Zhiyue; Liang, Dong
2019
The conservative and fourth-order compact finite difference schemes for regularized long wave equation. Zbl 1419.65033
Wang, Bo; Sun, Tongjun; Liang, Dong
2019
The new mass-conserving S-DDM scheme for two-dimensional parabolic equations with variable coefficients. Zbl 1427.65205
Zhou, Zhongguo; Liang, Dong; Wong, Yaushu
2018
Mass-preserving time second-order explicit-implicit domain decomposition schemes for solving parabolic equations with variable coefficients. Zbl 1400.65048
Zhou, Zhongguo; Liang, Dong
2018
High-order finite difference methods for a second order dual-phase-lagging models of microscale heat transfer. Zbl 1411.80005
Deng, Dingwen; Jiang, Yaolin; Liang, Dong
2017
The time second order mass conservative characteristic FDM for advection-diffusion equations in high dimensions. Zbl 1379.65061
Fu, Kai; Liang, Dong
2017
Symmetric energy-conserved S-FDTD scheme for two-dimensional Maxwell’s equations in negative index metamaterials. Zbl 1372.78020
Li, Wanshan; Liang, Dong
2016
Energy-conserved splitting spectral methods for two dimensional Maxwell’s equations. Zbl 1293.78014
Zeng, Fanhai; Ma, Heping; Liang, Dong
2014
Numerical analysis of the second-order characteristic FEM for nonlinear aerosol dynamic equations. Zbl 1278.76056
Cui, Ming; Fu, Kai; Liang, Dong; Cheng, Yu; Wang, Wenqia
2014
Existence and properties of stationary solution of dynamical neural field. Zbl 1219.35328
Jin, Dequan; Liang, Dong; Peng, Jigen
2011
Numerical analysis to discontinuous Galerkin methods for the age structured population model of marine invertebrates. Zbl 1156.92035
Sun, Guanying; Liang, Dong; Wang, Wenqia
2009
The modified method of upwind with finite difference fractional steps procedure for the numerical simulation and analysis of seawater intrusion. Zbl 1141.86302
Yuan, Yirang; Liang, Dong; Rui, Hongxing
2006
The conservative splitting domain decomposition method for multicomponent contamination flows in porous media. Zbl 1453.65305
Liang, Dong; Zhou, Zhongguo
2020
The spatial fourth-order compact splitting FDTD scheme with modified energy-conserved identity for two-dimensional Lorentz model. Zbl 07126192
Li, W.; Liang, D.
2020
Second order in time and space corrected explicit-implicit domain decomposition scheme for convection-diffusion equations. Zbl 1415.76443
Akhavan, Yousef; Liang, Dong; Chen, Michael
2019
The conservative splitting high-order compact finite difference scheme for two-dimensional Schrödinger equations. Zbl 1404.65102
Wang, Bo; Liang, Dong; Sun, Tongjun
2018
Analysis of a Fourier pseudo-spectral conservative scheme for the Klein-Gordon-Schrödinger equation. Zbl 1390.65082
Wang, Jialing; Wang, Yushun; Liang, Dong
2018
Global energy-tracking identities and global energy-tracking splitting FDTD schemes for the Drude models of Maxwell’s equations in three-dimensional metamaterials. Zbl 1381.78013
Li, Wanshan; Liang, Dong; Lin, Yanping
2017
ADI-FDTD method for two-dimensional transient electromagnetic problems. Zbl 1373.78437
Li, Wanshan; Zhang, Yile; Wong, Yau Shu; Liang, Dong
2016
Hopf bifurcation control in a FAST TCP and RED model via multiple control schemes. Zbl 1346.93193
Ding, Dawei; Wang, Chun; Ding, Lianghui; Wang, Nian; Liang, Dong
2016
Stability analysis of R&D cooperation in a supply chain. Zbl 1394.90059
Xu, Luyun; Liang, Dong; Duan, Zhenjie; Xiao, Xu
2015
Undersampled MR image reconstruction with data-driven tight frame. Zbl 1343.92265
Liu, Jianbo; Wang, Shanshan; Peng, Xi; Liang, Dong
2015
The time second-order characteristic FEM for nonlinear multicomponent aerosol dynamic equations in environment. Zbl 1329.65222
Fu, Kai; Liang, Dong; Wang, Wenqia; Cui, Ming
2015
Locally one-dimensional-alternating segment explicit-implicit and locally one-dimensional-alternating segment Crank-Nicolson methods for two-dimension parabolic equations. Zbl 1317.65181
Zhang, Shou-Hui; Liang, Dong
2015
Accelerating dynamic cardiac MR imaging using structured sparse representation. Zbl 1307.92159
Cai, Nian; Wang, Shengru; Zhu, Shasha; Liang, Dong
2013
New energy-conserved identitiesand super-convergence of the symmetric EC-S-FDTD scheme for Maxwell’s equations in 2D. Zbl 1373.78435
Gao, Liping; Liang, Dong
2012
A new multiple sub-domain RS-HDMR method and its application to tropospheric alkane photochemistry model. Zbl 1252.65110
Yuan, Qiang; Liang, Dong
2011
Second-order characteristic schemes in time and age for a nonlinear age-structured population model. Zbl 1214.92058
Liang, Dong; Sun, Guanying; Wang, Wenqia
2011
A generalization of column generation to accelerate convergence. Zbl 1184.90102
Liang, Dong; Wilhelm, Wilbert E.
2010
Wavelet Galerkin methods for aerosol dynamic equations in atmospheric environment. Zbl 1364.76089
Liang, Dong; Guo, Qiang; Gong, Sunling
2009
Structured influenza model for meta-population. Zbl 1342.92238
Gojović, Marija Živković; Liang, Dong; Wu, Jianhong
2009
Spectral property of certain class of graphs associated with generalized Bethe trees and transitive graphs. Zbl 1199.05215
Fan, Yi-Zheng; Li, Shuang-Dong; Liang, Dong
2008
Mixed finite element method for Sobolev equations and its alternating-direction iterative scheme. Zbl 0956.65089
Zhang, Huaiyu; Liang, Dong
1999
A characteristics mixed finite element method of numerical simulation for 2-phase immiscible flow. Zbl 0759.76043
Liang, Dong
1991
The energy-preserving finite difference methods and their analyses for system of nonlinear wave equations in two dimensions. Zbl 1434.65111
Deng, Dingwen; Liang, Dong
2020
The conservative splitting domain decomposition method for multicomponent contamination flows in porous media. Zbl 1453.65305
Liang, Dong; Zhou, Zhongguo
2020
The spatial fourth-order compact splitting FDTD scheme with modified energy-conserved identity for two-dimensional Lorentz model. Zbl 07126192
Li, W.; Liang, D.
2020
Optimal weighted upwind finite volume method for convection-diffusion equations in 2D. Zbl 1418.65159
Gao, Yulong; Liang, Dong; Li, Yonghai
2019
A new fourth-order energy dissipative difference method for high-dimensional nonlinear fractional generalized wave equations. Zbl 07264479
Xie, Jianqiang; Zhang, Zhiyue; Liang, Dong
2019
A conservative splitting difference scheme for the fractional-in-space Boussinesq equation. Zbl 1444.35132
Xie, Jianqiang; Zhang, Zhiyue; Liang, Dong
2019
The conservative and fourth-order compact finite difference schemes for regularized long wave equation. Zbl 1419.65033
Wang, Bo; Sun, Tongjun; Liang, Dong
2019
Second order in time and space corrected explicit-implicit domain decomposition scheme for convection-diffusion equations. Zbl 1415.76443
Akhavan, Yousef; Liang, Dong; Chen, Michael
2019
The finite difference scheme for nonlinear Schrödinger equations on unbounded domain by artificial boundary conditions. Zbl 1412.65094
Wang, Bo; Liang, Dong
2018
The time fourth-order compact ADI methods for solving two-dimensional nonlinear wave equations. Zbl 1427.65157
Deng, Dingwen; Liang, Dong
2018
High-order energy-preserving schemes for the improved Boussinesq equation. Zbl 1407.76109
Yan, Jinliang; Zhang, Zhiyue; Zhao, Tengjin; Liang, Dong
2018
The new mass-conserving S-DDM scheme for two-dimensional parabolic equations with variable coefficients. Zbl 1427.65205
Zhou, Zhongguo; Liang, Dong; Wong, Yaushu
2018
Mass-preserving time second-order explicit-implicit domain decomposition schemes for solving parabolic equations with variable coefficients. Zbl 1400.65048
Zhou, Zhongguo; Liang, Dong
2018
The conservative splitting high-order compact finite difference scheme for two-dimensional Schrödinger equations. Zbl 1404.65102
Wang, Bo; Liang, Dong; Sun, Tongjun
2018
Analysis of a Fourier pseudo-spectral conservative scheme for the Klein-Gordon-Schrödinger equation. Zbl 1390.65082
Wang, Jialing; Wang, Yushun; Liang, Dong
2018
The mass-preserving and modified-upwind splitting DDM scheme for time-dependent convection-diffusion equations. Zbl 1357.65141
Zhou, Zhongguo; Liang, Dong
2017
A multipoint flux mixed finite element method for the compressible Darcy-Forchheimer models. Zbl 1426.76319
Xu, Wenwen; Liang, Dong; Rui, Hongxing
2017
Pinning synchronization of fractional order complex-variable dynamical networks with time-varying coupling. Zbl 1380.93133
Ding, Dawei; Yan, Jie; Wang, Nian; Liang, Dong
2017
Adaptive synchronization of fractional order complex-variable dynamical networks via pinning control. Zbl 1377.34070
Ding, Da-Wei; Yan, Jie; Wang, Nian; Liang, Dong
2017
High-order finite difference methods for a second order dual-phase-lagging models of microscale heat transfer. Zbl 1411.80005
Deng, Dingwen; Jiang, Yaolin; Liang, Dong
2017
The time second order mass conservative characteristic FDM for advection-diffusion equations in high dimensions. Zbl 1379.65061
Fu, Kai; Liang, Dong
2017
Global energy-tracking identities and global energy-tracking splitting FDTD schemes for the Drude models of Maxwell’s equations in three-dimensional metamaterials. Zbl 1381.78013
Li, Wanshan; Liang, Dong; Lin, Yanping
2017
The mass-preserving S-DDM scheme for two-dimensional parabolic equations. Zbl 1388.65073
Zhou, Zhongguo; Liang, Dong
2016
The conservative characteristic FD methods for atmospheric aerosol transport problems. Zbl 1349.76461
Fu, Kai; Liang, Dong
2016
Symmetric energy-conserved S-FDTD scheme for two-dimensional Maxwell’s equations in negative index metamaterials. Zbl 1372.78020
Li, Wanshan; Liang, Dong
2016
ADI-FDTD method for two-dimensional transient electromagnetic problems. Zbl 1373.78437
Li, Wanshan; Zhang, Yile; Wong, Yau Shu; Liang, Dong
2016
Hopf bifurcation control in a FAST TCP and RED model via multiple control schemes. Zbl 1346.93193
Ding, Dawei; Wang, Chun; Ding, Lianghui; Wang, Nian; Liang, Dong
2016
Stability analysis of R&D cooperation in a supply chain. Zbl 1394.90059
Xu, Luyun; Liang, Dong; Duan, Zhenjie; Xiao, Xu
2015
Undersampled MR image reconstruction with data-driven tight frame. Zbl 1343.92265
Liu, Jianbo; Wang, Shanshan; Peng, Xi; Liang, Dong
2015
The time second-order characteristic FEM for nonlinear multicomponent aerosol dynamic equations in environment. Zbl 1329.65222
Fu, Kai; Liang, Dong; Wang, Wenqia; Cui, Ming
2015
Locally one-dimensional-alternating segment explicit-implicit and locally one-dimensional-alternating segment Crank-Nicolson methods for two-dimension parabolic equations. Zbl 1317.65181
Zhang, Shou-Hui; Liang, Dong
2015
The efficient S-DDM scheme and its analysis for solving parabolic equations. Zbl 1349.76499
Liang, Dong; Du, Chuanbin
2014
Hybrid control of Hopf bifurcation in a dual model of Internet congestion control system. Zbl 1306.93056
Ding, Da-Wei; Qin, Xue-Mei; Wang, Nian; Wu, Ting-Ting; Liang, Dong
2014
Energy-conserved splitting spectral methods for two dimensional Maxwell’s equations. Zbl 1293.78014
Zeng, Fanhai; Ma, Heping; Liang, Dong
2014
Numerical analysis of the second-order characteristic FEM for nonlinear aerosol dynamic equations. Zbl 1278.76056
Cui, Ming; Fu, Kai; Liang, Dong; Cheng, Yu; Wang, Wenqia
2014
A new energy-conserved S-FDTD scheme for Maxwell’s equations in metamaterials. Zbl 1277.78033
Li, Wanshan; Liang, Dong; Lin, Yanping
2013
The spatial fourth-order energy-conserved S-FDTD scheme for Maxwell’s equations. Zbl 1349.78094
Liang, Dong; Yuan, Qiang
2013
Augmented Lagrangian-based sparse representation method with dictionary updating for image deblurring. Zbl 1279.68332
Liu, Qiegen; Liang, Dong; Song, Ying; Luo, Jianhua; Zhu, Yuemin; Li, Wenshu
2013
Adaptive dictionary learning in sparse gradient domain for image recovery. Zbl 1373.94258
Liu, Qiegen; Wang, Shanshan; Ying, Leslie; Peng, Xi; Zhu, Yanjie; Liang, Dong
2013
Accelerating dynamic cardiac MR imaging using structured sparse representation. Zbl 1307.92159
Cai, Nian; Wang, Shengru; Zhu, Shasha; Liang, Dong
2013
Travelling wave solutions in a delayed predator-prey diffusion PDE system: point-to-periodic and point-to-point waves. Zbl 1250.35171
Liang, Dong; Weng, Peixuan; Wu, Jianhong
2012
New energy-conserved identitiesand super-convergence of the symmetric EC-S-FDTD scheme for Maxwell’s equations in 2D. Zbl 1373.78435
Gao, Liping; Liang, Dong
2012
Existence and properties of stationary solution of dynamical neural field. Zbl 1219.35328
Jin, Dequan; Liang, Dong; Peng, Jigen
2011
A new multiple sub-domain RS-HDMR method and its application to tropospheric alkane photochemistry model. Zbl 1252.65110
Yuan, Qiang; Liang, Dong
2011
Second-order characteristic schemes in time and age for a nonlinear age-structured population model. Zbl 1214.92058
Liang, Dong; Sun, Guanying; Wang, Wenqia
2011
Energy-conserved splitting finite-difference time-domain methods for Maxwell’s equations in three dimensions. Zbl 1220.78116
Chen, Wenbin; Li, Xingjie; Liang, Dong
2010
An efficient S-DDM iterative approach for compressible contamination fluid flows in porous media. Zbl 1305.76074
Du, Chuanbin; Liang, Dong
2010
A generalization of column generation to accelerate convergence. Zbl 1184.90102
Liang, Dong; Wilhelm, Wilbert E.
2010
The least eigenvalue of graphs with given connectivity. Zbl 1171.05365
Ye, Miao-Lin; Fan, Yi-Zheng; Liang, Dong
2009
Symmetric energy-conserved splitting FDTD scheme for the Maxwell’s equations. Zbl 1364.78035
Chen, Wenbin; Li, Xingjie; Liang, Dong
2009
An efficient second-order characteristic finite element method for nonlinear aerosol dynamic equations. Zbl 1176.76070
Liang, Dong; Wang, Wenqia; Cheng, Yu
2009
Multiscale analysis for convection-dominated transport equations. Zbl 1163.76048
Hou, Thomas Y.; Liang, Dong
2009
Numerical analysis to discontinuous Galerkin methods for the age structured population model of marine invertebrates. Zbl 1156.92035
Sun, Guanying; Liang, Dong; Wang, Wenqia
2009
Wavelet Galerkin methods for aerosol dynamic equations in atmospheric environment. Zbl 1364.76089
Liang, Dong; Guo, Qiang; Gong, Sunling
2009
Structured influenza model for meta-population. Zbl 1342.92238
Gojović, Marija Živković; Liang, Dong; Wu, Jianhong
2009
Energy-conserved splitting FDTD methods for Maxwell’s equations. Zbl 1185.78020
Chen, Wenbin; Li, Xingjie; Liang, Dong
2008
The Laplacian spread of a tree. Zbl 1153.05323
Fan, Yi-Zheng; Xu, Jing; Wang, Yi; Liang, Dong
2008
Asymptotic patterns of a structured population diffusing in a two-dimensional strip. Zbl 1152.35409
Weng, Peixuan; Liang, Dong; Wu, Jianhong
2008
Splitting finite difference methods on staggered grids for the three-dimensional time-dependent Maxwell equations. Zbl 1364.78036
Gao, Liping; Zhang, Bo; Liang, Dong
2008
Decomposition schemes and acceleration techniques in application to production-assembly-distribution system design. Zbl 1278.90130
Liang, Dong; Wilhelm, Wilbert E.
2008
Spectral property of certain class of graphs associated with generalized Bethe trees and transitive graphs. Zbl 1199.05215
Fan, Yi-Zheng; Li, Shuang-Dong; Liang, Dong
2008
The splitting finite-difference time-domain methods for Maxwell’s equations in two dimensions. Zbl 1122.78021
Gao, Liping; Zhang, Bo; Liang, Dong
2007
A fractional step ELLAM approach to high-dimensional convection-diffusion problems with forward particle tracking. Zbl 1110.65091
Liang, Dong; Du, Chuanbin; Wang, Hong
2007
An optimal weighted upwinding covolume method on non-standard grids for convection-diffusion problems in 2D. Zbl 1110.76321
Liang, Dong; Zhao, Weidong
2006
The modified method of upwind with finite difference fractional steps procedure for the numerical simulation and analysis of seawater intrusion. Zbl 1141.86302
Yuan, Yirang; Liang, Dong; Rui, Hongxing
2006
Modelling population growth with delayed nonlocal reaction on 2 dimensions. Zbl 1061.92048
Liang, Dong; Wu, Jianhong; Zhang, Fan
2005
Computation of a moving drop/bubble on a solid surface using a front-tracking method. Zbl 1161.76541
Huang, Huaxiong; Liang, Dong; Wetton, Brian
2004
Error estimates for mixed finite element approximations of the viscoelasticity wave equation. Zbl 1071.65129
Gao, Liping; Liang, Dong; Zhang, Bo
2004
Numerical analysis of graded mesh methods for a class of second kind integral equations on the real line. Zbl 1051.65131
Liang, Dong; Zhang, Bo
2004
Travelling waves and numerical approximations in a reaction advection diffusion equation with nonlocal delayed effects. Zbl 1017.92024
Liang, D.; Wu, J.
2003
Population dynamic models with nonlocal delay on bounded domains and their numerical computations. Zbl 1231.35287
Liang, Dong; So, Joseph W.-H.; Zhang, Fan; Zou, Xingfu
2003
An improved numerical simulator for different types of flows in porous media. Zbl 1079.76044
Wang, Hong; Liang, Dong; Ewing, Richard E.; Lyons, Stephen L.; Qin, Guan
2003
An ELLAM approximation for highly compressible multicomponent flows in porous media. Zbl 1094.76555
Wang, H.; Liang, D.; Ewing, R. E.; Lyons, S. L.; Qin, G.
2002
Predicting the consequences of seawater intrusion and protection projects. Zbl 0988.76518
Yuan, Yirang; Liang, Dong; Rui, Hongxing
2001
An approximation to miscible fluid flows in porous media with point sources and sinks by an Eulerian-Lagrangian localized adjoint method and mixed finite element methods. Zbl 0988.76054
Wang, Hong; Liang, Dong; Ewing, Richard E.; Lyons, Stephen L.; Qin, Guan
2000
An ELLAM-MFEM solution technique for compressible fluid flows in porous media with point sources and sinks. Zbl 0979.76051
Wang, Hong; Liang, Dong; Ewing, Richard E.; Lyons, Stephen L.; Qin, Guan
2000
An accurate approximation to compressible flow in porous media with wells. Zbl 1072.76571
Wang, Hong; Liang, Dong; Ewing, Richard E.; Lyons, Stephen L.; Qin, Guan
2000
Mixed finite element method for Sobolev equations and its alternating-direction iterative scheme. Zbl 0956.65089
Zhang, Huaiyu; Liang, Dong
1999
Characteristics-finite element methods for seawater intrusion. Numerical simulation and theoretical analysis. Zbl 0968.76567
Yuan, Yirang; Liang, Dong; Rui, Hongxing
1998
A high-order upwind method for the convection-diffusion problem. Zbl 0897.76064
Liang, Dong; Zhao, Weidong
1997
The characteristics-finite difference methods for sea water intrusion numerical simulation and optimal order $$\ell^2$$ error estimates. Zbl 0895.76060
Yuan, Yirang; Liang, Dong; Rui, Hongxing; Wang, Gaohong
1996
A kind of upwind schemes for convection diffusion equations. Zbl 0850.65169
Liang, Dong
1991
A characteristics mixed finite element method of numerical simulation for 2-phase immiscible flow. Zbl 0759.76043
Liang, Dong
1991
Upwind generalized difference schemes for convection-diffusion problems. Zbl 0727.65082
Liang, Dong
1990
all top 5
#### Cited by 650 Authors
36 Liang, Dong 14 Li, Jichun 10 Huang, Yunqing 9 Wang, Hong 9 Zhang, Zhiyue 8 Li, Wan-Tong 8 Rui, Hongxing 8 Wang, Zhi Cheng 7 Fan, Yizheng 7 Fu, Kai 7 Yuan, Yirang 7 Zhou, Zhongguo 6 Mei, Ming 6 Ruan, Shigui 6 Wang, Yushun 6 Xie, Jianqiang 6 Yi, Taishan 6 Yuan, Yueding 6 Zou, Xingfu 5 Andrade, Enide 5 Bani-Yaghoub, Majid 5 Cai, Jiaxiang 5 Guo, Hui 5 Hong, Jialin 5 Lin, Yanping 5 Robbiano, María 5 Sun, Weiwei 5 Wang, Wenqia 4 Droniou, Jérôme 4 Gao, Fuzheng 4 Gao, Liping 4 Guo, Zhiming 4 Hu, Wenjie 4 Liu, Bolian 4 Trofimchuk, Sergei I. 4 Wang, Yi 4 Wu, Jianhong 4 Zhang, Jiansong 4 Zhao, Weidong 3 Arbogast, Todd 3 Cai, Wenjun 3 Chen, Yanping 3 Cui, Ming 3 Dehghan Takht Fooladi, Mehdi 3 Du, Chuanbin 3 Du, Ning 3 Feng, Hui 3 Gong, Yuezheng 3 Ji, Lihai 3 Kong, Linghua 3 Li, Wanshan 3 Li, XinDong 3 Lin, Guo 3 Ou, Chunhua 3 Rodríguez, Jonnathan 3 Schnaubelt, Roland 3 Sun, Shuyu 3 Wang, Yang 3 Wang, Yifu 3 Xue, Guanyu 3 Yang, Chaoxia 3 Yang, Wei 3 Yin, Jingxue 3 You, Zhifu 2 Abbasbandy, Saeid 2 Aguerrea, Maitere 2 Al-Jararha, Mohammadkheer M. 2 Al-Lawatia, Mohamed 2 Arrarás, Andres 2 Bai, Jiahui 2 Cai, Wentao 2 Cao, Jinde 2 Chen, Haibo 2 Chen, Huanzhen 2 Chen, Wenbin 2 Cheng, Hanz Martin C. 2 Deng, Dingwen 2 Duan, Yueliang 2 Eilinghoff, Johannes 2 Ewing, Richard Edward 2 Fang, Zhiwei 2 Ginting, Victor 2 Hansen, Per Christian 2 Hasík, Karel 2 He, Mingyan 2 Huang, Chieh-Sen 2 Jahnke, Tobias 2 Jia, Hongen 2 Jiang, Guoping 2 Jiang, Ziwen 2 Jiwari, Ram 2 Kopfová, Jana 2 Krell, Stella 2 Kumar, Sarvesh 2 Li, Buyang 2 Li, Hong 2 Li, Lin 2 Li, Xingjie 2 Li, Yonghai 2 Lin, Chikun ...and 550 more Authors
all top 5
#### Cited in 99 Serials
33 Journal of Computational Physics 22 Journal of Computational and Applied Mathematics 22 Linear Algebra and its Applications 17 Applied Mathematics and Computation 17 Journal of Differential Equations 15 Computers & Mathematics with Applications 15 Journal of Scientific Computing 12 Numerical Methods for Partial Differential Equations 9 Journal of Mathematical Analysis and Applications 8 Applied Numerical Mathematics 8 Computational and Applied Mathematics 7 Computer Methods in Applied Mechanics and Engineering 7 Applied Mathematical Modelling 6 Numerische Mathematik 6 Applied Mathematics Letters 6 Advances in Difference Equations 5 Mathematical Problems in Engineering 5 Communications in Nonlinear Science and Numerical Simulation 5 Computational & Mathematical Methods in Medicine 4 Chaos, Solitons and Fractals 4 Mathematics and Computers in Simulation 4 SIAM Journal on Numerical Analysis 4 Abstract and Applied Analysis 4 Nonlinear Analysis. Real World Applications 3 Computers and Fluids 3 Mathematical Methods in the Applied Sciences 3 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 3 International Journal of Computer Mathematics 3 Journal of Dynamics and Differential Equations 3 SIAM Journal on Scientific Computing 3 Journal of Systems Science and Complexity 3 Communications on Pure and Applied Analysis 3 Science China. Mathematics 2 Linear and Multilinear Algebra 2 ZAMP. Zeitschrift für angewandte Mathematik und Physik 2 BIT 2 Numerical Functional Analysis and Optimization 2 Applied Mathematics and Mechanics. (English Edition) 2 Acta Mathematicae Applicatae Sinica. English Series 2 Numerical Algorithms 2 Journal of Nonlinear Science 2 Discrete and Continuous Dynamical Systems 2 Nonlinear Dynamics 2 Journal of Dynamical and Control Systems 2 Discrete and Continuous Dynamical Systems. Series B 2 Journal of Applied Mathematics 2 Journal of Applied Mathematics and Computing 2 International Journal of Computational Methods 2 Boundary Value Problems 2 Advances in Mathematical Physics 2 Journal of Applied Analysis and Computation 1 Applicable Analysis 1 Computer Physics Communications 1 International Journal of Theoretical Physics 1 Journal of Mathematical Physics 1 Zhurnal Vychislitel’noĭ Matematiki i Matematicheskoĭ Fiziki 1 Mathematics of Computation 1 Czechoslovak Mathematical Journal 1 Journal of Optimization Theory and Applications 1 Kybernetes 1 Transactions of the American Mathematical Society 1 Graphs and Combinatorics 1 Computers & Operations Research 1 Neural Networks 1 European Journal of Applied Mathematics 1 Applications of Mathematics 1 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 1 European Journal of Operational Research 1 Mathematical Programming. Series A. Series B 1 Cybernetics and Systems Analysis 1 Journal of Mathematical Sciences (New York) 1 Turkish Journal of Mathematics 1 Advances in Computational Mathematics 1 Complexity 1 Discussiones Mathematicae. Graph Theory 1 Taiwanese Journal of Mathematics 1 Journal of Inequalities and Applications 1 Discrete Dynamics in Nature and Society 1 Computational Geosciences 1 Foundations of Computational Mathematics 1 Computational Methods in Applied Mathematics 1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 1 Multiscale Modeling & Simulation 1 Mediterranean Journal of Mathematics 1 Mathematical Biosciences and Engineering 1 Frontiers of Mathematics in China 1 Discrete and Continuous Dynamical Systems. Series S 1 Journal of Nonlinear Science and Applications 1 Symmetry 1 The Journal of Mathematical Neuroscience 1 Journal of the Korean Society for Industrial and Applied Mathematics 1 S$$\vec{\text{e}}$$MA Journal 1 Journal of Theoretical Biology 1 East Asian Journal on Applied Mathematics 1 Journal of Function Spaces 1 International Journal of Partial Differential Equations 1 International Journal of Applied and Computational Mathematics 1 AMM. Applied Mathematics and Mechanics. (English Edition) 1 Results in Applied Mathematics
all top 5
#### Cited in 27 Fields
203 Numerical analysis (65-XX) 169 Partial differential equations (35-XX) 87 Fluid mechanics (76-XX) 58 Biology and other natural sciences (92-XX) 42 Optics, electromagnetic theory (78-XX) 35 Combinatorics (05-XX) 25 Ordinary differential equations (34-XX) 23 Linear and multilinear algebra; matrix theory (15-XX) 17 Dynamical systems and ergodic theory (37-XX) 12 Systems theory; control (93-XX) 8 Computer science (68-XX) 8 Geophysics (86-XX) 8 Operations research, mathematical programming (90-XX) 7 Integral equations (45-XX) 6 Mechanics of deformable solids (74-XX) 5 Real functions (26-XX) 5 Information and communication theory, circuits (94-XX) 4 Operator theory (47-XX) 3 Classical thermodynamics, heat transfer (80-XX) 3 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 2 Calculus of variations and optimal control; optimization (49-XX) 2 Global analysis, analysis on manifolds (58-XX) 2 Quantum theory (81-XX) 2 Statistical mechanics, structure of matter (82-XX) 1 Difference and functional equations (39-XX) 1 Probability theory and stochastic processes (60-XX) 1 Astronomy and astrophysics (85-XX)
|
Matematicheskie Trudy
RUS ENG JOURNALS PEOPLE ORGANISATIONS CONFERENCES SEMINARS VIDEO LIBRARY PACKAGE AMSBIB
General information Latest issue Archive Impact factor Subscription Search papers Search references RSS Latest issue Current issues Archive issues What is RSS
Mat. Tr.: Year: Volume: Issue: Page: Find
Mat. Tr., 2013, Volume 16, Number 1, Pages 18–27 (Mi mt247)
Homogeneous almost normal Riemannian manifolds
V. N. Berestovskiĭ
Omsk Branch of Sobolev Institute of Mathematics, Siberian Branch of the Russian Academy of Sciences, Omsk, Russia
Abstract: In this article, we introduce a newclass of compact homogeneous Riemannian manifolds $(M=G/H,\mu)$ almost normal with respect to a transitive Lie group $G$ of isometries for which by definition there exists a $G$-left-invariant and an $H$-right-invariant inner product $\nu$ such that the canonical projection $p\colon(G,\nu)\rightarrow(G/H,\mu)$ is a Riemannian submersion and the norm ${|\boldsymbol\cdot|}$ of the product $\nu$ is at least the bi-invariant Chebyshev normon $G$ defined by the space $(M,\mu)$. We prove the following results: Every homogeneous Riemannian manifold is almost normal homogeneous. Every homogeneous almost normal Riemannian manifold is naturally reductive and generalized normal homogeneous. For a homogeneous $G$-normal Riemannian manifold with simple Lie group $G$, the unit ball of the norm ${|\boldsymbol\cdot|}$ is a Löwner–John ellipsoid with respect to the unit ball of the Chebyshev norm; an analogous assertion holds for the restrictions of these norms to a Cartan subgroup of the Lie group $G$. Some unsolved problems are posed.
Key words: Weyl group, naturally reductive Riemannian manifold, Chebyshev norm, homogeneous normal Riemannian manifold, homogeneous generalized normal Riemannian manifold, homogeneous almost normal Riemannian manifold, Cartan subagebra, Löwner–John ellipsoid.
Full text: PDF file (207 kB)
References: PDF file HTML file
English version:
Siberian Advances in Mathematics, 2014, 24:1, 12–17
Bibliographic databases:
UDC: 514.70
Citation: V. N. Berestovskiǐ, “Homogeneous almost normal Riemannian manifolds”, Mat. Tr., 16:1 (2013), 18–27; Siberian Adv. Math., 24:1 (2014), 12–17
Citation in format AMSBIB
\Bibitem{Ber13} \by V.~N.~Berestovski{\v\i} \paper Homogeneous almost normal Riemannian manifolds \jour Mat. Tr. \yr 2013 \vol 16 \issue 1 \pages 18--27 \mathnet{http://mi.mathnet.ru/mt247} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=3156671} \transl \jour Siberian Adv. Math. \yr 2014 \vol 24 \issue 1 \pages 12--17 \crossref{https://doi.org/10.3103/S1055134414010027}
|
## Chi-Squared Test
Let the probabilities of various classes in a distribution be , , ..., . The expected frequency
is a measure of the deviation of a sample from expectation. Karl Pearson proved that the limiting distribution of is (Kenney and Keeping 1951, pp. 114-116).
where is Pearson's Function. There are some subtleties involved in using the test to fit curves (Kenney and Keeping 1951, pp. 118-119).
When fitting a one-parameter solution using , the best-fit parameter value can be found by calculating at three points, plotting against the parameter values of these points, then finding the minimum of a Parabola fit through the points (Cuzzi 1972, pp. 162-168).
References
Cuzzi, J. The Subsurface Nature of Mercury and Mars from Thermal Microwave Emission. Ph.D. Thesis. Pasadena, CA: California Institute of Technology, 1972.
Kenney, J. F. and Keeping, E. S. Mathematics of Statistics, Pt. 2, 2nd ed. Princeton, NJ: Van Nostrand, 1951.
|
# 3D Problem transforming vertices in RHW projection
## Recommended Posts
Hello,
I have a big old project that cannot be converted from D3D_XYZRHW to D3D_XYZ.
The problem is whatever polygons around me distort, but everything in the distance is good.
It seems to happen when NDC space vertex.x <= 0. I guess clipping must be the solution; but I don't know how.
Here is my current code.
VERTEX_DECL LocalToScreen( const Vector& vector ) {
VERTEX_DECL vertex;
vertex.xyz = vector - Game::Eye;
vertex.xyz = Game::CameraOrientation * Game::ProjectionMatrix * vertex.xyz;
vertex.xyz.x = vertex.xyz.x / vertex.xyz.z;
vertex.xyz.y = vertex.xyz.y / vertex.xyz.z;
vertex.xyz.z = 1.0 / vertex.xyz.z;
// This stops polygons around camera distorting; but it fully removes them which
// is not a proper solution. I think I need some kind of clamping.
if ( vertex.z <= 0.0 ) {
return vertex;
}
vertex.xyz.x = vertex.xyz.x * Game::HalfViewportWidth + Game::HalfViewportWidth;
vertex.xyz.y = vertex.xyz.y * -Game::HalfViewportHeight + Game::HalfViewportHeight;
vertex.rhw = vertex.xyz.z;
vertex.xyz.z = vertex.xyz.z * 0.000588; // Legacy. This doesn't seem to do anything.
return vertex;
}
Edited by scragglypoo
## Create an account
Register a new account
• ### Forum Statistics
• Total Topics
628368
• Total Posts
2982293
• ### Similar Content
• Hello! I am looking for a concept artist for a 3d brawl game. I will need sketches done for characters, and arenas. I would prefer a hobbyist for free, but I will pay if the work i am getting is significantly better than a free hobbyists. If interested, please do email me at [email protected]
• I created this model in Maya and have been practicing on the side while studying for school. This may not be the best picture to show faces or line flow and the resolution does not help things either. Still, I would love some opinions on where I can improve on my 3D modeling. Also, this is just a skin model and I spent no time texturing the model or accounting for clothing. Thank you in advance for any feedback.
• By G-Dot
Hello everybody! I've got a little problem. I need to create jetpack action. The main target is when I will press some button on my keybord my character will fly in the sky and stay here for some time then he will remove to the ground. I'm working with Unreal Engine 4 with blueprints.
• I have a very simple vertex/pixel shader for rendering a bunch of instances with a very simple lighting model.
When testing, I noticed that the instances were becoming dimmer as the world transform scaling was increasing. I determined that this was due to the fact that the the value of float3 normal = mul(input.Normal, WorldInverseTranspose); was shrinking with the increased scaling of the world transform, but the unit portion of it appeared to be correct. To address this, I had to add normal = normalize(normal);.
I do not, for the life of me, understand why. The WorldInverseTranspose contains all of the components of the world transform (SetValueTranspose(Matrix.Invert(world * modelTransforms[mesh.ParentBone.Index]))) and the calculation appears to be correct as is.
Why is the value requiring normalization? under);
);
float4 CalculatePositionInWorldViewProjection(float4 position, matrix world, matrix view, matrix projection) { float4 worldPosition = mul(position, world); float4 viewPosition = mul(worldPosition, view); return mul(viewPosition, projection); } VertexShaderOutput VS(VertexShaderInput input) { VertexShaderOutput output; matrix instanceWorldTransform = mul(World, transpose(input.InstanceTransform)); output.Position = CalculatePositionInWorldViewProjection(input.Position, instanceWorldTransform, View, Projection); float3 normal = mul(input.Normal, WorldInverseTranspose); normal = normalize(normal); float lightIntensity = -dot(normal, DiffuseLightDirection); output.Color = float4(saturate(DiffuseColor * DiffuseIntensity).xyz * lightIntensity, 1.0f); output.TextureCoordinate = SpriteSheetBoundsToTextureCoordinate(input.TextureCoordinate, input.SpriteSheetBounds); return output; } float4 PS(VertexShaderOutput input) : SV_Target { return Texture.Sample(Sampler, input.TextureCoordinate) * input.Color; }
• Hey, Im using directx allocate hierarchy from dx9 to use a skinned mesh system.
one mesh will be only the skeleton with all animations others meshes will be armor, head etc, already skinned with skeleton above. No animation, idle position with skin, thats all I want to use the animation from skeleton to other meshes, so this way I can customize character with different head, armor etc. What I was thinking its copy bone matrices from skeleton mesh to others meshes, but Im a bit confused yet what way I can do this.
Thanks.
• 10
• 9
• 13
• 24
• 11
|
# Taking Seats on a Plane
This is a neat little problem that I was discussing today with my lab group out at lunch. Not particularly difficult but interesting implications nonetheless
Imagine there are a 100 people in line to board a plane that seats 100. The first person in line realizes he lost his boarding pass so when he boards he decides to take a random seat instead. Every person that boards the plane after him will either take their "proper" seat, or if that seat is taken, a random seat instead.
Question: What is the probability that the last person that boards will end up in his/her proper seat.
Moreover, and this is the part I'm still pondering about. Can you think of a physical system that would follow this combinatorial statistics? Maybe a spin wave function in a crystal etc...
-
What is unsatisfactory about Moron's answer? – J. M. Nov 19 '10 at 12:22
This is a classic puzzle!
The answer is that the probability that the last person ends in up in his proper seat is exactly $\frac{1}{2}$
The reasoning goes as follows:
First observe that the fate of the last person is determined the moment either the first or the last seat is selected! This is because the last person will either get the first seat or the last seat. Any other seat will necessarily be taken by the time the last guy gets to 'choose'.
Since at each choice step, the first or last is equally probable to be taken, the last person will get either the first or last with equal probability: $\frac{1}{2}$.
Sorry, no clue about a physical system.
-
This is a good intuitive way to think about it. A formal proof is too heavy for a over-a-cup-of-coffee discussion, this is just right. I'll give you the credit since nobody seems to want to take a shot at the physical application – crasic Sep 29 '10 at 8:23
Here is a rephrasing which simplifies the intuition of this nice puzzle.
Suppose whenever someone finds their seat taken, they politely evict the squatter and take their seat. In this case, the first passenger keeps getting evicted (and choosing a new random seat) until, by the time everyone else has boarded, he has been forced by a process of elimination into his correct seat.
This process is the same as the original process except for the identities of the people in the seats, so the probability of the last boarder finding their seat occupied is the same.
When the last boarder boards, the first boarder is either in his own seat or in the last boarder's seat, which have both looked exactly the same (i.e. empty) to the first boarder up to now, so there is no way the poor first boarder could be more likely to choose one than the other.
-
Very nice! (4 more to go...) – Stef Mar 23 at 18:54
This answer also gives an intuitive explanation for the nice result in Byron Schmuland's answer: When the $k$th passenger reaches the plane, there are $n-(k-1)$ empty seats. If the first passenger stands up, he will see that he is in an arbitrary one of $n-k+2$ seats, all of which have looked the same to him so far. So there is a $\frac{1}{n-k+2}$ chance that, when seated, he is occupying the $k$th passenger's seat. – Matt Aug 19 at 2:22
Let's find the chance that any customer ends up in the wrong seat.
For $2\leq k\leq n$, customer $k$ will get bumped when he finds his seat occupied by someone with a smaller number, who was also bumped by someone with a smaller number, and so on back to customer $1$.
This process can be summarized by the diagram $$1\longrightarrow j_1\longrightarrow j_2\longrightarrow\cdots\longrightarrow j_m\longrightarrow k.$$
Here $j_1<j_2<\cdots <j_m$ is any (possibly empty) increasing sequence of integers strictly between $1$ and $k$. The probability of this sequence of events is $${1\over n}\times{1\over(n+1)-j_1}\times {1\over(n+1)-j_2}\times\cdots\times{1\over(n+1)-j_m}.$$
Thus, the probability that customer $k$ gets bumped is $$p(k)={1\over n}\sum\prod_{\ell=1}^m {1\over(n+1)-j_\ell}$$ where the sum is over all sets of $j$ values $1<j_1<j_2<\cdots <j_m<k$. That is, \begin{eqnarray*} p(k)&=&{1\over n}\sum_{J\subseteq\\{2,\dots,k-1\\}}\ \, \prod_{j\in J}{1\over (n+1)-j}\cr &=&{1\over n}\ \,\prod_{j=2}^{k-1} \left(1+{1\over (n+1)-j}\right)\cr &=&{1\over n}\ \,\prod_{j=2}^{k-1} {(n+2)-j\over (n+1)-j}\cr &=&{1\over n+2-k}. \end{eqnarray*}
In the case $k=n$, we get $p(n)=1/2$ as in the other solutions. Maybe there is an intuitive explanation of the general formula; I couldn't think of one.
Added reference: Finding your seat versus tossing a coin by Yared Nigussie, American Mathematical Monthly 121, June-July 2014, 545-546.
-
may i ask 2 questions, 1: how could I get $\sum_{J\subseteq\{2,\dots,k-1\}} \prod_{j\in J}{1\over (n+1)-j} = \prod_{j=2}^{k-1} \left(1+{1\over (n+1)-j}\right)$? 2. the bumping may not start from customer 1, it could start from anyone. e.g. the diagram could be $5\longrightarrow j_1\longrightarrow j_2\longrightarrow\cdots\longrightarrow j_m\longrightarrow k$ if the first person in the line (lost his ticket) seats at seat #5. right? – athos Oct 1 '13 at 1:18
1. $\prod_{i\in I}(1+x_i)=\sum_{J\subseteq I}\prod_{j\in J}x_j$ – Byron Schmuland Oct 1 '13 at 12:13
2. No, any bumping can be traced back to passenger 1. – Byron Schmuland Oct 1 '13 at 12:14
thank you for your explanation. for point 1, after drawing it out i finally understand it. but for point 2, could you please elaborate a bit more? scenario A: the first person in the line bumped into seat #1, customer #1 then bumped into seat #5, this is $1\longrightarrow j_1=5\longrightarrow j_2\longrightarrow\cdots\longrightarrow j_m\longrightarrow k$; Scenario B: the first person in the line bumped into seat #5, customer #5 then bumped on, this is $j_1=5\longrightarrow j_2\longrightarrow\cdots\longrightarrow j_m\longrightarrow k$ -- these are 2 different scenarios right? – athos Oct 1 '13 at 15:05
+1 for the reference! here's a link to it – Matt Aug 19 at 2:09
This analysis is correct, but not complete enough to convince me. For example, why is the fate of the last person settled as soon as the first person's seat chosen? Why will any other seat but the first person's or the last person's be taken by the time the last person boards?
I had to fill in the holes for myself this way...
The last person's fate is decided as soon as anybody chooses the first person's seat (nobody is now in a wrong seat, so everybody else gets their assigned seat, including the last person) or the last person's seat (the last person now won't get their correct seat). Any other choice at any stage doesn't change the probabilities at all.
Rephrasing... at each stage, either the matter gets settled and there is a 50/50 chance it gets settled each way for the last person's seat, or the agony is just postponed. The matter can thus be settled at any stage, and the probabilities at that stage are the only ones that matter -- and they are 50/50 no matter what stage. Thus, the overall probability is 50/50.
-
I don't really have the intuition for this, but I know the formal proof. This is equivalent to showing that the probability that in a permutation of $[n]$ chosen uniformly at random, two elements chosen uniformly at random are in the same cycle is $1/2$. By symmetry, it's enough to show that the probability that $1$ and $2$ are in the same cycle is $1/2$.
There are many ways to show this fact. For example: the probability that $1$ is in a cycle of length $k$ is $1/n$, for $1 \le k \le n$. This is true because the number of possible $k$-cycles containing $1$ is ${n-1 \choose k-1} (k-1)! = (n-1)!/(n-k)!$, and the number of ways to complete a permutation once a $k$-cycle is chosen is $(n-k)!$. So there are $(n-1)!$ permutations of $[n]$ in which $1$ is in a $k$-cycle. Now the probability that $2$ is in the same cycle as $1$, given that $1$ is in a $k$-cycle, is $(k-1)/(n-1)$. So the probability that $2$ is in the same cycle as $1$ is $$\sum_{k=1}^n {k-1 \over n-1} {1 \over n} = {1 \over n(n-1)} \sum_{k=1}^n (k-1) = {1 \over n(n-1)} {n(n-1)\over 2} = 1/2.$$
Alternatively, the Chinese restaurant process with $\alpha = 0, \theta = 1$ generates a uniform random permutation of $[n]$ at the $n$th step; $2$ is paired with $1$ at the second step with probability $1/2$. This is a bit more elegant but requires some understanding of the CRP.
-
Let P(n) denote the probability of the last passenger getting his seat if we begin with n passengers.
Consider the simple case for just 2 seats:
P(2) = 1/2 {first boarder picks his own seat with 1/2 probability}
For n seats: (i) With 1/n probability, the passenger picks the seat of the first passenger, the n'th seat from the end (in which case the last passenger would definitely get his seat). (ii) With 1/n probability, the current passenger picks the seat of the last passenger, first seat from the end (and now, the last passenger can definitely not get his own seat). (iii) Otherwise, the passenger picks some other seat (say #i from the end) among the n-2 remaining seats (with probability 1/n), continuing the dilemma. The problem now reduces to the initial problem with i seats.
Therefore, P(n) = (1/n * (1)) + (1/n * (0)) + Sum {(1/n) * P(i), 2 < i < n};
= 1/n + 1/n {Sum P(i), 2 < i < n } --(1)
Similarly,((n-1)/n)P(n-1) = 1/n + 1/n Sum { P(i) , 2 < i < n-1 }
= [ 1/n + 1/n Sum { P(i) , 2 < i < n } ] - 1/n P(n-1)
=> (n-1)/n P(n-1) = P(n) -1/n P(n-1)
=> P(n) = P(n-1)
=> P(m) = P(n-1) , for all integers m > n-1
Since P(2) = 1/2, Therefore P(m) = 1/2 for all integers m > 2
-
I tried to synthesize the proof for myself from stuff I've read to get rid of all calculations (somehow I found the argument that "each person's choice is 50-50 between good and bad once we throw away the irrelevant stuff" convincing but hard to formalize).
Claim 1: when the last passenger boards, the remaining empty seat will either be his own or the first passenger's.
Proof: If the remaining empty seat belongs to passenger $n \neq 1, 100$, then passenger $n$ should have sat there.
Claim 2: if at any time a passenger other than the final passenger finds her seat occupied, then both the seat assigned to the first and to the final passenger will be free.
Proof: If not, then there is a nonzero probability that after this passenger makes a decision, both the first and last seats will be occupied. This contradicts Claim 1.
Claim 3: There is a bijection between the set of admissible seatings in which the final passenger gets his seat and the set where he doesn't.
Proof: Suppose for an admissible seating $S$ that passenger $n$ is the first to choose one of {first passenger's seat, last passenger's seat}. By claim $2$, there is a unique admissible seating $T$ which agrees with $S$ except that passenger $n$ and the final passenger make the opposite decision ($T$ matches $S$ until passenger $n$ sits, then by Claim 2, $T$ must continue to match $S$ until the final passenger).
-
|
27 views
Any continuous function from the open unit interval $(0, 1)$ to itself has a fixed point.
asked in Calculus | 27 views
|
For MGDrivE2 simulations of mosquito lifecycle dynamics in a single node or metapopulation network, this function sums over the male mate genotype to get population trajectories of adult female mosquitoes by their genotype.
summarize_females(out, spn_P)
## Arguments
out the output of sim_trajectory_R the places of the SPN, see details
## Value
a 3 to 5 column dataframe for plotting with ggplot2
## Details
The places (spn_P) object is generated from one of the following: spn_P_lifecycle_node or spn_P_lifecycle_network.
The return object depends on the data provided. If the simulation was only 1 node, then no node designation is returned. If only one repetition was performed, no rep designation is returned. Columns always returned include: time, genotype, and value.
For examples of using this function, this or any vignette which visualizes output: vignette("lifecycle-node", package = "MGDrivE2")
|
3 removed spurious "quantum"
General relativity is nonrenormalizable.
What this actually means is that there is not a semigroup parametrized by some scale (length or wavenumber) that allows the equations of gravity at one scale to be rewritten as identical-looking equations with different parameters at another scale. The existence of such a semigroup is what renormalizability means. The semigroup is called the renormalization group.
The best way to understand renormalization intuitively is to consider the real-space renormalization of the 1D Ising model (or the 2D Ising model on a triangular lattice), or even simpler examples: for instance, an charged particle in an electrolyte attracts a shell of counterions, which in turn attract another shell of counter-counter-ions, etc. The net effect is to transform the normal scale-invariant potential into a scale-dependent potential. This particular example is called Debye screening.
Although many nonabelian gauge theories (such as pure $SU(N)$ Yang-Mills or the Standard Model) are renormalizable (although this has not been proven at a strictly mathematical degree of rigor, but see, e.g., BRST) and general relativity is also a (classical) gauge theory, the space of field configurations or gauge equivalence classes $A/G$ in GR is not well understood, principally because the gauge group (diffeomorphisms) is infinite-dimensional. This also complicates attempts to take the approach of lattice gauge theory for GR.
Regarding the lattice approach: for a "nice" nonabelian quantum gauge theory (such as the Standard Model) the gauge equivalence classes are better understood, but as I mentioned above, still not perfectly. Indeed, showing that discretized SU(2) quantum gauge theory has a well-defined limit (in particular, one that depends only on the size of the discretization and not its detailed structure) is half of a Millennium Problem. This has only been done at the level of rigor of mathematical physics, not mathematics.
Returning to the original focus, the easiest way to see that gravity is nonrenormalizable is the appearance of higher-order (< 4) terms in the action (this is called "power counting").
What this actually means is that there is not a semigroup parametrized by some scale (length or wavenumber) that allows the equations of gravity at one scale to be rewritten as identical-looking equations with different parameters at another scale. The existence of such a semigroup is what renormalizability means. The semigroup is called the renormalization group.
The best way to understand renormalization intuitively is to consider the real-space renormalization of the 1D Ising model (or the 2D Ising model on a triangular lattice), or even simpler examples: for instance, an charged particle in an electrolyte attracts a shell of counterions, which in turn attract another shell of counter-counter-ions, etc. The net effect is to transform the normal scale-invariant potential into a scale-dependent potential. This particular example is called Debye screening.
Although many nonabelian gauge theories (such as pure $SU(N)$ Yang-Mills or the Standard Model) are renormalizable (although this has not been proven at a strictly mathematical degree of rigor, but see, e.g., BRST) and general relativity is also a (classical) gauge theory, the space of field configurations or gauge equivalence classes $A/G$ in GR is not well understood, principally because the gauge group (diffeomorphisms) is infinite-dimensional. This also complicates attempts to take the approach of lattice gauge theory for GR.
Regarding the lattice approach: for a "nice" nonabelian quantum gauge theory (such as the Standard Model) the gauge equivalence classes are better understood, but as I mentioned above, still not perfectly. Indeed, showing that discretized SU(2) quantum gauge theory has a well-defined limit (in particular, one that depends only on the size of the discretization and not its detailed structure) is half of a Millennium Problem. This has only been done at the level of rigor of mathematical physics, not mathematics.
Returning to the original focus, the easiest way to see that gravity is nonrenormalizable is the appearance of higher-order (< 4) terms in the action (this is called "power counting").
1
General relativity is nonrenormalizable.
|
# THE $^{18}O_{3}$ ABSOPTION SPECTRUM $3.3 \mu m$.
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/17980
Files Size Format View
1989-TE-09.jpg 111.2Kb JPEG image
dc.creator Flaud, J.- M. en_US dc.creator Camy-Peyret, C. en_US dc.creator Rinsland, C. P. en_US dc.creator Devi, V. Malathy en_US dc.creator Smith, M. A. H. en_US dc.creator Barbe, A. en_US dc.date.accessioned 2006-06-15T18:24:00Z dc.date.available 2006-06-15T18:24:00Z dc.date.issued 1989 en_US dc.identifier 1989-TE-9 en_US dc.identifier.uri http://hdl.handle.net/1811/17980 dc.description Author Institution: Laboratoire de Physique Mol\'eculaire et Atmosph\'erique, Tour 13. Universit\'e Pierre et Marie Curie et CNRS; Atmospheric Sciences Division, MS 401A, NASA Langley Research Center; Department of Physics, College of William and Mary; Groupe de Spectrom\'etrie Mol\'eculaire et Atmosph\'erique, Facult\'e des Sciences en_US dc.description.abstract Spectra of ozone generated from a 99.98\% pure oxygen sample were recorded covering the $2800-3500 cm^{-1}$ at $0.010 cm^{-1}$ resolution with the McMath Fourier transform interferometer at the National Solar Observatory on Kitt Peak. The first extensive analysis of the 4 bands $3\nu_{3}, 2\nu_{3} +\nu_{1}, 2\nu_{1} +\nu_{3}$ and $3\nu_{1}$ absorbing in this spectral region has been performed leading to a precise set of rotational energy levels for the 4 vibrational interacting states ((003), (201), (102), (300)). These experimental levels were then reproduced with the aid of a Hamiltonian taking into account the various resonances (Coriolis, Darling-Dennison) affecting the levels. Moreover, using the equivalent width method, intensities of lines belonging to the four bands were measured with a relative uncertainty of about 10\% and used to determine the corresponding transition moment constants. Finally, from the vibrational energies, rotational and coupling constants as well as transition moment constants a complete and precise list of the line positions and intensities of the $3\nu_{3}, 2\nu_{3} + \nu_{1}, \nu_{3} + 2\nu_{1}$ and $3\nu_{1}$ bands of $^{16}O_{3}$ has been generated. en_US dc.format.extent 113886 bytes dc.format.mimetype image/jpeg dc.language.iso English en_US dc.publisher Ohio State University en_US dc.title THE $^{18}O_{3}$ ABSOPTION SPECTRUM $3.3 \mu m$. en_US dc.type article en_US
|
1. ## Double Integral Problem
Hey there I need some help with this problem:
given: R is the region of surface
f(m,n):=R (x^m)(y^n) dxdy
evaluate:
limm,nf(m,n)
please explain each step you did so I can understand how to do this problem thanks!
2. ## Re: Double Integral Problem
What do you mean by region of surface?
3. ## Re: Double Integral Problem
hey never mind i figured it out thanks for your help.
|
## Optimality and duality for multiple-objective optimization under generalized type I univexity.(English)Zbl 1090.90173
Summary: We extend the classes of generalized type I vector-valued functions introduced by B. Aghezzaf and M. Hachimi [J. Glob. Optim. 18, No. 1, 91–101 (2000; Zbl 0970.90087)] to generalized univex type I vector-valued functions and consider a multiple-objective optimization problem involving generalized type I univex functions. A number of Kuhn-Tucker type sufficient optimality conditions are obtained for a feasible solution to be an efficient solution. The Mond-Weir and general Mond-Weir type duality results are also presented.
### MSC:
90C29 Multi-objective and goal programming 90C46 Optimality conditions and duality in mathematical programming
### Keywords:
multiple-objective optimization; optimality conditions
Zbl 0970.90087
Full Text:
### References:
[1] Aghezzaf, B.; Hachimi, M., Generalized invexity and duality in multiobjective programming problems, J. global optim., 18, 91-101, (2000) · Zbl 0970.90087 [2] Antczak, T., ($$p, r$$)-invex sets and functions, J. math. anal. appl., 263, 355-379, (2001) · Zbl 1051.90018 [3] Antczak, T., On $$(p, r)$$-invexity type nonlinear programming problems, J. math. anal. appl., 264, 382-397, (2001) · Zbl 1052.90072 [4] Antczak, T., Multiobjective programming under d-invexity, European J. oper. res., 137, 28-36, (2002) · Zbl 1027.90076 [5] Bector, C.R.; Suneja, S.K.; Gupta, S., Univex functions and univex nonlinear programming, (), 115-124 · Zbl 0802.90092 [6] Brandao, A.J.V.; Rojas-Medar, M.A.; Silva, G.N., Optimality conditions for Pareto nonconvex programming in Banach spaces, J. optim. theory appl., 103, 65-73, (1999) · Zbl 0945.90080 [7] Brandao, A.J.V.; Rojas-Medar, M.A.; Silva, G.N., Invex nonsmooth alternative theorem and applications, Optimization, 48, 239-253, (2000) · Zbl 0960.90082 [8] Chen, X., Optimality and duality for the multiobjective fractional programming with the generalized ($$F, \rho$$) convexity, J. math. anal. appl., 273, 190-205, (2002) · Zbl 1121.90409 [9] Craven, B.D., Invex functions and constrained local minima, Bull. austral. math. soc., 24, 357-366, (1981) · Zbl 0452.90066 [10] Egudo, R.R., Efficiency and generalized convex duality for multiobjective programs, J. math. anal. appl., 138, 84-94, (1989) · Zbl 0686.90039 [11] Hanson, M.A., On sufficiency of the kuhn – tucker conditions, J. math. anal. appl., 80, 545-550, (1981) · Zbl 0463.90080 [12] Hanson, M.A.; Mond, B., Necessary and sufficient conditions in constrained optimization, Math. programming, 37, 51-58, (1987) · Zbl 0622.49005 [13] Hanson, M.A.; Pini, R.; Singh, C., Multiobjective programming under generalized type I invexity, J. math. anal. appl., 261, 562-577, (2001) · Zbl 0983.90057 [14] Jeyakumar, V.; Mond, B., On generalized convex mathematical programming, J. austral. math. soc. ser. B, 34, 43-53, (1992) · Zbl 0773.90061 [15] Kaul, R.N.; Suneja, S.K.; Srivastava, M.K., Optimality criteria and duality in multiple objective optimization involving generalized invexity, J. optim. theory appl., 80, 465-482, (1994) · Zbl 0797.90082 [16] Kim, D.S.; Kim, A.L., Optimality and duality for nondifferentiable multiobjective variational problems, J. math. anal. appl., 274, 255-278, (2002) · Zbl 1035.49026 [17] Kim, D.S.; Lee, W.J., Symmetric duality for multiobjective variational problems with invexity, J. math. anal. appl., 218, 34-48, (1998) · Zbl 0899.90141 [18] Kim, M.H.; Lee, G.M., On duality for nonsmooth Lipschitz optimization problems, J. optim. theory appl., 110, 669-675, (2001) · Zbl 0987.90072 [19] Kim, D.S.; Lee, G.M.; Lee, W.J., Symmetric duality for multiobjective variational problems with pseudo-invexity, (), 106-117 · Zbl 0925.90336 [20] Kuk, H.; Lee, G.M.; Tanino, T., Optimality and duality for nonsmooth multiobjective fractional programming with generalized invexity, J. math. anal. appl., 262, 365-375, (2001) · Zbl 0989.90117 [21] Lai, H.C.; Liu, J.C.; Tanaka, K., Necessary and sufficient conditions for minimax fractional programming, J. math. anal. appl., 230, 311-328, (1999) · Zbl 0916.90251 [22] Maeda, T., Constraint qualification in multiobjective optimization problems: differentiable case, J. optim. theory appl., 80, 483-500, (1994) · Zbl 0797.90083 [23] Mangasarian, O.L., Nonlinear programming, (1969), McGraw-Hill New York · Zbl 0194.20201 [24] Marusciac, I., On fritz John optimality criterion in multiobjective optimization, Anal. numer. theorie approx., 11, 109-114, (1982) · Zbl 0501.90081 [25] S.K. Mishra, V-invex functions and applications to multiobjective programming problems, Ph.D. Thesis, Banaras Hindu University, Varanasi, India, 1995 [26] Mishra, S.K., Generalized proper efficiency and duality for a class of nondifferentiable multiobjective variational problems with v-invexity, J. math. anal. appl., 202, 53-71, (1996) · Zbl 0867.90097 [27] Mishra, S.K., Second order generalized invexity and duality in mathematical programming, Optimization, 42, 51-69, (1997) · Zbl 0914.90239 [28] Mishra, S.K., On multiple-objective optimization with generalized univexity, J. math. anal. appl., 224, 131-148, (1998) · Zbl 0911.90292 [29] Mishra, S.K., Multiobjective second order symmetric duality with cone constraints, European J. oper. res., 126, 675-682, (2000) · Zbl 0971.90103 [30] Mishra, S.K., Second order symmetric duality with F-convexity, European J. oper. res., 127, 507-518, (2000) · Zbl 0982.90063 [31] Mishra, S.K., Pseudoconvex complex minmax programming, Indian J. pure appl. math., 32, 205-213, (2001) · Zbl 0980.90098 [32] Mishra, S.K.; Giorgi, G., Optimality and duality wit generalized semi-univexity, Opsearch, 37, 340-350, (2000) · Zbl 1141.90573 [33] Mishra, S.K.; Mukherjee, R.N., Generalized convex composite multiobjective nonsmooth programming and conditional proper efficiency, Optimization, 34, 53-66, (1995) · Zbl 0855.90115 [34] Mishra, S.K.; Mukherjee, R.N., Generalized continuous nondifferentiable fractional programming problems with invexity, J. math. anal. appl., 195, 191-213, (1995) · Zbl 0846.90108 [35] Mishra, S.K.; Rueda, N.G., Higher-order generalized invexity and duality in mathematical programming, J. math. anal. appl., 247, 173-182, (2000) · Zbl 1056.90136 [36] Mishra, S.K.; Rueda, N.G., Higher-order generalized invexity and duality in nondifferentiable mathematical programming, J. math. anal. appl., 272, 496-506, (2002) · Zbl 1175.90318 [37] Pini, R.; Singh, C., A survey of recent (1985-1995) advances in generalized convexity with applications to duality theory and optimality conditions, Optimization, 39, 311-360, (1997) · Zbl 0872.90074 [38] Rueda, N.G.; Hanson, M.A., Optimality criteria in mathematical programming involving generalized invexity, J. math. anal. appl., 130, 375-385, (1988) · Zbl 0647.90076 [39] Rueda, N.G.; Hanson, M.A.; Singh, C., Optimality and duality with generalized convexity, J. optim. theory appl., 86, 491-500, (1995) · Zbl 0838.90114 [40] Zhian, L.; Qingkai, Y., Duality for a class of multiobjective control problems with generalized invexity, J. math. anal. appl., 256, 446-461, (2001) · Zbl 1016.90043
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
Whom do we consider as moving in special theory of relativity?
1. Jul 7, 2008
jason_bourne
hey guys... i have understood the part of special theory of relativity where they explain why the time slows down for a guy moving with greater speed.. or atleast i think i have understood. But the problem is that suppose two guys are in the space one guy is moving at a high speed and the other one is standing still .. so according to the theory the one whos moving his clock should go slow than the one who is standing in one place... but the point is as to which person should i considered moving or standing or moving according to normal relativity concept.. i may consider the one standing as moving and vice versa .. so wen they compare their watches whose watch should have delayed???? how one decides this????
2. Jul 7, 2008
Fredrik
Staff Emeritus
There's no preferred rest frame. They are both right when they say that the other guy's clock is ticking at a slower rate. (If you think this can't possibly be true, you are wrong. Check out the threads about the twin paradox if you want to know more).
By the way, what a clock measures is the "length" of the path it takes through space-time, with "length" defined in a funny way. You add up contributions of the form $\sqrt{dt^2-dx^2}$ along the path, so movement in "space" makes the path shorter.
3. Jul 7, 2008
HallsofIvy
Staff Emeritus
The whole point of "relativity" is that it does not matter. The laws of physics are the same and will give the same result no matter which frame of reference you take.
(Notice, here, each person saying that the other person's clock is ticking slower is the "same result".)
4. Jul 7, 2008
5. Jul 8, 2008
Mentz114
If it requires a long explanation it's probably wrong. Simultaneity is an observer dependent phenomenon and is an effect of relative motion and position. It is not a cause of anything.
Know someone interested in this topic? Share this thread via Reddit, Google+, Twitter, or Facebook
Have something to add?
|
Article | Open | Published:
# A new look at effective interactions between microgel particles
## Abstract
Thermoresponsive microgels find widespread use as colloidal model systems, because their temperature-dependent size allows facile tuning of their volume fraction in situ. However, an interaction potential unifying their behavior across the entire phase diagram is sorely lacking. Here we investigate microgel suspensions in the fluid regime at different volume fractions and temperatures, and in the presence of another population of small microgels, combining confocal microscopy experiments and numerical simulations. We find that effective interactions between microgels are clearly temperature dependent. In addition, microgel mixtures possess an enhanced stability compared to hard colloid mixtures - a property not predicted by a simple Hertzian model. Based on numerical calculations we propose a multi-Hertzian model, which reproduces the experimental behavior for all studied conditions. Our findings highlight that effective interactions between microgels are much more complex than usually assumed, displaying a crucial dependence on temperature and on the internal core-corona architecture of the particles.
## Introduction
Microgels are hybrid particles with a dual colloid-polymer nature, belonging to the class of so-called soft colloids1,2. A microgel consists of a mesoscopic cross-linked polymer network, which can deform, shrink or interpenetrate with another microgel3,4. Often, as a result of the synthesis conditions, a particle possesses a denser core and a more loosely crosslinked corona5,6, which also includes so-called dangling ends7,8. Microgels are considered smart colloidal materials: in response to external parameters such as temperature, pH, ionic strength, light, or electric field (depending on the nature of the polymers) a particle is able to change its size as well as other connected properties such as the polarizability9 or elasticity of the particle10,11. Thus, they are promising for and already employed in several applications, such as photonic crystals12,13, drug delivery systems14,15,16, or nanotechnologies17. In addition, thanks to their high tunability and to their softness, microgels represent ideal model systems to study phase transitions18,19,20,21,22 and glass or jamming transitions in dense colloidal dispersions23,24,25.
In the case of thermoresponsive microgels made of poly(N-isopropylacrylamide) (PNIPAM), the soft colloids are swollen below the volume phase transition temperature (VPTT) of 32 °C26,27,28. At temperatures T > VPTT, the swollen microgel network collapses and expels a significant fraction of water27,29. Thus, temperature is readily used as a convenient parameter to control in situ the size and the volume fraction of microgel samples18,19,20,21,25,30. In doing so, however, one implicitly assumes that such a temperature change does not alter the effective interactions between microgels. Early studies have proposed to model effective interactions between swollen microgel particles (below the VPTT) in terms of a hard-sphere-like potential, with a modified/effective hard sphere diameter5,31,32, whereas for T > VPTT microgels should behave as attractive spheres32. Recent research shows that the interactions between swollen microgels can be more accurately reproduced by a soft Hertzian repulsion in the fluid region of the phase diagram33, while brush-like models can be used for highly packed samples34,35. All these different models point to the surprising fact that there is not yet a unifying picture which can describe microgels’ interactions. Clearly there is the need to carefully characterize the interparticle potential under different experimental conditions across the entire phase space. Such a step is necessary not only for a correct use of microgel systems in their widespread applications but also from a fundamental point of view: only fully characterized systems should be used to work on open problems in condensed matter physics, such as glass transition and jamming.
In this work, we investigate the effective interactions of microgels in a wide region of the fluid regime, and as a function of temperature for T < VPTT, i.e., for swollen microgels. We study both one-component microgel suspensions and binary mixtures in which much smaller microgels are added, inducing an effective depletion on the large ones. For each state point, experimental structural and dynamical information was compared to its simulated counterpart. We confirm the applicability of a soft repulsive Hertzian interaction potential for the one-component system, even at elevated temperatures. However, the Hertzian model predicts the instantaneous aggregation of the large microgels in the mixtures, which experience a depletion attraction. In contrast, all mixtures are stabilized by the core-repulsion of the microgels. Based on numerical calculations of the effective interaction potential, we develop a multi-Hertzian (MH) model, which ascribes a different elasticity to corona–corona interactions—reflecting the simple Hertzian interactions between microgels at moderate packing fractions—and core–corona or core-core interactions. The MH model captures the structure and the dynamical behavior of the studied binary mixtures at all 48 investigated different state points. Evidently, it is imperative to consider the variation of the interparticle potential upon changing temperature and, crucially, the internal structure of the microgels to correctly describe their behavior, particularly for conditions where microgels are forced together—for example, in electric field applications or in the dense glassy regime, which is most widely studied in the microgel literature. Furthermore, our results raise fundamental questions on the widespread practice to tune the volume fraction via a temperature change without accounting for the different nature of the system.
## Results
### Structure and dynamics of one-component microgel systems
We start by analyzing the behavior of one-component microgel suspensions (also referred to as ‘colloid’ samples) with weight fractions wt% = 2.2, 3.3, 4.4. The radial distribution function (g(r)) and mean squared displacements (MSD) of the samples were measured at four different temperatures in the range 15–30 °C. We find, as expected, that an increase in particle concentration leads to an increase in the structural correlations for all T (Fig. 1a). An increase in temperature is associated with the deswelling of particles, which is quantified by additional dynamic light scattering measurements (Supplementary Fig. 1, Supplementary Note 1), and thus causes a decrease of the volume fraction of the sample. This leads to a reduced structural order as well as to the shift of the main peak of the g(r) toward smaller distances. From the trajectories obtained with confocal laser scanning microscopy (CLSM) we also reconstruct the MSDs in a time window within the long-time diffusive regime of the microgels (Fig. 1b). We find that increasing temperature speeds up particle diffusion, due to the reduction in volume fraction as well as to the faster thermal motion and to a reduction in solvent viscosity.
In order to describe the experimental behavior, we use the soft Hertzian-type repulsion which has been previously shown to accurately describe microgel interactions in the fluid phase at 15 °C33,36. The (colloid-colloid) Hertzian potential $$V_{{\mathrm{cc}}}^{\mathrm{H}}(r) = U_{{\mathrm{cc}}}(1 - r/\sigma _{{\mathrm{eff}}})^{5/2}\theta (\sigma _{{\mathrm{eff}}} - r)$$ where θ(r) is the Heaviside step function, depends on two control parameters: the effective colloid diameter σeff and the interaction strength at full overlap Ucc. We fix the former to be equal to 2RH, where RH is the (experimentally determined) hydrodynamic radius of the particles at each considered T (Supplementary Fig. 1, Supplementary Note 1). Next, we adjust the colloid volume fraction ϕeff,c at T = 15 °C around the value predicted by viscometry (see Methods, Supplementary Fig. 2, Supplementary Note 2) and we vary Ucc until a good correspondence is found with the experimental g(r)s.
We find, in line with previous work with slightly different microgels33, that the interaction strength at T = 15 °C is Ucc = 400kBT and the three colloid packing fractions, that will serve as a basis also for the binary mixtures discussed later on, are ϕeff,c = 0.26, 0.37 and 0.49 at 15 °C. To model the variation in temperature, the volume fractions are changed according to the deswelling of the microgels (Supplementary Fig. 1, Supplementary Note 1), and again we vary the interaction strength Ucc until the experimental radial distribution functions are well reproduced also at higher T. Consequently, we find that the numerical and experimental g(r) are in good agreement for all investigated state points. In particular, the positions of all peaks is well-captured and the secondary peaks are quantitatively reproduced, while some deviations are observed close to the main peak. However, there is no systematic trend of such deviations with respect to packing fraction, as shown and described in Supplementary Fig. 3 and Supplementary Note 3. This suggests that the discrepancy is mostly driven by data noise. A systematic worsening of the agreement is found at T = 30 °C, where lower spatial resolution of the CLSM in the z-direction and the rapid Brownian motion of the particles leads to a reduction in the peak height and a broadening of the g(r) data for values to the left of the first peak33.
The agreement between experimental and numerical data indicates that the Hertzian model is able to describe the structure of the swollen microgel system in the range of investigated packing fractions. Furthermore, we find that Ucc approximately follows a linear dependence on temperature (Supplementary Fig. 5, Supplementary Note 5). The fact that raising the temperature makes PNIPAM more hydrophobic might give rise to expectations that by increasing T microgels become more and more attractive29,32. By contrast, our results show that the Hertzian repulsion is an increasing rather than decreasing function of temperature, at least up to 30 °C (Fig. 1c), in agreement with static light scattering experiments previously obtained with microgels with a lower crosslink density32. Only close to the VPT and beyond, outside of the regime explored in this work, attractive interactions become dominant32.
A further test of the Hertzian model can be made by comparing experimental and numerical MSDs. To this aim we use Brownian Dynamics (BD) simulations, which show that that the Hertzian model is also able to reproduce the variation of the MSDs (Fig. 1b) with T and ϕeff,c in the investigated regime. The direct comparison between the numerical and experimental self-diffusion coefficients is reported in Supplementary Fig. 4 and Supplementary Note 4.
It is particularly interesting to directly compare samples with the same effective volume fraction but at different temperatures, i.e. samples with unequal number density, as shown in Fig. 2. Comparing the experimental data for g(r), we find a weak but detectable increase of the correlation with increasing temperature. Indeed, at the higher T the first two peaks increase their height and shift towards larger separations, the in-between minimum deepens and a weak oscillation beyond r/σeff = 2 appears. Not only the structure of the system is affected, but also the dynamics. After appropriately rescaling the data to take into account the different hydrodynamic radii and zero-colloid limit diffusion coefficients at each T, the experimental MSDs show a marked difference between the two samples: the T = 25 °C system is much slower than the one at lower temperature. Thus, at higher T the system is more structured, which is consistent with the stronger Hertzian repulsion that we have determined within our theoretical analysis. However, it is important to stress that the present evidence is based solely on experimental data and does not rely on the particular choice of any model. Our measurements clearly show that the interaction potential between the particles changes as a function of temperature, even within the swollen regime (15–30 °C).
To summarize, we have found that a temperature-dependent Hertzian model can be used to correctly capture both the structure and the dynamics of one-component microgel suspensions in the investigated temperature and packing fraction range. Importantly, the Hertzian repulsion is found to increase with temperature. These findings directly confirm the hypothesis that, by changing the temperature, not only the packing fraction is varied but also the interaction potential is considerably affected. This is particularly important for studies in which the temperature is used as a facile way to tune the effective volume fraction of soft microgels, where these temperature-dependent changes in interparticle interactions should be carefully considered.
### The Hertzian model poorly describes microgel mixtures
We now turn to analyze mixtures of large (colloid) and small (depletant) microgels. The very small size ratio RH,depletant/RH,colloid changes very little, i.e. from 0.055 to 0.060, within the investigated temperature range (Supplementary Fig. 1, Supplementary Note 1). In this framework, it is possible to derive an effective interaction potential for large microgels only, integrating out the small particles’ degrees of freedom37. The small microgels thus induce a depletion interaction between the colloids.
We investigate nine colloid-depletant mixtures with colloid wt% = 2.2, 3.3, 4.4 and depletant wt% = 0.26, 0.54, 0.81. We start by analyzing structural correlations (Fig. 3). In the presence of depletants, the colloidal particles show an increased attraction: the first maximum of the g(r) increases in height and becomes asymmetric. In addition, the nearest neighbor distance, characterized by the position of such maximum, decreases with the addition of depletants.
A striking result from the experiments is that the depletion attraction is not as strong as expected: all studied mixtures are surprisingly stable and fluid-like. Comparing with recent results for a binary mixture of hard spheres (colloids) and microgels (depletants), phase separation was observed well within the currently investigated range of depletant concentrations38. Furthermore, theoretical models for depletion among soft particles would predict a strongly enhanced depletion attraction as compared to that occurring in corresponding hard-sphere systems39. On the contrary, our experimental findings show that the effect of the depletion attraction is small. This is confirmed by the variation of the MSDs with added depletants (Fig. 4): upon increasing depletant concentration we only observe a moderate slowing down of the diffusion.
In order to describe the observed behavior, we start by modeling the binary mixtures at T = 15 °C using again the Hertzian model. Thus, the total interaction in the mixture amounts to $$V_{{\mathrm{tot}}} = V_{{\mathrm{cc}}}^{\mathrm{H}} + V_{{\mathrm{cd}}}^{\mathrm{H}} + V_{{\mathrm{dd}}}^{\mathrm{H}}$$, where the three terms are the direct colloid-colloid interaction, the colloid-depletant interaction and the direct depletant-depletant interaction, respectively. For the first term $$V_{{\mathrm{cc}}}^{\mathrm{H}}$$, we use the previously established model in the absence of depletants, with interaction strength Ucc = 400kBT. To estimate the depletant-depletant term we rely on additional static light scattering measurements for the small microgels (Supplementary Fig. 6, Supplementary Note 6), which lead us to an estimated Hertzian interaction strength at contact of $$U_{{\mathrm{dd}}} \simeq 100k_{\mathrm{B}}T$$.
Assuming additive interactions in the mixture, the cross-interaction strength between large and small microgels would be Ucd = 250kBT. Since simulations of the full binary system are rather costly at the small investigated size ratios, we proceed by assuming ideal depletant-depletant interactions, which simplifies the theoretical description in terms of an (effective) one-component system. This assumption is justified by the small size as well as by the very soft interactions between depletants. The interactions between large microgels can thus be calculated as $$V_{{\mathrm{eff,cc}}}^{\mathrm{H}} = V_{{\mathrm{cc}}}^{\mathrm{H}} + V_{{\mathrm{depl}}}$$, where Vdepl is the additional depletion term induced by the small microgels which depend only on the cross-interactions $$V_{{\mathrm{cd}}}^{\mathrm{H}}$$ and on the depletant volume fraction ϕeff,d, as explained in the Methods.
The resulting interaction potential $$V_{{\mathrm{eff,cc}}}^{\mathrm{H}}$$ is far too attractive (Supplementary Fig. 7a, Supplementary Note 7) even at very low ϕeff,d, independently on the choice of Ucd. Indeed, even considering rather low (and strongly non-additive) depletant-colloid interactions, the resulting effective potential would lead to instantaneous aggregation between the colloids. In contrast, all studied binary mixtures are experimentally stable. Thus, the Hertzian repulsion model dramatically fails in capturing the behavior of the particles once we add even the smallest amount of attractive depletion.
### The multi-Hertzian model for microgel-microgel interactions
The Hertzian model fails to describe binary mixtures, because its soft repulsion is too weak to counteract the depletion attraction. Indeed, for soft, penetrable particles, we have to consider interactions down to r → 0, where the depletion attraction can become very large39. We thus need to model the repulsion between microgels in a more realistic way, taking into account that the density profiles for individual microgels studied in this work show a core-corona structure5,7,8. Hence, the addition of depletion interactions allows us to reveal the ‘hidden’ effect of the microgel core even without directly probing too dense regimes.
In a recent numerical work, some of us have addressed the question of the validity of the Hertzian model by performing numerical simulations of realistic in silico microgel particles8,40. We have shown that the Hertzian predictions only hold up to repulsion strengths of ≈ 6kBT and to packing fractions of order unity. These results confirm that, for one-component microgels in the range of ϕ investigated here, we can successfully describe the system properties with the Hertzian model, with the strength of the repulsion being linked to the elastic moduli of the microgels, which can be computed independently40. For smaller separations, when the repulsion between two microgels sensibly exceeds the thermal energy, the interaction acquires a clear non-Hertzian nature, as shown in Fig. 5a. Interestingly, here we find that the full dependence of the effective interaction on the microgel-microgel separation can be fitted to a cascade (three in the example shown in Fig. 5) of Hertzian potentials. The very good quality of the fit can be understood in terms of the microscopic architecture of the microgel, which can be considered to be composed by a sequence of more and more dense shells, each of them corresponding to a different internal elasticity and thus to a different Hertzian contribution. Thus, while the Hertzian model is only able to capture the interactions between the outer parts of the coronas of the two microgels, stronger repulsions need to be considered in order to include core-corona and core-core contributions. Such a multi-Hertzian model is able to describe the numerically calculated potential up to the smallest simulated particle-particle distances, which correspond to strengths of order 200kBT for the considered microgel.
We thus apply the MH model (schematically illustrated in Fig. 5) to the investigated binary mixtures, fixing most of the model parameters according to experimental data as described in Methods. It turns out that we need to take into account four successive shells, reading as
$${\begin{array}{*{20}{l}} {V_{{\mathrm{cc}}}^{{\mathrm{MH}}}(r)} \hfill & = \hfill & {U_{{\mathrm{cc}}}(1 - r/\sigma _{{\mathrm{eff}}})^{5/2}\theta (\sigma _{{\mathrm{eff}}} - r) + U_{{\mathrm{corona}}}(1 - r/\sigma _{{\mathrm{corona}}})^{5/2}\theta (\sigma _{{\mathrm{corona}}} - r)} \hfill \\ {} \hfill & {} \hfill & { + U_{{\mathrm{mid}}}(1 - r/\sigma _{{\mathrm{mid}}})^{5/2}\theta (\sigma _{{\mathrm{mid}}} - r) + U_{{\mathrm{core}}}(1 - r/\sigma _{{\mathrm{core}}})^{5/2}\theta (\sigma _{{\mathrm{core}}} - r)} \hfill \end{array}}$$
(1)
where the outermost shell, extending up to σeff, coincides with the Hertzian model of strength Ucc. This ensures that, in the investigated regime, the behavior of the one-component microgel system is the same for the MH model and for the Hertzian model (Supplementary Fig. 8, Supplementary Note 8). The size of the innermost shell is set by σcore, which is the experimentally determined core size and indicates the onset of core-core interactions with a very large repulsion strength Ucore. Because the transition from the core to the corona is gradual, we introduce an intermediate shell at the midpoint σmid, signaling core-corona interactions. We find that the introduction of an additional elasticity within the outer shell (starting at σcorona, the midpoint of the corona) is necessary to reproduce the experimental data to differentiate the contribution of the dangling ends7 of the order of $$\sim k_{\mathrm{B}}T$$ from the corona one. This turns out to be slightly different from the numerical result in Fig. 5a, probably due to the small size of the investigated microgels in simulations and to the absence of true dangling ends in this representation. At each of the characteristic lengths of the potential (see Fig. 5b), an associated interaction strength is estimated by simple arguments (see Methods), except for Ucorona, which is adjusted to match the experimental data. The obtained strengths are in qualitative agreement with those resulting from the MH fit of the calculated effective potential.
### Developing the multi-Hertzian model at 15 °C
We apply the MH model to binary mixtures of large (‘colloid’) and small (‘depletant’) microgels. The resulting effective potential is now the sum of the multi-Hertzian model for the direct colloid-colloid interactions and the depletion term, as $$V_{{\mathrm{eff,cc}}}^{{\mathrm{MH}}} = V_{{\mathrm{cc}}}^{{\mathrm{MH}}} + V_{{\mathrm{depl}}}$$. The latter term contains the cross-interactions between the two types of microgels (colloid-depletant interactions), which for consistency should also take a multi-Hertzian form. However, we have explicitly checked that its inclusion makes no significant difference to using a simple Hertzian. Rather, it complicates the description, so that we stick to the simple Hertzian $$V_{{\mathrm{cd}}}^{\mathrm{H}}$$, whose strength Ucd has yet to be determined. Also, we need to determine the effective depletant volume fraction ϕeff,d.
We start by considering T = 15 °C and simultaneously vary the free parameters Ucorona, Ucd and ϕeff,d until we find an optimal agreement to reproduce the measured g(r). The resulting effective potential which best describes the experimental data is found for Ucorona = 8.25 × Ucc = 3300kBT and Ucd = 80kBT (Supplementary Fig. 7b, Supplementary Note 7). Effective depletant volume fractions are ϕeff,d = 0.18, 0.26, 0.30 at 15 °C. The calculated g(r)s are shown as lines in Fig. 3 and are found to reproduce the behavior of the measured data for all depletant and colloid volume fractions at the examined temperature. Particularly noteworthy is the development of an asymmetric main peak of g(r) at the highest studied volume fractions, which is accurately captured by the MH model.
These findings point out the strongly non-additive character of the interactions in microgel mixtures: indeed the colloid-depletant Ucd is significantly lower than the average of the two individual interactions for colloid and depletant microgels. This is probably due to the ability of soft particles to deform or overlap with each other, differently from hard particles. A possible explanation is that the very small depletant here involved can quite freely interpenetrate within the corona of the large ones, modifying the cross-interactions. The non-additivity is thus the key ingredient which allows us to explain the surprising stability of our soft binary mixtures41,42. Indeed, thanks to this feature, particles are able to experience a much more moderate depletion attraction than what is observed in hard colloid-soft depletant mixtures38 and in additive soft ones39. Hence our soft mixtures will eventually phase separate only at much larger depletant concentrations.
### Using the multi-Hertzian model at higher temperatures
The incorporation of the temperature dependence is a first real test to the robustness of the MH model. With increasing temperature, the interactions between the colloids in the binary mixtures change. Increasing the temperature has an effect not only on all interactions in the MH model, but also on the colloid-depletant cross interaction Ucd and on the effective volume fractions ϕeff,c and ϕeff,d. The two-volume fractions are easily dealt with: the deswelling of the microgels (in both cases) automatically yields the volume fractions at higher temperatures (see Table 1). For the MH model parameter estimate, we use the temperature dependence of the Hertzian term (see also Fig. 1b) for the outermost corona. We further note that the core size is temperature independent, based on previously published experimental data33 and our own unpublished work. Thus, the intermediate shells in the MH model become thinner and their associated strengths are chosen as done for 15 °C. A detailed description of the choice of parameters is given in the Methods section, but it is important to stress that the temperature dependence of the MH model has zero free parameters: everything is fixed based on experimental data and the parameters found for 15 °C. Thus the only parameter left to vary is the colloid-depletant cross interaction Ucd. Once a good agreement with experimental data is found, it is checked a posteriori that the estimated values are very reasonable and obey a roughly linear relation to temperature, analogously to Ucc (Supplementary Fig. 5, Supplementary Note 5).
The experimental g(r)s for the binary mixtures are compared with the simulated data in Fig. 3 for all investigated T. The final model parameters are reported in Table 2. We find that the MH model captures all the distinct features of the depletion attraction: the peak shift, its increase and asymmetry all emerge with increasing ϕeff,d (Fig. 3). It is worth to stress that the agreement of the model with experiments spans 48 different state points and is based essentially on adjusting two parameters: the strength of the second corona shell Ucorona (only determined at 15 °C) for the MH model and the cross-interaction strength Ucd (adjusted at each temperature) for the depletion interaction. Thus the present findings represent a strong test in favor of the validity of the present model.
In order to better visualize the effect of temperature, Fig. 6 shows the results for the state point with the largest colloid and depletant volume fractions (ϕeff,c = 0.49 and ϕeff,d = 0.30 at 15 °C). An increase of T again reduces the structural correlations and also the effect of depletion (due to the smaller effective depletant volume fraction), which manifests itself at each temperature by an increased asymmetry and by a shift of the main peak of the g(r) toward smaller values of r compared to the one-component system. The peak position is found at smaller distances with respect to the hydrodynamic radius of the colloids, clearly indicating that the particles partially overlap. The agreement of the MH model with experiments becomes worse for 30 °C, similarly to the case of the one-component system and probably due to the larger statistical noise in the experimental values. The average deviations between numerical and experimental curves are reported in Supplementary Fig. 3 and Supplementary Note 3.
A second robustness test for the MH model is carried out by calculating the MSD and comparing it with experiments. Similarly to what has been done for the one-component systems, we rely on BD simulations and compare the calculated and measured MSD of large microgels for each of the nine mixtures in the temperature range 15C ≤ T ≤ 30 °C. As shown in Fig. 4, the current model is also able to capture the particle dynamics for all studied state points. This is confirmed in Supplementary Fig. 4 and Supplementary Note 4, where the self-diffusion coefficients for all state points are shown and described. The small deviations between experiments and simulations observed at 30 °C can again be rationalized by the larger tracking errors associated to the rapid Brownian motion of the microgels at this temperature.
## Discussion
In this study, we have presented an extended investigation of microgel suspensions in a three-axis phase diagram. In addition to varying microgel volume fraction and temperature, we also varied the concentration of a second component in the suspension, namely smaller microgels, which act as depletants. We investigate one-component and binary mixtures of microgels in a wide range of control parameters, amounting to 48 different state points. Through the combination of confocal microscopy experiments and simulations, we provide a systematic and comprehensive characterization of both static and dynamic observables in the form of radial distribution functions and mean-squared displacements of the large microgels. Based on explicit calculations of the effective potential between two microgels, we have been able to develop a new interaction potential that, with a single set of experiment-informed parameters, is able to reproduce the statics and dynamics of real microgel suspensions, accounting for the dependence on microgel volume fraction, temperature and depletant concentration. Although microgels are nowadays a widely studied model system, such an extensive study was crucially missing. The several novel findings reported here will change the approach to the use of microgels as model systems in future work. Indeed, these soft particles appear to be much more complex systems than naively thought.
First of all, we have provided evidence that the effect of temperature on microgel-microgel effective interactions is not negligible, even within the swollen regime only. The soft Hertzian repulsion between the particles becomes steeper with increasing T. This seemingly straightforward result is not obvious since, for T > VPTT, microgels become attractive due to the increased van der Waals and additional hydrophobic interactions. Therefore, an increase of repulsion goes in the opposite direction. The trend can be rationalized by thinking of the microgels only in physical terms (ignoring polymer-solvent interactions which are not yet dominant): as the particles become smaller, they also become more compact and hence somewhat less penetrable. A further change in interactions at high T is however hinted by the present results, as the simple repulsive model that we have adopted shows increasing deviations at the highest studied T = 30 °C, approaching the VPTT at 32 °C. Close to the VPTT, a much more careful evaluation, also in terms of charge effects which could become important as shown by our and others preliminary measurements43, will be required. For the examined T-interval, the present findings clearly show that the variation of volume fraction that is obtained by changing T, a commonly used method in experiments to efficiently explore a larger portion of the phase diagram, should be done with caution, as doing so significantly affects the effective interactions between the particles. Previous works have already pointed out this important aspect through indirect observations29, but here for the first time we provide a direct evidence and quantify the change of behavior with T across the swollen regime.
Secondly, we have shown that a simple structureless model such as the Hertzian repulsion does not work to describe conditions where overlaps between particles and/or deformations start to be probed. These effects are an important physical ingredient that deeply affect the behavior of soft colloids in general and of microgels in particular, at the heart of a large research activity on glass transition and jamming of soft particles. Even without directly exploring dense conditions, the use of depletants has allowed us to probe the effective interactions between microgels at short separation distances, finding evidence of the importance of the internal microgel architecture. We have thus transferred our previous knowledge from a simple Hertzian model to a multi-Hertzian one, which is confirmed by explicit calculations of the effective potential between two microgels, that involves the inclusion of inner shells of different elasticity. Interestingly, we find that to successfully describe the experimental data it is important not only to differentiate between core and corona, but also to take into account the heterogenous character of the corona, further differentiating the contribution of the dangling ends7,8. Given the numerous studies where different synthesis protocols have been implemented to obtain other internal structures and crosslink density distributions, see e.g. refs 44,45,46,47, it will be interesting to systematically study and quantify how the crosslink density and internal structure of the microgels influence their effective interactions in future studies.
The multi-Hertzian model that we have designed is based on numerical evidence and on available experimental parameters. The comparison with experimental data has allowed us to determine the unknown parameters, most importantly the cross-interactions between small and large microgels. This is a key player in the depletion interaction, and the very low strength that we have determined for cross interactions does explain the striking finding that soft microgel mixtures are much more stable, up to very high depletant concentrations, than expected. Indeed, previous works with additive soft mixtures have shown how softness enhances depletion attraction39. Here we show that this does not happen, because softness allows deformation and interpenetration, which translates to strongly non-additive interactions. It will be interesting to confirm these findings also for other soft mixtures and in particular, for the more studied classical case of soft colloids and non-adsorbing polymers acting as depletants.
Finally, our phenomenological approach will have to be generalized to deal with different conditions such as even higher T or larger microgel volume fractions, approaching the glass transition. However, the reported evidence clearly shows that future studies will have to explicitly take into account temperature dependence and internal microgel structure to meaningfully describe microgel behavior and to use them as model systems for exploring phase transitions and glassy dynamics.
## Methods
### Synthesis
PNIPAM particles were synthesized via precipitation polymerization27,28. NIPAM was re-crystallized in hexane and all other chemicals were used as received. For the large fluorescent microgels (referred to as colloids), 2.004 g N-isopropylacrylamide (NIPAM, Acros Organics) was dissolved in 82.83 g of water. 0.136 g (4.98 mol% with respect to NIPAM) of the cross-linker N,N-methylenebis(acrylamide) (BIS, Sigma-Aldrich) was added. 0.002 g methacryloxyethyl thiocarbonyl Rhodamine B dissolved in 10 g of water was added to the reaction mixture to covalently incorporate fluorescent sites. The reaction mixture was heated to 80 °C and bubbled with nitrogen for 30 min. The reaction was then kept under a nitrogen atmosphere. To start the reaction, 0.1 g KPS in 5 g water was injected to the mixture. The reaction was then left for 4 h before the heat was turned off and the solution was left to cool down under constant stirring.
For the small non-fluorescent particle synthesis (referred to as depletants), we followed the same procedure. We combined 1.471 g of NIPAM, 0.0647 g (3.2 mol% with respect to NIPAM) of BIS in 96.29 g water. 0.1929 g of sodium dodecyl sulphate (SDS, Duchefa Biochemie) was also added to induce the formation of particles with smaller radii. The mixture was heated to 70 °C, bubbled with nitrogen and 0.0539 g of KPS in 2.0145 g of water was added to start the reaction. The reaction was then left for 6 h under a nitrogen atmosphere.
The particle suspensions were cleaned by three centrifugation and re-dispersion series before the suspensions were freeze-dried to remove all water.
### Sample preparation
All samples were prepared using the freeze-dried microgels and deionised water (purified with a MilliQ system), as this allows us to control the weight concentration. In order to ensure homogeneous dispersions, samples were thoroughly mixed by vortexing and sonication followed by placing the dispersion on a tumbler for two weeks prior to any experiment.
Using this approach, samples with a wt%-range from 0.1 to 1 wt% of colloids and samples with wt%-range from 0.1 to 0.8 wt% of depletants were prepared for viscometry experiments. Very dilute colloid and very dilute depletant suspensions (<0.1 wt%) were made for DLS characterization. Suspensions were diluted until almost completely transparent to avoid multiple scattering. For the SLS measurements, samples with a wt%-range from 0.05 to 0.65 wt% depletant were prepared. For the CLSM experiments, we aimed for binary mixtures with effective colloid volume fraction ϕeff,c = 0.2, 0.3, 0.4, and with additional effective depletant volume fraction ϕeff,d = 0, 0.1, 0.2, 0.3 at 15 °C. As an initial guess for the packing fraction of the samples, we used the shift factor k = ϕ/wt% as determined from viscometry measurements on colloid-only and depletant-only samples (see below for the experimental k-values). The binary mixtures contained colloid wt% 2.2, 3.3 and 4.4 and depletant wt% 0, 0.26, 0.54, 0.81. Final ϕeff were determined by fitting g(r) curves, as discussed in the manuscript.
### Experiments
The viscosity of colloid-only and depletant-only samples with known wt%-concentration was recorded using an Ubbelohde viscometer at 15 and 30 °C. Flow times were measured 5–6 times, averaged and divided by the flow time of a water sample to extract the relative viscosity of the samples. The relative viscosity was fitted to the well-known Batchelor equation which holds for colloids in the dilute regime48: $$\eta _{{\mathrm{rel}}} = 1 + 2.5\phi _{{\mathrm{eff}}} + 5.9\phi _{{\mathrm{eff}}}^2$$ with ϕeff = k × wt%. From these fits the shift factor k was determined for 15 and 30 °C. For 20, 25 °C, the data was interpolated. kcolloid = 0.091, 0.0758, 0.061, 0.046 wt%−1, kdepletant = 0.332, 0.292, 0.253, 0.214 wt%−1 for 15–30 °C respectively. The shift factor was used in sample preparation to estimate ϕeff.
Microgels were characterized using dynamic light scattering (DLS) with a goniometer-based light scattering instrument that employs pseudo 2D-cross correlation (3D DLS Spectrometer, LS instruments, Switzerland) with laser wavelength λ = 660 nm. DLS measurements were performed over a range of 15–30 °C resulting in a swelling curve for both colloids and depletants. The hydrodynamic radii were extracted using a first order cumulant analysis averaged over an angular range of 60–100°, and measured every 10°. To probe the interactions between depletants, static light scattering experiments were performed at several packing fractions, and the small wavevector limit S(0) of the static structure factor S(q) for the small microgels was also obtained.
The binary mixtures were imaged in a Leica SP5 confocal microscope at a frame rate of 13.9 Hz in the range of 15–30 °C. An excitation wavelength of 543 nm was used in combination with an oil immersion objective at ×100 magnification and numerical aperture 1.4. The confocal microscope is housed in a temperature regulated box which provides a temperature control with a stability of ± 0.2 °C over the range of temperatures used. Because scanning in the z-direction would have been too slow, we made xyt-videos. Such videos of 512 × 512 × 4000 frames were obtained for at least five different positions in the sample to minimize the effects of local density fluctuations. Videos were taken at $$\gtrsim 5$$ particle diameters away from the glass to avoid wall influences. The accuracy of the coordinates is estimated to be Δx ≈ Δy ≈ 11 nm33. Using standardized image analysis and particle tracking routines49, the 2D g(r)s and 2D mean square displacements (MSDs) (〈x2 + y2〉) were obtained. To ensure the 2D g(r) corresponds to the 3D g(r), the approach as described in Mohanty et al. was employed33. In brief, a thinner ‘slice’ of data is created by rejecting out of focus particles, i.e. we only take particles with z = 0. Even so, there will always remain some variation in the z-position of the tracked particles. This has been taken into account in the numerical calculations by adding a suitable noise along one of the axes.
### Model and theory
We consider two systems: one-component microgel systems and binary mixtures. Colloids experience a direct colloid-colloid interaction that we model as Hertzian or multi-Hertzian (MH) as described in the manuscript. The presence of depletants leads to an additional attractive interaction between the colloids. Thus, the effective colloid-colloid interaction potential Veff,cc is the sum of two contributions: Vcc (direct colloid-colloid interaction) and Vdepl (depletion interaction), i.e. Veff,cc(r) = Vcc(r) + Vdepl(r). We assume that depletants are ideal. Under this assumption, the Fourier components of the additional depletion term can be calculated for any colloid-depletant interaction Vcd as in ref. 50
$$- \beta \tilde V_{{\mathrm{depl}}}(k) = \rho _d\left[ {{\int} {\mathrm{d}}{\mathbf{r}}\left( {e^{ - \beta V_{{\mathrm{cd}}}({\mathrm{r}})} - 1} \right)e^{{\mathrm{i}}{\mathbf{k}} \cdot {\mathbf{r}}}} \right]^2$$
(2)
Here ρd is the reservoir depletant number density and β = 1/(kBT). In our study, we consider Vcd(r) to be a Hertzian potential. After Fourier transforming Eq. (2) we obtain βVdepl(r) which is added to βVcc at each considered temperature for the binary mixtures to obtain the total interaction potential βVeff,cc. We further check that the use of a multi-Hertzian model for Vcd(r) does not yield a noticeable change on the obtained results.
The MH model is built up as follows. The outer shell corresponds to the Hertzian soft repulsion: Ucc = 400, 520, 640, 760 kBT for 15, 20, 25, 30 °C which sets in at r = σeff, where σeff = 2RH is of course temperature dependent (see also Supplementary Fig. 1 and Supplementary Note 1). The inner shell corresponds to the core and is temperature independent. We estimate the core diameter as σcore = 0.7σeff thanks to available experimental SAXS data (and related fuzzy-sphere model fits) for similar microgels33. We fix Ucore = 104kBT, a value compatible with elasticity arguments1 which takes into account the high crosslink density in the core. In this way, the innermost and outermost Hertzian terms are completely specified based on experimental data. Since the border between the dense core and loosely crosslinked corona is not so well-defined, we introduce an intermediate point at the midpoint between these two lengths, i.e. σmid = 0.5(σcore + σeff) = 0.85σeff. The associated repulsion strength is arbitrarily chosen to be Umid = 10 × Ucc. We find that it is crucial to describe the corona by using two distinct shells, introducing a second intermediate point σcorona = 0.5(σmid + σeff) = 0.925σeff with its associated strength Ucorona. The latter is determined at 15 °C to be 8.25 × Ucc by comparing simulation results with the experimental g(r)s. This relation and that for Umid = 10 × Ucc are then kept throughout the temperature range.
The effective potential Veff,cc is then calculated at several depletant volume fractions starting with our initial guess from viscometry. Because of the uncertainty in the experimental packing fraction, we adjust ϕeff,d in the simulations at 15 °C until we find good agreement with the experimental data. As described in the manuscript, ϕeff,c was adjusted in the Hertzian simulations at 15 °C. The thus obtained parameter values for the volume fractions (ϕeff,c, ϕeff,d) and the parameters for the MH model are summarized in Tables 1 and 2, respectively.
To justify the assumption of ideal behavior of small microgels and the non-additive character of interactions in the mixture, we have further quantified the interactions between small microgels assuming a Hertzian repulsion. To calibrate its strength, we have computed S(0) by solving the Ornstein-Zernike equation within the Rogers-Young closure, finding that a very soft interaction between depletants, i.e. $$U_{{\mathrm{dd}}} \simeq 100k_{\mathrm{B}}T$$, captures the small microgels behavior. This estimate is consistent with scaling arguments of the Hertzian model as a function of particle size51,52 with respect to the large microgels.
### Numerical calculation of the effective potential
Microgel configurations are built as disordered, fully-bonded networks generated as in ref. 8,40 using ≈5000 monomers of diameter σm in a spherical confinement of radius Z = 25 σm and crosslinker concentration c = 5%. Monomers are in the swollen regime and interact through the classical bead-spring model for polymers53.
To calculate the microgel-microgel effective interactions we combine two methods, as described in ref. 40. We perform umbrella sampling simulations in which we add an harmonic biasing potential acting on the centres of mass of the two microgels54 for small separation distances r. The resulting effective potential is calculated as V(r) = −kBTln[g(r)]. At larger values of r, we employ a generalized Widom insertion scheme55 which is very efficient in sampling the small-deformation regime.
### Bulk simulations
We perform Langevin dynamics simulations of N = 2000 colloid particles of mass m interacting with the generated effective interaction potential Veff,cc. The units of length, energy, and mass are σeff, kBT and m respectively. Time is measured in units of $$\sqrt {m\sigma _{{\mathrm{eff}}}^2/k_{\mathrm{B}}T}$$. The integration time-step is fixed to 0.001. With this scheme, particles after an initial microscopic time follow a Brownian Dynamics (BD) due to the interactions with a fictitious solvent56. The solvent effective viscosity enters in the definition of the zero-colloid limit self-diffusion coefficient D0, which is the key parameter in BD simulations.
Since the viscosity of the real samples changes for each T and for each ϕeff,d, we have estimated the experimental D0 at ϕeff,d = 0 by means of Stokes-Einstein relations using the measured hydrodynamic radii at each T. Furthermore, we have also estimated D0 in the presence of depletant thanks to the viscosity measurements described above. Ideally, one would need to directly use these values in the simulations, but this leads to an incorrect description of the system at low enough colloid packing fractions because BD simulations do not include hydrodynamic interactions, whose effects are strongest in this regime. Hence we have adopted the following strategy: at fixed ϕeff,c and for each T and ϕeff,d (i.e., for 16 of our samples), we performed several BD runs in order to select the values of D0 providing a good agreement for the long-time MSD with experiments. We thus find a unique shift factor on the time axis for all simulated state points, needed in order to convert simulation time into experimental time. Then, the estimated D0 values were kept fixed for all studied ϕeff,c, i.e., for the remaining 32 studied samples no further adjustment was made. The estimated D0 can thus be considered the effective bare self-diffusion coefficients of our approximated BD approach, and was finally compared to the experimental estimates, finding a good agreement at large depletant concentrations and low temperatures (as reported in Supplementary Fig. 9 and Supplementary Note 9), that is, for the state points where hydrodynamic interactions are less important.
Simulations were performed with particles possessing a polydispersity of 4% with a Gaussian distribution, similar to the experimental system. Slices through configurations of 100 independent state points were used to calculate the radial distribution function g(r) of the 3D data with sufficient statistics. The z-position of particles is randomly displaced by Gaussian noise with a standard deviation of 0.005. In this way, g(r)s can be successfully compared to the 2D-g(r) obtained from experiments, as demonstrated in ref. 33.
### Code availability
The computer codes used for the current study are available from the corresponding authors on reasonable request.
## Data availability
The authors declare that all data supporting the findings of this study are available within the article and its Supplementary Information files. All other relevant data supporting the findings of this study are available from the corresponding authors on request.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References
1. 1.
Vlassopoulos, D. & Cloitre, M. Tunable rheology of dense soft deformable colloids. Curr. Opin. Colloid In. 19, 561–574 (2014).
2. 2.
van der Scheer, P., van de Laar, T., van der Gucht, J., Vlassopoulos, D. & Sprakel, J. Fragility and strength in nanoparticle glasses. ACS Nano 11, 6755–6763 (2017).
3. 3.
Conley, G. M., Aebischer, P., Nöjd, S., Schurtenberger, P. & Scheffold, F. Jamming and overpacking fuzzy microgels: deformation, interpenetration, and compression. Sci. Adv. 3, e1700969 (2017).
4. 4.
Mohanty, P. S. et al. Interpenetration of polymeric microgels at ultrahigh densities. Sci. Rep. 7, 1487 (2017).
5. 5.
Stieger, M., Pedersen, J. S., Lindner, P. & Richtering, W. Are thermoresponsive microgels model systems for concentrated colloidal suspensions? a rheology and small-angle neutron scattering study. Langmuir 20, 7283–7292 (2004).
6. 6.
Rey, M. et al. Isostructural solid–solid phase transition in monolayers of soft core–shell particles at fluid interfaces: structure and mechanics. Soft Matter 12, 3545–3557 (2016).
7. 7.
Boon, N. & Schurtenberger, P. Swelling of micro-hydrogels with a crosslinker gradient. Phys. Chem. Chem. Phys. 19, 23740–23746 (2017).
8. 8.
Gnan, N., Rovigatti, L., Bergman, M. & Zaccarelli, E. In silico synthesis of microgel particles. Macromolecules 50, 8777–8786 (2017).
9. 9.
Mohanty, P. et al. Dielectric spectroscopy of ionic microgel suspensions. Soft Matter 12, 9705–9727 (2016).
10. 10.
Hashmi, S. M. & Dufresne, E. R. Mechanical properties of individual microgel particles through the deswelling transition. Soft Matter 5, 3682–3688 (2009).
11. 11.
Bachman, H. et al. Ultrasoft, highly deformable microgels. Soft Matter 11, 2018–2028 (2015).
12. 12.
Reese, C. E., Mikhonin, A. V., Kamenjicki, M., Tikhonov, A. & Asher, S. A. Nanogel nanosecond photonic crystal optical switching. J. Am. Chem. Soc. 126, 1493–1496 (2004).
13. 13.
Serpe, M. J. et al. Hydrogel Micro and Nanoparticles, chap. (Wiley-VCH Verlag GmbH and Co., KGaA, 2012; 317–336. Color-Tunable Poly (N-Isopropylacrylamide) Microgel-Based Etalons: Fabrication, Characterization, and Applications.
14. 14.
Hamidi, M., Azadi, A. & Rafiei, P. Hydrogel nanoparticles in drug delivery. Adv. Drug Deliv. Rev. 60, 1638–1649 (2008).
15. 15.
Peppas, N., Bures, P., Leobandung, W. & Ichikawa, H. Hydrogels in pharmaceutical formulations. Eur. J. Pharm. Biopharm. 50, 27–46 (2000).
16. 16.
Oh, J. K., Drumright, R., Siegwart, D. J. & Matyjaszewski, K. The development of microgels/nanogels for drug delivery applications. Prog. Polym. Sci. 33, 448–477 (2008).
17. 17.
Fernández-Barbero, A. et al. Gels and microgels for nanotechnological applications. Adv. Colloid Interfac. 147, 88–108 (2009).
18. 18.
Wang, Z., Wang, F., Peng, Y., Zheng, Z. & Han, Y. Imaging the homogeneous nucleation during the melting of superheated colloidal crystals. Science 338, 87–90 (2012).
19. 19.
Hilhorst, J. & Petukhov, A. Variable dislocation widths in colloidal crystals of soft thermosensitive spheres. Phys. Rev. Lett. 107, 095501 (2011).
20. 20.
Peng, Y., Wang, Z., Alsayed, A. M., Yodh, A. G. & Han, Y. Melting of colloidal crystal films. Phys. Rev. Lett. 104, 205703 (2010).
21. 21.
Alsayed, A. M., Islam, M. F., Zhang, J., Collings, P. J. & Yodh, A. G. Premelting at defects within bulk colloidal crystals. Science 309, 1207–1210 (2005).
22. 22.
Mohanty, P. S., Bagheri, P., Nöjd, S., Yethiraj, A. & Schurtenberger, P. Multiple path-dependent routes for phase-transition kinetics in thermoresponsive and field-responsive ultrasoft colloids. Phys. Rev. X 5, 011030 (2015).
23. 23.
Zhang, Z. et al. Thermal vestige of the zero-temperature jamming transition. Nature 459, 230–233 (2009).
24. 24.
Caswell, T. A., Zhang, Z., Gardel, M. L. & Nagel, S. R. Observation and characterization of the vestige of the jamming transition in a thermal three-dimensional system. Phys. Rev. E 87, 012303 (2013).
25. 25.
Yunker, P. J. et al. Physics in ordered and disordered colloidal matter composed of poly (n-isopropylacrylamide) microgel particles. Rep. Prog. Phys. 77, 056601 (2014).
26. 26.
Heskins, M. & Guillet, J. E. Solution properties of poly (n-isopropylacrylamide). J. Macromol. Sci. Chem. 2, 1441–1455 (1968).
27. 27.
Pelton, R. Temperature-sensitive aqueous microgels. Adv. Colloid Interfac. 85, 1–33 (2000).
28. 28.
Pelton, R. & Chibante, P. Preparation of aqueous latices with n-isopropylacrylamide. Colloid Surf. 20, 247–256 (1986).
29. 29.
Romeo, G., Fernandez-Nieves, A., Wyss, H. M., Acierno, D. & Weitz, D. A. Temperature-controlled transitions between glass, liquid, and gel states in dense p-nipa suspensions. Adv. Mater. 22, 3441–3445 (2010).
30. 30.
Heyes, D. & Brańka, A. Interactions between microgel particles. Soft Matter 5, 2681–2685 (2009).
31. 31.
Senff, H. & Richtering, W. Temperature sensitive microgel suspensions: Colloidal phase behavior and rheology of soft spheres. J. Chem. Phys. 111, 1705–1711 (1999).
32. 32.
Wu, J., Huang, G. & Hu, Z. Interparticle potential and the phase behavior of temperature-sensitive microgel dispersions. Macromolecules 36, 440–448 (2003).
33. 33.
Mohanty, P. S., Paloli, D., Crassous, J. J., Zaccarelli, E. & Schurtenberger, P. Effective interactions between soft-repulsive colloids: Experiments, theory, and simulations. J. Chem. Phys. 140, 094901 (2014).
34. 34.
Scheffold, F. et al. Brushlike interactions between thermoresponsive microgel particles. Phys. Rev. Lett. 104, 128304 (2010).
35. 35.
Romeo, G. & Ciamarra, M. P. Elasticity of compressed microgel suspensions. Soft Matter 9, 5401–5406 (2013).
36. 36.
Paloli, D., Mohanty, P. S., Crassous, J. J., Zaccarelli, E. & Schurtenberger, P. Fluid–solid transitions in soft-repulsive colloids. Soft Matter 9, 3000–3004 (2013).
37. 37.
Likos, C. N. Effective interactions in soft condensed matter physics. Phys. Rep. 348, 267–439 (2001).
38. 38.
Bayliss, K., Van Duijneveldt, J., Faers, M. & Vermeer, A. Comparing colloidal phase separation induced by linear polymer and by microgel particles. Soft Matter 7, 10345–10352 (2011).
39. 39.
Rovigatti, L., Gnan, N., Parola, A. & Zaccarelli, E. How soft repulsion enhances the depletion mechanism. Soft Matter 11, 692–700 (2015).
40. 40.
Rovigatti, L., Gnan, N., Ninarello, A. & Zaccarelli, E. On the validity of the hertzian model: the case of soft colloids. Prepr. arXiv 1808.04769 (2018).
41. 41.
Hoffmann, N., Ebert, F., Likos, C. N., Löwen, H. & Maret, G. Partial clustering in binary two-dimensional colloidal suspensions. Phys. Rev. Lett. 97, 078301 (2006).
42. 42.
Angioletti-Uberti, S., Varilly, P., Mognetti, B. M. & Frenkel, D. Mobile linkers on dna-coated colloids: valency without patches. Phys. Rev. Lett. 113, 128303 (2014).
43. 43.
Truzzolillo, D. et al. Overcharging and reentrant condensation of thermoresponsive ionic microgels. Soft Matter 14, 4110–4125 (2018).
44. 44.
Acciaro, R., Gilányi, T. & Varga, I. Preparation of monodisperse poly(n-isopropylacrylamide) microgel particles with homogenous cross-link density distribution. Langmuir 27, 7917–7925 (2011).
45. 45.
Tiwari, R. et al. A versatile synthesis platform to prepare uniform, highly functional microgels via click-type functionalization of latex particles. Macromolecules 47, 2257–2267 (2014).
46. 46.
Wei, J., Li, Y. & Ngai, T. Tailor-made microgel particles: Synthesis and characterization. Colloid Surf. A 489, 122–127 (2016).
47. 47.
Mueller, E. et al. Dynamically cross-linked self-assembled thermoresponsive microgels with homogeneous internal structures. Langmuir 34, 1601–1612 (2018).
48. 48.
Batchelor, G. The effect of brownian motion on the bulk stress in a suspension of spherical particles. J. Fluid. Mech. 83, 97–117 (1977).
49. 49.
Crocker, J. C. & Grier, D. G. Methods of digital video microscopy for colloidal studies. J. Colloid Interf. Sci. 179, 298–310 (1996).
50. 50.
Parola, A. & Reatto, L. Depletion interaction between spheres of unequal size and demixing in binary mixtures of colloids. Mol. Phys. 113, 2571–2582 (2015).
51. 51.
Landau, L. D. & Lifshitz, E. M. Theory of Elasticity V. 7 of Course of Theoretical Physics (Pergamon Press, Oxford, 165 pages, 1959).
52. 52.
Riest, J., Mohanty, P., Schurtenberger, P. & Likos, C. N. Coarse-graining of ionic microgels: Theory and experiment. Z. Phys. Chem. 226, 711–735 (2012).
53. 53.
Grest, G. S. & Kremer, K. Molecular dynamics simulation for polymers in the presence of a heat bath. Phys. Rev. A. 33, 3628 (1986).
54. 54.
Roux, B. The calculation of the potential of mean force using computer simulations. Comput. Phys. Commun. 91, 275–282 (1995).
55. 55.
Mladek, B. M. & Frenkel, D. Pair interactions between complex mesoscopic particles from widom’s particle-insertion method. Soft Matter 7, 1450–1455 (2011).
56. 56.
Allen, M. P. & Tildesley, D. J. Computer simulation of liquids (Oxford University Press, Oxford, 640 pages, 2017).
## Acknowledgements
We thank Sofi Nöjd for particle synthesis and Andrea Ninarello for discussions. M.B. and P.S. acknowledge financial support from the European Research Council (ERC-339678-COMPASS) and the Swedish Research Council (VR 2015-05426). N.G., L.R. and E.Z. acknowledge support from the European Research Council (ERC Consolidator Grant 681597, MIMIC).
## Author information
### Author notes
• Marc Obiols-Rabasa
Present address: CR Competence AB, Naturvetarevägen 14, 22362, Lund, Sweden
• Janne-Mieke Meijer
Present address: Department of Physics, University of Konstanz, PO Box 688, D-78457, Konstanz, Germany
### Affiliations
1. #### Division of Physical Chemistry, Department of Chemistry, Lund University, PO Box 124, SE-22100, Lund, Sweden
• Maxime J. Bergman
• , Marc Obiols-Rabasa
• , Janne-Mieke Meijer
• & Peter Schurtenberger
2. #### CNR-ISC and Department of Physics, Sapienza University of Rome, Piazzale A. Moro 2, 00185, Roma, Italy
• Nicoletta Gnan
• , Lorenzo Rovigatti
• & Emanuela Zaccarelli
### Contributions
E.Z. and P.S. designed and supervised research. M.B. performed all experiments with help from M.O.R. and J.M.M. M.B., N.G., L.R., and E.Z. performed simulations and modeling. All authors contributed to the interpretation and analysis of the data. M.B., L.R., E.Z., and P.S. wrote the manuscript with inputs from all other authors.
### Competing interests
The authors declare no competing interests.
### Corresponding authors
Correspondence to Emanuela Zaccarelli or Peter Schurtenberger.
## Electronic supplementary material
### DOI
https://doi.org/10.1038/s41467-018-07332-5
|
Datatype - Maple Help
DataSeries/Datatype
obtain the data type of a DataSeries
Calling Sequence Datatype(ds)
Parameters
ds - a DataSeries object
Description
• The Datatype command returns the data type of a DataSeries object.
• The data type determines the values that can be stored in the data series. This is similar to the datatype option of an rtable.
• Trying to set an entry of a DataSeries to a value that is not of the specified data type leads to an error.
• The data type can be set using the datatype option in the DataSeries constructor call.
• If no data type is specified in the constructor call, Maple uses the data type of the data argument if that is an rtable or DataSeries, or type anything otherwise.
• If you want to store floating point data in a DataSeries, it will be advantageous to set the data type to float[8]. This uses less memory than the alternatives, and it can yield speed-ups in computations.
• If you want to store true-or-false data in a DataSeries, it will be advantageous to set the data type to truefalse or truefalseFAIL. (FAIL is the third Boolean constant in Maple, signifying unknown or undetermined truth values.) This yields small speed-ups when used in DataSeries indexing or DataFrame indexing.
Examples
The default data type is anything.
> $\mathrm{ds1}≔\mathrm{DataSeries}\left(\left[1,2,3\right]\right)$
${\mathrm{ds1}}{≔}\left[\begin{array}{cc}{1}& {1}\\ {2}& {2}\\ {3}& {3}\end{array}\right]$ (1)
> $\mathrm{Datatype}\left(\mathrm{ds1}\right)$
${\mathrm{anything}}$ (2)
> $\mathrm{v1}≔\mathrm{Vector}\left(\left[2.,3.\right]\right)$
${\mathrm{v1}}{≔}\left[\begin{array}{c}{2.}\\ {3.}\end{array}\right]$ (3)
> $\mathrm{ds2}≔\mathrm{DataSeries}\left(\mathrm{v1}\right)$
${\mathrm{ds2}}{≔}\left[\begin{array}{cc}{1}& {2.}\\ {2}& {3.}\end{array}\right]$ (4)
> $\mathrm{Datatype}\left(\mathrm{ds2}\right)$
${\mathrm{anything}}$ (5)
You can override the default in the DataSeries constructor.
> $\mathrm{ds3}≔\mathrm{DataSeries}\left(\mathrm{v1},\mathrm{datatype}=\mathrm{float}\left[8\right]\right)$
${\mathrm{ds3}}{≔}\left[\begin{array}{cc}{1}& {2.}\\ {2}& {3.}\end{array}\right]$ (6)
> $\mathrm{Datatype}\left(\mathrm{ds3}\right)$
${{\mathrm{float}}}_{{8}}$ (7)
Alternatively, you can make sure that the data defining the DataSeries already has a specified data type.
> $\mathrm{v2}≔\mathrm{Vector}\left(\left[2.,3.\right],\mathrm{datatype}=\mathrm{float}\left[8\right]\right)$
${\mathrm{v2}}{≔}\left[\begin{array}{c}{2.}\\ {3.}\end{array}\right]$ (8)
> $\mathrm{ds4}≔\mathrm{DataSeries}\left(\mathrm{v2}\right)$
${\mathrm{ds4}}{≔}\left[\begin{array}{cc}{1}& {2.}\\ {2}& {3.}\end{array}\right]$ (9)
> $\mathrm{Datatype}\left(\mathrm{ds4}\right)$
${{\mathrm{float}}}_{{8}}$ (10)
Compatibility
• The DataSeries/Datatype command was introduced in Maple 2016.
|
# Fibonacci function or sequence
The Fibonacci sequence is a sequence of numbers, where every number in the sequence is the sum of the two numbers preceding it. The first two numbers in the sequence are both 1. Here are the first few terms:
1 1 2 3 5 8 13 21 34 55 89 ...
Write the shortest code that either:
• Generates the Fibonacci sequence without end.
• Given n calculates the nth term of the sequence. (Either 1 or zero indexed)
You may use standard forms of input and output.
(I gave both options in case one is easier to do in your chosen language than the other.)
For the function that takes an n, a reasonably large return value (the largest Fibonacci number that fits your computer's normal word size, at a minimum) has to be supported.
/* Configuration */
var QUESTION_ID = 85; // Obtain this from the url
// It will be like https://XYZ.stackexchange.com/questions/QUESTION_ID/... on any question page
var COMMENT_FILTER = "!)Q2B_A2kjfAiU78X(md6BoYk";
var OVERRIDE_USER = 3; // This should be the user ID of the challenge author.
/* App */
return "https://api.stackexchange.com/2.2/questions/" + QUESTION_ID + "/answers?page=" + index + "&pagesize=100&order=desc&sort=creation&site=codegolf&filter=" + ANSWER_FILTER;
}
}
jQuery.ajax({
method: "get",
dataType: "jsonp",
crossDomain: true,
success: function (data) {
data.items.forEach(function(a) {
});
comment_page = 1;
}
});
}
jQuery.ajax({
method: "get",
dataType: "jsonp",
crossDomain: true,
success: function (data) {
data.items.forEach(function(c) {
if (c.owner.user_id === OVERRIDE_USER)
});
else process();
}
});
}
var SCORE_REG = /<h\d>\s*([^\n,<]*(?:<(?:[^\n>]*>[^\n<]*<\/[^\n>]*>)[^\n,<]*)*),.*?(\d+)(?=[^\n\d<>]*(?:<(?:s>[^\n<>]*<\/s>|[^\n<>]+>)[^\n\d<>]*)*<\/h\d>)/;
function getAuthorName(a) {
return a.owner.display_name;
}
function process() {
var valid = [];
var body = a.body;
if(OVERRIDE_REG.test(c.body))
body = '<h1>' + c.body.replace(OVERRIDE_REG, '') + '</h1>';
});
var match = body.match(SCORE_REG);
if (match)
valid.push({
user: getAuthorName(a),
size: +match[2],
language: match[1],
});
else console.log(body);
});
valid.sort(function (a, b) {
var aB = a.size,
bB = b.size;
return aB - bB
});
var languages = {};
var place = 1;
var lastSize = null;
var lastPlace = 1;
valid.forEach(function (a) {
if (a.size != lastSize)
lastPlace = place;
lastSize = a.size;
++place;
.replace("{{NAME}}", a.user)
.replace("{{LANGUAGE}}", a.language)
.replace("{{SIZE}}", a.size)
var lang = a.language;
lang = jQuery('<a>'+lang+'</a>').text();
languages[lang] = languages[lang] || {lang: a.language, lang_raw: lang, user: a.user, size: a.size, link: a.link};
});
var langs = [];
for (var lang in languages)
if (languages.hasOwnProperty(lang))
langs.push(languages[lang]);
langs.sort(function (a, b) {
if (a.lang_raw.toLowerCase() > b.lang_raw.toLowerCase()) return 1;
if (a.lang_raw.toLowerCase() < b.lang_raw.toLowerCase()) return -1;
return 0;
});
for (var i = 0; i < langs.length; ++i)
{
var language = jQuery("#language-template").html();
var lang = langs[i];
language = language.replace("{{LANGUAGE}}", lang.lang)
.replace("{{NAME}}", lang.user)
.replace("{{SIZE}}", lang.size)
language = jQuery(language);
jQuery("#languages").append(language);
}
}
body {
text-align: left !important;
display: block !important;
}
width: 290px;
float: left;
}
#language-list {
width: 290px;
float: left;
}
font-weight: bold;
}
table td {
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="language-list">
<h2>Shortest Solution by Language</h2>
<table class="language-list">
<tr><td>Language</td><td>User</td><td>Score</td></tr>
<tbody id="languages">
</tbody>
</table>
</div>
<tr><td></td><td>Author</td><td>Language</td><td>Size</td></tr>
</tbody>
</table>
</div>
<table style="display: none">
</tbody>
</table>
<table style="display: none">
<tbody id="language-template">
</tbody>
</table>
• I am sort of waiting for a response like "f", 1 byte, in my math based golf language. Aug 11, 2020 at 11:57
• @ChrisJesterYoung can we use 1.0 are 1 only? May 11 at 2:45
• @NumberBasher 1.0 is fine. May 20 at 19:10
# R, 40 bytes
Haven't seen a R solution, so:
f=function(n)ifelse(n<3,1,f(n-1)+f(n-2))
• I know this is an old answer, but you can shorten to 38 bytes Aug 3, 2018 at 14:53
# Whispers v3, 35 bytes
> Input
> fₙ
>> 2ᶠ1
>> Output 3
Try it online! (or don't, as this uses features exclusive to v3)
Simply takes the first $$\n\$$ elements of the infinite list of Fibonacci numbers.
• Welcome to Code Gol...hey wait a minute Feb 19, 2021 at 19:00
## GolfScript, 13 chars
2,~{..p@+.}do
(My answer from a previous Stack Overflow question.)
# JavaScript, 4139 33 bytes
(c=(a,b)=>alert(a)+c(b,a+b))(0,1)
• I don't think the function without the parenthesis is still valid. Apr 8, 2013 at 16:22
• I don't believe this is valid because ES6 came after the challenge was created. Even if it was, you could save a byte by making it a function to return fib(n): f=(n,a=0,b=1)=>n?f(n-1,b,a+b):a;
Dec 16, 2016 at 14:33
# jq-n, 30 28 bytes
-2 bytes thanks to Michael Chatiskatzi!
Prints the infinite sequence.
[0,1]|while(1;[last,add])[1]
Try it online!
Start with [0,1].
while(1; ... ) infinite loop, 1 is a truthy value.
[last,add] the new pair is the last value of the old pair and the sum of the old pair.
while returns all intermediate pairs, [1] gets the second element of each pair.
# jq, 35 33 bytes
A recursive filter written for this tip.
def f:(.<2//[.-1,.-2|f]|add?)//.;
Try it online!
• 28 bytes Sep 10, 2021 at 21:45
# bc, 36 chars
r=0;l=1;while(i++<99){r+=l;l+=r;r;l}
# C: 48 47 characters
A really really truly ugly thing. It recursively calls main, and spits out warnings in any sane compiler. But since it compiles under both Clang and GCC, without any odd arguments, I call it a success.
b;main(a){printf("%u ",b+=a);if(b>0)main(b-a);}
It prints numbers from the Fibonacci sequence until the integers overflow, and then it continues spitting out ugly negative and positve numbers until it segfaults. Everything happens in well under a second.
Now it actually behaves quite well. It prints numbers from the Fibonacci sequence and stops when the integers overflow, but since it prints them as unsigned you never see the overflow:
VIC-20:~ Fors$./fib 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765 10946 17711 28657 46368 75025 121393 196418 317811 514229 832040 1346269 2178309 3524578 5702887 9227465 14930352 24157817 39088169 63245986 102334155 165580141 267914296 433494437 701408733 1134903170 1836311903 2971215073 VIC-20:~ Fors$
• Printing out overflowed numbers and/or segfaulting is probably not part of the spec, but nice try. :-) Apr 4, 2013 at 11:53
• Certainly, but it's not the only solution here that segfaults. :) I will edit it so that it behaves more properly, since I got the character count down anyway.
– Fors
Apr 4, 2013 at 13:09
• Yay! Have an upvote. :-) Apr 4, 2013 at 13:56
• I'm pretty sure you could shave off 2 bytes by replacing if(b>0) with b>0&& and yes, I realize this post is over 4 years old :) Aug 24, 2017 at 16:29
# C#: 38 (40 to ensure non-negative numbers)
Inspired by the beauty of Jon Skeet's C# answer and St0le's answer, another C# solution in only 38 characters:
Func<int,int>f=n=>n>2?f(n-1)+f(n-2):1;
Tested with:
for(int i = 1; i <= 15; i++)
Console.WriteLine(f(i));
Yay for recursive Func<>! Incorrect when you pass in negative numbers, however - corrected by the 40 character version, which doesn't accept them:
Func<uint,uint>f=n=>n>2?f(n-1)+f(n-2):1;
Note: as pointed out by @Andrew Gray, this solution doesn't work in Visual Studio, as the compiler rejects the in-line function definition referring to itself. The Mono compiler at http://www.compileonline.com/compile_csharp_online.php, however, runs it just fine. :)
Visual Studio: 45
Func<int,int>f=null;f=n=>n>2?f(n-1)+f(n-2):1;
• looks rather familiar...dunno where I've seen that before... ;) As far as I can tell, though, in C# this is the best way of doing it. However, your way won't work - you have to assign null to your function to use a recursive lambda. As that code stands, it won't compile, with a syntax error 'use of unassigned function f' at the line that your lambda is being defined at. Apr 17, 2013 at 18:14
• Depends on your compiler. :) It does exactly as you say in Visual Studio - but the Mono compiler at compileonline.com/compile_csharp_online.php runs it perfectly as-is. Apr 17, 2013 at 18:45
• Didn't know that. I wonder why VS and Mono went two different directions on this one...or, maybe the Mono guys are just smarter. The answer is beyond me. D: Apr 17, 2013 at 18:49
• Updated to clearly point out our findings. ;) Apr 17, 2013 at 18:53
• Does this handle the F(0)=0 case? It's an easy fix that doesn't cost any extra bytes: just exchange :1 for :n Mar 25, 2016 at 5:59
### Windows PowerShell – 34 30
for($b=1){$a,$b=$b,($a+$b)
$a} • You can save 3 by doing away with defining$a at the start (assuming $a is not already defined in the environment), and moving the echo of$a to the end of the loop.
– Iszi
Nov 19, 2013 at 17:11
• I can even save one more by including the initialisation in the loop header.
– Joey
Nov 19, 2013 at 22:13
• Wow. I never actually ran this until today for some reason. It's interesting that, past around 1E+308, PowerShell just gives up and calls it Infinity.
– Iszi
Nov 27, 2013 at 21:57
• I put together a solution, somewhat based on this, that accepts user input and outputs the nth number. Came out to 45 characters. You want that here, or as a separate answer?
– Iszi
Nov 27, 2013 at 22:06
• @Iszi, give a separate answer, I guess. It solves a different problem, after all.
– Joey
Nov 28, 2013 at 5:55
GNU Octave: 19 chars
@(x)([1,1;1,0]^x)(1)
This solution has the distinction of running in O(log n) time.
• Edited it to a language that I can test. Nov 18, 2015 at 5:43
# Cy, 3331 30 bytes (non-competing)
This is going for the function option (takes N, outputs F(N))
0 1 :>i {1 - $&+ times} &if :< Ungolfed/explanation: 0 1 # first two fibs are 0, 1 :>i # read input as integer (let's call it N) { 1 - {&+} # add the last two values times # repeat N-1 times ^ } &if # if N is non-zero ^ :< # output the last calculated value (if N is 0, that would be 0) # Detour (non-competing), 8 bytes [$<<]!S.
Try it online!
This one is shorter than the word "fibonacci"
[$<<]!S. Fibonacci explanation: [ ] # while n > 0$<< # replace n with [n-1, n-2]
!S. # invert, output
Just for fun, here's one that will always take exactly 19 ticks to terminate, whether given 0 or 1474. On my really old macbook, it on average terminates after 7ms.
# Detour, 30 28 bytes
$Q{G<!d}seQ .{5Vg>d}se-$G_c!
Try it online! This is the way of expressing (((1+sqrt(5))/2)^n-((1-sqrt(5))/2)^n)/sqrt(5)
Old way:
# Detour (non-competing), 10 9 bytes
<Q>S.
;$< Try it online! This is non-competing: I just pushed the required version of the language about 10 minutes ago. Detour works like befunge, fish etc. except for one crucial difference: where those languages redirect the instruction pointer, detour redirects data. Input is pushed in at the beginning of the middle line (in this case the first). < decrements a number, > increments it. Q sends it down if a number is greater than 0, forward otherwise. the line ;$< is the same as $<; because edges wrap. What it does is take the number it is given, then push that number and 1 less than that number to the input. This is how detour does recursion. S reduces with addition, and . outputs the result. For a better explanation, visit the site and it will give a visual representation of all the numbers. # AsciiDots, 2221201716 15 bytes /.{+}-\ \#$*#1)
Prints the Fibonacci sequence. Outgolfs the sample by 12 13 14 17 18 19 bytes. This is now just 1 byte longer than exactly as long as a simple counter! Try it online!
# AsciiDots, 31 30 bytes
/#$\ .>*[+] /{+}* ^-#$)
\1#-.
Here's a faster version. It prints out the Fibonacci sequence at a rate of 1 number per 5 ticks, compared to the maximally golfed version's 1 per 8 10 8 12 14 ticks. It's twice as fast as the sample and is still shorter by 3 4 bytes! Try it online!
# Symbolic Python, 34 31 bytes
-3 bytes thanks to H.PWiz!
__('__=_/_;'+'_,_=_+__,__;_'*_)
Try it online!
Returns the nth element of the Fibonacci, 1-indexed, starting from 1,1,2,3,5....
### Explanation:
__( ) # Eval as Python code
'__=_/_;' # Set __ to 1
+' '*_ # Then repeat input times
_,_=_+__,__; # On the first iteration, set _ to __ (1)
;_ # On future iterations, prepend a _
__,_=_+__,__; # Set __ to the next fibonacci number
# And set _ to the old value of __
# Implicitly output _
Or, H.PWiz's version:
__('_=__=_/'+'_;__,_=_+__,_'*_)
Try it online!
### Explanation:
__('_=__=_/'+'_;__,_=_+__,_'*_)
__( ) # Eval as Python code
'_=__=_/'+'_; # Set both _ and __ to 1
' '*_ # Repeat input times
__,_=_+__,__ # Set __ to the next fibonacci number
# And set _ to the old value of __
__,_=_+__,_ # Except on the last iteration
# Implicitly output _
• This is possible in 31 bytes. See if you can see how :) Dec 17, 2018 at 7:33
# Alchemist, 104 87 bytes
-10 bytes thanks to ASCII-only!
_->b+c+m
m+b->m+a+d
m+0b->n
n+c->n+b+d
n+0c->Out_a+Out_" "+o
o+d->o+c
o+0d+a->o
o+0a->m
Produces infinitely many Fibonacci numbers, try it online!
## Ungolfed
_ -> b + c + s0
# a,d <- b
s0 + b -> s0 + a + d
s0 + 0b -> s1
# b,d <- c
s1 + c -> s1 + b + d
s1 + 0c -> Out_a + Out_" " + s2
# c <- d & clear a
s2 + d -> s2 + c
s2 + 0d+ a -> s2
s2 + 0a -> s0
Try it online!
• 95? too lazy to check if algo is shorter Jan 29, 2019 at 11:08
• 94 and left sides in better order Jan 30, 2019 at 6:29
• @ASCII-only: Nice! Noticed I can also output $\infty$ many terms, saved another 7 bytes.. but Alchemist indeed needs some work done (atm. it only works when properly killing the process due to some buffering issues). Jan 31, 2019 at 15:48
• lol > bytes bytes Jan 31, 2019 at 23:37
# Intel 8087 FPU, 13 bytes
Binary:
00000000: d9e8 d9ee dcc1 d9c9 e2fa df35 c3 ...........5.
Listing:
D9 E8 FLD1 ; push initial 1 into ST(1)
D9 EE FLDZ ; push initial 0 into ST
FIB_LOOP:
DC C1 FADD ST(1), ST ; ST(1) = ST(1) + ST
D9 C9 FXCH ; Exchange ST and ST(1)
E2 FA LOOP FIB_LOOP ; loop until n = 0
DF 35 FBSTP [DI] ; store result as BCD to [DI]
As a callable function, input n in CX, output to a 10 byte little-endian packed BCD representation at [DI]. This will compute up to Fibonacci n=87 using the Intel x87 math-coprocessor using 80-bit extended-precision floating point arithmetic.
Run using DOS DEBUG with n = 9, result 34:
n = 87 (0x57), result 679891637638612258:
# convey, 8 bytes
Generates the sequence.
v+"}
1"1
Try it online!
The values (initially 1 and 1) follow the conveyor belts indicated by the arrow heads. " duplicates the input into both outputs, + adds them, and } writes them to the output.
# COBOL (GNU), 170 bytes
I am surprised at the lack of COBOL answers on this site. Well, it is ancient after all.
This outputs the fibonacci sequence correctly up to 38 digits.
PROGRAM-ID.H.DATA DIVISION.LOCAL-STORAGE SECTION.
1 a PIC 9(38).
1 b PIC 9(38).
PROCEDURE DIVISION.G.COMPUTE a=0**b+b -a
DISPLAY b(38- FUNCTION LOG10(b):)GO G.
Try it online!
## Explanation
We just need two variables, a and b to compute the whole fibonacci sequence. Here is pseudocode of what this would look like:
a = 0
b = 1
loop {
a = b - a
b += a
print(a)
}
Translating the pseudocode above in COBOL is relatively short and simple. But we see that variables in COBOL are set to 0 by default, and having one of them set to a 1 is kind of (8 bytes) long, so we hack it out like so:
a = 0
b = 0
loop {
a = b - a + 0 ** b
b += a
print(b)
}
The 0 ** b ensures that a = 1 on the first iteration. From then on, the logic is the same as in our first pseudocode implementation (since 0 ** (any number greater than 0) = 0). The change from print(a) to print(b) is just to ensure that the numbers are outputted in the correct order.
### Ungolfed
PROGRAM-ID. H.
DATA DIVISION.
LOCAL-STORAGE SECTION.
1 a PIC 9(38). // Declare a variable named a (a = 0)
1 b PIC 9(38). // Declare a variable named b (b = 0)
PROCEDURE DIVISION.
G. // Define a label named G
COMPUTE a=0**b+b -a // a = b - a + 0 ** b
ADD a TO b // b += a
DISPLAY b(38- FUNCTION LOG10(b):) // Print b without trailing zeros
GO G. // Jump to the label named G (4 lines above)
# Vyxal, 2 bytes
ÞF
Try it Online!
Before you go saying that the online link doesn't match the submission here, that's because the extra , is needed to actually make the output appear online. If you use the offline version, then you will see that the above works just fine. Also, the 5 flag makes sure that the online interpreter times out after 5 seconds.
## Explained
ÞF # Push every Fibonacci number
And now for the non-trivial version
## Vyxal5, 6 bytes
⁽+dk≈Ḟ
Try it Online!
Once again, discrepancies between online link and actual version are for the purposes of making it work online.
### Explained
⁽+dk≈Ḟ
⁽+d # lambda x, y: x + y
k≈ # the list [0, 1]
Ḟ # Create an infinite sequence based on the function and the initial list.
Fun fact: the infinite sequence function you see was inspired by the sequence blocks of the golfing language Arn by ZippyMagician.
• Fun fact: I was inspired by Raku when I added sequences Apr 11, 2021 at 2:36
# Quipu, 33 bytes
1&0&\n
[][]/\
^^/\0&
--++??
1&
++
Attempt This Online!
Saved 4 bytes thanks to Jo King.
It prints the Fibonacci sequence separated by newlines.
Equivalent pseudocode:
a = [0, 0, 0] // implicitly
0:
a[0] = a[1] - a[0] + 1
1:
print a[0]
a[1] = a[0] + a[1]
2:
print "\n"
goto 0
• Kaogu. (15chrs)
– null
Apr 15 at 14:39
# Fig, commit df1d8a1, $$\5\log_{256}(97)\approx\$$ 4.125 bytes
G:1'+
New language! Yay! This is a fractional byte language I've been advertising on TNB for a while now. It's pure printable ASCII, and has a 97 char codepage. Although the spec is mostly written by now, this commit only has the bare minimum implemented for this challenge. To run this, download the source and then run in the root directory:
./gradlew run --args="code.txt"
It will print Fibonacci numbers until your computer runs out of RAM. Explanation:
G:1'+ - Takes no input
G - Generate an infinite list using initial terms...
:1 - [1, 1]...
' - And the generating function...
# Trianguish, 152 135 bytes
00000000: 0c05 10d8 0201 40d7 0401 4110 4102 a060
00000010: 2c02 b080 2c02 8050 20e4 0211 0710 e209
00000020: 1110 4028 0d00 6020 2902 10c3 0802 a107
00000030: 02a1 0502 8027 0910 290b 1110 403b 0890
00000040: 204d 03d0 503c 0790 602a 1071 02a0 9027
00000050: 0280 b110 8111 0402 70e2 0501 402a 0202
00000060: 9106 1107 0291 0b11 0902 702b 1040 2a10
00000070: 6110 2102 9050 2802 70b1 1071 1104 1102
00000080: 02a1 0502 802c 05
Try it online!
Trianguish is my newest language, a cellular automaton sort of thing which uses a triangular grid of "ops" (short for "operators"). It features self-modification, a default max int size of 216, and an interpreter which, in my opinion, is the coolest thing I've ever created (taking over forty hours and 2k SLOC so far).
This program consists of two precisely timed loops, each taking exactly 17 11 ticks. The first, in the top right is what actually does the fibonacci part; two S-builders are placed in exactly the right position such that two things occur in exactly the same number of ticks:
1. The left S-builder, x, copies its contents to y
2. The sum of x and y is copied to x
Precise timing is required, as if either of these occurred with an offset from the other, non-fibonacci numbers would appear in brief pulses, just long enough to desync everything. Another way this could have been done is with T-switches allowing only a single tick pulse from one of the S-builders, which would make precise timing unneeded, but this is more elegant and likely smaller.
The second loop, which is also 11 ticks, is pretty simple. It starts off with a 1-tick pulse of 1n, and otherwise is 0n, allowing an n-switch and t-switch to allow the contents of x to be outputted once per cycle. Two S-switches are required to make the clock use an odd number of ticks, but otherwise it's just a loop of the correct number of wires.
This program prints infinitely many fibonacci numbers, though if run with Mod 216 on, it will print them, as you might guess, modulo'd by 216 (so don't do that :p).
### J - 20
First n terms:
(+/@(2&{.),])^:n i.2
# Common Lisp, 48 Chars
(defun f(n)(if(< n 2) n(+(f(decf n))(f(1- n)))))
• Is left-to-right evaluation order guaranteed in CL? If not, your solution won't work. (There is no such guarantee in Scheme, and many implementations are right-to-left.) Apr 5, 2011 at 2:12
• Left-to-right is in the standard so since these are all built-in functions it is reliable. (Macros can of course do stupid things :-) Apr 29, 2011 at 18:00
• This is actually 47; you can get rid of the space between (< n 2) and n. Nov 17, 2015 at 20:54
• And a slight modification is 46: (defun f(n)(if(< n 2)n(+(f(1- n))(f(- n 2))))). Nov 17, 2015 at 20:55
## BrainFuck, 172 characters
>++++++++++>+>+[[+++++[>++++++++<-]>.<++++++[>--------<-]+<<<]>.>>[[-]<[>+<-]>>[<<+>+>-]<[>+<-[>+<-[>+<-[>+<-[>+<-[>+<-[>+<-[>+<-[>+<-[>[-]>+>+<<<-[>+<-]]]]]]]]]]]+>>>]<<<]
Credit goes to Daniel Cristofani
# PowerShell: 42 or 75
Find nth Fibonacci number - 42
A spin-off of Joey's answer, this will take user input and output the nth Fibonacci number. This retains some weaknesses also inherent to Joey's original code:
• Technically off by 1, since it starts the Fibonacci sequence at 1,1 instead of the more proper 0,1.
• Only valid for Fibonacci numbers which will fit into int32, because this is PowerShell's default type for integers.
• Example: Due to the int32 limitation, the highest input that will return a valid report is 46 (1,836,311,903) and this is technically the 47th Fibonacci number since zero was skipped.
Golfed:
($b=1)..(read-host)|%{$a,$b=$b,($a+$b)};$a Un-Golfed & Commented: # Feed integers, from 1 to a user-input number, into a ForEach-Object loop. # Initialize$b while we're at it.
($b=1)..(read-host)|%{ # Using multiple variable assignment... # ...current$b is put into new $a, and... # ...sum of current$b and current $a are put into new$b.
$a,$b=$b,($a+$b) }; # When loop exits, output$a.
$a # Variable cleanup, not included in golfed code. rv a,b List Fibonacci numbers - 75 Another derivative of Joey's answer, but with some improvements: • Zero is included in the output, as it should be according to OEIS. • Goes up to the maximum Fibonacci number that can be handled as uint64 instead of the default int32. (Highest Fibonacci number in uint64 is 12,200,160,415,121,876,738.) • Output stops once the maximum value is reached, instead of looping through 'Infinity' or continuously throwing errors. Golfed: for($a,$b=0,1;$a+$b-le[uint64]::MaxValue){$a;$a,$b=$b,[uint64]($a+$b)}$a;$b Un-Golfed & Commented: # Start Fibonacci loop. for ( # Begin with$a and $b at zero and one.$a,$b=0,1; # Continue so long as the sum fits in uint64.$a+$b-le[uint64]::MaxValue ) { # Output current$a.
$a; # Using multiple variable assignment... # ...current$b becomes new $a, and... # ...sum of current$b and current $a is forced to uint64 and stored in new$b.
$a,$b=$b,[uint64]($a+$b) } # Output$a and $b one more time.$a;$b # Variable cleanup - not included in golfed code. rv a,b • One thing that bugs me a little in PowerShell: Read-Host always reads interactively and won't pick up things you pipe into the script (or process), whereas $input (which is what I tend to use) only picks up piped input (for obvious reasons; that's how it's defined) but cannot be used interactively. Which means that you can write a PowerShell script that either works interactively or one that works with piped input, but not both at the same time (at least not for golfing).
– Joey
Nov 28, 2013 at 20:21
• Yeah, and I personally prefer my scripts to be interactive whether the challenge calls for it or not. Wait... Did you just golf the un-golfed code? And not just any part of it, but particularly the bit that's not at all in the golfed code?
– Iszi
Nov 28, 2013 at 22:57
• I merely optimized it, since Remove-Variable takes a string[]. There is no need to have two calls ;-)
– Joey
Nov 29, 2013 at 6:13
• I meant to say I found it amusing that of all the code to be optimized, you had to go and fix the bit that wasn't even part of the golfed solution. It's like you had an OCD moment or something.
– Iszi
Nov 29, 2013 at 7:25
• Sometimes I do ;-). I don't see anything that makes the golfed code smaller either. For an algorithm this simple there aren't many options and range|% is often the shortest (but also the slowest) way.
– Joey
Nov 29, 2013 at 7:27
## Forth - 38 33 bytes
: f dup . 2dup + 2 pick recurse ;
Generates and prints a Fibonacci series recursively until it runs out of stack space.
Usage:
1 1 f
Or to generate Fn, where n>=1 (66 bytes):
: f dup 3 < if 1 nip else dup 1- recurse swap 2 - recurse + then ;
Example of usage:
9 f .
output:
34
• It does work, but like I said it doesn't terminate itself. It should generate correct output up until 46! at least, and after that it will just keep on going and output "garbage". And since that online compiler doesn't appear to have any way of halting the execution without clearing the console output it gets pretty hard to see the correct output at the beginning. Oct 14, 2015 at 15:23
• So it just runs so fast that all I can see is the zeros? Oct 14, 2015 at 18:39
• Right. If you run it in Win32Forth you can scroll up and get it to stay at the top so that you actually can see the correct output for Fn up to n=46. Oct 14, 2015 at 19:37
• Also, if I'm not mistaken : f over . 2dup + recurse ; is shorter (27 bytes). This way, the first number is printed first, and the numbers are in order on the stack, so we don't need 2 pick. Oct 14, 2015 at 20:19
• Yup, that seems to generate the same sequence as my version. Oct 14, 2015 at 20:53
# Java, 41 bytes
There are a couple other Java answers here, but I'm surprised nobody has posted this simple one:
int f(int n){return n<2?n:f(n-1)+f(n-2);}
For an extra byte you can extend the range up to long.
# TeaScript, 4 bytes
F(x)
F(x) //Find the Fibonacci number at the input
Compile online here (DOES NOT WORK IN CHROME). Enter input in the first input field.
• This entry fails the creating a compiler after a challenge has been posted loophole and therefore isn't valid. I can't accept this answer, sorry. Nov 3, 2015 at 19:14
• With TeaScript 3, you can just do Fß. Jan 29, 2016 at 6:03
# J, 9 bytes
+/@:!&i.-
Gets the nth Fibonacci number by finding the sums of the binomial coefficients C(n-i-1, i) for i from 0 to n-1.
Also, a short way using 12 bytes to generate the first n Fibonacci numbers is
+/@(!|.)\@i.
It uses the same method as above but works by operating on prefixes of the range [0, 1, ..., n-1].
### Usage
f =: +/@:!&i.-
f 10
55
f 17
1597
### Explanation
+/@:!&i.- Input: n
- Negate n
&i. Form the ranges [n-1, n-2, ..., 0] and [0, 1, ..., n-1]
! Find the binomial coefficient between each pair of values
+/@: Sum those binomial coefficients and return
• Whoa. Just whoa. Sep 24, 2016 at 0:38
|
# Mathematician:Levi ben Gershon
(Redirected from Mathematician:Leo Hebraeus)
## Mathematician
French Jewish philosopher, Talmudist, mathematician, physician and astronomer/astrologer.
Notable for publishing an early proof using the principle of mathematical induction.
Anticipated Galileo's error theory.
One of the first astronomers to estimate the distance of the fixed stars to a reasonable degree of accuracy (of the order of $100$ light years).
Refuted Claudius Ptolemy's model of astronomy by direct observation, paving the way for the new model of Nicolaus Copernicus more than $2$ centuries later.
Was involved in a lively debate about Euclid's $5$th postulate, and whether it could be derived from the other $4$.
French
## History
• Born: 1288 in Bagnols now Bagnols sur Cèze, Provence, France
• Died: 20 April 1344 in Avignon, France
## Publications
• 1317 -- 1329: Sefer Milhamot Ha-Shem ("The Wars of the Lord")
• 1321: Maaseh Hoshev ("Work of Calculation", "Art of Calculation" or "Art of the Computer", or, by means of a Hebrew pun: "Clever Work") on mathematics
(often confused with Sefer Hamispar ("The Book of Number"), by Abraham ibn Ezra)
• 1342: A portion of Sefer Milhamot Ha-Shem containing a survey of astronomy translated into Latin
• Two geometry books:
• A commentary and introduction to the first five books of Euclid, but not presented axiomatically
• Science of Geometry of which only a fragment has survived
• A number of non-mathematical works
## Also known as
Some sources give his name as Levi ben Gerson, and others as Levi ben Gershom.
Better known by:
the Greek form of his name: Gersonides
his Latinized name (Magister) Leo Hebraeus
in Hebrew as RaLBaG (by the abbreviation of first letters of Rabbi Levi Ben Gershon with vowels added for ease of pronunciation)
Less commonly he can be seen referred to as Gersoni, Leo de Bagnols, Leo de Balneolis or Leo Judaeus.
It could be complicated identifying someone specifically before the invention of surnames.
|
# Getting warning in Sitecore Logs : [Experience Analytics]: Reducing segment
Getting below warning in the Sitecore logs, Current using Sitecore version 8.2 update 4
10688 01:24:44 INFO [Experience Analytics]: Reducing segment: 'd8ba6f18-d0da-4cbb-b51f-8741c0fe9540' on date: 20-06-17 site 679248886. 8272 01:24:45 WARN All caches have been cleared. This can decrease performance considerably.
Could you please advice is this may be the reason of slowdown the CMS/Sitecore site? and how we can correct this?
we also have had such errors with the reduce agent, it is a bug in Sitecore (Ref.No. 21517).
Unfortunately, the bug is not fixed yet and we don't have ETA - but it's likely to be fixed in a major update due to complexity.
Please contact Sitecore Support for a hotfix, I do have a link to the hotfix but this one is especially build für Sitecore 8.2.2:
Please find the hotfix available at the following URL: https://dl.sitecore.net/hotfix/SC%20Hotfix%20167101-2%20Experience%20Analytics%202.0.2.zip Be aware that the hotfix was built specifically for 8.2 Update 2, and you should not install it on other Sitecore versions or in combination with other hotfixes, unless explicitly instructed by Sitecore Support.
In our project the reduce agent tried to reduce segments on 248 configured sites, the process took more than 4,5 hours until it hungs up and backend performance was very slow. Log entries was like:
20940 08:00:02 WARN All caches have been cleared. This can decrease performance considerably. DEBUG entries have to be enabled in log4net to see the corresponding stack trace.
93336 08:00:02 WARN All caches have been cleared. This can decrease performance considerably.
Best regards
Dirk
|
# Find a palindromic string in C
This program tests whether a string is a palindrome or not. Can there be any improvements or shortcuts that can be used in the program. You can give any type of string as input. The program finds does it work.
#include <stdio.h>
#include <string.h>
/*
A palindrome is a string that is same in both forward and backward reading.
Example:
"racecar"
"a man a plan a canal panama"
You will write a program that will test if a given string is a palingdrome or not.
Your program will ask the user to input a string and if the string is a palindrome program
will just print "Yes, it is a Palindrome", otherwise will print "No, not a Palindrome".
1. Your you need to check in case-insensitive way, that means: Madam or madam both should be
detected as Palindrome.
2. There can be (any number of ) spaces in between the words.
"A man a plan a canal panama"
OR
"A man a pla n a cana l Panama"
both the strings must be detected as Palindrome.
3.There can be punctuations in between the words, for this assignments,
we consider only 4 punctuations, . ? ! and ,
Your program will just need to ignore them (treat them as space).
"Cigar? Toss it in a can. It is so tragic."
Should be detected as palindrome.
*** For this assignment I will not write any instructions or guidance, you are free
to implement it with your own way, you can use the string.h functions
Good luck.
*/
/***********************************************************************
Created by Shaik Mohammed Zeeshan
Date - 19 Aug 2018
Objective - Checks whether a given string is palindrome or not
************************************************************************/
int main()
{
char string[100];
char string1[100];//the string without spaces and special characters will be stored in this
printf("Enter a string: ");
scanf("%[^\n]", string);
int isPalindrome = 1; // assign 0 to this if the string is a NOT palindrome
// write code to test if string is a palindrome
int index;
int index1;//index for the second array
for(index=0,index1=0;string[index] != '\0'; index++) //to eliminate spaces, special characters and capital letters from the string
{
if( (string[index] < 'A' || string[index] > 'Z' ) && (string[index] < 'a' || string[index] > 'z' ) ) // checks if the element is a special
continue; // character or space and skips the iteration
if(string[index] >= 'A' && string[index] <= 'Z' )
string[index] += 32;
string1[index1] = string[index];
index1++;
}
string1[index1] = '\0'; // assigning the last element with null character
int i,stringlength = strlen(string1); // storing length after eliminating unecessary characters
for(i=0;i<=stringlength/2;i++) //starts the loop from first element to the middle element
{
if(string1[i] != string1[stringlength-i-1]) //checks if the elements are true
{
isPalindrome = 0; // checks the first element with the first element from the last and second element with the penultimate element and so on
break;
}
}
// at the end you need to test
if (isPalindrome)
printf("Yes, it is Palindrome!\n");
else
printf("No, not a Palindrome\n");
return 0;
}
The amount of comments you use is more distracting than helpful. Good comments are those that explain what the overall logic of a block of code is, they don’t explain each line of code. Code is supposed to be self-documenting, that is, the code should explain itself. You accomplish this by using good variable names and good function names, which explain their own meaning.
Also, you need to use the functionality in the standard library. For example, to test whether a character is a normal letter or not, use isalpha rather than the four comparisons you’re using (against 'a', 'z', etc.)
• I am still learning C language and this was an assignment. I did not know the isalpha function. Thanks for telling. – Shaik Mohammed Zeeshan Aug 20 '18 at 12:56
there are a LOT of problems in the posted code. Here is one of them:
scanf("%[^\n]", string);
There is no limit on the number of characters that the user can input. So the user can cause a buffer overflow, resulting in undefined behavior and possibly an abort of the code.
Suggest using:
scanf("%99[^\n]", string);
As that limits the total number of characters the user can input to 1 less than the length of the input buffer. It needs to be 1 less than the length of the input buffer because the %[...] and %s input format specifiers always append a NUL byte to the input
in the header file: ctypes.h are 'functions' tolower(), toupper(), isalpha(), etc etc. Strongly suggest you make use of them
• I am still learning C language and this was an assignment. Our instructor told us that we can make use of only string.h. Thanks for telling the function names. – Shaik Mohammed Zeeshan Aug 20 '18 at 12:59
|
Contents of this Calculus 3 episode:
Special types of matrices, Square matrix, Diagonal matrix, Identity matrix, Transpose, Symmetric matrix.
Text of slideshow
SQUARE MATRIX
It is a square-shaped matrix with the same number of rows and columns.
Example:
DIAGONAL MATRIX
The diagonal matrix is a square matrix where all elements outside the main diagonal are zero.
Example:
Therefore, in diagonal matrices, only the main diagonal matters, as all the other elements are zero.
That's why some people only indicate the main diagonal elements. This strange symbol
indicates a diagonal matrix.
IDENTITY MATRIX
The identity matrix (or unit matrix), denoted by I, is a matrix where for any , .
The identity matrix is a diagonal matrix where all elements on the main diagonal are equal to one.
INVERSE MATRIX
The inverse matrix is denoted by , and this is a matrix that does this:
(right inverse) (left inverse)
Later we will see that it isn't that easy to figure out the inverse of a matrix.
This inverse thing is a lot easier with real numbers where:
the inverse of is because
the inverse of is because
TRANSPOSE
The transpose matrix is created by swapping the rows and the columns of the matrix. It is indicated by or
ROW COLUMN COLUMN ROW
Example:
OR
A square matrix whose transpose is equal to itself is called a symmetric matrix.
Here is an example of a symmetric matrix:
None of this sounds too exciting right now, but soon will come the time when we will need them.
Now, let's take on vectors!
# Special types of matrices
02
Let's see this
Calculus 3 episode
|
Mathematical and Physical Journal
for High Schools
Issued by the MATFUND Foundation
Already signed up? New to KöMaL?
# KöMaL Problems in Mathematics, December 2015
Show/hide problems of signs:
## Problems with sign 'K'
Deadline expired on January 11, 2016.
K. 481. What is the sum of the numbers in the $\displaystyle 20\times 20$ multiplication table? (The figure shows the $\displaystyle 5\times 5$ multiplication table.)
(6 pont)
solution, statistics
K. 482. In a bicycle factory, the bicycles produced are tested systematically. The brakes are tested on every fifth bike, the gears are tested on every fourth, and the shifter is tested on every seventh one. They manufacture 435 bicycles a day. How many bicycles are issued from the factory per day without anything tested on them?
(6 pont)
solution, statistics
K. 483. In how many different ways is it possible to write the numbers 1, 2, 3, 4, 5, 6, 7, 8, 9 on the circumference of a circle so that no sum of adjacent numbers is a multiple of 3, 5 or 7?
(6 pont)
solution, statistics
K. 484. Every natural number 1 to $\displaystyle n$ is written on a card. What is the smallest $\displaystyle n$ such that no matter how the cards are divided into two packs, there will always be two cards in one of the packs with two numbers that add up to a perfect square?
(6 pont)
solution, statistics
K. 485. Tom Thumb and the giant arrive at the castle of the dragon. Although the giant is 3.5 metres taller than Tom, he still cannot reach the top of the castle wall when he stands on the ground. So he lifts Tom Thumb on his palm over his head. Tom can just climb the wall, which is 6 metres and 20 centimetres high. The giant has long hands: he can reach 40% of his height above the top of his head, while Tom can only reach 20% of his height above the top of his head. How tall is the giant, and how tall is Tom Thumb?
(6 pont)
solution, statistics
K. 486. How many five-digit positive numbers are there in which the sum and the product of the digits are both even?
(6 pont)
solution, statistics
## Problems with sign 'C'
Deadline expired on January 11, 2016.
C. 1322. Three consecutive terms of an arithmetic progression of positive integers are written down in a row to form a single number. Find the largest seven-digit number obtained in this way.
Quantum, 1998
(5 pont)
solution, statistics
C. 1323. Let $\displaystyle T$ denote the intersection of side $\displaystyle BC$ with the angle bisector drawn from vertex $\displaystyle A$ of a right-angled triangle. Let $\displaystyle F$ denote the midpoint of side $\displaystyle BC$, and let $\displaystyle M$ be the intersection of the perpendicular bisector drawn at $\displaystyle F$ with another side. Given that the quadrilateral $\displaystyle ATFM$ is a kite, determine the angles of the triangle. ($\displaystyle A$ may denote any vertex of the triangle.)
(5 pont)
solution, statistics
C. 1324. Agnes is making gingerbread hearts for Christmas. The pastry cutter has the shape of a 6 cm by 6 cm square with two semicircles attached to two adjacent sides. She always rolls the dough the same thickness, forming a square whose side is a whole number of decimetres. (If any dough remains, she gives it to her sister.) She starts cutting the hearts out of the pastry by placing the corner of the cutter to the corner of the pastry square, carefully aligning the sides. Then she continues by placing the cutter next to the cut-out squares with the same orientation, as close as possible. How many squares can Agnes make if she starts out with a 1 m$\displaystyle {}^2$ pastry, and she always kneads together the pastry remaining after cutting out the hearts?
(5 pont)
solution, statistics
C. 1325. Let $\displaystyle a_n$ denote the closest integer to $\displaystyle \sqrt n$. Determine the sum $\displaystyle \frac 1{a_1}+ \frac 1{a_2}+ \frac 1{a_3}+\ldots + \frac 1{a_{484}}$.
(5 pont)
solution, statistics
C. 1326. The perimeter of a right-angled trapezoidal plot is 400 m. One leg of the right angled trapezium makes an angle of $\displaystyle 45^\circ$ with the base. For what length of the base would the area of the plot be a maximum?
(5 pont)
solution, statistics
C. 1327. With line segments drawn from an interior point, dissect an octagon with the properties shown in the diagram into four parts, such that the parts can be put together to form two congruent regular pentagons.
(5 pont)
solution, statistics
C. 1328. Solve the following equation: $\displaystyle 2^{\sin^2 x}= \frac{\sin x + \cos x}{\sqrt2}$.
(5 pont)
solution, statistics
## Problems with sign 'B'
Deadline expired on January 11, 2016.
B. 4750. Ann claims that a random three-digit number is more likely to contain a digit of 6 than a random five-digit number. Bill says it is the other way around. Who is right?
Matlap, Kolozsvár
(3 pont)
solution, statistics
B. 4751. Prove that $\displaystyle 3^{n}+5^{n}$ is not a perfect square for any positive integer $\displaystyle n$.
Proposed by G. Somlai, Budapest
(4 pont)
solution, statistics
B. 4752. Consider circles $\displaystyle k_1$ with center $\displaystyle O_1$ and radius $\displaystyle r_1$, and $\displaystyle k_2$ with center $\displaystyle O_2$ and radius $\displaystyle r_2$. Line $\displaystyle PA$ is tangent to $\displaystyle k_1$ at $\displaystyle A$ and line $\displaystyle PD$ is tangent to $\displaystyle k_2$ at $\displaystyle D$. Segment $\displaystyle AD$ intersects $\displaystyle k_1$ and $\displaystyle k_2$ at $\displaystyle B$ and $\displaystyle C$, respectively. Find $\displaystyle PA/PD$ in terms of $\displaystyle r_1$ and $\displaystyle r_2$ if $\displaystyle AB=CD$.
M&IQ
(4 pont)
solution, statistics
B. 4753. Prove that $\displaystyle \sqrt{2x \sqrt{(2x+1) \sqrt{(2x+2) \sqrt{2x+3}}}} < \frac{15x+6}{8}$ for all $\displaystyle x>0$.
Proposed by I. Deák, Székelyudvarhely
(5 pont)
solution, statistics
B. 4754. Lines $\displaystyle AD$, $\displaystyle BD$ and $\displaystyle CD$ passing through an interior point $\displaystyle D$ of a triangle $\displaystyle ABC$ intersect the opposite sides at $\displaystyle A_{1}$, $\displaystyle B_{1}$ and $\displaystyle C_{1}$, respectively. The midpoints of the segments $\displaystyle A_1B_1$, $\displaystyle B_1C_1$ and $\displaystyle C_1A_1$ are $\displaystyle C_2$, $\displaystyle A_2$ and $\displaystyle B_2$, respectively. Show that the lines $\displaystyle AA_{2}$, $\displaystyle BB_{2}$ and $\displaystyle CC_{2}$ are concurrent.
Proposed by Sz. Miklós, Herceghalom
(5 pont)
solution, statistics
B. 4755. In a triangle $\displaystyle ABC$, the escribed circles $\displaystyle k_A$ and $\displaystyle k_B$ drawn to sides $\displaystyle CB$ and $\displaystyle CA$ touch the appropriate sides at $\displaystyle D$ and $\displaystyle E$, respectively. Show that line $\displaystyle DE$ cuts out equal chords from the circles $\displaystyle k_A$ and $\displaystyle k_B$.
Proposed by K. Williams, Szeged
(4 pont)
solution, statistics
B. 4756. In the interior of a unit cube, there are some spheres with a total surface area of 2015. Show that
$\displaystyle a)$ there exists a line that intersects at least 500 spheres,
$\displaystyle b)$ there exists a plane that intersects at least 600 spheres.
Hungarian Mathematics Competition of Transylvania
(6 pont)
solution, statistics
B. 4757. Let $\displaystyle A_k$ denote the number that consists of $\displaystyle k$ ones in decimal notation. How many positive integers are there that cannot be obtained as the sum of the digits of any multiple of $\displaystyle A_k$?
Proposed by K. Williams, Szeged
(6 pont)
solution, statistics
B. 4758. What is the minimum number of different lines determined by the sides of a (not necessarily convex) 2015-sided polygon?
Proposed by D. Lenger, Budapest
(6 pont)
solution, statistics
## Problems with sign 'A'
Deadline expired on January 11, 2016.
A. 656. Let $\displaystyle p(x)=a_0+a_1x+\dots+a_nx^n$ be a polynomial with real coefficients such that $\displaystyle p(x)\ge0$ for $\displaystyle x\ge0$. Prove that for every pair of positive numbers $\displaystyle c$ and $\displaystyle d$, $\displaystyle a_0 + a_1(c+d) + a_2(c+d)(c+2d) + \dots + a_n(c+d)(c+2d)\dots(c+nd) \ge0$.
(5 pont)
solution, statistics
A. 657. Let $\displaystyle \{x_n\}$ be the van der Korput sequence, that is, if the binary representation of the positive integer $\displaystyle n$ is $\displaystyle n = \sum_i a_i2^i$ ($\displaystyle a_i\in\{0,1\}$), then $\displaystyle x_n = \sum_i a_i2^{-i-1}$. Let $\displaystyle V$ be the set of points $\displaystyle (n,x_n)$ in the plane where $\displaystyle n$ runs over the positive integers. Let $\displaystyle G$ be the graph with vertex set $\displaystyle V$ that is connecting any two distinct points $\displaystyle p$ and $\displaystyle q$ if and only if there is a rectangle $\displaystyle R$ which lies in a parallel position to the axes and $\displaystyle R\cap V = \{p,q\}$. Prove that the chromatic number of $\displaystyle G$ is finite.
Miklós Schweitzer competition, 2015
(5 pont)
solution, statistics
A. 658. We call a bar of width $\displaystyle w$ on the surface $\displaystyle S^2$ of the unit sphere in $\displaystyle 3$-dimension, centered at the origin a spherical zone which has width $\displaystyle w$ and is symmetric with respect to the origin. Prove that there exists a constant $\displaystyle c>0$ such that for every positive integer $\displaystyle n$ the surface $\displaystyle S^2$ can be covered with $\displaystyle n$ bars of the same width so that every point is contained in no more than $\displaystyle c\sqrt{n}$ bars.
Miklós Schweitzer competition, 2015
(5 pont)
solution, statistics
|
# Definition talk:Supremum of Mapping
We may want to take care of the possibility that $(S,\preceq)$ is an ordered set, $f$ is defined on $S' \subseteq S$, and we seek the supremum of $f$ in $S$. --Dfeuer (talk) 21:09, 1 March 2013 (UTC)
We may want to take care of the possibility that $f$ is defined on $S$, and we seek its supremum on $S' \subseteq S$. --Dfeuer (talk) 21:13, 1 March 2013 (UTC)
|
ISC Accounts 2017 Class-12 Previous Year Question Papers Solved for practice. Step by step Solutions with Questions of Section-A (Part-1 and Part-2) and Section-B. By the practice of Accounts 2017 Class-12 Solved Previous Year Question Paper you can get the idea of solving.
Try Also other year except ISC Accounts 2017 Class-12 Solved Question Paper of Previous Year for more practice. Because only ISC Accounts 2017 Class-12 is not enough for complete preparation of next council exam. Visit official website for detail information about ISC Class-12 Accounts.
## ISC Accounts 2017 Class-12 Previous Year Question Papers Solved
Section-A ,Part-I
Section-A Part-II
Section-B
Maximum Marks: 80
Time allowed: Three hours
• Candidates are allowed additional 15 minutes for only reading the paper. They must NOT start writing during this time.
• Answer Question 1 (Compulsory) from Part I and five questions from Part II, choosing two questions from Section A, two questions from Section B and one question from either Section A or Section B.
• The intended marks for questions or parts of questions are given in brackets [ ].
• Transactions should be recorded in the answer book.
• All calculations should be shown clearly.
• All working, including rough work, should be done on the same page as, and adjacent to the rest of the answer.
### Section – APart – I (12 Marks)
ISC Accounts 2017 Class-12 Previous Year Question Papers Solved
Question 1. [6×2]
Answer briefly each of the following questions :
(i) Name the account which is prepared to find the profit and loss of a joint venture, if:
(a) One co-venture records all transactions.
(b) All co-ventures record their own transactions.
(ii) What will be the treatment of loan given to a partner by the firm at the time of its dissolution ?
(iii) Give the adjusting entry for interest on capital allowed to a partner, when the firm follows the fixed capital method.
(iv) State, with reason, whether securities premium reserve can be used to write off bad debts.
(v) Give any two differences between a Company s Balance Sheet and a Firm’s Balance Sheet.
(vi) State where will the non-cash transactions be recorded at the time of issue of shares, if all cash transactions are entered in the Cash Book.
(i)
(a) Joint Co-venture Account
Personal Accounts of other Co-ventures
(b) Memorandum Joint Venture Account
Joint Venture with…. (other Co-venture) Account
(ii) If there is a loan advanced to a partner, the same should be transferred to his capital account thereby reducing the amount of capital repayable to him.
(iii) Interest on Capital A/c Dr.
To Partner’s Current A/c
(Being the interest on capital allowed to partners)
Profit and Loss Appropriation A/c Dr.
To Interest on Capital A/c
(Being the interest on capital transferred to Profit and Loss Appropriation A/c)
(iv) Securities Premium Reserve cannot be used to write off bad debts.
Securities Premium Reserve can be write off for following purposes :
• in paying up unissued shares to be issued as fully paid bonus shares.
• in writing off preliminary expenses.
• for buy – back of shares Under Section 11 A.
• in writing off the expenses etc.
(v) Balance Sheet is prepared as per Schedule III of the Indian Companies Act, 2013. Whereas Balance Sheet is prepared as per Partnership Act, 1932. The details of items of Balance Sheet are to be given in the Notes to Accounts in Company’s Balance Sheet but there is no need to maintain Notes to Accounts in firm’s Balance Sheet.
(vi) Issue of Shares for Consideration other than Cash is shown in the Balance Sheet under the head ‘Share Capital’ and sub-head ‘Subscribed Capital’.
### Part – II (48 Marks)Answer any four questions.
Previous Year Question Papers Solved ISC Accounts 2017 Class-12
Question 2. [12]
Karan, Ali and Deb are partners in a firm sharing profits and losses in the ratio of 3 : 2 : 1. On 31st March, 2016, their Balance Sheet was as under :
Karan died on 1st July, 2016. An agreement was reached amongst Ali, Deb and Karan’s legal representatives that :
(a) Building be revalued at ₹ 93,500.
(b) Furniture be appreciated by ₹ 10,000.
(c) To write off the Provision for Doubtful Debts since all debtors were good. .
(d) Investments be valued ₹ 38,000.
(e) Goodwill of the firm be valued at ₹ 1,20,000.
(f) Karan’s share of profit to the date of his death, to be calculated on the basis of previous year’s profit which was ₹ 25,000.
(g) Interest on capital to be allowed on Karan’s capital @ 6% per annum.
(h) Amount payable to Karan’s legal representative to be transferred to his legal representative’s loan account.
You are required to :
(i) Pass Journal entries on the date of Karan’s death.
(ii) Prepare the Interim Balance Sheet of the reconstituted firm.
Question 3. [12]
Cargo Ltd. invited applications for the issue of 20,000 Equity shares of ? 10 each at a premium of ? 1 per share, payable as follows :
On Application — ₹ 3
On Allotment — The balance (including premium ₹ 1)
Applications were received for 30,000 shares and pro-rata allotment was made to the remaining applicants after refunding application money to 5,000 share applicants.
Nicholas, who was allotted 3,000 shares, failed to pay the allotment money and his shares were forfeited.
Out of these forfeited shares, 1,000 shares were reissued as fully paid-up @ ? 8 per share.
You are required to :
(i) Pass Journal entries in the books of the company.
(ii) Prepare Calls-in-Arrears Account.
(iii) Prepare Share Forfeiture Account.
Question 4.
(A) Following balances have been extracted from the books of Universe Ltd. as at 31st March, 2016 :
Particulars
Equity Share Capital (Fully paid shares of ₹ 100 each) — ₹ 4,00,000
Unclaimed Dividend — ₹ 10,000
Bank Balance — ₹ 40,000
Security Premium Reserve — ₹ 75,000
Statement of Profit and Loss (Dr.) — ₹ 50,000
Tangible Fixed Assets (at cost) — ₹ 3,50,000
Accumulated Depreciation till date — ₹ 25,000
You are required to prepare as at 31st March, 2016 :
(i) The Balance Sheet of Universe Ltd. as per Schedule in of the Companies Act, 2013.
(ii) Notes of Accounts. . [8]
(B) Chrome Ltd. took over assets of ₹ 6,00,000 and liabilities of ₹ 40,000 of Polymer Ltd. at an . agreed value of ₹ 6,30,000. Chrome Ltd. issued 10% Debentures of ₹ 100 each at a discount of 10% to Polymer Ltd. in full satisfaction of the price. Chrome Ltd. writes off any capital losses incurred during a year, at end of that financial year.
You are required to pass the necessary Journal entries to record the above transactions in the books of Chrome Ltd. [4]
Question 5. [12]
Juliet and Rabani are partners in a firm, sharing profits and losses in the ratio of 3 :1. On 31 st March, 2016, their Balance Sheet was as under:
Mike was taken as a partner for 1/4th share, with effect from 1st April, 2016, subject to the following adjustments :
(a) Plant and Machinery was found to be overvalued by ₹ 16,000. It was to be shown in the books at the correct value.
(b) Provision for Doubtful Debts, was to be reduced by ₹ 2,000.
(c) Creditors included an amount of ₹ 2,000 received as commission from Malini. The necessary adjustment was required to be made.
(d) Goodwill of the firm was valued at ₹ 60,000. Mike was to bring in cash, his share of goodwill along with his capital of ₹ 1,00,000.
(e) Capital Accounts of Juliet and Rabani were to be readjusted in the new profit sharing arrangement on the basis of Mike’s capital, any surplus to be adjusted through current account and any deficiency through cash.
You are required to prepare :
(i) Revaluation Account.
(ii) Partners’ Capital Accounts.
(iii) Balance Sheet of the reconstituted firm.
Question 6.
(A) Raslii and Runa jointly imdertake to complete the construction of an auditorium for Pascal Ltd. They agreed to share profits and losses in the ratio of 3 : 2.
The contract price was ₹ 8,00,000 of which ₹ 5,00,000 was to be payable to them in cash and the balance in fully paid shares of the company.
A joint bank account was opened in which Rashi contributed ₹ 2,00,000 while Runa contributed ₹ 3,00,000.
The following expenses were incurred to complete the contract :
Salaries and Wages — ₹ 1,25,000
Purchase of material from a supplier on credit — ₹ 2,00,000
Material supplied by Rashi — ₹ 1,00,000
Legal fees paid by Runa — ₹ 85,000
The contract price was duly received after the completion of the project and the accounts of the venture were closed after the supplier was paid ₹ 1,98,000 in lull and final settlement.
Runa took over the shares at ₹ 2,80,000.
Rashi took over the remaining material at ₹ 45,000.
You are required to prepare:
(i) Joint Venture Account.
(ii) Joint Bank Account.
(iii) Shares Account.
(B) Joseph and Leena entered into a Joint venture to sell edible oil. It was decided that Joseph would record all the transactions of the venture.
Joseph supplied 3,000 litres of edible oil costing ₹ 4,50,000 to be sold by Leena, incurring carriage and insurance-in-transit amounting to ₹ 30,000.
20 litres of oil was lost in transit due to leakage which was considered to be normal. Leena incurred ₹ 2,760 as clearing charges and ₹ 2,000 as godown rent. She was entitled to a commission of 2% on the sales made by her.
Leena was able to sell 2,000 litres of oil at ₹ 170 per litre.
The unsold stock was taken over by Joseph at the original cost plus proportionate non-recurring expenses.
You are required to :
(i) Calculate the value of stock taken over by Joseph.
(ii) Pass the relevant Journal entries in the books of Joseph for :
(a) The stock taken over by Joseph.
(b) Commission due to Leena. [4]
Question 7.
(A) Mita, Rita and Sandra were partners in a firm, sharing profits and losses in the ratio of 2 : 2 : 1. Mita had personally guaranteed that in any year Sandra’s share of profit, after allowing interest on capital to all the partners @ 5% per annum and charging interest on drawings @ 4% per annum, would not be less than ₹ 10,000.
The capitals of the partners on 1st April, 2015 were :
Mita ₹ 80,000, Rita ₹ 50,000 and Sandra ₹ 30,000.
The net profit for the year ended 31st March, 2016, before allowing or charging any interest amounted to ₹ 40,000.
Mita had withdrawn ₹ 4,000 on 1st April, 2015, while Sandra withdrew ₹ 5,000 during the year.
You are required to prepare the Profit and Loss Appropriation Account for the year 2015-16. [8]
(B) Anita, Asha and Bashir are partners sharing profits and losses in the ratio of 3 : 2 : 1 respectively. From 1st April 2016, they decided to change their profit sharing ratio to 2 : 1: 3. Their partnership deed provides that in the event of any change in the profit changing ratio, the goodwill of the firm should be valued at two years’ purchase of the average super profits for the past three years.
The actual profits and losses for the past three years were :
2015-16 Profit ₹ 40,000
2014-15 Profit ₹ 30,000
2013-14 Loss ₹ 10,000
The average capital employed in the business was ₹ 1,10,000; the rate of interest expected from capital invested was 10%.
You are required to:
(i) Calculate the value of goodwill at the time of change in profit sharing ratio. (Show the workings clearly with the formulae.)
(ii) Pass the Journal entry to record the change. [4]
Question 8.
(A) Roshan, Mahesh, Gopi and Jai are partners sharing profits and losses in the ratio of 3 : 3 : 2 : 2. The balances of capital accounts on 1st April, 2015 were : Roshan ₹ 8,00,000, Mahesh ₹ 5,00,000, Gopi ₹ 6,00,000 and Jai ₹ 6,00,000.
After the accounts for the year ended 31st March, 2016 were prepared, it was discovered that interest on capital @ 10% per annum as provided in the partnership deed had not been credited to the partners’ capital accounts before the distribution of profits.
You are required to rectify the error by passing a single adjusting Journal entry. [4]
(B) Mehta and Menon were partners in a firm, sharing profits and losses in the ratio of 7 : 3.
They decided to dissolve their partnership firm on 31st March, 2016. On that date, their books showed the following ledger account balances :
Sundry Creditors ₹ 27,000
Profit and Loss A/c (Dr.) ₹ 8,000
Cash in Hand ₹ 6,000
Bank Loan ₹20,000
Bills Payable ₹ 5,000
Sundry Assets ₹ 1,98,000
Capital A/c
Mehta ₹ 1,12,000
Menon ₹ 48,000
(a) Bills Payable falling due on 31st May, 2016 were retired on the date of dissolution of the firm, at a rebate of 6% per annum.
(b) The bankers accepted the furniture (included in sundry assets) having a book value of ? 18,000 in full settlement of the loan given by them.
(c) Remaining assets were sold for ₹ 1,50,000.
(d) Liability on account of outstanding salary not recorded in the books, amounting to ₹ 15,000 was met.
(e) Menon agreed to take over the responsibility of completing the dissolution work and to bear all expenses of realization at an agreed remuneration of ₹ 2,000. The actual realization expenses were ₹ 1,500 which were paid by the firm on behalf of Menon.
You are required to prepare :
(i) Realization Account.
(ii) Partners’ Capital Accounts. [8]
Section – B
(20 Marks)
### Solved Previous Year Question Papers of Accounts 2017 for ISC Class-12
Question 9.
From the information given below, calculate (up to two decimal places):
(i) Operating Ratio.
(ii) Quick Ratio.
(iii) Debt to Equity Ratio.
(iv) Proprietary Ratio.
(v) Working Capital Turnover Ratio.
Particulars
Net revenue from operations — ₹ 12,00,000
Cost of revenue from operation — ₹ 9,00,000
Operating expenses — ₹ 15,000
Inventory — ₹ 20,000
Other Current Assets — ₹ 2,00,000
Current Liabilities — ₹ 75,000
Paid up Share Capital — ₹ 4,00,000
Statement of Profit and Loss (Dr.) — ₹ 47,500
Total Debt — ₹ 2,50,000
Question 10. [10]
From the following information of Purity Ltd. calculate:
(i) Cash from Operating Activities
(ii) Cash from Financing Activities
During the year 2015-16:
(a) A piece of furniture costing ₹ 30.000 (accumulated depreciation ₹ 5,000) was sold for ₹ 25,000.
(b) Tax of ₹ 9,000 w as paid.
(c) Interim Dividend of ₹ 4,000 was paid.
(d) The company paid ₹ 3,000 as interest on debentures.
Question 11.
(A) What is meant by the term Cash Equivalents as per Accounting Standard 5 ? [2]
(B) The Current Ratio of a company is 2 : 1. State whether the Current Ratio will improve, decline or will not change in the following cases : [2]
(i) Bill Receivable of ₹ 2,000 endorsed to a creditor is dishonoured.
(ii) ₹ 8,000 cash collected from Debtors of ₹ 8,500 in lull and final settlement.
(A) Cash and Cash Equivalents are short-term, highly liquid investments which is easily converted into cash. It includes treasury bills, commercial papers, money market funds etc.
(B)
(i) It will increase the amount of debtors but reduces the amount of bills receivable. So, current assets will remain same. Hence, there will be no change in the current ratio.
(ii) Cash collected from debtors will increase the bank/cash balance but decrease the amount of debtors. Hence, total current assets remain unchanged. Therefore, there will be no . change in current ratio.
-: End of ISC Accounts 2017 Class-12 Solved Paper :-
Thanks
$${}$$
|
## How did our solar system form?
In how did our solar system form? you will learn all about chemical elements, the process of accretion and will explore key features of our solar system.
To get started, read carefully through the How did our solar system form learning goals below. Make sure you tick each of the check boxes to show that you have read all your How did our solar system form? learning goals.
As you read through the learning goals you may come across some words that you haven’t heard before. Please don’t worry. By the time you finish How did our solar system form? you will have become very familiar with them!
You will come back to these learning goals at the end of How did our solar system form? to see if you have confidently achieved them.
## Activity 2.2.1 Objectives
### Learning Goals
• Identify different chemical elements
• Understand the importance of chemical elements
• Understand the role of accretion in the formation of our solar system
### Introduction
Last activity you learned all about the life cycle of stars and how, when they die or go supernova, they produce the buildings blocks of the Universe - chemical elements. In this lesson you’re going to learn more about those amazing chemical elements and how they were brought together with other matter by the process of accretion to create our very own solar system.
## Mission video 10: Matter and the process of accretion
While you watch Mission video 10: Matter and the process of accretion look out for the answers to the following questions:
1. 1. What is a chemical element?
2. 2. Why are chemical elements considered the building blocks of the Universe?
3. 3. What is a chemical compound?
4. 4. What is the process of accretion?
5. 5. Which features of the solar system were created by accretion?
Your teacher will instruct you whether you will answer the questions: as part of a class discussion; as a group/paired discussion; or independently by writing your answers in your Big History School Junior journal (if you have been provided with one).
## Activity 2.2.1 Review
### Conclusion
So where do 'chemical elements' appear on our History of the Universe Timeline?
Either your teacher will help you identify where they appear on the classroom display or you will refer back to the Timeline: history of the Universe worksheet you completed when you did the “3 big mission phase questions” activity. You will find a copy of Timeline: history of the Universe example in Helpful Resources.
Now that you’ve learned a little about chemical elements and how everything in the Universe is made of them - including Earth and us - you will play a simple game in the next activity where you will use the periodic table to identify different chemical elements.
Timeline: history of the Universe example
## Course Glossary
### accretion
The gradual process of matter being pulled together by gravity to make larger and larger clumps of matter.
A special skill or physical feature which helps a species to survive and thrive in its environment. For example, a chameleon changing colour to camouflage itself.
### aerial view
A view of something from the sky looking down.
### agriculture
Also referred to as farming, agriculture is the practice of growing crops and raising animals. It is an innovation which has allowed human societies to expand and thrive.
### AI
Artificial Intelligence (AI) is a type of technology which can perceive things, interpret them and make decisions in a similar way to humans.
### amphibian
Animals that evolved from fish to have gills so that they can live in water and also live and breathe on land.
### anthropologist
A scientist who studies humans and human behaviour.
### asteroids
Rocky bodies which are too small to be called planets.
### astronomer
A scientist who studies the Universe and everything in it.
### atmosphere
A thin layer of gases, otherwise known as air, that surrounds Earth and other planets.
### atoms
Tiny particles which make up everything in the Universe.
### authority
Someone who knows a lot about a subject and whose views are respected.
### battery storage
A large battery that stores electrical energy which can then be used when other energy sources are not available.
### Big Bang theory
Theory about how the Universe began 13.8 billion years ago. All matter, time, space and energy came from the Big Bang.
### Big History
The history of the entire Universe beginning 13.8 billion years ago.
### biochemist
A scientist who studies the chemistry of living things.
### biologist
A scientist who studies living things.
### black hole
An area in space where gravity is so strong that nothing can escape from it – not even light.
### brainstorming
A creative strategy for thinking about and sharing ideas to solve a challenge or task.
### CBR
Cosmic Background Radiation (CBR) is the radiation left over from the initial energy of the Big Bang. It can be seen through powerful space telescopes.
### chemical compounds
Chemical elements which have combined with different chemical elements. For example, hydrogen can combine with oxygen to create the chemical compound water (H2O).
### chemical elements
Pure substances which are made from a single type of atom. For example, Helium.
### chemist
A scientist who studies the substances that make up all the matter in the Universe.
### claim
Information which is presented as fact – not an opinion.
### cognitive
To do with mental activity such as thinking, using logic or remembering.
### collective learning
The human ability to store and share and build on information from generation to generation.
### comets
Balls of frozen gases, rock and dust which orbit the Sun.
### community
A group of people who live together. They help each other and work together to solve problems.
### compare
To look at what two or more things have in common with each other.
### continental drift theory
A theory which states that the Earth’s continents were once joined together in one supercontinent, then broke up and slowly drifted apart.
### contrast
To look at how two or more things are different to each other.
### convergent boundary
Where two tectonic plates move towards each other.
### cosmologist
A scientist who studies the structure and history of the Universe.
### creative thinking
Thinking of new ways to solve problems, generate new explanations and/or create something original.
### Critical thinking
Thinking which doesn’t rely on simply accepting what someone has said. It involves questioning, using logic and seeking information from experts before drawing a conclusion.
### cross section
A view of something as if it has been sliced through with a knife.
### digital technology
A term which covers electronic technologies such as computers, tablets and mobile phones.
### disciplines
Different areas of knowledge, for example, natural sciences.
### divergent boundary
Where two tectonic plates slide apart from each other.
### Earth’s core
At its centre, Earth contains a solid inner core and a liquid outer core made of iron and nickel.
### Earth’s crust
The layer that floats on top of the mantle and is made of lighter weight rocks and minerals.
### electrical technology
Technologies which use electricity as their main power source, for example, light bulbs, electric motors and television.
### energy sources
A resource which can be used to provide power. For example, fossil fuels like coal and oil; renewable resources like solar and wind or uranium for nuclear power.
### engineer
An expert who designs and builds machines and structures.
### evidence
Information which may support or disprove a claim.
### evolution
The theory of evolution explains how all the species alive today generated from the first simple life forms on Earth.
### exoplanets
Planets which orbit stars outside of our solar system.
### expert
A person with a special skill or knowledge in a particular area.
### flyby
A path followed by a spacecraft which has been sent close enough to a planet to record scientific data.
### fossil fuels
A carbon- based material such as coal, oil, or natural gas that can be used as an energy source. Fossil fuels were originally formed when the remains of living organisms were buried and broken down by intense heat and pressure over millions of years.
### gas giants
The four large outermost planets – Neptune, Uranus, Saturn and Jupiter – which are mostly made of lighter chemical elements like Hydrogen and Helium.
### geologist
A scientist who analyses rocks, minerals and landforms.
### Goldilocks conditions
The ‘just right’ conditions for life to exist. For example, Earth has the right temperature range, a protective atmosphere and liquid water.
### gravity
The energy force which tries to pull two objects toward each other. The bigger an object is, the stronger its gravitational pull.
### Homo sapiens
Modern humans who first appeared 300,000 years ago. We are homo sapiens.
### hunters and gatherers
Human societies which move from place to place to hunt meat and gather fruit and vegetables to survive.
### industrial technology
Machines which operate on a large scale by using energy sources such as water, steam power, oil and coal.
### innovation
Using existing knowledge to come up with new technologies or new ways of doing things.
### intelligent life
Beings from other planets who are able to think, learn and understand. Scientists continue to search for intelligent life out in the Universe.
### intuition
A ‘gut feeling’ that a claim may be true or false.
### Jovian planets
The term Jovian planets refers to the large gassy planets furthest from the Sun - Neptune, Uranus, Saturn and Jupiter. They are also known as gas giants.
### Karman line
An imaginary line 100 kms above the Earth’s crust where it has been internationally agreed the Earth’s atmosphere ends and space begins.
### KWHLAQ chart
A visible framework which uses a series of step-by-step questions to provide guidance through the creative thinking process.
### lander
A spacecraft which has been designed to make a soft landing on a planet or moon etc.
### logic
Carefully thinking about a claim to decide whether it makes sense.
### mantle
The layer that surrounds the Earth’s core and is made of minerals and rocks which slowly flow in a sludge of melted iron.
### matter
Everything around us that has weight and takes up space. All matter is made up of atoms.
### meteoroids
Otherwise known as shooting stars, meteoroids are small space rocks which burn up as they enter Earth’s atmosphere.
### module
A self-contained unit which can be joined together with other units to build something more complex.
### multi-planetary species
A species which lives on more than one planet. Humans could become the first known multi-planetary species by establishing a human habitat on Mars.
### multicellular organisms
A complex organism which is made up of more than one cell. For example, animals and plants.
### natural selection
The process by which individuals in a species who have more successful adaptations have more children, therefore passing their successful adaptations on to future generations.
### nuclear fusion
The process of hydrogen atoms being crushed together in a star’s hot centre, releasing heat and energy for billions of years.
### orbiter
A spacecraft designed to orbit a planet and collect scientific data over a long period of time.
### overpopulation
When a population grows too big for the available resources, for example, food. Humans have, in the past, solved potential problems through innovations such as agriculture.
### ozone layer
An invisible layer in Earth’s upper atmosphere which helps to protect us from the Sun’s harmful ultra-violet rays.
### periodic table
A diagram of all the chemical elements in the Universe. It was created by a Russian chemist named Dmitri Mendeleev.
### quasars
Quasi Stellar Objects (Quasars) are believed to be the brightest and most distant objects in the Universe.
The transfer of energy (heat, sound or light) through waves. It can come from cosmic rays or from the Earth. Too much exposure to radiation is harmful to humans.
### redshift
When a star or galaxy moves away, its light waves are stretched out and it has a red glow. This is called redshift and provides evidence that the Universe is expanding.
### robotics
A type of technology which allows machines to be programmed to move and complete set tasks.
### rocky planets
The four small inner planets – Mercury, Venus, Earth and Mars – which are mostly made of heavier chemical elements like iron.
### rover
A moving robot which is sent to the surface of another planet to explore, collect scientific data and samples.
### self-sustaining
Being able to exist for a long time without outside help by using resources responsibly.
### single-celled organisms
A simple organism which is made up of only one cell. For example, simple bacteria.
### singularity
The extremely small point which contained the ingredients for everything in the Universe. Everything was crushed together in this singularity at the moment of the Big Bang.
### sol
The name of a solar day on Mars, which is 24.65 hours.
### star
A massive sphere of very hot gas which makes its own light and energy through nuclear fusion.
### supernova
The spectacular explosion which occurs when a massive star dies. It blows chemical elements out into the Universe.
### survive
To be able to continue to live. For example, having enough food to avoid dying of starvation.
### technology
New tools or methods, developed through the use of scientific knowledge, which can be used to solve problems.
### tectonic plates
The large solid-rock moving pieces which make up the Earth’s crust.
### thrive
To be able to grow, be successful and become stronger. For example, humans thrive when they are part of a connected community.
### timeline
A graphic which includes a list of events placed in the order that they happened.
### transform boundary
Where two tectonic plates meet and try to move past each other.
### uranium
A chemical element which is found in the Earth’s crust and is used as an energy source in nuclear power plants.
### venn diagram
A visual graphic which can be used to compare and contrast two different things.
### white dwarf
When a non-massive star runs out of fuel for nuclear fusion it collapses into itself. The leftover core is a compact star called a white dwarf.
### x-ray telescope
A type of telescope which works by receiving x-ray signals. It is mainly used to observe space objects and events such as the Sun, stars and supernovae.
### Yucatan Peninsula
Location of the Chicxulub Crater where a giant meteor landed 66 million years ago. Scientists think this meteor strike led to the extinction of the dinosaurs.
### zinc
One of the most common chemical elements in the Earth’s crust.
## Activity 2.2.2 Objectives
### Learning Goals
• Identify different chemical elements
• Understand the importance of chemical elements
### Introduction
You may have heard the phrase ‘chemical elements’ many times before but thought that it’s just some ‘science-y’ concept that isn’t really relevant to you or your life. If you do think that, prepare to be surprised!
In this activity you will be using the periodic table of elements to play a fun trivia-style game where you identify different chemical elements. Some of the chemical element names will sound very familiar to you...
## Game: chemical elements 'Who Am I?'
The Wlonk Illustrated Periodic Table of the Elements uses pictures and words to give you information about each of the chemical elements.
Your teacher will either display the periodic table; provide you with a printed copy; or instruct you to access it on the internet. You will find a link in Helpful Resources.
Take a moment to look closely at the periodic table and see how many elements you immediately recognise.
Your teacher will let you know whether you will be playing the game of Chemical Elements ‘Who am I?’ on your own; as a pair or in a group.
Use the Wlonk Periodic Table and the Game: chemical elements ‘Who am I?’ worksheet to answer the following:
1. 1. I am the first element on the periodic table.
2. 2. My atomic number is 8 and I am an important element in the air we breathe.
3. 3. I am used to build airplanes.
4. 4. I am used in thermometers to measure temperature.
5. 5. I am used to fill up balloons to make them float.
6. 6. My atomic number is 19 and I am found in fruit and vegetables.
7. 7. I am used to keep swimming pools free of harmful bacteria.
The individual or team that answers the most questions correctly wins the game.
http://elements.wlonk.com/ElementsTable.htm
## Activity 2.2.2 Review
### Conclusion
Now that you have had a closer look at the periodic table you will have become familiar with the names of individual chemical elements. To learn more about chemical elements and how they combine to create everyday chemical compounds, view the fun animated music video, ‘Meet the Elements,’ by They Might be Giants. You will find a link in Helpful Resources.
## Course Glossary
### accretion
The gradual process of matter being pulled together by gravity to make larger and larger clumps of matter.
A special skill or physical feature which helps a species to survive and thrive in its environment. For example, a chameleon changing colour to camouflage itself.
### aerial view
A view of something from the sky looking down.
### agriculture
Also referred to as farming, agriculture is the practice of growing crops and raising animals. It is an innovation which has allowed human societies to expand and thrive.
### AI
Artificial Intelligence (AI) is a type of technology which can perceive things, interpret them and make decisions in a similar way to humans.
### amphibian
Animals that evolved from fish to have gills so that they can live in water and also live and breathe on land.
### anthropologist
A scientist who studies humans and human behaviour.
### asteroids
Rocky bodies which are too small to be called planets.
### astronomer
A scientist who studies the Universe and everything in it.
### atmosphere
A thin layer of gases, otherwise known as air, that surrounds Earth and other planets.
### atoms
Tiny particles which make up everything in the Universe.
### authority
Someone who knows a lot about a subject and whose views are respected.
### battery storage
A large battery that stores electrical energy which can then be used when other energy sources are not available.
### Big Bang theory
Theory about how the Universe began 13.8 billion years ago. All matter, time, space and energy came from the Big Bang.
### Big History
The history of the entire Universe beginning 13.8 billion years ago.
### biochemist
A scientist who studies the chemistry of living things.
### biologist
A scientist who studies living things.
### black hole
An area in space where gravity is so strong that nothing can escape from it – not even light.
### brainstorming
A creative strategy for thinking about and sharing ideas to solve a challenge or task.
### CBR
Cosmic Background Radiation (CBR) is the radiation left over from the initial energy of the Big Bang. It can be seen through powerful space telescopes.
### chemical compounds
Chemical elements which have combined with different chemical elements. For example, hydrogen can combine with oxygen to create the chemical compound water (H2O).
### chemical elements
Pure substances which are made from a single type of atom. For example, Helium.
### chemist
A scientist who studies the substances that make up all the matter in the Universe.
### claim
Information which is presented as fact – not an opinion.
### cognitive
To do with mental activity such as thinking, using logic or remembering.
### collective learning
The human ability to store and share and build on information from generation to generation.
### comets
Balls of frozen gases, rock and dust which orbit the Sun.
### community
A group of people who live together. They help each other and work together to solve problems.
### compare
To look at what two or more things have in common with each other.
### continental drift theory
A theory which states that the Earth’s continents were once joined together in one supercontinent, then broke up and slowly drifted apart.
### contrast
To look at how two or more things are different to each other.
### convergent boundary
Where two tectonic plates move towards each other.
### cosmologist
A scientist who studies the structure and history of the Universe.
### creative thinking
Thinking of new ways to solve problems, generate new explanations and/or create something original.
### Critical thinking
Thinking which doesn’t rely on simply accepting what someone has said. It involves questioning, using logic and seeking information from experts before drawing a conclusion.
### cross section
A view of something as if it has been sliced through with a knife.
### digital technology
A term which covers electronic technologies such as computers, tablets and mobile phones.
### disciplines
Different areas of knowledge, for example, natural sciences.
### divergent boundary
Where two tectonic plates slide apart from each other.
### Earth’s core
At its centre, Earth contains a solid inner core and a liquid outer core made of iron and nickel.
### Earth’s crust
The layer that floats on top of the mantle and is made of lighter weight rocks and minerals.
### electrical technology
Technologies which use electricity as their main power source, for example, light bulbs, electric motors and television.
### energy sources
A resource which can be used to provide power. For example, fossil fuels like coal and oil; renewable resources like solar and wind or uranium for nuclear power.
### engineer
An expert who designs and builds machines and structures.
### evidence
Information which may support or disprove a claim.
### evolution
The theory of evolution explains how all the species alive today generated from the first simple life forms on Earth.
### exoplanets
Planets which orbit stars outside of our solar system.
### expert
A person with a special skill or knowledge in a particular area.
### flyby
A path followed by a spacecraft which has been sent close enough to a planet to record scientific data.
### fossil fuels
A carbon- based material such as coal, oil, or natural gas that can be used as an energy source. Fossil fuels were originally formed when the remains of living organisms were buried and broken down by intense heat and pressure over millions of years.
### gas giants
The four large outermost planets – Neptune, Uranus, Saturn and Jupiter – which are mostly made of lighter chemical elements like Hydrogen and Helium.
### geologist
A scientist who analyses rocks, minerals and landforms.
### Goldilocks conditions
The ‘just right’ conditions for life to exist. For example, Earth has the right temperature range, a protective atmosphere and liquid water.
### gravity
The energy force which tries to pull two objects toward each other. The bigger an object is, the stronger its gravitational pull.
### Homo sapiens
Modern humans who first appeared 300,000 years ago. We are homo sapiens.
### hunters and gatherers
Human societies which move from place to place to hunt meat and gather fruit and vegetables to survive.
### industrial technology
Machines which operate on a large scale by using energy sources such as water, steam power, oil and coal.
### innovation
Using existing knowledge to come up with new technologies or new ways of doing things.
### intelligent life
Beings from other planets who are able to think, learn and understand. Scientists continue to search for intelligent life out in the Universe.
### intuition
A ‘gut feeling’ that a claim may be true or false.
### Jovian planets
The term Jovian planets refers to the large gassy planets furthest from the Sun - Neptune, Uranus, Saturn and Jupiter. They are also known as gas giants.
### Karman line
An imaginary line 100 kms above the Earth’s crust where it has been internationally agreed the Earth’s atmosphere ends and space begins.
### KWHLAQ chart
A visible framework which uses a series of step-by-step questions to provide guidance through the creative thinking process.
### lander
A spacecraft which has been designed to make a soft landing on a planet or moon etc.
### logic
Carefully thinking about a claim to decide whether it makes sense.
### mantle
The layer that surrounds the Earth’s core and is made of minerals and rocks which slowly flow in a sludge of melted iron.
### matter
Everything around us that has weight and takes up space. All matter is made up of atoms.
### meteoroids
Otherwise known as shooting stars, meteoroids are small space rocks which burn up as they enter Earth’s atmosphere.
### module
A self-contained unit which can be joined together with other units to build something more complex.
### multi-planetary species
A species which lives on more than one planet. Humans could become the first known multi-planetary species by establishing a human habitat on Mars.
### multicellular organisms
A complex organism which is made up of more than one cell. For example, animals and plants.
### natural selection
The process by which individuals in a species who have more successful adaptations have more children, therefore passing their successful adaptations on to future generations.
### nuclear fusion
The process of hydrogen atoms being crushed together in a star’s hot centre, releasing heat and energy for billions of years.
### orbiter
A spacecraft designed to orbit a planet and collect scientific data over a long period of time.
### overpopulation
When a population grows too big for the available resources, for example, food. Humans have, in the past, solved potential problems through innovations such as agriculture.
### ozone layer
An invisible layer in Earth’s upper atmosphere which helps to protect us from the Sun’s harmful ultra-violet rays.
### periodic table
A diagram of all the chemical elements in the Universe. It was created by a Russian chemist named Dmitri Mendeleev.
### quasars
Quasi Stellar Objects (Quasars) are believed to be the brightest and most distant objects in the Universe.
The transfer of energy (heat, sound or light) through waves. It can come from cosmic rays or from the Earth. Too much exposure to radiation is harmful to humans.
### redshift
When a star or galaxy moves away, its light waves are stretched out and it has a red glow. This is called redshift and provides evidence that the Universe is expanding.
### robotics
A type of technology which allows machines to be programmed to move and complete set tasks.
### rocky planets
The four small inner planets – Mercury, Venus, Earth and Mars – which are mostly made of heavier chemical elements like iron.
### rover
A moving robot which is sent to the surface of another planet to explore, collect scientific data and samples.
### self-sustaining
Being able to exist for a long time without outside help by using resources responsibly.
### single-celled organisms
A simple organism which is made up of only one cell. For example, simple bacteria.
### singularity
The extremely small point which contained the ingredients for everything in the Universe. Everything was crushed together in this singularity at the moment of the Big Bang.
### sol
The name of a solar day on Mars, which is 24.65 hours.
### star
A massive sphere of very hot gas which makes its own light and energy through nuclear fusion.
### supernova
The spectacular explosion which occurs when a massive star dies. It blows chemical elements out into the Universe.
### survive
To be able to continue to live. For example, having enough food to avoid dying of starvation.
### technology
New tools or methods, developed through the use of scientific knowledge, which can be used to solve problems.
### tectonic plates
The large solid-rock moving pieces which make up the Earth’s crust.
### thrive
To be able to grow, be successful and become stronger. For example, humans thrive when they are part of a connected community.
### timeline
A graphic which includes a list of events placed in the order that they happened.
### transform boundary
Where two tectonic plates meet and try to move past each other.
### uranium
A chemical element which is found in the Earth’s crust and is used as an energy source in nuclear power plants.
### venn diagram
A visual graphic which can be used to compare and contrast two different things.
### white dwarf
When a non-massive star runs out of fuel for nuclear fusion it collapses into itself. The leftover core is a compact star called a white dwarf.
### x-ray telescope
A type of telescope which works by receiving x-ray signals. It is mainly used to observe space objects and events such as the Sun, stars and supernovae.
### Yucatan Peninsula
Location of the Chicxulub Crater where a giant meteor landed 66 million years ago. Scientists think this meteor strike led to the extinction of the dinosaurs.
### zinc
One of the most common chemical elements in the Earth’s crust.
## Activity 2.2.3 Objectives
### Learning Goals
• Understand the importance of chemical elements
• Understand the role of accretion in the formation of our solar system
### Introduction
Now that you are more familiar with chemical elements, you will conduct a simple demonstration that shows how the process of accretion brought chemical elements together into larger clumps of matter to become everything in our Universe, including the planets in our solar system.
In this activity you are going to use a transparent cup, stirrer, water and pepper.
## Demo: accretion instruction sheet
Do you remember, from watching Mission video 10: Matter and the process of accretion, what accretion is?
It’s the gradual process of matter being pulled together by gravity to make larger and larger clumps of matter.
Can you think when you may have seen something like the process of accretion in an everyday situation? For example, when making a chocolate drink you may have found that when you stir the powdered chocolate into the milk the chocolate granules clumped together in the middle?
You are now going to create a simple demonstration of how accretion works. Your teacher will instruct you whether you will be doing this individually or in pairs or a group.
Follow the instructions on the Demo: accretion instruction sheet:
Step 1. Fill your cup halfway with water.
Step 2. Grind some pepper into your cup.
Step 3. Before the next step, Predict what you think will happen to the pepper when you rotate the water in the cup with a stirrer in a circular motion. Write your prediction in the Predict box on the worksheet.
Some questions to think about: Have you ever seen this in real life before? Think about times when you have sand in a bucket at the beach and you swirl the water around, what happens to the sand?
Step 4. Explain why you made this prediction in the Explain box on the worksheet.
Step 5. Carefully rotate the water in your cup with a stirrer in a circular motion.
Step 6. Observe what happens to the pepper granules after you have rotated the water in the cup for a while. Record your observations using labelled diagrams in the Observe box on the worksheet. Complete a “before rotation” diagram of what the pepper looked like beforehand and an “after rotation” diagram of what the pepper looked like afterwards. Write at least one sentence to explain your observation.
Step 7. To finalise the process, Explain what you observed in this demonstration and how it is similar to the process of accretion in the final Explain box on the worksheet.
## Activity 2.2.3 Review
### Conclusion
If you are part of a class, your teacher will ask you and your classmates to share your responses to the ‘Explain’ questions in the final box on the worksheet:
1. 1. What did the pepper represent?
2. 2. What did the rotating motion represent?
3. 3. What is the process of accretion?
## Course Glossary
### accretion
The gradual process of matter being pulled together by gravity to make larger and larger clumps of matter.
A special skill or physical feature which helps a species to survive and thrive in its environment. For example, a chameleon changing colour to camouflage itself.
### aerial view
A view of something from the sky looking down.
### agriculture
Also referred to as farming, agriculture is the practice of growing crops and raising animals. It is an innovation which has allowed human societies to expand and thrive.
### AI
Artificial Intelligence (AI) is a type of technology which can perceive things, interpret them and make decisions in a similar way to humans.
### amphibian
Animals that evolved from fish to have gills so that they can live in water and also live and breathe on land.
### anthropologist
A scientist who studies humans and human behaviour.
### asteroids
Rocky bodies which are too small to be called planets.
### astronomer
A scientist who studies the Universe and everything in it.
### atmosphere
A thin layer of gases, otherwise known as air, that surrounds Earth and other planets.
### atoms
Tiny particles which make up everything in the Universe.
### authority
Someone who knows a lot about a subject and whose views are respected.
### battery storage
A large battery that stores electrical energy which can then be used when other energy sources are not available.
### Big Bang theory
Theory about how the Universe began 13.8 billion years ago. All matter, time, space and energy came from the Big Bang.
### Big History
The history of the entire Universe beginning 13.8 billion years ago.
### biochemist
A scientist who studies the chemistry of living things.
### biologist
A scientist who studies living things.
### black hole
An area in space where gravity is so strong that nothing can escape from it – not even light.
### brainstorming
A creative strategy for thinking about and sharing ideas to solve a challenge or task.
### CBR
Cosmic Background Radiation (CBR) is the radiation left over from the initial energy of the Big Bang. It can be seen through powerful space telescopes.
### chemical compounds
Chemical elements which have combined with different chemical elements. For example, hydrogen can combine with oxygen to create the chemical compound water (H2O).
### chemical elements
Pure substances which are made from a single type of atom. For example, Helium.
### chemist
A scientist who studies the substances that make up all the matter in the Universe.
### claim
Information which is presented as fact – not an opinion.
### cognitive
To do with mental activity such as thinking, using logic or remembering.
### collective learning
The human ability to store and share and build on information from generation to generation.
### comets
Balls of frozen gases, rock and dust which orbit the Sun.
### community
A group of people who live together. They help each other and work together to solve problems.
### compare
To look at what two or more things have in common with each other.
### continental drift theory
A theory which states that the Earth’s continents were once joined together in one supercontinent, then broke up and slowly drifted apart.
### contrast
To look at how two or more things are different to each other.
### convergent boundary
Where two tectonic plates move towards each other.
### cosmologist
A scientist who studies the structure and history of the Universe.
### creative thinking
Thinking of new ways to solve problems, generate new explanations and/or create something original.
### Critical thinking
Thinking which doesn’t rely on simply accepting what someone has said. It involves questioning, using logic and seeking information from experts before drawing a conclusion.
### cross section
A view of something as if it has been sliced through with a knife.
### digital technology
A term which covers electronic technologies such as computers, tablets and mobile phones.
### disciplines
Different areas of knowledge, for example, natural sciences.
### divergent boundary
Where two tectonic plates slide apart from each other.
### Earth’s core
At its centre, Earth contains a solid inner core and a liquid outer core made of iron and nickel.
### Earth’s crust
The layer that floats on top of the mantle and is made of lighter weight rocks and minerals.
### electrical technology
Technologies which use electricity as their main power source, for example, light bulbs, electric motors and television.
### energy sources
A resource which can be used to provide power. For example, fossil fuels like coal and oil; renewable resources like solar and wind or uranium for nuclear power.
### engineer
An expert who designs and builds machines and structures.
### evidence
Information which may support or disprove a claim.
### evolution
The theory of evolution explains how all the species alive today generated from the first simple life forms on Earth.
### exoplanets
Planets which orbit stars outside of our solar system.
### expert
A person with a special skill or knowledge in a particular area.
### flyby
A path followed by a spacecraft which has been sent close enough to a planet to record scientific data.
### fossil fuels
A carbon- based material such as coal, oil, or natural gas that can be used as an energy source. Fossil fuels were originally formed when the remains of living organisms were buried and broken down by intense heat and pressure over millions of years.
### gas giants
The four large outermost planets – Neptune, Uranus, Saturn and Jupiter – which are mostly made of lighter chemical elements like Hydrogen and Helium.
### geologist
A scientist who analyses rocks, minerals and landforms.
### Goldilocks conditions
The ‘just right’ conditions for life to exist. For example, Earth has the right temperature range, a protective atmosphere and liquid water.
### gravity
The energy force which tries to pull two objects toward each other. The bigger an object is, the stronger its gravitational pull.
### Homo sapiens
Modern humans who first appeared 300,000 years ago. We are homo sapiens.
### hunters and gatherers
Human societies which move from place to place to hunt meat and gather fruit and vegetables to survive.
### industrial technology
Machines which operate on a large scale by using energy sources such as water, steam power, oil and coal.
### innovation
Using existing knowledge to come up with new technologies or new ways of doing things.
### intelligent life
Beings from other planets who are able to think, learn and understand. Scientists continue to search for intelligent life out in the Universe.
### intuition
A ‘gut feeling’ that a claim may be true or false.
### Jovian planets
The term Jovian planets refers to the large gassy planets furthest from the Sun - Neptune, Uranus, Saturn and Jupiter. They are also known as gas giants.
### Karman line
An imaginary line 100 kms above the Earth’s crust where it has been internationally agreed the Earth’s atmosphere ends and space begins.
### KWHLAQ chart
A visible framework which uses a series of step-by-step questions to provide guidance through the creative thinking process.
### lander
A spacecraft which has been designed to make a soft landing on a planet or moon etc.
### logic
Carefully thinking about a claim to decide whether it makes sense.
### mantle
The layer that surrounds the Earth’s core and is made of minerals and rocks which slowly flow in a sludge of melted iron.
### matter
Everything around us that has weight and takes up space. All matter is made up of atoms.
### meteoroids
Otherwise known as shooting stars, meteoroids are small space rocks which burn up as they enter Earth’s atmosphere.
### module
A self-contained unit which can be joined together with other units to build something more complex.
### multi-planetary species
A species which lives on more than one planet. Humans could become the first known multi-planetary species by establishing a human habitat on Mars.
### multicellular organisms
A complex organism which is made up of more than one cell. For example, animals and plants.
### natural selection
The process by which individuals in a species who have more successful adaptations have more children, therefore passing their successful adaptations on to future generations.
### nuclear fusion
The process of hydrogen atoms being crushed together in a star’s hot centre, releasing heat and energy for billions of years.
### orbiter
A spacecraft designed to orbit a planet and collect scientific data over a long period of time.
### overpopulation
When a population grows too big for the available resources, for example, food. Humans have, in the past, solved potential problems through innovations such as agriculture.
### ozone layer
An invisible layer in Earth’s upper atmosphere which helps to protect us from the Sun’s harmful ultra-violet rays.
### periodic table
A diagram of all the chemical elements in the Universe. It was created by a Russian chemist named Dmitri Mendeleev.
### quasars
Quasi Stellar Objects (Quasars) are believed to be the brightest and most distant objects in the Universe.
The transfer of energy (heat, sound or light) through waves. It can come from cosmic rays or from the Earth. Too much exposure to radiation is harmful to humans.
### redshift
When a star or galaxy moves away, its light waves are stretched out and it has a red glow. This is called redshift and provides evidence that the Universe is expanding.
### robotics
A type of technology which allows machines to be programmed to move and complete set tasks.
### rocky planets
The four small inner planets – Mercury, Venus, Earth and Mars – which are mostly made of heavier chemical elements like iron.
### rover
A moving robot which is sent to the surface of another planet to explore, collect scientific data and samples.
### self-sustaining
Being able to exist for a long time without outside help by using resources responsibly.
### single-celled organisms
A simple organism which is made up of only one cell. For example, simple bacteria.
### singularity
The extremely small point which contained the ingredients for everything in the Universe. Everything was crushed together in this singularity at the moment of the Big Bang.
### sol
The name of a solar day on Mars, which is 24.65 hours.
### star
A massive sphere of very hot gas which makes its own light and energy through nuclear fusion.
### supernova
The spectacular explosion which occurs when a massive star dies. It blows chemical elements out into the Universe.
### survive
To be able to continue to live. For example, having enough food to avoid dying of starvation.
### technology
New tools or methods, developed through the use of scientific knowledge, which can be used to solve problems.
### tectonic plates
The large solid-rock moving pieces which make up the Earth’s crust.
### thrive
To be able to grow, be successful and become stronger. For example, humans thrive when they are part of a connected community.
### timeline
A graphic which includes a list of events placed in the order that they happened.
### transform boundary
Where two tectonic plates meet and try to move past each other.
### uranium
A chemical element which is found in the Earth’s crust and is used as an energy source in nuclear power plants.
### venn diagram
A visual graphic which can be used to compare and contrast two different things.
### white dwarf
When a non-massive star runs out of fuel for nuclear fusion it collapses into itself. The leftover core is a compact star called a white dwarf.
### x-ray telescope
A type of telescope which works by receiving x-ray signals. It is mainly used to observe space objects and events such as the Sun, stars and supernovae.
### Yucatan Peninsula
Location of the Chicxulub Crater where a giant meteor landed 66 million years ago. Scientists think this meteor strike led to the extinction of the dinosaurs.
### zinc
One of the most common chemical elements in the Earth’s crust.
## Activity 2.2.4 Objectives
### Learning Goals
• Explain the differences between rocky and gassy planets
• Describe other key features of our solar system
### Introduction
In the last activity you conducted a demonstration which explained how the planets in our solar system formed through the process of accretion. In this activity, you will ‘travel’ through the solar system to become more familiar with the rocky and gassy planets, the sun and other fascinating space objects.
## Mission video 11: Our sun and neighbouring planets
While you watch Mission video 11: our sun and neighbouring planets look out for the answers to the following questions:
1. 1. Which galaxy is our solar system a part of?
2. 2. What are the gas planets made of?
3. 3. What is the difference between gas planets and stars?
4. 4. What are the rocky planets made of?
5. 5. What else is in our solar system other than planets?
Your teacher will instruct you whether you will answer the questions: as part of a class discussion; as a group/paired discussion; or independently by writing your answers in your Big History School Junior journal (if you have been provided with one).
## Activity 2.2.4 Review
### Conclusion
So where does the ‘solar system’ appear on our History of the Universe Timeline?
Either your teacher will help you identify where it appears on the classroom display or you will refer back to the Timeline: history of the Universe worksheet you completed when you did the “3 big mission phase questions” activity. You will find a copy of Timeline: history of the Universe example in Helpful Resources.
Finally, to help give you a sense of the size of our solar system and to explore further its key features, click on the link to BBC Interactive in Helpful Resources. It takes you on a rocket trip from Earth right to the edge of our solar system. Take note of at least 3 things you found interesting/surprising on your rocket trip.
Timeline: history of the Universe example
http://www.bbc.com/future/bespoke/20140304-how-big-is-space-interactive/
## Course Glossary
### accretion
The gradual process of matter being pulled together by gravity to make larger and larger clumps of matter.
A special skill or physical feature which helps a species to survive and thrive in its environment. For example, a chameleon changing colour to camouflage itself.
### aerial view
A view of something from the sky looking down.
### agriculture
Also referred to as farming, agriculture is the practice of growing crops and raising animals. It is an innovation which has allowed human societies to expand and thrive.
### AI
Artificial Intelligence (AI) is a type of technology which can perceive things, interpret them and make decisions in a similar way to humans.
### amphibian
Animals that evolved from fish to have gills so that they can live in water and also live and breathe on land.
### anthropologist
A scientist who studies humans and human behaviour.
### asteroids
Rocky bodies which are too small to be called planets.
### astronomer
A scientist who studies the Universe and everything in it.
### atmosphere
A thin layer of gases, otherwise known as air, that surrounds Earth and other planets.
### atoms
Tiny particles which make up everything in the Universe.
### authority
Someone who knows a lot about a subject and whose views are respected.
### battery storage
A large battery that stores electrical energy which can then be used when other energy sources are not available.
### Big Bang theory
Theory about how the Universe began 13.8 billion years ago. All matter, time, space and energy came from the Big Bang.
### Big History
The history of the entire Universe beginning 13.8 billion years ago.
### biochemist
A scientist who studies the chemistry of living things.
### biologist
A scientist who studies living things.
### black hole
An area in space where gravity is so strong that nothing can escape from it – not even light.
### brainstorming
A creative strategy for thinking about and sharing ideas to solve a challenge or task.
### CBR
Cosmic Background Radiation (CBR) is the radiation left over from the initial energy of the Big Bang. It can be seen through powerful space telescopes.
### chemical compounds
Chemical elements which have combined with different chemical elements. For example, hydrogen can combine with oxygen to create the chemical compound water (H2O).
### chemical elements
Pure substances which are made from a single type of atom. For example, Helium.
### chemist
A scientist who studies the substances that make up all the matter in the Universe.
### claim
Information which is presented as fact – not an opinion.
### cognitive
To do with mental activity such as thinking, using logic or remembering.
### collective learning
The human ability to store and share and build on information from generation to generation.
### comets
Balls of frozen gases, rock and dust which orbit the Sun.
### community
A group of people who live together. They help each other and work together to solve problems.
### compare
To look at what two or more things have in common with each other.
### continental drift theory
A theory which states that the Earth’s continents were once joined together in one supercontinent, then broke up and slowly drifted apart.
### contrast
To look at how two or more things are different to each other.
### convergent boundary
Where two tectonic plates move towards each other.
### cosmologist
A scientist who studies the structure and history of the Universe.
### creative thinking
Thinking of new ways to solve problems, generate new explanations and/or create something original.
### Critical thinking
Thinking which doesn’t rely on simply accepting what someone has said. It involves questioning, using logic and seeking information from experts before drawing a conclusion.
### cross section
A view of something as if it has been sliced through with a knife.
### digital technology
A term which covers electronic technologies such as computers, tablets and mobile phones.
### disciplines
Different areas of knowledge, for example, natural sciences.
### divergent boundary
Where two tectonic plates slide apart from each other.
### Earth’s core
At its centre, Earth contains a solid inner core and a liquid outer core made of iron and nickel.
### Earth’s crust
The layer that floats on top of the mantle and is made of lighter weight rocks and minerals.
### electrical technology
Technologies which use electricity as their main power source, for example, light bulbs, electric motors and television.
### energy sources
A resource which can be used to provide power. For example, fossil fuels like coal and oil; renewable resources like solar and wind or uranium for nuclear power.
### engineer
An expert who designs and builds machines and structures.
### evidence
Information which may support or disprove a claim.
### evolution
The theory of evolution explains how all the species alive today generated from the first simple life forms on Earth.
### exoplanets
Planets which orbit stars outside of our solar system.
### expert
A person with a special skill or knowledge in a particular area.
### flyby
A path followed by a spacecraft which has been sent close enough to a planet to record scientific data.
### fossil fuels
A carbon- based material such as coal, oil, or natural gas that can be used as an energy source. Fossil fuels were originally formed when the remains of living organisms were buried and broken down by intense heat and pressure over millions of years.
### gas giants
The four large outermost planets – Neptune, Uranus, Saturn and Jupiter – which are mostly made of lighter chemical elements like Hydrogen and Helium.
### geologist
A scientist who analyses rocks, minerals and landforms.
### Goldilocks conditions
The ‘just right’ conditions for life to exist. For example, Earth has the right temperature range, a protective atmosphere and liquid water.
### gravity
The energy force which tries to pull two objects toward each other. The bigger an object is, the stronger its gravitational pull.
### Homo sapiens
Modern humans who first appeared 300,000 years ago. We are homo sapiens.
### hunters and gatherers
Human societies which move from place to place to hunt meat and gather fruit and vegetables to survive.
### industrial technology
Machines which operate on a large scale by using energy sources such as water, steam power, oil and coal.
### innovation
Using existing knowledge to come up with new technologies or new ways of doing things.
### intelligent life
Beings from other planets who are able to think, learn and understand. Scientists continue to search for intelligent life out in the Universe.
### intuition
A ‘gut feeling’ that a claim may be true or false.
### Jovian planets
The term Jovian planets refers to the large gassy planets furthest from the Sun - Neptune, Uranus, Saturn and Jupiter. They are also known as gas giants.
### Karman line
An imaginary line 100 kms above the Earth’s crust where it has been internationally agreed the Earth’s atmosphere ends and space begins.
### KWHLAQ chart
A visible framework which uses a series of step-by-step questions to provide guidance through the creative thinking process.
### lander
A spacecraft which has been designed to make a soft landing on a planet or moon etc.
### logic
Carefully thinking about a claim to decide whether it makes sense.
### mantle
The layer that surrounds the Earth’s core and is made of minerals and rocks which slowly flow in a sludge of melted iron.
### matter
Everything around us that has weight and takes up space. All matter is made up of atoms.
### meteoroids
Otherwise known as shooting stars, meteoroids are small space rocks which burn up as they enter Earth’s atmosphere.
### module
A self-contained unit which can be joined together with other units to build something more complex.
### multi-planetary species
A species which lives on more than one planet. Humans could become the first known multi-planetary species by establishing a human habitat on Mars.
### multicellular organisms
A complex organism which is made up of more than one cell. For example, animals and plants.
### natural selection
The process by which individuals in a species who have more successful adaptations have more children, therefore passing their successful adaptations on to future generations.
### nuclear fusion
The process of hydrogen atoms being crushed together in a star’s hot centre, releasing heat and energy for billions of years.
### orbiter
A spacecraft designed to orbit a planet and collect scientific data over a long period of time.
### overpopulation
When a population grows too big for the available resources, for example, food. Humans have, in the past, solved potential problems through innovations such as agriculture.
### ozone layer
An invisible layer in Earth’s upper atmosphere which helps to protect us from the Sun’s harmful ultra-violet rays.
### periodic table
A diagram of all the chemical elements in the Universe. It was created by a Russian chemist named Dmitri Mendeleev.
### quasars
Quasi Stellar Objects (Quasars) are believed to be the brightest and most distant objects in the Universe.
The transfer of energy (heat, sound or light) through waves. It can come from cosmic rays or from the Earth. Too much exposure to radiation is harmful to humans.
### redshift
When a star or galaxy moves away, its light waves are stretched out and it has a red glow. This is called redshift and provides evidence that the Universe is expanding.
### robotics
A type of technology which allows machines to be programmed to move and complete set tasks.
### rocky planets
The four small inner planets – Mercury, Venus, Earth and Mars – which are mostly made of heavier chemical elements like iron.
### rover
A moving robot which is sent to the surface of another planet to explore, collect scientific data and samples.
### self-sustaining
Being able to exist for a long time without outside help by using resources responsibly.
### single-celled organisms
A simple organism which is made up of only one cell. For example, simple bacteria.
### singularity
The extremely small point which contained the ingredients for everything in the Universe. Everything was crushed together in this singularity at the moment of the Big Bang.
### sol
The name of a solar day on Mars, which is 24.65 hours.
### star
A massive sphere of very hot gas which makes its own light and energy through nuclear fusion.
### supernova
The spectacular explosion which occurs when a massive star dies. It blows chemical elements out into the Universe.
### survive
To be able to continue to live. For example, having enough food to avoid dying of starvation.
### technology
New tools or methods, developed through the use of scientific knowledge, which can be used to solve problems.
### tectonic plates
The large solid-rock moving pieces which make up the Earth’s crust.
### thrive
To be able to grow, be successful and become stronger. For example, humans thrive when they are part of a connected community.
### timeline
A graphic which includes a list of events placed in the order that they happened.
### transform boundary
Where two tectonic plates meet and try to move past each other.
### uranium
A chemical element which is found in the Earth’s crust and is used as an energy source in nuclear power plants.
### venn diagram
A visual graphic which can be used to compare and contrast two different things.
### white dwarf
When a non-massive star runs out of fuel for nuclear fusion it collapses into itself. The leftover core is a compact star called a white dwarf.
### x-ray telescope
A type of telescope which works by receiving x-ray signals. It is mainly used to observe space objects and events such as the Sun, stars and supernovae.
### Yucatan Peninsula
Location of the Chicxulub Crater where a giant meteor landed 66 million years ago. Scientists think this meteor strike led to the extinction of the dinosaurs.
### zinc
One of the most common chemical elements in the Earth’s crust.
## Activity 2.2.5 Objectives
### Learning Goals
• Create a model of the sizes and distances between the Earth, Moon and Mars
### Introduction
When you hear that the circumference of Earth is 40, 000km or that the solar system stretches for billions of kilometres, can you really even begin to imagine how big these measurements are?
In this activity you will make a simple Earth, Moon and Mars scale model to try to understand how huge the distances are in our solar system - even between our nearest neighbours. First you will model the relative sizes of Earth, the Moon and Mars. Then you will model the relative distances between Earth, the Moon and Mars.
## Model: Earth, Moon and Mars size
When scientists and engineers are working with objects which are too big or too small they will often use scale models to represent them at a size which is easier to work with. Small models are especially useful when dealing with objects the size of moons and planets.
In the first part of this activity you will create a scale model to show the relative size of the Earth, Moon and Mars using the Model: Earth, Moon and Mars size worksheet.
In this model 1cm = 500km.
As a point of reference, Earth has a circumference (length around it) of approx. 40,000km.
Materials:
• Red, white and blue (or 3 different coloured) balloons
• A measuring tape for each group or pair
Blue Balloon (Earth)
Step 1. Inflate your blue balloon until you estimate the circumference of the balloon is approximately 80cm.
Step 2. Using the tape measure, measure around the balloon and let air out or blow more air in until the circumference of your balloon is 80cm.
Step 3. Tie off the balloon. This is a representation of Earth to scale.
Red balloon (Mars)
Step 4. Mars is approximately half the size of Earth so you will inflate your balloon until you estimate the circumference is approximately 40cm.
Step 5. Using the tape measure, measure around the balloon and let air out or blow more air in until the circumference of your balloon is 40cm.
Step 6. Tie off the balloon. This is a representation of Mars to scale.
White balloon (Moon)
Step 7. Earth’s Moon is approximately half the size of Mars so you will inflate your balloon until you estimate the circumference is approximately 20cm.
Step 8. Using the tape measure, measure around the balloon and let air out or blow more air in until the circumference of your balloon is 20cm.
Step 9. Tie off the balloon. This is a representation of Earth’s Moon to scale.
Once you have completed your size model, answer the reflection questions on the Model: Earth, Moon and Mars size worksheet.
## Model: Earth, Moon and Mars distance
You will now create a scale model of the distance between Earth, the Moon and Mars using the Model: Earth, Moon and Mars distance worksheet.
Remember that, as in the size scale model, 1cm = 500km
Materials:
• Earth, Moon and Mars balloons from Size Model
• Metre ruler, measuring tape or trundle wheel
Earth to Moon
The distance between Earth and the moon has been rounded up to 400,000 (4 lakh) kilometres.
Step 1. Choose a starting point and secure the Earth balloon there.
Step 2. Using a metre ruler, measuring tape or trundle wheel, measure out (800 cm) 8 metres and secure the Moon balloon there. This is the distance from Earth to the Moon to scale.
Earth to Mars
The closest Mars ever orbits to Earth is 60 million (6 crore) kilometres (rounded up). To accurately represent the distance of Earth to Mars on this scale, you would have to travel (120,000cm, 1.2 lakh cm) 1200 metres from your Earth starting point.
It’s pretty difficult to estimate how far 1200 metres is! Your teacher may point out a landmark for you that is 1200 metres away. Or, if you have access to Google Maps, you could work out exactly where 1200 metres away is by:
• Right clicking on your address on the map and select "measure distance"
• Clicking on another point on the map to create a path
• Adjusting the length of the path until it is 1200 metres
Once you have completed your distance model, answer the reflection questions on the Model: Earth, Moon and Mars distance worksheet.
## Activity 2.2.5 Review
### Conclusion
If you are part of a class, your teacher will ask you and your classmates to share your responses to the reflection questions on the worksheets:
Model: Earth, Moon and Mars size worksheet:
• How much bigger is Mars than the Moon?
• Why do you think Mars looks smaller in the sky than the Moon even though it is larger?
• Using your scale model as a guide, how much further do you think Mars is than the Moon? Estimate the distances using your balloons.
Model: Earth, Moon and Mars distance worksheet:
• What surprised you most about the distances between Earth and the Moon and Mars?
• How long do you think it would take to travel between Earth and the Moon in a spacecraft?
• How much longer do you think it would take to travel to Mars?
• If Mars is Earth’s neighbour, can you imagine how much the greater distance to the furthest planet, Neptune, must be? How long would you estimate it would take to travel to Neptune?
• Humans have landed on the Moon but have not yet landed on Mars. What extra challenges do you think there are for planning a manned trip to Mars?
## Course Glossary
### accretion
The gradual process of matter being pulled together by gravity to make larger and larger clumps of matter.
A special skill or physical feature which helps a species to survive and thrive in its environment. For example, a chameleon changing colour to camouflage itself.
### aerial view
A view of something from the sky looking down.
### agriculture
Also referred to as farming, agriculture is the practice of growing crops and raising animals. It is an innovation which has allowed human societies to expand and thrive.
### AI
Artificial Intelligence (AI) is a type of technology which can perceive things, interpret them and make decisions in a similar way to humans.
### amphibian
Animals that evolved from fish to have gills so that they can live in water and also live and breathe on land.
### anthropologist
A scientist who studies humans and human behaviour.
### asteroids
Rocky bodies which are too small to be called planets.
### astronomer
A scientist who studies the Universe and everything in it.
### atmosphere
A thin layer of gases, otherwise known as air, that surrounds Earth and other planets.
### atoms
Tiny particles which make up everything in the Universe.
### authority
Someone who knows a lot about a subject and whose views are respected.
### battery storage
A large battery that stores electrical energy which can then be used when other energy sources are not available.
### Big Bang theory
Theory about how the Universe began 13.8 billion years ago. All matter, time, space and energy came from the Big Bang.
### Big History
The history of the entire Universe beginning 13.8 billion years ago.
### biochemist
A scientist who studies the chemistry of living things.
### biologist
A scientist who studies living things.
### black hole
An area in space where gravity is so strong that nothing can escape from it – not even light.
### brainstorming
A creative strategy for thinking about and sharing ideas to solve a challenge or task.
### CBR
Cosmic Background Radiation (CBR) is the radiation left over from the initial energy of the Big Bang. It can be seen through powerful space telescopes.
### chemical compounds
Chemical elements which have combined with different chemical elements. For example, hydrogen can combine with oxygen to create the chemical compound water (H2O).
### chemical elements
Pure substances which are made from a single type of atom. For example, Helium.
### chemist
A scientist who studies the substances that make up all the matter in the Universe.
### claim
Information which is presented as fact – not an opinion.
### cognitive
To do with mental activity such as thinking, using logic or remembering.
### collective learning
The human ability to store and share and build on information from generation to generation.
### comets
Balls of frozen gases, rock and dust which orbit the Sun.
### community
A group of people who live together. They help each other and work together to solve problems.
### compare
To look at what two or more things have in common with each other.
### continental drift theory
A theory which states that the Earth’s continents were once joined together in one supercontinent, then broke up and slowly drifted apart.
### contrast
To look at how two or more things are different to each other.
### convergent boundary
Where two tectonic plates move towards each other.
### cosmologist
A scientist who studies the structure and history of the Universe.
### creative thinking
Thinking of new ways to solve problems, generate new explanations and/or create something original.
### Critical thinking
Thinking which doesn’t rely on simply accepting what someone has said. It involves questioning, using logic and seeking information from experts before drawing a conclusion.
### cross section
A view of something as if it has been sliced through with a knife.
### digital technology
A term which covers electronic technologies such as computers, tablets and mobile phones.
### disciplines
Different areas of knowledge, for example, natural sciences.
### divergent boundary
Where two tectonic plates slide apart from each other.
### Earth’s core
At its centre, Earth contains a solid inner core and a liquid outer core made of iron and nickel.
### Earth’s crust
The layer that floats on top of the mantle and is made of lighter weight rocks and minerals.
### electrical technology
Technologies which use electricity as their main power source, for example, light bulbs, electric motors and television.
### energy sources
A resource which can be used to provide power. For example, fossil fuels like coal and oil; renewable resources like solar and wind or uranium for nuclear power.
### engineer
An expert who designs and builds machines and structures.
### evidence
Information which may support or disprove a claim.
### evolution
The theory of evolution explains how all the species alive today generated from the first simple life forms on Earth.
### exoplanets
Planets which orbit stars outside of our solar system.
### expert
A person with a special skill or knowledge in a particular area.
### flyby
A path followed by a spacecraft which has been sent close enough to a planet to record scientific data.
### fossil fuels
A carbon- based material such as coal, oil, or natural gas that can be used as an energy source. Fossil fuels were originally formed when the remains of living organisms were buried and broken down by intense heat and pressure over millions of years.
### gas giants
The four large outermost planets – Neptune, Uranus, Saturn and Jupiter – which are mostly made of lighter chemical elements like Hydrogen and Helium.
### geologist
A scientist who analyses rocks, minerals and landforms.
### Goldilocks conditions
The ‘just right’ conditions for life to exist. For example, Earth has the right temperature range, a protective atmosphere and liquid water.
### gravity
The energy force which tries to pull two objects toward each other. The bigger an object is, the stronger its gravitational pull.
### Homo sapiens
Modern humans who first appeared 300,000 years ago. We are homo sapiens.
### hunters and gatherers
Human societies which move from place to place to hunt meat and gather fruit and vegetables to survive.
### industrial technology
Machines which operate on a large scale by using energy sources such as water, steam power, oil and coal.
### innovation
Using existing knowledge to come up with new technologies or new ways of doing things.
### intelligent life
Beings from other planets who are able to think, learn and understand. Scientists continue to search for intelligent life out in the Universe.
### intuition
A ‘gut feeling’ that a claim may be true or false.
### Jovian planets
The term Jovian planets refers to the large gassy planets furthest from the Sun - Neptune, Uranus, Saturn and Jupiter. They are also known as gas giants.
### Karman line
An imaginary line 100 kms above the Earth’s crust where it has been internationally agreed the Earth’s atmosphere ends and space begins.
### KWHLAQ chart
A visible framework which uses a series of step-by-step questions to provide guidance through the creative thinking process.
### lander
A spacecraft which has been designed to make a soft landing on a planet or moon etc.
### logic
Carefully thinking about a claim to decide whether it makes sense.
### mantle
The layer that surrounds the Earth’s core and is made of minerals and rocks which slowly flow in a sludge of melted iron.
### matter
Everything around us that has weight and takes up space. All matter is made up of atoms.
### meteoroids
Otherwise known as shooting stars, meteoroids are small space rocks which burn up as they enter Earth’s atmosphere.
### module
A self-contained unit which can be joined together with other units to build something more complex.
### multi-planetary species
A species which lives on more than one planet. Humans could become the first known multi-planetary species by establishing a human habitat on Mars.
### multicellular organisms
A complex organism which is made up of more than one cell. For example, animals and plants.
### natural selection
The process by which individuals in a species who have more successful adaptations have more children, therefore passing their successful adaptations on to future generations.
### nuclear fusion
The process of hydrogen atoms being crushed together in a star’s hot centre, releasing heat and energy for billions of years.
### orbiter
A spacecraft designed to orbit a planet and collect scientific data over a long period of time.
### overpopulation
When a population grows too big for the available resources, for example, food. Humans have, in the past, solved potential problems through innovations such as agriculture.
### ozone layer
An invisible layer in Earth’s upper atmosphere which helps to protect us from the Sun’s harmful ultra-violet rays.
### periodic table
A diagram of all the chemical elements in the Universe. It was created by a Russian chemist named Dmitri Mendeleev.
### quasars
Quasi Stellar Objects (Quasars) are believed to be the brightest and most distant objects in the Universe.
The transfer of energy (heat, sound or light) through waves. It can come from cosmic rays or from the Earth. Too much exposure to radiation is harmful to humans.
### redshift
When a star or galaxy moves away, its light waves are stretched out and it has a red glow. This is called redshift and provides evidence that the Universe is expanding.
### robotics
A type of technology which allows machines to be programmed to move and complete set tasks.
### rocky planets
The four small inner planets – Mercury, Venus, Earth and Mars – which are mostly made of heavier chemical elements like iron.
### rover
A moving robot which is sent to the surface of another planet to explore, collect scientific data and samples.
### self-sustaining
Being able to exist for a long time without outside help by using resources responsibly.
### single-celled organisms
A simple organism which is made up of only one cell. For example, simple bacteria.
### singularity
The extremely small point which contained the ingredients for everything in the Universe. Everything was crushed together in this singularity at the moment of the Big Bang.
### sol
The name of a solar day on Mars, which is 24.65 hours.
### star
A massive sphere of very hot gas which makes its own light and energy through nuclear fusion.
### supernova
The spectacular explosion which occurs when a massive star dies. It blows chemical elements out into the Universe.
### survive
To be able to continue to live. For example, having enough food to avoid dying of starvation.
### technology
New tools or methods, developed through the use of scientific knowledge, which can be used to solve problems.
### tectonic plates
The large solid-rock moving pieces which make up the Earth’s crust.
### thrive
To be able to grow, be successful and become stronger. For example, humans thrive when they are part of a connected community.
### timeline
A graphic which includes a list of events placed in the order that they happened.
### transform boundary
Where two tectonic plates meet and try to move past each other.
### uranium
A chemical element which is found in the Earth’s crust and is used as an energy source in nuclear power plants.
### venn diagram
A visual graphic which can be used to compare and contrast two different things.
### white dwarf
When a non-massive star runs out of fuel for nuclear fusion it collapses into itself. The leftover core is a compact star called a white dwarf.
### x-ray telescope
A type of telescope which works by receiving x-ray signals. It is mainly used to observe space objects and events such as the Sun, stars and supernovae.
### Yucatan Peninsula
Location of the Chicxulub Crater where a giant meteor landed 66 million years ago. Scientists think this meteor strike led to the extinction of the dinosaurs.
### zinc
One of the most common chemical elements in the Earth’s crust.
## How did our solar system form?
In how did our solar system form? you learned all about chemical elements, the process of accretion and the key features of our solar system.
Now it’s time to revisit your How did our solar system form? learning goals and read through them again carefully.
As you read each learning goal, tick the check box beside it if you are confident you have achieved that learning goal.
You’ll find that some learning goals are harder to achieve than others. If you find that there are learning goals that you’re not confident you’ve achieved yet, you may like to re-watch the Mission video which relates to that learning goal and/or ask your teacher for help.
|
# Phenomenology of Λ-CDM Model: A Possibility of Accelerating Universe with Positive Pressure
1 APC - THEORIE
APC - UMR 7164 - AstroParticule et Cosmologie, Center for cosmoparticle physics "Cosmion" - Center for cosmoparticle physics "Cosmion"
Abstract : Among various phenomenological $\Lambda$ models, a time-dependent model $\dot \Lambda\sim H^3$ is selected here to investigate the $\Lambda$-CDM cosmology. Using this model the expressions for the time-dependent equation of state parameter $\omega$ and other physical parameters are derived. It is shown that in $H^3$ model accelerated expansion of the Universe takes place at negative energy density, but with a positive pressure. It has also been possible to obtain the change of sign of the deceleration parameter $q$ during cosmic evolution.
keyword :
Document type :
Journal articles
International Journal of Theoretical Physics, Springer Verlag, 2011, 50, pp.939-951. <10.1007/s10773-010-0639-0>
http://hal.in2p3.fr/in2p3-00193780
Contributor : Simone Lantz <>
Submitted on : Tuesday, December 4, 2007 - 3:44:04 PM
Last modification on : Wednesday, July 27, 2016 - 2:48:48 PM
### Citation
S. Ray, M.Y. Khlopov, P.P. Ghosh, U. Mukhopadhyay. Phenomenology of Λ-CDM Model: A Possibility of Accelerating Universe with Positive Pressure. International Journal of Theoretical Physics, Springer Verlag, 2011, 50, pp.939-951. <10.1007/s10773-010-0639-0>. <in2p3-00193780>
### Metrics
Consultations de la notice
|
# 5th order DE with g(x)=32exp^(2x)
1. Feb 4, 2009
### leyyee
1. The problem statement, all variables and given/known data
Use variations of parameters to find the general solutions of the following differential equations.
y'''''-4y'''=32exp^(2x)
2. Relevant equations
no relevant equations.
3. The attempt at a solution
hey there, I tried solving this question. I got the homogeneous equation.
yh(x)=C1+C2x+C3X2+C4exp^(2x)+C5exp^(-2x)
but after this step.. I am stuck..
Because I am not quite sure whether I am in the correct path to look for the general solutions. After I have this equation, I did the wronskian matrix, I found the determinant of the 5X5 matrix is 512. Am I correct? Please correct me if I am not in the journey to my answer.
Besides that , If I were to use the variations of parameters. The matrix would be 5X5 dimensions. Is it correct?
Thanks
edited due to duplication on the template question.. sorry..
2. Feb 5, 2009
### Unco
You should get into the habit of reducing such a DE into something more familar. Let $$u=y^{(3)}$$, so the DE becomes $$u'' - 4u = 32e^{2x}$$.
Proceed as usual (involves a 2x2 matrix) to solve for u(x). Then you can obtain y(x).
3. Feb 5, 2009
### leyyee
hey there.
i will try out your attempt.. then i will post if i need anymore help..
thanks
4. Feb 5, 2009
### leyyee
hey there, i have tried out your attempt.. but I only can find the general solutions
the wronskians i found was -4, am I in the right path ?
u(x) = uh(x) + up(x) = C1e2x+C2e-2x+8xe2x-2e2x
how should I convert it to y(x) = yh(x) + yp(x)
If i use the equation u gave me u = y(3). Am I suppose to integrate u(x) by 3 times?
any clue for me?
thanks
Last edited: Feb 5, 2009
5. Feb 5, 2009
### Unco
Well done!
No need; you've done the variation of parameters work for u.
Exactly!
6. Feb 5, 2009
### leyyee
Wow.. thanks alot.. I think I will post the proper solutions after I finished the steps ok ?
and I will let you go through my answer to see whether I did any mistakes.. Thanks
|
5
# $3-4$ Use the level curves in the figure to predict the location of the critical points of $f$ and whether $f$ has a saddle point or a local maximum or minimum at e...
## Question
###### $3-4$ Use the level curves in the figure to predict the location of the critical points of $f$ and whether $f$ has a saddle point or a local maximum or minimum at each critical point. Explain your reasoning. Then use the Second Derivatives Test to confirm your predictions.$f(x, y)=3 x-x^{3}-2 y^{2}+y^{4}$
$3-4$ Use the level curves in the figure to predict the location of the critical points of $f$ and whether $f$ has a saddle point or a local maximum or minimum at each critical point. Explain your reasoning. Then use the Second Derivatives Test to confirm your predictions. $f(x, y)=3 x-x^{3}-2 y^{2}+y^{4}$
#### Similar Solved Questions
##### 1Proleus1 Khich1 Famlly Entorobacteriacede 1 Jutul mouja Noncoltorms 1 1 1 1ulls Uhe1
1 Proleus 1 Khich 1 Famlly Entorobacteriacede 1 Jutul mouja Noncoltorms 1 1 1 1 ulls Uhe 1...
##### Use integration by parts to evaluate the integral: @(et 8)dtPreviewEnter your answer a5 an expression: Example: Jx"2+1, */5, (a+b)lc Be_sure your varables match those in the question Get halp: VidcoPoints possible: This is attempt of 3. Message instructor about this questionSubmit
Use integration by parts to evaluate the integral: @(et 8)dt Preview Enter your answer a5 an expression: Example: Jx"2+1, */5, (a+b)lc Be_sure your varables match those in the question Get halp: Vidco Points possible: This is attempt of 3. Message instructor about this question Submit...
##### Define € = exp(z) Xizo R_ Show that this series converges for all z € C. Show that ee" = e+w Define cos 0 {(eie te-i0 , and sin 0 = H(eie _ e-i0 '), so that eie cos 0 +isin0_ Using the series for e show that you obtain the same series expansions for sin and cos that you learned in calculus. Check that cos? 0 + sin? 0 = 1, by multiplying out the definitions_ that 60 point on the unit circle corresponding t0 the cartesian coordinate (cos € , sin 0). (d) Show that |ez | Rez and arg
Define € = exp(z) Xizo R_ Show that this series converges for all z € C. Show that ee" = e+w Define cos 0 {(eie te-i0 , and sin 0 = H(eie _ e-i0 '), so that eie cos 0 +isin0_ Using the series for e show that you obtain the same series expansions for sin and cos that you learned...
##### 33-36 Find an equation of the tangent line to the curve at the given point. 33. > = 2r' _ r2 + 2, (1,3)
33-36 Find an equation of the tangent line to the curve at the given point. 33. > = 2r' _ r2 + 2, (1,3)...
##### Campucc chc Volumsthe solid whcsemthe curVes12 and#xlurzsoual pemondicult
Campucc chc Volums the solid whcse m the curVes 12 and #xlurz soual pemondicult...
##### Question 584% of owned dogs in the United States are spayed or neutered_ Round places. If 31 owned dogs your answers to four decimal are randomly sel lected find the probability thatExactly 26 of them are spayed or neutered At most 25 of them are spayed or neutered. At least 24 of themn are spayed or neutered_ Between 21 and 27 (Including 21 and 27) of them are spayed or neutered,
Question 5 84% of owned dogs in the United States are spayed or neutered_ Round places. If 31 owned dogs your answers to four decimal are randomly sel lected find the probability that Exactly 26 of them are spayed or neutered At most 25 of them are spayed or neutered. At least 24 of themn are spayed...
##### 25.Find the following quantities:(1+2i)(3-1)2+3i F]+i
25.Find the following quantities: (1+2i)(3-1) 2+3i F]+i...
##### Sketch the curve by plotting points. Indicate with an arrow the direction of increasing t_ x=l-t,y=t? _ 2tFind dy / dx without eliminating the parameter t _ Find the equation of the tangent line to the curve at the point (-2,0). X=t_3 y=l-t
Sketch the curve by plotting points. Indicate with an arrow the direction of increasing t_ x=l-t,y=t? _ 2t Find dy / dx without eliminating the parameter t _ Find the equation of the tangent line to the curve at the point (-2,0). X=t_3 y=l-t...
##### 0 ? 8 Tf 1 2 2 0 [ HHF I 7 2 } 8 J { 1 F 2 1 N 8 3 8 1 Ehlr 8 8 Li 0 0 3 1 N N [ 8 0 7 { 1
0 ? 8 Tf 1 2 2 0 [ HHF I 7 2 } 8 J { 1 F 2 1 N 8 3 8 1 Ehlr 8 8 Li 0 0 3 1 N N [ 8 0 7 { 1...
##### Find the volume $V$ of the region.The solid region bounded above by the sphere $x^{2}+y^{2}+$ $z^{2}=2$ and below by the circular paraboloid $z=x^{2}+y^{2}$
Find the volume $V$ of the region. The solid region bounded above by the sphere $x^{2}+y^{2}+$ $z^{2}=2$ and below by the circular paraboloid $z=x^{2}+y^{2}$...
##### Suppose $f$ is differentiable on $\mathbb{R}$. Let $F(x)=f\left(e^{x}\right)$ and $G(x)=e^{f(x)}$. Find expressions for (a) $F^{\prime}(x)$ and (b) $G^{\prime}(x)$.
Suppose $f$ is differentiable on $\mathbb{R}$. Let $F(x)=f\left(e^{x}\right)$ and $G(x)=e^{f(x)}$. Find expressions for (a) $F^{\prime}(x)$ and (b) $G^{\prime}(x)$....
##### Use identities to solve each of the following. Rationalize denominators when applicable. Find $\sin \theta$, given that $\cos \theta=\frac{4}{5}$ and $\theta$ is in quadrant IV.
Use identities to solve each of the following. Rationalize denominators when applicable. Find $\sin \theta$, given that $\cos \theta=\frac{4}{5}$ and $\theta$ is in quadrant IV....
##### 19 Let _ be the function delined below , whereconstun(2x? + 7x - 4 X < -4 +x-12 fm) = kx + 74<*<0x >04-1For what value ofk; ifany, i5 f continuous at X = -4What type of discontinuity does have at xFind all the horizontal asymptoles to the graph ol f Show the work leading to your answer.
19 Let _ be the function delined below , where constun (2x? + 7x - 4 X < -4 +x-12 fm) = kx + 74<*<0 x >0 4-1 For what value ofk; ifany, i5 f continuous at X = -4 What type of discontinuity does have at x Find all the horizontal asymptoles to the graph ol f Show the work leading to your a...
##### Problem 8. Prove that 21|3n 7 + 7n 3+11n for every integer n.
Problem 8. Prove that 21|3n 7 + 7n 3+ 11n for every integer n....
##### An electromagnetic wave with a electric field amplitude of 0.15 Vim has an intensity of 1Sx10-5W/m?0 30x10 * Wim?, 0 10x10 - Wim 0 30x10 * Wim
An electromagnetic wave with a electric field amplitude of 0.15 Vim has an intensity of 1Sx10-5W/m? 0 30x10 * Wim?, 0 10x10 - Wim 0 30x10 * Wim...
##### JENERAL CHEMISTKY I Qiwi 84>Question 9 Not yet answeredWhat is the mass in grams of hydrogen atoms present in molecules of water? Atomic mass of H = 008 amu; Avogadro No. 6.02 x 10 23)Marked out of 1.67Flag questionSelect one 1.34 * 10 23 1.00 x10 ?36.70 2.01 x 10.23 1,67 * 10-28
JENERAL CHEMISTKY I Qiwi 84> Question 9 Not yet answered What is the mass in grams of hydrogen atoms present in molecules of water? Atomic mass of H = 008 amu; Avogadro No. 6.02 x 10 23) Marked out of 1.67 Flag question Select one 1.34 * 10 23 1.00 x10 ?3 6.70 2.01 x 10.23 1,67 * 10-28...
##### Suppose rate of change for Quantity A is proportional toQuantity B, and the Quantity B is proportional to the sum of A andB. How do I represent that as a system of differentialequations?
Suppose rate of change for Quantity A is proportional to Quantity B, and the Quantity B is proportional to the sum of A and B. How do I represent that as a system of differential equations?...
##### If 23.6 g of hydrogen gas reacts with 28.3 grams of nitrogengas, what is the limiting reactant? Support your answer withcalculations. N2 (g) + 3 H2 (g) -> 2 NH3 (g)
If 23.6 g of hydrogen gas reacts with 28.3 grams of nitrogen gas, what is the limiting reactant? Support your answer with calculations. N2 (g) + 3 H2 (g) -> 2 NH3 (g)...
##### What is the major product of the followingreaction?HCl
What is the major product of the followingreaction? HCl...
-- 0.028063--
|
--- title: "ODE Model of Gene Regulation" output: html_document: fig_caption: TRUE code_folding: hide author: | | Martin Modrák | Laboratory of Bioinformatics | Institute of Microbiology of the Czech Academy of Sciences | [email protected], https://www.martinmodrak.cz date: "r format(Sys.time(), '%Y-%m-%d')" bibliography: stancon.bib --- {r setup, message=FALSE,warning=FALSE} library(knitr) opts_chunk$set(fig.height=3, fig.path='Figs/', echo=TRUE, warning=FALSE, message=FALSE) #In addition to the libraries listed below,, the Genexpi-stan package requires: # rstan, deSolve, splines, truncnorm, parallel, loo devtools::load_all() library(cowplot) library(tidyverse) library(here) library(MVN) options(mc.cores = parallel::detectCores()) rstan_options(auto_write = TRUE) # Abstract In determining regulatory interactions within cells, models based on ordinary differential equations (ODE) are a popular choice. Here, we present a reimplementation of one such model in Stan. The model features a spline fit where the resulting spline is a parameter of the ODE. For practical reasons, the model avoids the Stan's ODE solver in favor of a custom solution. We also discuss why the model as traditionally formulated is not well identified and introduce reparametrizations that improve identifiability and make it easier to specify reasonable default priors. This notebook shows first usable version of the model, but there are still several outstanding issues which we highlight. We nevertheless hope that the modelling approach we took is of interest to the Stan community and can provide inspiration for other models. A runnable version of this Rmd file and all supporting code can be found at https://github.com/cas-bioinf/genexpi-stan/tree/stancon2018 # Biological Background In a very simplified form the [central dogma](https://en.wikipedia.org/wiki/Central_dogma_of_molecular_biology) of molecular biology may be paraphrased as follows: The DNA contains all of the instructions needed to run a cell and can be divided into coding and non-coding regions. The coding regions of DNA can be *transcribed* to form messenger RNA (mRNA) molecules containing a mirror-copy of the coding region. The messenger RNA is then *translated* to create protein. Proteins perform most of the actual functions of the cells, including control of transcription and translation. This process is regulated at all stages and involves many feedback loops: most importantly, the rate of transcription of a gene is controlled by the abundance of regulatory proteins binding to specific sequences of the DNA near the coding region. The rate of translation and degradation of mRNA can also be regulated separately by other proteins and the rate of degradation of proteins themselves is also affected by other proteins. There are many exceptions where the above simplification does not hold, but the vast majority of cellular life can be described in these terms. Understanding what are the regulations taking place is thus an important step in understanding how cells work. However, our ability to observe what happens in the cell is very limited: The only commonly available method that can capture information about all genes in a single experiment is measuring the concentration of mRNA which is in turn strongly related to expression (how many mRNAs are transcribed from the gene in a unit of time). Gathering expression data remains relatively expensive and except for the most studied organisms, at most several dozen whole genome expression experiments have been published. It has been shown that mRNA and protein concentration are mostly correlated, however this relationship is imperfect [@Maier09,@Gygi99]. Nevertheless, expression data is the best proxy for protein concentration available at the whole genome level and expression data are thus widely used to infer interactions that control transcription of genes. # Scope The model presented in this notebook aims at modelling and identifying transcriptional regulations from time series data of gene expression. While the model aims to be more general, our particular interest is on determining for which genes does a known [*sigma factor*](https://en.wikipedia.org/wiki/Sigma_factor) initiate transcription. The present work was motivated by reading [@TitsiasHonkela12] who developed a fully Bayesian model of transcriptional regulation, extending a classical regulation model [@Vohradsky2001] with Gaussian Processes. Titsias et al. used a custom Metropolis-within-Gibbs sampler and to our knowledge, there is no other fully Bayesian treatment of the model available. Initially, we tried to directly rephrase the Titsias et al. model in Stan, but we ran into large computational difficulties, including several non-identifiabilities, that we were not able to resolve without modifying the model. In this notebook, we present a model that resulted from resolving those issues and a slight simplification of the original model, as our use case didn't need to model protein dynamics separately. We further show how we determine model fit and use a simple auxiliary expression model to filter out genes whose expression does not identify the parameters of the full model. # Data The data we will work with in this notebook is a time series of 14 microarray measurements of expression of 4008 genes in the bacterium *Bacillus subtilis* during germination (transition from dormant spores to normal metabolism). The data is depositioned in the Gene Expression Omnibus under GSE6865 [@Keijser2007, https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE6865]. The individual samples were taken from a single fermentor where the bacteria grew and represent averages of a large number of cells. The cells are expected to be mostly synchronized by germination initiation at$t=0$and (hopefully) stayed in sync for the duration of the experiment. While recent methods allow expression measurement for individual cells, such data are expensive, usually gathered only at a single time point, contain large amount of noise and introduce additional analytical challenges (I tried and failed). To our knowledge, no succesful inference of transcriptional regulations has ever been made from single cell data at the time of this writing. {r} #Try to download the data, if it was not already present on the computer data_dir <- here("local_data") if(! dir.exists(data_dir)) { dir.create(data_dir) } data_file <- here("local_data","GSE6865_series_matrix.txt.gz") if(!file.exists(data_file)) { download.file("ftp://ftp.ncbi.nlm.nih.gov/geo/series/GSE6nnn/GSE6865/matrix/GSE6865_series_matrix.txt.gz", data_file) } # Read and preprocess the data gse6865_raw_df = read.delim(gzfile(data_file), comment.char = "!") #Intermediate data frame representation # Raw profile data. We scale by 1000 to get to more wieldy values gse6865_raw = as.matrix(gse6865_raw_df[,2:15]) / 1000 rownames(gse6865_raw) = gse6865_raw_df$ID_REF #Times (in minutes) for the individual samples gse6865_raw_time = c(0,5,10,15,20,25,30,40,50,60,70,80,90,100) colnames(gse6865_raw) <- sapply(gse6865_raw_time, FUN = function(x) { paste0(x,"min")}) #There are a few NA values in 1st and 2nd measurement, which we will ge trid of. # For the first measurement, we can expect the value to be 0 (the series are from germination) gse6865_raw[is.na(gse6865_raw[,1]),1] = 0 # For the second measurement we will go with a linear interpolation na2 = is.na(gse6865_raw[,2]) gse6865_raw[na2,2] = 0.5 * gse6865_raw[na2,1] + 0.5 * gse6865_raw[na2,3] #cleanup the intermediate results rm(gse6865_raw_df) rm(na2) rm(data_dir) rm(data_file) Below is a sample of the expression profiles of 20 random genes, notice that most genes have almost zero expression. {r} set.seed(2345277) genes_to_show <- sample(1:nrow(gse6865_raw), 20) ggmatplot(gse6865_raw_time, t(gse6865_raw[genes_to_show,]), main_geom = geom_line(alpha = 0.6), x_title = "Time [min]", y_title = "expression") # Basics of The Model In all of the following, $x_i$ is the true (unknown) expression of a target gene, $y_j$ the true protein concentration of a regulator and $\rho_i$ the regulatory input for the target gene. Both $x_i$, $y_j$ and $\rho_i$ are functions of time while all other parameters are constants. The regulatory input is a linear combination of protein concentrations of the regulators: $$\rho_i = \sum_j{w_{i,j}y_j} + b_i$$ Then the expression of $x_i$ is driven by the following ordinary differential equation (ODE): \begin{align} \frac{\mathrm{d}x_i}{\mathrm{d}t} &= s_i F(\rho_i) - d_i x_i \\ F(\rho_i) &= \frac{1}{1 + e^{-\rho_i}} \end{align} where $F$ is the regulatory response function, in our case the logistic sigmoid. Our goal is to use Stan to infer the parameters of this model ($s_i,w_{i,j},b_i, d_i$), given the observed expression of the target genes and regulators over time. The model assumes constant degradation ($d_i$) over time which is unrealistic, but necessary as most datasets do not contain enough information to identify varying degradation. There are two forms of expression data available with widely different noise models: [microarray](https://en.wikipedia.org/wiki/DNA_microarray) and [RNA-seq](https://en.wikipedia.org/wiki/RNA-Seq). The former produces positive continuous measurements with approximately normal error while the latter produces count data with negative-binomial distribution. The datasets we are interested in come from microarray experiments so we will focus on those, but using RNA-seq means just changing the observation model. For microarrays, the observation model can be treated as a truncated normal: $$\tilde{x_i}(t) \sim N(x_i(t), \sigma_{abs} + x_i(t)\sigma_{rel}) \mathop{|} \tilde{x_i}(t) > 0$$ The two error terms represent the fixed absolute error ($\sigma_{abs}$) which is the result of technical noise in the microarray platform which is constant [@Klebanov2007] and the relative error ($\sigma_{rel}$) wich represents the uncaptured biological variation which tend to be proportional to the expresssion of the genes. For the regulator(s) there are two possible regimes: a) the expression of the regulators is observed and the protein concentration is assumed to be identical to the true expression or b) the $y$ is estimated only through its influence on $x$. In case a), the exact same error model applies to regulators. In the following, we treat both $\sigma_{abs}$ and $\sigma_{rel}$ as given. The model can treat the error terms as parameters, but with the dataset we work with, the resulting $\sigma_{rel}$ values tended to be unrealistically high and most fits unexpectedly poor as a consequence. We are yet to investigate why this is the case. Based on our previous experience with similar models, we expect $\sigma_{abs} = 0.1$ and the biological variation to be around 20% of the expression, suggesting $\sigma_{rel} = 0.1$. # Modelling Regulator Expression To solve the regulation ODE numerically, we need to have $y_j$ available at much finer intervals than the available measurements (1-2 minute intervals have proven sufficient). Initially, we considered Gaussian Processes [as @TitsiasHonkela12]), but those introduced computational issues and we switched to B-splines which have less theoretical appeal, but introduce fewer parameters and were easier to fit. We however need to ensure that $y_j$ are positive. After some early setbacks with the log1pexp transform ($x^+ = ln(1+e^x)$), we settled to simply subtract the minimum of the spline from all points[^1], i.e. given the spline basis $B$ we get: \begin{aligned} \bar{y_j} &= B\alpha_j \\ y_j &= c_{scale}(\bar{y_j} - \min{\bar{y_j}}) + \beta_j \\ \alpha_j &\sim N(0,1) \\ \beta_j &\sim HalfNormal(0, \sigma_\beta) \end{aligned} Where $\alpha_j$ is a column vector of spline coefficients and $\beta_j$ serves as intercept, all are treated as parameters. The scaling constant $c_{scale}$ and $\sigma_\beta$ reflect the range of the data and are given by the user. When the regulator proteins are estimated from the effect on targets only (no expression measurements for regulator), the spline intercept $\beta_j$ is ignored, as it cannot be separated from $b_i$. [^1]: We now believe that the initial problems were not caused by log1pexp but by other parts of the model, however, the $\min$ transform is working well and there is currently no incentive to replace it. Note that $\min{\bar{y_i}}$ is mostly a smooth function of $\alpha_i$. # Solving the ODE It is a bit tricky to use Stan's built-in ODE solver with a spline as one of the parameters driving ODE evolution as it needs the value of $y_i$ at arbitrary time points. The most straightforward way - linearly interpolating between precomputed discrete values of $y_i$ requires some hacks and breaks the solver. In principle, $\alpha_i$ could be given to the ODE solver and the spline basis computed for each timepoint the solver needs, but this seemed cumbersome. Instead we decided to follow the approach of [@TitsiasHonkela12]. First we can observe that the regulation ODE is linear in $x_i$ and can be solved for $t \geq 0$: $$x_i(t) = \eta_i e^{-d_i t} + s_i \int_0^t F(\rho_i(u)) e^{-d_i(t-u)} \mathrm{d}u$$ Where $\eta_i$ is the concentration of the target at $t=0$. We then solve the resulting integral numerically via [trapezoid rule](https://en.wikipedia.org/wiki/Trapezoidal_rule)[^2]. Assuming unit timestep, $x_i$ can be computed in a single for loop: \begin{aligned} \chi_i(0) &= -\frac{1}{2} F(\rho_i(0)) \\ \chi_i(t + 1) &= (\chi_i(t) + F(\rho_i(t)))d_i \\ x_i(t) &= \eta_i e^{-d_i t} + s_i(\chi_i(t) + \frac{1}{2}F(\rho_i(t))) \end{aligned} [^2]: Trapezoid rule agrees well with the results of solving the ODE with RK45, while the simpler midpoint rule has significant bias even when the time steps are small. # Identifiability The model as given above is biologically relevant, but has multiple identifiability issues, all of which can be demonstrated with just a single regulator and a single target, so we will drop parameter indicies for the rest of this section. The first non-identifiability arises with $w$ and $b$ if all values of $\rho$ fall on the tail of the sigmoid, which is approximately linear. In this case, the walue of $F(\rho)$ becomes insensitive to linear transformations of $w$ and $b$: {r} time <- seq(0,2,length.out = 100) regulator <- sin(time * 4) + 1 params1 <- c(degradation = 0.5, bias = -3, sensitivity = 1, weight = 5, basal_transcription = 0, protein = approxfun(time, regulator, rule=2)); target1 <- ode( y = c(x = 0), times = time, func = target_ODE, parms = params1, method = "ode45")[,"x"]; params2 <- c(degradation = 0.5, bias = -6, sensitivity = 1, weight = 10, basal_transcription = 0, protein = approxfun(time, regulator, rule=2)); target2 <- ode( y = c(x = 0), times = time, func = target_ODE, parms = params2, method = "ode45")[,"x"]; params3 <- c(degradation = 0.5, bias = -30, sensitivity = 1, weight = 50, basal_transcription = 0, protein = approxfun(time, regulator, rule=2)); target3 <- ode( y = c(x = 0), times = time, func = target_ODE, parms = params3, method = "ode45")[,"x"]; params_to_legend <- function(params) { paste0("w = ", params["weight"], ", b = ", params["bias"]) } data.frame(time = time, regulator = regulator, target1 = target1, target2 = target2, target3 = target3) %>% gather("profile","expression", -time) %>% ggplot(aes(x = time, y = expression, color = profile)) + geom_line() + scale_color_hue(labels = c("regulator", params_to_legend(params1), params_to_legend(params2), params_to_legend(params3))) + ggtitle("Non-identifiability due to w and b","All targets have s = 1, d = 0.5") Transformations of $s$ and $d$ together may also have very minor effect on the resulting expression. While the resulting expression profiles are more different than in the previous case, those differences are still very small compared to the noise observed in real data. In the following figures, the red dots represent measurements that have approximately equal likelihood for each of the shown true target profiles. {r} time <- seq(0,2,length.out = 100) regulator <- sin(time * 4) + 1 params1 <- c(degradation = 5, bias = -1, sensitivity = 10, weight = 1, basal_transcription = 0, protein = approxfun(time, regulator, rule=2)); target1 <- ode( y = c(x = 1), times = time, func = target_ODE, parms = params1, method = "ode45")[,"x"]; params2 <- c(degradation = 10, bias = -1, sensitivity = 19, weight = 1, basal_transcription = 0, protein = approxfun(time, regulator, rule=2)); target2 <- ode( y = c(x = 1), times = time, func = target_ODE, parms = params2, method = "ode45")[,"x"]; params3 <- c(degradation = 100, bias = -1, sensitivity = 180, weight = 1, basal_transcription = 0, protein = approxfun(time, regulator, rule=2)); target3 <- ode( y = c(x = 1), times = time, func = target_ODE, parms = params3, method = "ode45")[,"x"]; params_to_legend <- function(params) { paste0("s = ", params["sensitivity"], ", d = ", params["degradation"]) } measured_data = data.frame(profile = "measured", time = seq(0,2,length.out = 9), expression = c(1,1.7,1.4,0.85, 0.83,0.35,0.7,1.4,1.5)) data.frame(time = time, regulator = regulator, target1 = target1, target2 = target2, target3 = target3) %>% gather("profile","expression", -time) %>% ggplot(aes(x = time, y = expression, color = profile)) + geom_line() + geom_point(data = measured_data, color = "#ba1b1d", size = 3) + scale_color_hue(labels = c("regulator",params_to_legend(params1), params_to_legend(params2), params_to_legend(params3))) + ggtitle("Non identifiability due to s and d","All targets have w = 1, b = -1") There is also a non-identifiability when $w = 0$ as $s$ and $b$ become redundant. Even worse, some of the solutions with $w = 0$ might not be distinguishable form solutions with $|w| \gg 0$. This might also induce non-identifiability in the initial condition $\eta$ as it is much less constrained by data. Once again the expression profiles are not identical, but are close enough that given the amount of noise, non-negligible posterior mass can be found for all of them: {r} time <- seq(0,2,length.out = 100) regulator <- sin(time * 4) + 1 params1 <- c(degradation = 3, bias = -1, sensitivity = 3, weight = 5, basal_transcription = 0, protein = approxfun(time, regulator, rule=2)); target1 <- ode( y = c(x = 1.5), times = time, func = target_ODE, parms = params1, method = "ode45")[,"x"]; params2 <- c(degradation = 3.4, bias = 0, sensitivity = 6, weight = 0, basal_transcription = 0, protein = approxfun(time, regulator, rule=2)); target2 <- ode( y = c(x = 1.6), times = time, func = target_ODE, parms = params2, method = "ode45")[,"x"]; params3 <- c(degradation = 10, bias = 0, sensitivity = 16, weight = 0, basal_transcription = 0, protein = approxfun(time, regulator, rule=2)); target3 <- ode( y = c(x = 2.5), times = time, func = target_ODE, parms = params3, method = "ode45")[,"x"]; params_to_legend <- function(params) { paste0("s = ", params["sensitivity"],", w = ", params["weight"], ", b = ", params["bias"], ", d = ", params["degradation"]) } measured_data = data.frame(profile = "measured", time = seq(0,2,length.out = 9), expression = c(1.8,0.9,1,0.66, 1.11,0.6,0.7,0.96,1.03)) data.frame(time = time, regulator = regulator, target1 = target1, target2 = target2, target3 = target3) %>% gather("profile","expression", -time) %>% ggplot(aes(x = time, y = expression, color = profile)) + geom_line() + geom_point(data = measured_data, color = "#ba1b1d", size = 3) + scale_color_hue(labels = c("regulator",params_to_legend(params1), params_to_legend(params2), params_to_legend(params3))) + ggtitle("Non identifiability between w=0 and w > 0") Last but not least, it is also possible to get similar solutions with different sign of $w$: {r} time <- seq(0,2,length.out = 100) regulator <- c(0.3296698,0.6667181,1.0083617,1.3518170,1.6943005,2.0330289,2.3652186,2.6880862,2.9988482,3.2947213,3.5729219 ,3.8306665,4.0651718,4.2736542,4.4533304,4.6014168,4.7154822,4.7961600,4.8456619,4.8662125,4.8600365,4.8293585 ,4.7764032,4.7033952,4.6125592,4.5061198,4.3863016,4.2553294,4.1154277,3.9688212,3.8177346,3.6643924,3.5110195 ,3.3598403,3.2130795,3.0725728,2.9385992,2.8110488,2.6898117,2.5747779,2.4658374,2.3628803,2.2657967,2.1744767 ,2.0888102,2.0086873,1.9339981,1.8646326,1.8004810,1.7414331,1.6873792,1.6382092,1.5938132,1.5540812,1.5189034 ,1.4881697,1.4617702,1.4395950,1.4215341,1.4074776,1.3973155,1.3909378,1.3882347,1.3890962,1.3934123,1.4010732 ,1.4119687,1.4259843,1.4428964,1.4623727,1.4840759,1.5076691,1.5328149,1.5591763,1.5864163,1.6141975,1.6421830 ,1.6700355,1.6974179,1.7239932,1.7494242,1.7733737,1.7955046,1.8154798,1.8329622,1.8476146,1.8590998,1.8670809 ,1.8712205,1.8711817,1.8666272,1.8572200,1.8426229,1.8224987,1.7965104,1.7643208,1.7255927,1.6799891,1.6271729 ,1.5668068) * 0.5 params1 <- c(degradation = 10, bias = -8, sensitivity = 15, weight = 10, basal_transcription = 0, protein = approxfun(time, regulator, rule=2)); target1 <- ode( y = c(x = 0), times = time, func = target_ODE, parms = params1, method = "ode45")[,"x"]; params2 <- c(degradation = 1, bias = 3, sensitivity = 100, weight = -10, basal_transcription = 0, protein = approxfun(time, regulator, rule=2)); target2 <- ode( y = c(x = 0), times = time, func = target_ODE, parms = params2, method = "ode45")[,"x"]; params_to_legend <- function(params) { paste0("s = ", params["sensitivity"],", w = ", params["weight"], ", b = ", params["bias"], ", d = ", params["degradation"]) } measured_data = data.frame(profile = "measured", time = seq(0,2,length.out = 9), expression = c(0.05,1.52,1.44,0.83, 0.96,1.01,0.72,1.07,0.65)) data.frame(time = time, regulator = regulator, target1 = target1, target2 = target2) %>% gather("profile","expression", -time) %>% ggplot(aes(x = time, y = expression, color = profile)) + geom_line() + geom_point(data = measured_data, color = "#ba1b1d", size = 3) + guides(color=FALSE) + ggtitle("Non identifiability between w < 0 and w > 0") There are probably many more non-identified scenarios, especially when multiple targets are involved, but the above are of the type we have actually encountered with real data. ## Reparametrization First we realized that when there is a good fit with $w < 0$ and a similarly good one with $w > 0$, there is nothing we can do to make the model identifiable, so we decided to make the user specify the signs of regulatory interactions $I_{i,j} = sgn(w_{i,j})$ as data. This is not a big hindrance as this type of regulation model is mostly employed to test direct regulations by sigma factors which are expected to always have $w > 0$. If the sign of $w$ is important and there is only one regulator, it is possible to fit each target separately with different values of $I_{1,1}$. The only truly problematic case is when fitting a model with multiple regulators and unknown $I$, but since two regulators already often overfit on most practical datasets this is of little practical concern. Additionally, user may set some $I_{i,j} = 0$ to indicate known absence of regulation. Some of the non-identifiabilities arise because certain simple aspects of the resulting expression (e. g. magnitude) are influenced by multiple parameters. To get rid of those complex dependencies, we introduced several reparametrizations. The only parameter we kept directly is the degradation parameter $d_i$, which also has the nice property that it does not depend on the scale of the expression data (only on the length of the timestep). Instead of $s_i,b_i$ and $w_{i,j}$, we introduce $\mu^{\rho}_i$, the mean regulatory input, $\sigma^{\rho}_i$, the sd of the regulatory input, $\gamma_i$ a simplex of relative regulator weights and $a_i$ the asymptotic normalized state. All of those parameters are also decoupled from the actual values of the expression data. Formally: \begin{align} \mu^{\rho}_i &= E(\rho_i) \\ \sigma^{\rho}_i &= \mathrm{sd}(\rho_i) \\ a_i &= \frac{s_i E(F(\rho_i))}{d_i \max{\tilde{x_i}}} \\ \end{align} Where $E$ and $\mathrm{sd}$ correspond to the sample mean and standard deviation. Solving for the original parameters we get: \begin{align} w_{i,j} &= I_{i,j} \gamma_{i,j} \frac{\sigma^{\rho}_i}{\mathrm{sd}(y_j)} \\ b_i &= \mu^{\rho}_i - \sum_j w_{i,j} E(y_j) \\ s_i &= \frac{a_i d_i \max\tilde{x_i} }{ E(F(\rho_i)) } \\ \sum_j \gamma_{i,j} &= 1 , & \gamma_{i,j} > 0 \\ \end{align} The formula for $w_{i,j}$ assumes independence among the regulators which does not hold, but handling the covariance structure correctly would be difficult and could lead to multiple solutions. The interpretion of $a_i$ is that if the expression stays at its mean level, e.g. $F(\rho_i) \simeq E(F(\rho_i))$ and $t \rightarrow \infty$ then $x_i(t) \rightarrow a_i \max\tilde{x_i}$. In other words $a_i$ is related to the hypothetical steady state long time in the future. Nevertheless the cells do not reach steady state for this datasets (and indeed for most datasets). Also note that using $\max\tilde{x_i}$ to scale $s_i$ means the model is not completely generative, as we cannot generate $\tilde{x_i}$ before we generate $s_i$, but this has shown to not be a problem in practice. The reparametrization not only helped identify the posterior but it is also much easier to specify priors for those parameters! We ended up with the following priors: \begin{align} \mu^{\rho}_i &\sim N(0,\tau_{\rho,\mu}) &\\ \sigma^{\rho}_i &\sim N(0,\tau_{\rho,\sigma}) &| \sigma^{\rho}_i > 0 \\ a_i &\sim N(1,\tau_{a}) &| a_i > 0 \\ log(d_i) &\sim N(\nu_d,\tau_d) & \end{align} Where all the hyperparameters are given by user, but sensible defaults can be given as the hyperparameters are decoupled from the scale of the data. Since the sigmoid $F$ is mostly saturated outside $[-5,5]$, we use $\tau_{\rho,\mu} = \tau_{\rho,\sigma} = 5$. Further we expect $a_i$ should lie mostly in $[0,2]$ (e.g. if given more time with similar regulatory input the gene will unlikely raise to more than twice the maximum observed), giving $\tau_a = \frac{1}{2}$ and finally we expect degradation to be non-negligible but less than 1 (e.g. all mRNA degrading in a single unit of time), giving $\nu_d = -2$ and $\tau_d = 1$. ## The constant synthesis model To get rid of the non-identifiabilities connected with $w = 0$, we first try to fit a simpler *constant synthesis* model to each target separately. This model is given by: $$\frac{\mathrm{d}x_i}{\mathrm{d}t} = s'_i - d'_i x_i$$ As in the general model, the constant synthesis model is not always well identified with this representation and we thus reparametrize with $d'_i$ and $a'_i$ as the asymptotic normalized state. $$s'_i = a'_i d'_i \max\tilde{x_i}$$ This also lets us use the same priors for $a'_i$ and $d'_i$ as for $a_i$ and $d_i$ respectively. If the target is "reasonably well" fit with the constant synthesis model, it is not considered for the full model, because it could then be fit by any regulator, simply be setting $w = 0$. The quality of the model fit is assessed with looic, see discussion below for further details. Note that the constant synthesis model also fits all lowly expressed genes. To our knowledge, filtering with a such a simpler model was first proposed in our recent work [@Modrak2018]. # Workflow The assumed workflow when using the model to infer novel regulations proceeds as follows: 1. Gather putative and known targets of the regulators of interest 2. Fit the constant synthesis model to all known & putative targets 3. Assess model fit and discard targets that are fit well 4. *Optional:* Use the known targets to constrain the true expression of the regulators 5. Fit the main model to each putative target separately 6. Compare the fit with the main model to the fit of the constant synthesis model In the following example, we will use 5 known and 4 putative targets of the *sigA* regulator. We use cubic spline with 6 degrees of freedom, $c_{scale} = 5$ as the basis for the main model. {r} #Globally used params for the algorithm measurement_times = gse6865_raw_time + 1 smooth_time <- 1:101#seq(0,100, by = 1) expression_data <- gse6865_raw spline_df <- 6 spline_basis <- bs(smooth_time, degree = 3, df = spline_df) default_spline_params <- spline_params( spline_basis = spline_basis, scale = 5 ) default_params_prior <- params_prior( initial_condition_prior_sigma = 2, asymptotic_normalized_state_prior_sigma = 2, degradation_prior_mean = -2, degradation_prior_sigma = 1, mean_regulatory_input_prior_sigma = 5, sd_regulatory_input_prior_sigma =5, intercept_prior_sigma = 2 ) default_measurement_sigma = measurement_sigma_given(0.1,0.1) {r} putative_targets <- c("kinE","purT", "yhdL", "codV") #The known targets are those predicted and biologically validated in our previous work with the dataset (https://doi.org/10.1016/j.bbagrm.2017.06.003) known_targets <- c("acpA","fbaA","rpmGA","ykpA","yyaF") plot_profiles <- function(expression_data, targets) { expression_data[targets,,drop = FALSE] %>% as.data.frame() %>% rownames_to_column("gene") %>% gather("time","expression",-gene) %>% mutate(time = as.integer(gsub("min","",time, fixed = TRUE))) %>% ggplot(aes(x = time, y = expression, color = gene, linetype = gene)) + geom_line() } plot_profiles(expression_data, "sigA") + ggtitle("Regulator") plot_profiles(expression_data, known_targets) + ggtitle("Known targets") plot_profiles(expression_data, putative_targets) + ggtitle("Putative targets") ## Fit the constant synthesis model Below are posterior samples from fitting the constant synthesis model to the putative targets. In addition to samples from posterior true expression we also show simulated replicates of the measured values, to get a better sense of the level of noise expected by the model. {r cache=TRUE, results = "hide"} csynth_model <- stan_model(file = here('stan','constant_synthesis.stan')) {r csynth_model,cache=TRUE} targets <- c(putative_targets) data_csynth <- list() fits_csynth <- list() loo_csynth <- list() for(t in 1:length(targets)) { data_csynth[[t]] <- list( num_measurements = length(measurement_times), measurement_times = measurement_times, expression = expression_data[targets[t],], measurement_sigma_absolute = default_measurement_sigma$sigma_absolute_data[1], measurement_sigma_relative = default_measurement_sigma$sigma_relative_data[1], initial_condition_prior_sigma = default_params_prior$initial_condition_prior_sigma, asymptotic_normalized_state_prior_sigma = default_params_prior$asymptotic_normalized_state_prior_sigma, degradation_prior_mean = default_params_prior$degradation_prior_mean, degradation_prior_sigma = default_params_prior$degradation_prior_sigma ) fits_csynth[[t]] <- sampling(csynth_model, data_csynth[[t]]) loo_csynth[[t]] <- get_loo_csynth(fits_csynth[[t]]) } {r, cache=TRUE} plots <- list() for(t in 1:length(targets)) { fit <- fits_csynth[[t]] plot1 <- fitted_csynth_plot(fit, data_csynth[[t]], name = targets[t]) plot2 <- fitted_csynth_observed_plot(fit, data_csynth[[t]], name = targets[t]) plot_grid(plot1, plot2, ncol = 2) %>% print() } ## Assesing constant synthesis fit But how to determine which genes are "fit well" by the constant synthesis model? The best way would probably be to use a custom crossvalidation scheme, predicting one measurement into the future using only the previous measurements. However, this is computationally prohibitive as hundreds of genes need to be checked in practice. Instead we approximate leave-one-out crossvalidation with looic using the loo package [@looPackage] which is feasible and aligns well with our intuition of the ordering of the quality of fits. We note that looic reports warnings for almost all of our fits, indicating limited reliability. Here are the looic scores for the constant synthesis fits: {r,cache=TRUE} get_ic_estimate <- function(x) { x %>% map_dbl(function(x) { x$estimates["looic","Estimate"]}) } csynth_table <- tibble(target = targets, looic_csynth = get_ic_estimate(loo_csynth)) csynth_table Our workflow currently assumes that a human inspects the fits and specifies a looic threshold manually. The threshold can be relatively conservative, the only thing that needs to be avoided are non-identifiabilities due to conflict between fits with$w = 0$and fits with$w \gg 0$. In this example, we will set threshold to 0, elminating only *kinE* from further consideration. In practice it is also important to fit the constant synthesis model to the known targets, which we omitted here for brevity (none of the known targets is fit well). # Estimating regulator expression from known targets A problem with the main model is that the data constrain the true expression quite weakly. This is not an issue for the putative targets, but becomes problematic for the expression of the regulators. When trying to determine new regulations, only a single target at a time needs to be fit, because the model assumes that all the regulations actually take place and thus a single regulation that is not, in fact, correct may shift the parameter values (especially the spline params$\alpha$) considerably. While it would be in principle possible to marginalize over the power set of targets to get estimates of probabilities of the individual regulations, this is computationally infeasible. The downside of fitting each target separately is that the estimated expression of regulators may not be consistent across targets, possibly leading to false positives. Similarly to [@TitsiasHonkela12], our model is able to use known regulations to reduce the uncertainty in regulator expression, hopefully elminating any gross inconsistencies between individual fits for putative targets. This is a simple side-effect of the Bayesian treatment, where after fitting the model with all the known targets at once, we can extract the posterior for$y$and use it as input when fitting putative targets. This step is however optional and if no targets are known with high certainty, it can be skipped. First, let us see the posterior samples from fitting the regulator with 0 targets (equivalent to just fitting a spline). The red dots are the measured values: {r load_model, cache = TRUE, results = "hide"} #Load the model regulated_model <- stan_model(file = here('stan','regulated.stan')) {r spline_only, cache = TRUE, results="hide"} set.seed(4127785) source <- "sigA" data <- regulated_model_params( measurement_times = measurement_times, regulator_expression = expression_data[source,], measurement_sigma = default_measurement_sigma, spline_params = default_spline_params, params_prior = default_params_prior ) fit_regulator_spline_only <- sampling(regulated_model, data = data, control = list(adapt_delta = 0.95)) fitted_regulator_plot(fit_regulator_spline_only, data, name = paste0(source, " spline only")) While the exact scaling of the regulator expression should not affect model results much, we see that there is also qualitative uncertainty, for example in the number of local minima and maxima after the 60 minute mark. And this is what the posterior looks like when both regulator expression and 5 known targets are used. Note that unlike using only splines, the profiles are now very similar especially in that local minima and maxima are at almost the same time point: {r all_info, cache = TRUE, results="hide"} set.seed(751235428) data <- regulated_model_params( measurement_times = measurement_times, regulator_expression = expression_data[source,], target_expression = t(expression_data[known_targets,,drop = FALSE]), regulation_signs = matrix(1, 1, length(known_targets)), measurement_sigma = default_measurement_sigma, spline_params = default_spline_params, params_prior = default_params_prior ) fit_regulator_all_info <- sampling(regulated_model, data = data, control = list(adapt_delta = 0.95)) fitted_regulator_plot(fit_regulator_all_info, data, name = paste0(source, " all information")) # for(target in 1:length(training_targets)) { # fitted_target_plot(fit_regulator_all_info, data, target = target, name = training_targets[target]) %>% print() # } ## Transferring the learned expression to other fits To transfer this "learned" expression of the regulator to other fits, we need to make additional assumptions. The most direct way is to treat the fitted spline coefficients$\alpha_j$as samples from a multivariete normal (MVN) distribution. Some caution has to be excersised since the distribution of coefficients is in general not MVN. Although individual components and even pairs may look approximately normal, the distribution is skewed and has high kurtosis: {r,fig.height=5} pairs(fit_regulator_all_info, pars = "coeffs") samples_coeffs <- rstan::extract(fit_regulator_all_info,"coeffs")$coeffs[,,1] mvn_test_results <- mvn(samples_coeffs, mvnTest = "mardia", desc = FALSE, multivariatePlot = "qq") mvn_test_results$multivariateNormality The MVN approximation is still the most practical. For simplicity we fit MVN by the method of moments. In practice, we have found that MVN fitted this way still gives a lot of leeway for the putative target fits to move away from this estimated distribution as it is only half the data used in the model. To combat this effect, we shrink the "learned" distribution by multiplying the covariance matrix by$0.5$. We currently do not understand deeply why the shrinking leads to better behavior, and it is a simple hack which we should improve upon and provide a more principled way to determine appropriate MVN parameters or even use a different multivariate distribution to account for the skew and kurtosis. As suggested by one of the reviewers, we also tried to fit a multivariate mixture to the distribution using the mixtools package, but that did result in a very similar distribution and did not improve behavior. For comparison here are samples from the original posterior and the scaled and unscaled MVN approximation: {r, fig.height=5} n_samples <- 100 mvn_unscaled <- coeffs_prior_from_fit(fit_regulator_all_info, covariance_scale = 1) cov_scale <- 0.5 means_array <- t(array( rep(mvn_unscaled$coeffs_prior_mean,n_samples),c(length(mvn_unscaled$coeffs_prior_mean), n_samples))) samples_learned <- array(rnorm(length(mvn_unscaled$coeffs_prior_mean) * n_samples, 0, 1), c(n_samples, length(mvn_unscaled$coeffs_prior_mean))) %*% chol(mvn_unscaled$coeffs_prior_cov[1,,]) + means_array samples_learned_scaled <- array(rnorm(length(mvn_unscaled$coeffs_prior_mean) * n_samples, 0, 1), c(n_samples, length(mvn_unscaled$coeffs_prior_mean))) %*% chol(mvn_unscaled$coeffs_prior_cov[1,,] * cov_scale) + means_array limits <- ylim(c(0,6.5)) plot1 <- fitted_regulator_plot(fit_regulator_all_info, data, name = paste0(source, " all information"), num_samples = n_samples) + limits measured_geom <- geom_point(data = data.frame(x = data$measurement_times, y = data$regulator_expression), aes(x=x, y=y), inherit.aes = FALSE, color = "#ba1b1d", size = 3) plot2 <- ggmatplot(1:data$num_time, t(samples_learned %*% t(data$spline_basis)) * data$scale, main_geom = default_expression_plot_main_geom) + measured_geom + ggtitle("MVN approximation unscaled") + limits plot3 <-ggmatplot(1:data$num_time, t(samples_learned_scaled %*% t(data$spline_basis)) * data$scale, main_geom = default_expression_plot_main_geom) + measured_geom + ggtitle(paste0("MVN approximation, scale = ", cov_scale)) + limits plot_grid(plot1, plot2, NULL, plot3, nrow = 2) ## Fitting the model for putative targets Now we can fit the actual model for the remaining three putative targets. Since known targets are available, the MVN approximation of the posterior of the spline coefficients$\alpha$is used as a prior for$\alpha$when fitting the putative targets instead of passing the regulator expression to the model. If there are no known targets, the regulator expression is provided directly. Below are samples of the posterior distribution of target expression and simulated replicates of observed expression: {r fitting_regulated,cache=TRUE} targets <- c("purT", "yhdL", "codV") data_regulated <- list() fits_regulated <- list() loo_regulated <- list() coeffs_prior <- coeffs_prior_from_fit(fit_regulator_all_info, covariance_scale = 1) for(t in 1:length(targets)) { data_regulated[[t]] <- regulated_model_params( measurement_times = measurement_times, target_expression = t(expression_data[targets[t],,drop = FALSE]), regulation_signs = matrix(1, 1, 1), measurement_sigma = default_measurement_sigma, spline_params = default_spline_params, params_prior = default_params_prior, coeffs_prior = coeffs_prior ) fits_regulated[[t]] <- sampling(regulated_model, data_regulated[[t]]) loo_regulated[[t]] <- get_loo_genexpi(fits_regulated[[t]], target = 1) } {r,cache=TRUE} for(t in 1:length(targets)) { fit <- fits_regulated[[t]] plot1 <- fitted_target_plot(fit, data_regulated[[t]], name = targets[t]) plot2 <- fitted_target_observed_plot(fit, data_regulated[[t]], name = targets[t]) plot_grid(plot1, plot2, ncol = 2) %>% print() } ## Fitting the free model In addition to the csynth model we also introduce a *free* model which is the same as the regulated model, but the regulator is not observed, letting almost any target profile be fit well. We assume that if the target expression is well explained by the regulator, using the regulator should improve the predictive power of the model over the free variant. Below, we show the fits of the free model and the associated (unconstrained) imaginary regulator profiles. {r fitting_free, cache = TRUE} data_free <- list() fits_free <- list() loo_free <- list() for(t in 1:length(targets)) { data_free[[t]] <- regulated_model_params( measurement_times = measurement_times, target_expression = t(expression_data[targets[t],,drop = FALSE]), regulation_signs = matrix(1, 1, 1), measurement_sigma = default_measurement_sigma, spline_params = default_spline_params, params_prior = default_params_prior ) fits_free[[t]] <- sampling(regulated_model, data_free[[t]]) loo_free[[t]] <- get_loo_genexpi(fits_free[[t]], target = 1) } {r,cache=TRUE} for(t in 1:length(targets)) { fit <- fits_free[[t]] plot1 <- fitted_target_plot(fit, data_regulated[[t]], name = targets[t]) plot2 <- fitted_regulator_plot(fit, data_regulated[[t]], name = targets[t]) plot_grid(plot1, plot2, ncol = 2) %>% print() } ## Comparison to baseline models We can now use looic to compare the fits of the putative regulators to the csynth and free fits. Once again, in practice, hundreds of putative targets might need to be examined. It is assumed that a human will inspect some of the fits and specify a minimal looic improvement over the constant synthesis model and possibly also an absolute looic threshold to consider fits adequate. Both visual inspection and the difference in looic show that *purT* cannot be fit by *sigA* while both *yhdL* and *codV* are fit well. It is however important to keep in mind that a good fit is a necessary but not sufficient condition for the regulation to be considered real. {r} regulated_table <- tibble(target = targets, looic_regulated = get_ic_estimate(loo_regulated), looic_free = get_ic_estimate(loo_free)) %>% left_join(csynth_table, by = "target") regulated_table # Exploratory analysis with multiple regulators Since *purT* is likely not regulated by *sigA*, we might try a combination of multiple regulators. Unless the dataset is much larger than the one we are using here, this is highly speculative: many combinations of two regulators are able to fit a majority of all genes quite well. {r multiple_reg, cache=TRUE} regulators = c("sigA", "sigB") targets = "purT" data_two_reg <- regulated_model_params( measurement_times = measurement_times, regulator_expression = t(expression_data[regulators,,drop = FALSE]), target_expression = t(expression_data[targets,,drop = FALSE]), regulation_signs = matrix(1, 2, 1), measurement_sigma = default_measurement_sigma, spline_params = default_spline_params, params_prior = default_params_prior ) fit_two_reg <- fit_regulated(data_two_reg, regulated_model) Inspecting the fit visually and comparing the looic in this case we see that the regulation by *sigA* and *sigB* is plausible. {r, message=FALSE,warning=FALSE} plot1 <- fitted_target_plot(fit_two_reg, data_two_reg) plot2 <- fitted_target_observed_plot(fit_two_reg, data_two_reg) plot_grid(plot1, plot2, ncol = 2) regulated_table %>% filter(target == "purT") %>% mutate(looic_two_reg = get_loo_genexpi(fit_two_reg)$estimates["looic","Estimate"]) # Conclusions While the model as presented already coversthe full workflow to infer novel regulations, there are some issues that need to be ironed out before we can rely on it. Most notably, we are yet to perform a thorough evaluation and comparison with other tools for the same task. Initial results were actually comparable to a simpler maximum likelihood variant of the model with separate splining and parameter fitting phases [as presented in @Modrak2018]. We are currently working to understand why there is not a marked improvement, as we know that some of the incorrect results of the simpler version stemmed from separate splining. Further workflow steps we are not happy with are the way we fit a multivariete normal distribution to the posterior of regulator expression, the arbitrary way we use looic to make decisions, and convergence issues that sometimes arise when running the model without observed regulator expression on real data. There is also substantial work to be done in encapsulating the model as a package and providing a cleaner interface. # Acknowledgements Huge thanks belong to the Stan development team and the Stan community. Without the help I got from their tranining materials and answers on the forums, I would not be able to move this project forward. This work was supported by C4Sys research infrastructure project (MEYS project No: LM20150055). # Session Info {r} sessionInfo() ` # References
|
40m QIL Cryo_Lab CTN SUS_Lab TCS_Lab OMC_Lab CRIME_Lab FEA ENG_Labs OptContFac Mariner WBEEShop
40m Log, Page 224 of 346 Not logged in
ID Date Author Type Category Subject
16462 Tue Nov 9 18:05:03 2021 Ian MacMillanSummaryComputer Scripts / ProgramsSUS Plant Plan for New Optics
[Ian, Tega]
Now that the computer is in its new rack I have copied over the filter two files that I will use in the plant and the controller from pianosa:/opt/rtcds/caltech/c1/chans to the docker system in c1sim:/home/controls/docker-cymac/chans. That is to say, C1SUP.txt -> X1SUP.txt and C1SUS.txt -> X1SUS_CP.txt, where we have updated the names of the plant and controller inside the txt files to match our testing system, e.g. ITMX -> OPT_PLANT in plant model and ITMX -> OPT_CTRL in the controller and the remaining optics (BS, ITMY, PRM, SRM) are stripped out of C1SUS.txt in order to make X1SUS_CP.txt.
Once the filter files were copied over need to add them to the filters that are in my models to do this I run the commands:
$cd docker-cymac$ eval $(./env_cymac)$ ./login_cymac # cd /opt/rtcds/tst/x1/medm/x1sus_cp # medm -x X1SUS_OPT_PLANT_TM_RESP.adl
see this post for more detail
Unfortunately, the graphics forwarding from the docker is not working and is giving the errors:
canAccess('X1SUS_OPT_PLANT_TM_RESP.adl', 4) = 0
can directly access 'X1SUS_OPT_PLANT_TM_RESP.adl'
Error: Can't open display:
This means that the easiest way to add the filters to the model is through the GUI that can be opened through X2go client. It is probably easiest to get that working. graphics forwarding from inside the docker is most likely very hard.
unfortunately again x2go client won't connect even with updated IP and routing. It gives me the error: unable to execute: startkde. Going into the files on c1sim:/usr/bin and trying to start startkde by myself also did not work, telling me that there was no such thing even though it was right in front of me.
16466 Mon Nov 15 15:12:28 2021 Ian MacMillanSummaryComputer Scripts / ProgramsSUS Plant Plan for New Optics
[Ian, Tega]
We are working on three fronts for the suspension plant model:
1. Filters
1. We now have the state-space matrices as given at the end of this post. From these matrices, we can derive transfer functions that can be used as filter inputs. For a procedure see HERE. We accomplish this using Matlab's built-in ss(A,B,C,D); function. then we make it discrete using c2d(sys, 1/f); this gives us our discrete system running at the right frequency. We can get the transfer functions of either of these systems using tf(sys);
2. from there we can copy the transfer functions into our photon filters. Tega is working on this right now.
2. State-Space
1. We have our matrices as listed at the end of this post. With those compiled into a discrete system in MatLab we can use the code Chris made called rtss.m to convert this system into a .c file and a .h file.
2. from there we have moved those files under the userapps folder in the docker system. then we added a c-code block to our .mdl model for the plant and pointed it at the custom c file we made. See section 7.2 of T080135-v10
3. We have done all this and this should implement a custom state-space function into our .mdl file. the downside of this is that to change our SS model we have to edit the matrices we can't edit this from an medm screen. We have to recompile every time.
3. Python Check
1. This python check is run by Raj and will take in the state-space matrices which are given then will take transfer functions along all inputs and outputs and will compare them to what we have from the CDS model.
Here are the State-space matrices:
$A=\begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ -\omega_x^2(1+i/Q_{x}) & -\gamma_x & \omega_xb & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ \frac{\omega_{\theta}^2}{l+b} & 0 & -\omega_{\theta}^2(1+i/Q_{\theta}) & -\gamma_{\theta} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -\omega_{\phi}^2(1+i/Q_{\phi}) & -\gamma_{\phi} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & -\omega_{y}^2(1+i/Q_{y}) & -\gamma_{y}\end{bmatrix}$
$B=\begin{bmatrix} 0 & 0 & 0 & 0 \\ \frac{1}{m} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & \frac{R_m}{I_{\theta}} & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{R_m}{I_{\phi}} & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{m} \end{bmatrix}$ $C=\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix}$ $D=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$
A few notes: If you want the values for these parameters see the .yml file or the State-space model file. I also haven't been able to find what exactly this s is in the matrices.
UPDATE [11/16/21 4:26pm]: I updated the matrices to make them more general and eliminate the "s" that I couldn't identify.
The input vector will take the form:
$\begin{bmatrix} x \\ \dot{x} \\ \theta \\ \dot{\theta} \\ \phi \\ \dot{\phi} \\ y \\ \dot{y} \end{bmatrix}$
where x is the position, theta is the pitch, phi is the yaw, and y is the y-direction displacement
16469 Tue Nov 16 17:29:49 2021 Ian MacMillanSummaryComputer Scripts / ProgramsSUS Plant Plan for New Optics
[Ian, Tega]
Updated A, B, C, D matrices for the state-space model to remove bugs in the previous estimate of the system dynamics. Updated the last post to represent the current matrixes.
We used MatLab to get the correct time-series filter coefficients in ZPK format and added them to the filters running in the TM_RESP filter matrix.
Get the pos-pos transfer function from the CDS model. Strangely, this seems to take a lot longer than anticipated to generate the transfer function, even though we are mainly probing the low-frequency behavior of the system.
For example, a test that should be taking approximately 6 minutes is taking well over an hour to complete. This swept sine (results below) was on the low settings to get a fast answer and it looks bad. This is a VERY basic system it shouldn't be taking this long to complete a Swept sine TF.
Noticed that we need to run eval $(./env_cymac) every time we open a new terminal otherwise CDS doesn't work as expected. Since this has been the source of quite a few errors already, we have decided to put it in the startup .bashrc script. loc=$(pwd) cd ${HOME}/docker-cymac/ eval$(./env_cymac) cd ${loc} Attachment 1: x_x_TF1.pdf 16477 Thu Nov 18 20:00:43 2021 Ian MacMillanSummaryComputer Scripts / ProgramsSUS Plant Plan for New Optics [Ian, Raj, Tega] Here is the comparison between the results of Raj's python model and the transfer function measurement done on the plant model by Tega and me. As You can see in the graphs there are a few small spots of disagreement but it doesn't look too serious. Next we will measure the signals flowing through the entire plant and controller. For a nicer (and printable) version of these plots look in the zipped folder under Plots/Plant_TF_Individuals.pdf Attachment 1: Final_Plant_Testing.zip 16478 Mon Nov 22 16:38:26 2021 TegaSummaryComputer Scripts / ProgramsSUS Plant Plan for New Optics [Tega, Ian] TODO 1. Investigate cross-coupling btw the various degrees of freedom (dof) - turn on noise for each dof in the plant model and measure the transfer function of the other dofs. 2. Get a closed-loop transfer function using noise injection and give a detailed outline of the procedure in elog - IN1/IN2 for each TM_RESP filter while the others are turned off. 3. Derive analytic model of the closed-loop transfer functions for comparison. 4. Adapt control filters to fit optimized analytical solutions. 16615 Mon Jan 24 17:10:25 2022 TegaSummaryComputer Scripts / ProgramsSUS Plant Plan for New Optics [Ian, Tega] Connected the New SUS screens to the controller for the simplant model. Because of hard-coded links in the medm screen links, it was necessary to create the following path in the c1sim computer, where the new medm screen files are located: /opt/rtcds/userapps/trunk/sus/c1/medm/templates/NEW_SUS_SCREENS We noticed a few problems: 1. Some of the medm files still had C1 hard coded, so we need to replace them with$IFO instead, in order for the custom damping filter screen to be useful.
2. The "Load coefficient" button was initially blank on the new sus screen, but we were able to figure out that the problem came from setting the top-level DCU_ID to 63.
medm -x -macro "IFO=X1,OPTIC=OPT_CTRL,DCU_ID=63" SUS_SINGLE_OVERVIEW.adl
[TODO]
Get the data showing the controller damping the pendulum. This will involve tweaking some gains and such to fine-tune the settings in the controller medm screen. Then we will be able to post some data of the working controller.
[Useful aside]
We should have a single place with all the instructions that are currently spread over multiple elogs so that we can better navigate the simplant computer.
Attachment 1: Screen_Shot_2022-01-24_at_5.33.15_PM.png
16626 Thu Jan 27 16:40:57 2022 TegaSummaryComputer Scripts / ProgramsSUS Plant Plan for New Optics
[Ian, Paco, Tega]
Last night we set up the four main matrices that handle the conversion between the degrees of freedom bases and the sensor bases. We also wrote a bash script to automatically set up the system. The script sets the four change of bases matrices and activates the filters that control the plant. this script should fully set up the plant to its most basic form. The script also turns off all of the built-in noise generators.
After this, we tried damping the optic. The easiest part of the system to damp is the side or y motion of the optic because it is separate from the other degrees of freedom in both of the bases. We were able to damp that easily. in attachment 1 you can see that the last graph in the ndscope screen the side motion of the optic is damped. Today we decided to revisit the problem.
Anyways, looking at the problem with fresh eyes today, I noticed the in pit2pit coupling has the largest swing of all the plant filters and thought this might be the reason why the inputs (UL,UR,LR,LL) to the controller was hitting the rails for pit DoF. I reduce the gain of the pit2pit filter then slowly increased it back to one. I also reduced the gain in the OSEM input filter from 1 to 1/100. The attached image (Attachment2) is the output from this trial. This did not solve the problem. The output when all OSEM input filter gain set to one is shown in Attachment2.
We will try to continue to tweak the coefficients. We are probably going to ask Anchal and Paco to sit down with us and really hone in on the right coefficients. They have more experience and should be able to really get the right values.
Attachment 1: simplant_control_1.png
Attachment 2: simplant_control_0.png
16645 Thu Feb 3 17:15:23 2022 TegaSummaryComputer Scripts / ProgramsSUS Plant Plan for New Optics
Finally got the SIMPLANT damping to work following Rana's suggestion to try damping one DoF at a time, woo-hoo!
At first, things didn't look good even when we only focus on the POS DoF. I then noticed that the input value (X1:SUS-OPT_PLANT_TM_RESP_1_1_IN1) to the plant was always zero. This was odd bcos it meant the control signal was not making its way to the plant. So I decided to look at the sensor data
(X1:SUS-OPT_PLANT_COIL_IN_UL_OUTPUT, X1:SUS-OPT_PLANT_COIL_IN_UR_OUTPUT, X1:SUS-OPT_PLANT_COIL_IN_LR_OUTPUT, X1:SUS-OPT_PLANT_COIL_IN_LL_OUTPUT)
that adds up via the C2DOF matrix to give the POS DoF and I noticed that these interior nodes can take on large values but always sum up to zero because the pair (UL, LL) was always the negative of (UR,LR). These things should have the same sign, at least in our case where only the POS DoF is excited, so I tracked the issue back to the alternating (-,+,-,+,-) convention for the gains
(X1:SUS-OPT_CTRL_ULCOIL_GAIN, X1:SUS-OPT_CTRL_URCOIL_GAIN, X1:SUS-OPT_CTRL_LRCOIL_GAIN, X1:SUS-OPT_CTRL_LLCOIL_GAIN, X1:SUS-OPT_CTRL_SDCOIL_GAIN)
of the Coil Output filters used in the real system, which we adopted in the hopes that all was well. Anyways, I changed them all back to +1. This also means that we need to change the sign of the gain for the SIDE filter, which I have done also (and check that it damps OK). I decided to reduce the magnitude of the SIDE damping from 1 to 0.1 so that we can see the residuals since the value of -1 quickly sends the error to zero. I also increased the gain magnitude for the other DoF to 4.
When looking at the plot remember that the values actually represent counts with a scaling of 2^15 (or 32768) from the ADC. I switched back to the original filters on FM1 (e.g. pit_pit ) without damping coefficients present in the FM2 filter (e.g. pit_pit_damp).
FYI, Rana used the ETMY suspension MEDM screen to illustrate the working of the single suspension to me and changed maybe POS and PITCH gains while doing so.
Also, the Medify purifier 'replace filter' indicator issue occurred bcos the moonlight button should have been pressed for 3 seconds to reset the 'replace filter' indicator after filter replacement.
Attachment 1: Screen_Shot_2022-02-03_at_8.23.07_PM.png
16654 Wed Feb 9 14:34:27 2022 IanSummaryComputer Scripts / ProgramsSUS Plant Plan for New Optics
Restarted the C1sim machine at about 12:30 to help diagnose a network problem. Everything is back up and running
Attachment 1: SummaryMdemScreen.png
8747 Tue Jun 25 22:50:12 2013 ranaUpdateSUSSUS Screens generation problems?
From the ALS overview screen, opening up the ETMX and ETMY screens gives these white fields. The PV info indicates that the blank fields were made with some macro variable substitution that didn't work well.
Why are these different from the SUS screens I get from the sitemap?
8152 Sun Feb 24 00:14:28 2013 ManasaUpdateSUSSUS Summary
I tried to fix the alarms for sensors on the SUS summary screen. I checked earlier elogs and found the setSensors.py script.
I received errors while running the script and pianosa was refused connection to nds. Yuta suspects problems with the lib directory.
Jamie! Can you fix this?
8154 Sun Feb 24 17:54:34 2013 ranaUpdateSUSSUS Summary
I asked John Z to talk with Jamie and then install a new NDS2 server software for us. Jamie may know if this happened or was foiled by the linux1 RAID failure.
In any case, our pyNDS stuff ought to be able to talk to NDS2 or our old NDS1 stuff, I hope.
5318 Mon Aug 29 16:27:34 2011 ManuelConfigurationSUSSUS Summary Screen
I edited the C1SUS_SUMMARY.adl file and set the channels in alarm mode to show the values in green, yellow and red according to the values of the thresholds (LOLO, LOW, HIGH, HIHI)
I wrote a script in python, which call the command ezcawrite and ezcaread, to change the thresholds one by one.
You can call this program with a button named "Change Thresholds one by one" in the menu come down when you click the button.
I'm going to write another program to change the thresholds all together.
12255 Wed Jul 6 19:36:45 2016 KojiUpdateGeneralSUS Vmon
I wanted to know what this Vmon exactly is. D010001 is telling us that the Vmon channels are HPFed with fc=30Hz (Attachment 1). Is this true?
I checked the quiscent noise spectrum of the ITMX UL coil output (C1:SUS-ITMX_ULCOIL_OUT) and the corresponding VMON (C1:SUS-ITMX_ULVmon). (Attachment 2 Ref curves). I did not find any good coherence. So the nominal quiscent Vmon output is carrying no useful information.
Question: How much do we need to excite the coil output in order to see any meaningful signal?
As I excite the ITMX UL coil (C1:SUS-ITMX_ULCOIL_EXC) with uniform noise of 100-300 counts below 0.3Hz, I eventually could see the increase of the power spectrum and the coherence (Attachment 2). Below 0.1 Hz the coherence was ~1 and the transfer function was measured to be -75dB and flat. But wait, why is the transfer function flat?
In fact, if I inject broadband noise to the coil, I could increase the coil output and Vmon at the same time without gaining the coherence. (Attachment 3). After some more investigation, I suspect that this HPF is diabled (= bypassed) and aliasing of the high freq signal is causing the noise in Vmon.
In order to check this hypothesis, we need to visit the board.
Attachment 1: HPF.png
Attachment 2: 160706_ITMX_VMON2.pdf
Attachment 3: 160706_ITMX_VMON1.pdf
12268 Thu Jul 7 15:23:39 2016 ericqUpdateGeneralSUS Vmon
Based on Koji's observation of a flat TF, it seems more likely the Vmon channels are looking at the path I've highlighted in green (named "EPICS V Mon"), rather than the path in red (named "DAQ Mon") that Koji initially suspected. This path still lacks any AA for the 16Hz EPICS sampling.
12269 Thu Jul 7 16:05:55 2016 KojiUpdateGeneralSUS Vmon
Ah, thanks. That makes sense. In that case, we should remove the texts "30Hz HPF" from the suspension screens.
Now we just need AA LPFs for these channels, or hook them up to the RT system.
4812 Mon Jun 13 19:26:42 2011 Jamie, JoeConfigurationCDSSUS binary IO chassis 2 and 3 moved from 1X5 to 1X4
While preping 1X4 for installation of c1lsc, we removed some old VME crates that were no longer in use. This freed up lots of space in 1X4. We then moved the SUS binary IO chassis 2 and 3, which plug into the 1X4 cross-connect, from 1X5 into the newly freed space in 1X4. This makes the cable run from these modules to the cross connect much cleaner.
4814 Tue Jun 14 09:24:36 2011 steveConfigurationPhotosSUS binary IO chassis 2 and 3 moved from 1X5 to 1X4
Quote: While preping 1X4 for installation of c1lsc, we removed some old VME crates that were no longer in use. This freed up lots of space in 1X4. We then moved the SUS binary IO chassis 2 and 3, which plug into the 1X4 cross-connect, from 1X5 into the newly freed space in 1X4. This makes the cable run from these modules to the cross connect much cleaner.
Are we keeping these?
Attachment 1: P1070891.JPG
Attachment 2: P1070893.JPG
6182 Mon Jan 9 23:52:15 2012 kiwamuUpdateCDSSUS channels not accessible from dataviewer
[John / Kiwamu]
We found that some of the suspensions channels (for example C1:SUS-BS_POS_IN1 and etc) were not accessible from dataviewer for some reasons.
So far it seems none of the channels associated with c1sus are accessible from dataviewer.
4780 Thu Jun 2 16:23:42 2011 JamieUpdateSUSSUS control models updated to use new sus_single_control library part
A new library part was made for the single suspension controller (it was originally made from the c1scx controller), using the following procedure:
1. Opened c1scx model (userapps/trunk/sus/c1/models/c1scx)
2. Cut ETMX subsystem block out of SUS subsystem
3. Pasted ETMX block into new empty library, and renamed it C1_SUS_SINGLE_CONTROL
4. Tweaked names of inputs, and generally cleaned up internals (cosmetically)
5. Saved library to: userapps/trunk/sus/c1/models/lib/sus_single_control.mdl
Once the new sus_single_control library part was made and the library was committed to the cds_user_apps repo, I replaced all sus controller subsystems with this new part, in:
• c1scx
• c1scy
• c1sus (x5 for each vertex mass)
All models were rebuild, installed, and tested, and everything seems to be working fine.
12315 Wed Jul 20 13:58:55 2016 SteveUpdateSUSSUS damping out of vac chamber
Cheater cable to be used in clean room pitch gluing alingment.
Satelite amp needs to be there.
Atm 2-3, The ETMs suspension damping cable are connected at the end racks. All others go to 1X5
Atm 4-5, The other end of this cable in the high cable tray at 1X3 as shown. We'll disconnect the shorty and move the end to ETMX ( or any sus at 1X5 )
Attachment 1: ETMXrack.jpg
Attachment 2: fromETMX-satBox.jpg
Attachment 3: susDampingSatboxCab.jpg
Attachment 4: susDampingSatboxCabl.jpg
5461 Mon Sep 19 15:41:48 2011 JenneUpdateSUSSUS diag stuff... just so I remember what I'm doing
The following optics were kicked:
ETMX
Mon Sep 19 15:39:44 PDT 2011
1000507199
5471 Mon Sep 19 22:47:44 2011 JenneUpdateSUSSUS diag stuff... just so I remember what I'm doing
The last person out tonight should run the following scripts:
In Matlab:
/opt/rtcds/caltech/c1/scripts/SUS/peakFit/writeMultiSUSinmat.m
In command line:
/opt/rtcds/caltech/c1/scripts/SUS/freeswing all
Then in the morning, someone should do a BURT restore to early today (to get the default matricies back), and also restore the watchdogs.
Thanks!
5485 Tue Sep 20 16:45:09 2011 JenneUpdateSUSSUS diag stuff... just so I remember what I'm doing
Has the Q been checked? Still in progress...
Optic POS PIT YAW SIDE ITMX done done done done ITMY done done fine?? done ETMX done done done done ETMY done done done done BS done done done done PRM done done done done SRM done done done done MC1 MC2 MC3
So, update as of 6:17pm: I have tuned the damping gains for all IFO optics. Everything is good, except for ITMY Yaw. It's probably fine, the optic damps okay, but it doesn't look like a nice clean ringdown. I haven't taken the time to go back and look at it again.
I have to go to a dinner, but later (probably in the morning, so I don't disturb evening locking) I'll check the MC Qs.
5493 Wed Sep 21 00:34:29 2011 ranaUpdateSUSSUS diag stuff... just so I remember what I'm doing
ETMX was ringing up when it was mis-aligned for Y arm locking. I restored the input matrix to something more diagonal and its now damping again. Needs more work before we can use the calculated matrix.
11569 Thu Sep 3 19:52:24 2015 ranaSummarySUSSUS drift monitor
Since Andrey's SUS Drift mon screen back in 2007, we've had several versions which used different schemes and programming languages. Diego made an update back in January.
Today I added his stuff to the SVN since it was lost in the NFS disks somewhere. Its in SUS/DRIFT_MON/.
Since we've been updating our userapps directory recently to pull in the screens and scripts from the sites, we also got a copy of the Thomas Abbott drift mon stuff which is better (Diego actually removed the yellow/red functionality as part of the 'upgrade'), but more complicated. For now we have the old one. I updated the good values with all optics roughly aligned just a few minutes ago.
Attachment 1: 07.png
2901 Sun May 9 20:02:23 2010 ranaConfigurationSUSSUS filters deleted again to reduce CPU load on c1susvme2 again
On Friday, I deleted a bunch of filters from the c1susvme2 optics' screens (MC1,2,3 + SRM) so as to reduce the CPU load and keep it from going bonkers.
This first plot shows the CPU trend over the last 40 days and 40 nights. As you can see the CPU_LOAD has dropped by 1 us since I did the deleting.
In the second plot (on the right) you can see the same trend but over 400 days and nights. Of course, we hope that we throw this away soon, but until then it will be nice to have the suspensions be working more often.
16997 Wed Jul 13 12:49:25 2022 PacoSummarySUSSUS frozen
[Paco, JC, Yuta]
This morning, while investigating the source of a burning smell, we turned off the c1SUS 1X4 power strip powering the sorensens. After this, we noticed the MC1 refl was not on the camera, and in general other vertex SUS were misaligned even though JC had aligned the IFO in the morning to almost optimum arm cavity flashing. After a c1susaux modbusIOC service restart and burt restore, the problem persisted.
We started to debug the sus rack chain for PRM since the oplev beam was still near its alignment so we could use it as a sensor. The first weird thing we noticed was that no matter how much we "kicked" PRM, we wouldn't see any motion on the oplev. We repeatedly kicked UL coil and looked at the coil driver inputs and outputs, and also verified the eurocard had DC power on which it did. Somehow disconnecting the acromag inputs didn't affect the medm screen values, so that made us suspicious that something was weird with these ADCs.
Because all the slow channels were in a frozen state, we tried restarting c1susaux and the acromag chassis and this fixed the issue.
5766 Sun Oct 30 23:03:37 2011 SUS_DiagonalizerUpdateSUSSUS input matrix diagonalization complete (EXAMPLE)
New SUS input matrix diagonalization complete. Matrices have been written to the frontend. Summaries for each optic can be viewed below.
(THIS IS AN EXAMPLE---no new matrices have been written.)
Attachment 1: MC1.png
Attachment 2: MC2.png
Attachment 3: MC3.png
Attachment 4: ETMX.png
Attachment 5: ETMY.png
Attachment 6: ITMX.png
Attachment 7: ITMY.png
Attachment 8: PRM.png
Attachment 9: SRM.png
Attachment 10: BS.png
5775 Tue Nov 1 13:46:03 2011 ZachUpdateSUSSUS input matrix diagonalizer REMOVED from crontab
It turns out that nodus doesn't know how to NDS2, so I can't run diagAllSUS as a cron job on nodus. To further complicate things, no other machines can run the elog utility, so I am going to have to do something shifty...
5770 Mon Oct 31 14:06:16 2011 ZachUpdateSUSSUS input matrix diagonalizer added to crontab
I actually tried to do this last night, but I was getting a terminal error from nodus. Jamie told me to just manually redefine the TERM variable and it would work. So, I have added kickAll and diagAllSUS to controls@nodus's crontab:
nodus:~>crontab -l
0 5 * * * /opt/rtcds/caltech/c1/scripts/backup/rsync.backup
0 7 * * * /opt/rtcds/caltech/c1/scripts/backup/check_backup.sh
0 17 * * 0 /cvs/cds/caltech/users/jenne/LIGOX/LIGOX_Pizza_Reminders.sh
0 8 * * 0 /cvs/cds/rtcds/caltech/c1/scripts/SUS/peakFit/kickAll
0 18 * * 0 /cvs/cds/rtcds/caltech/c1/scripts/SUS/peakFit/diagAllSUS
Again, their functionality should be checked before this is allowed to run on Sunday!
15836 Tue Feb 23 23:12:37 2021 KojiSummarySUSSUS invacuum wiring
This is my current understanding of the in-vacuum wiring:
1. Facts
• We have the in-air cable pinout. And Gautam recently made a prototype of D2100014 custom cable, and it worked as expected.
• The vacuum feedthrough is a wall with the male pins on the both sides. This mirrors pinout.
• On the in-vacuum cable stand (bracket), the cable has a female connector.
2. From the above facts, the in-vacuum cable is
• DSUB25 female-female cable
• There is no pinout mirroring
Accuglass has the DSUB25 F-F cable off-the-shelf. However, this cable mirrors the pinout (see the datasheet on the pdf in the following link)
https://www.accuglassproducts.com/connector-connector-extension-cable-25-way-female
3. The options are
- ask Accuglass to make a twisted version so that the pinout is not mirrored.
or
- combine Accuglass female-male cable (https://www.accuglassproducts.com/connector-connector-extension-cable-25-way-femalemale) and a gender changer (https://www.accuglassproducts.com/gender-adapter-25d)
4. The length will be routed from the feedthrough to the table via the stacks like a snake to be soft. So, it will require some extra length.
5. Also, the Accuglass cables don't have a flap and holes to fix the connector to a cable post (tower). If we use a conventional 40m-style DSUB25 post (D010194), it will be compatible with their cables. But this will not let us use a DSUB25 male connector to mate. In the future, the suspension will be upgraded and we will need an updated cable post that somehow holds the connectors without fastening the screws...
Attachment 1: SOS_OSEM_cabling.pdf
6365 Tue Mar 6 16:17:36 2012 JenneUpdateSUSSUS matrix diagonalization status
Has default inmat:
PRM, ITMX
Has fancy inmat:
BS, ITMY, SRM (but side is non-fancy), ETMX, ETMY, MC1, MC2, MC3
So it's likely that the MICH problems (giganto 1Hz peak) Keiko and Kiwamu were seeing last night had to do with ITMX having the non-optimized input matrix. I'll try to figure out where the data from the last freeswing test is, and put in a fancy diagonalized matrix.
9088 Thu Aug 29 17:25:50 2013 JamieUpdateSUSSUS medm screen upgrade
Rana asked me to look at the SUS MEDM screen upgrade situation, and provide an upgrade prescription. Unfortunately there not really a simple prescription that can be used, since our configuration diverges quite a bit from what's at the sites. But here's what I can say:
It looks like we already have the beginnings of an upgrade in place, so I say we just run with that. The new screens are in:
/opt/rtcds/userapps/release/sus/c1/medm/new
The primary screen is:
/opt/rtcds/userapps/release/sus/c1/medm/new/OVERVIEW.adl
This seems to be a copy of the site ASC_TIPTILT screens. (In fact I think I remember putting this here). I went ahead and did some ground work to make it easier to get these new screens into place.
• I cleaned up all the channel name prefixes so that at least the channel prefixes will resolve to our SUS channels.
• I made a link from the sitemap with some of the correct macros to fill some things in appropriately: "IFO SUS/NEW ETMX"
• I fixed the names to the sub-screens, so that it correctly opens the correct sub-screens (although the macros seem to not be passed correctly)
At this point someone needs to just go through and fix all the channel names to match ours, and tweak the screen to our needs (there's no side OSEM, for instance). The subscreens need to be cleaned up as well.
### sed replace string
If there is a specific string you want to replace every instance of in the screen, you can do that easily from the command line like this:
sed -i 's/OLD/NEW/g'
This will replace every instance of the string OLD with the string new in the file path/to/file. Be careful: this will replace EVERY instance of OLD. If OLD matches things you don't want, they will be replaced as well.
This construction is actually "regular expressions", so if you want to get fancy you can match against more complicated strings. But just be careful.
If you leave out the "-i" the string-replaced text will go to stdout, instead of being replaced in the file "in place", so you can check it first.
### query replace in emacs
If you want more fine-grained control of text replace, so that you can see what's being replaced, try using "query-replace" in emacs:
M-x query-replace
You can then type in the original string, followed by the replacement string. When it starts to run it will highlight the string that will be replaced. Hit "space" to accept or "n" to skip and go to the next.
13221 Wed Aug 16 20:01:03 2017 gautamUpdateGeneralSUS model ASC input weirdness
I'm not sure if this has something to do with the model restarts / new RCG, but while I was re-enabling the MC watchdogs, I noticed the RMS sensor voltage channels on ITMX hovering around ~100mV, even though local damping was on (in which configuration I would expect <1mV if everything is working normally). I was confused by this behaviour, and after staring at the ITMX suspension screen for a while, I noticed that the input to the "ASCP" and "ASCY" servos were "-nan", and the outputs were 10^20 cts (see Attachment #1).
Digging a little deeper, I found that the same problem existed on ITMY, ETMX, ETMY, PRM (but not BS or SRM) - reasons unknown for now.
I have to check where this signal is coming from, but for now I just turned the "ASC Input" switch off. More investigation to be done, but in the meantime, ASS dither alignment may not be possible.
After consulting with Jamie, I have just disabled all outputs to the suspensions other than local damping loop outputs. I need to figure out how to get this configuration into the safe.snap file such that until we are sure of what is going on, the models start up in this safer configuration.
gedit 28 Oct 0026: Seems like this problem is seen at the sites as well. I wonder if the problem is related.
Attachment 1: ITMX_ASC.png
13228 Fri Aug 18 21:58:35 2017 gautamUpdateGeneralSUS model ASC input weirdness
I spent some time today trying to debug this issue.
Jamie and I had opened up the c1sus frontend to try and replace the RFM card before we realized that the problem was in the RCG code generator. During this process, we had disconnected all of the back-panel cabling to this machine (2 ethernet cables, dolphin cable, and RFM cables/fibers). I thought I may have accidentally returned the cables to the wrong positions - but all the status indicator lights indicate that everything is working as it should, and I also confirmed that the cabling is as it is in the pictures of the rack on the wiki page.
Looking at the SimuLink model diagram (see Attachment #1 for example), it looks like (at least some of) these channels are actually on the dolphin network, and not the RFM network (with which we were experiencing problems). This suggests that the problem is something deeper. Although I did see nans in some of the ETMX ASC channels as well, for which the channels are piped over the RFM network. Even more puzzling is that the ASC MEDM screen (Attachment #3) and the SimuLink diagram (Attachment #2) suggest that there is an output matrix in between the input signals and the output angular control signals to the suspensions. As Attachment #4 shows, the rows corresponding to ITMX PIT and YAW are zero (I confirmed using z read <matrixElement>). Attachment #3 shows that the output of all the servo banks except CARM_YAW is zero, but CARM_YAW has no matrix element going to the ITMs (also confirmed with z read <servoOutputChannel>). So 0 x 0 should be 0, but for some reason the model doesn't give this output?
GV Edit: As EricQ just pointed out to me, nan x 0 is still nan, which probably explains the whole issue. Poking a little further, it seems like this is an SDF issue - the SDF table isn't able to catch differences for this hold output channel.
As I was writing this elog, I noticed that, as mentioned above, the CARM_YAW output was "nan". When I restart the model (thankfully this didn't crash c1lsc!), it seems to default to this state. Opening up the filter module, I saw that the "hold output" was enabled.
## Toggling that switch made the nans in all the SUS ASC channels disappear. Mysterious .
All the points above stand - CARM_YAW output shouldn't have been going anywhere as per the output matrix, but it seems to have been responsible? Seems like a bug in any case if a model restarts with a field as "nan".
Anyways the problem seems to have been resolved so I'm going to try locking and dither aligning the arms now.
Rolf mentioned that a simple update could fix several of the CDS issues we are facing (e.g. inability to open up testpoints), but he didn't seem to have any insight into this particular issue. Jamie will try and recompile all the models and then we have to see if that fixes the remaining problems.
Quote: I have to check where this signal is coming from, but for now I just turned the "ASC Input" switch off. More investigation to be done, but in the meantime, ASS dither alignment may not be possible. After consulting with Jamie, I have just disabled all outputs to the suspensions other than local damping loop outputs. I need to figure out how to get this configuration into the safe.snap file such that until we are sure of what is going on, the models start up in this safer configuration.
Attachment 1: ITMXP.png
Attachment 2: ASC_model_outmatrix.png
Attachment 3: ASC_medm.png
Attachment 4: ASC_outMat.png
4967 Thu Jul 14 15:27:08 2011 steve,UpdateSUSSUS oplev spectras
Quote:
Quote:
Quote: Healthy BS oplev
I repeated the BS oplev spectrum today and I do not understand why it does look different. I did it as Kiwamu describes it in entry#4948 The oplev servo was left ON!
It is working today! Finally I repeated the BS spectra, that we did with Kiwamu last week
The optical levers were centered during these measurements without the reference of locked cavities. They have no reference value now.
SRM sus need some help. ITMX is showing pitch/yaw modes of the pendulum .....OSEM damping is weak?
Attachment 1: BS_oplev.jpg
Attachment 2: PRM_oplev.jpg
Attachment 3: ITMX_oplev.jpg
Attachment 4: ETMX_oplev.jpg
Attachment 5: ETMY_oplev.jpg
Attachment 6: SRM_oplev.jpg
Attachment 7: ITMY_oplev_b.jpg
4972 Fri Jul 15 09:25:02 2011 ranaUpdateSUSSUS oplev spectras
In addition to the OL quadrants, you need to plot the OPLEV_PERROR and OPLEV_YERROR signals since these are the real signals we use for finding the mirror motion. If they're not in the Dataviewer, Jamie should add them as 256 Hz DAQ channels (using these names so that we have the continuity with the past). These DAQ channels correspond to the IN1 channels for the OL filter banks.
Also JPG are banned from the elog - you should put all of the plots into a single, multipage PDF file in honor of the new Wagonga.
16553 Thu Jan 6 22:18:47 2022 KojiUpdateCDSSUS screen debugging
Indicated by the red arrow:
Even when the side damping servo is off, the number appears at the input of the output matrix
Indicated by the green arrows:
The face magnets and the side magnets use different ADCs. How about opening a custom ADC panel that accommodates all ADCs at once? Same for the DAC.
Indicated by the blue arrows:
This button opens a custom FM window. When the pitch gain was modified with a ramping time, the pitch and yaw gain grows at the same time even though only the pitch gain was modified.
Indicated by the orange circle:
The numbers are not indicated here, but they are input-related numbers (for watchdogging) rather than output-related numbers. It is confusing to place them here.
Attachment 1: Screen_Shot_2022-01-06_at_18.03.24.png
16570 Tue Jan 11 10:46:07 2022 TegaUpdateCDSSUS screen debugging
Seen. Thanks.
Red Arrow: The channel was labeled incorrectly as INMON instead of OUTPUT
Green Arrow: OK, I will create a custom medm screen for this.
Blue arrow: Hmm, OK I will look into this. Doing this work remotely is a pain as the medm response is quite slow for poking around.
Orange circle: OK, I'll move this to the left side of the line.
Note to self: I also noticed another error on the side (LPYS blue box just b4 the sum). The channel is pointing to YAW instead of the side, so needs to be fixed as well.
Quote: Indicated by the red arrow: Even when the side damping servo is off, the number appears at the input of the output matrix Indicated by the green arrows: The face magnets and the side magnets use different ADCs. How about opening a custom ADC panel that accommodates all ADCs at once? Same for the DAC. Indicated by the blue arrows: This button opens a custom FM window. When the pitch gain was modified with a ramping time, the pitch and yaw gain grows at the same time even though only the pitch gain was modified. Indicated by the orange circle: The numbers are not indicated here, but they are input-related numbers (for watchdogging) rather than output-related numbers. It is confusing to place them here.
16611 Fri Jan 21 12:46:31 2022 TegaUpdateCDSSUS screen debugging
All done (almost)! I still have not sorted the issue of pitch and yaw gains growing together when modified using ramping time. Image of custom ADC and DAC panel is attached.
Quote:
Seen. Thanks.
Quote: Indicated by the red arrow: Even when the side damping servo is off, the number appears at the input of the output matrix Indicated by the green arrows: The face magnets and the side magnets use different ADCs. How about opening a custom ADC panel that accommodates all ADCs at once? Same for the DAC. Indicated by the blue arrows: This button opens a custom FM window. When the pitch gain was modified with a ramping time, the pitch and yaw gain grows at the same time even though only the pitch gain was modified. Indicated by the orange circle: The numbers are not indicated here, but they are input-related numbers (for watchdogging) rather than output-related numbers. It is confusing to place them here.
16084 Sun Apr 25 21:21:02 2021 ranaUpdateCDSSUS simPlant model
1. I suggest not naming this the LSC model, since it has no LSC stuff.
2. Also remove all the diagnostic stuff in the plant model. We need nothing except a generic filter Module, like in the SUS controller.
3. Also, the TF looks kind of weird to me. I would like to see how you derive that eq.
4. Connect the models and show us some plots of the behavior in physical units using FOTON to make the filters and diaggui/DTT (at first) to make the plots.
16088 Tue Apr 27 15:15:17 2021 Ian MacMillanUpdateCDSSUS simPlant model
The first version of the single filter plant is below. Jon and I went through compiling a model and running it on the docker (see this post)
We activated an early version of the plant model (from about 10 years ago) but this model was not designed to run on its own so we had to ground lots of unconnected pieces. the model compiled and ran so we moved on to the x1sus_single_plant model that I prepared.
This model is shown in the first attachment wasn't made to be run alone because it is technically a locked library (see the lock in the bottom left). It is supposed to be referenced by another file: x1sup.mdl (see the second attachment). This works great in the Simulink framework. I add the x1sus_single_plant model to the path and Matlab automatically attaches the two. but the docker does not seem to be able to combine the two. Starting the cymac it gives these errors:
cymac | Can't find sus_single_plant.mdl; RCG_LIB_PATH=/opt/rtcds/userapps:/opt/rtcds/userapps/lib:/usr/share/advligorts/src/src/epics/simLink/:/usr/share/advligorts/src/src/epics/simLink/lib:/usr/share/advligorts/src/src/epics/simLink cymac | make[1]: *** [Makefile:30: x1sup] Error 2 cymac | make: *** [Makefile:35: x1sup] Error 1
I have tried putting the x1sus_single_plant.mdl file everywhere as well as physically dragging the blocks that I need into the x1sup.mdl file but it always seems to throw an error. Basically, I want to combine them into one file that is not referencing anything other than the CDS library but I cant figure out how to combine them.
Okay but the next problem is the medm screen generation. When we had the original 2010 model running the sitemap did not include it. It included models that weren't even running before but not the model Jon and I had added. I think this is because the other models that were not running had medm screens made for them. I need to figure out how to generate those screens. I need to figure out how to use the tool Chris made to auto-generate medm screens from Simulink but I can't seem to figure it out. And honestly, it won't be much use to me until I can actually connect the plant block to its framework. One option is to just copy each piece over one by one. this will take forever but at this point, I am frustrated enough to try it. I'll try to give another update later tonight.
Attachment 1: x1sus_single_plant.pdf
Attachment 2: x1sup.pdf
16096 Thu Apr 29 13:41:40 2021 Ian MacMillanUpdateCDSSUS simPlant model
To add the required library: put the .mdl file that contains the library into the userapps/lib folder. That will allow it to compile correctly
I got these errors:
Module ‘mbuf’ symvers file could not be found.
Module ‘gpstime’ symvers file could not be found.
***ERROR: IPCx parameter file /opt/rtcds/zzz/c1/chans/ipc/c1.ipc not found
make[1]: *** [Makefile:30: c1sup] Error 2
make: *** [Makefile:35: c1sup] Error 1
I removed all IPC parts (as seen in Attachment 1) and that did the trick. IPC parts (Inter-Process Communication) were how this model was linked to the controller so I don't know how exactly how I can link them now.
I also went through the model and grounded all un-attached inputs and outputs. Now the model compiles
Also, The computer seems to be running very slowly in the past 24 hours. I know Jon was working on it so I'm wondering if that had any impact. I think it has to do with the connection speed because I am connected through X2goclient. And one thing that has probably been said before but I want to note again is that you don't need a campus VPN to access the docker.
Attachment 1: Non-IPC_Plant.pdf
16106 Fri Apr 30 12:52:14 2021 Ian MacMillanUpdateCDSSUS simPlant model
Now that the model is finally compiled I need to make an medm screen for it and put it in the c1sim:/home/controls/docker-cymac/userapps/medm/ directory.
But before doing that I really want to test it using the autogenerated medm screens which are in the virtual cymac in the folder /opt/rtcds/tst/x1/medm/x1sup. In Jon's post he said that I can use the virtual path for sitemap after running $eval$(./env_cymac)
16109 Mon May 3 13:35:12 2021 Ian MacMillanUpdateCDSSUS simPlant model
When the cymac is started it gives me a list of channels shown below.
$Initialized TP interface node=8, host=98e93ecffcca$ Creating X1:DAQ-DC0_X1IOP_STATUS $Creating X1:DAQ-DC0_X1IOP_CRC_CPS$ Creating X1:DAQ-DC0_X1IOP_CRC_SUM $Creating X1:DAQ-DC0_X1SUP_STATUS$ Creating X1:DAQ-DC0_X1SUP_CRC_CPS $Creating X1:DAQ-DC0_X1SUP_CRC_SUM But when I enter it into the Diaggui I get an error: The following channel could not be found: X1:DAQ-DC0_X1SUP_CRC_CPS My guess is that need to connect to the Diaggui to something that can access those channels. I also need to figure out what those channels are. 16118 Tue May 4 14:55:38 2021 Ian MacMillanUpdateCDSSUS simPlant model After a helpful meeting with Jon, we realized that I have somehow corrupted the sitemap file. So I am going to use the code Chris wrote to regenerate it. Also, I am going to connect the controller using the IPC parts. The error that I was having before had to do with the IPC parts not being connected properly. 16122 Wed May 5 15:11:54 2021 Ian MacMillanUpdateCDSSUS simPlant model I added the IPC parts back to the plant model so that should be done now. It looks like this again here. I can't seem to find the control model which should look like this. When I open sus_single_control.mdl, it just shows the C1_SUS_SINGLE_PLANT.mdl model. Which should not be the case. 16124 Thu May 6 16:13:24 2021 Ian MacMillanUpdateCDSSUS simPlant model When using mdl2adl I was getting the error: $ cd /home/controls/mdl2adl $./mdl2adl x1sup.mdl error: set$site and $ifo environment variables to set these in the terminal use the following commands: $ export site=tst $export ifo=x1 On most of the systems, there is a script that automatically runs when a terminal is opened that sets these but that hasn't been added here so you must run these commands every time you open the terminal when you are using mdl2adl. 16126 Fri May 7 11:19:29 2021 Ian MacMillanUpdateCDSSUS simPlant model I copied c1scx.mdl to the docker to attach to the plant using the commands: $ ssh nodus.ligo.caltech.edu [Enter Password] $cd opt/rtcds/userapps/release/isc/c1/models/simPlant$ scp c1scx.mdl controls@c1sim:/home/controls/docker-cymac/userapps
ELOG V3.1.3-
|
# Scoring
## mp_transform optimize with franklin2019 scoring
Category:
Design
Scoring
Enzyme Design
Membrane
Hi All,
I am running some flexible backbone design on a transmembrane four-helix bundle heme protein via RosettaScripts. I'm finding that the membrane residue is moving a lot during design, and I have to optimize the embedding with mp_transform post-design to reposition the mem residue. I have a few questions about this:
Post Situation:
## Setting output values using a database (with relax app)
Category:
Scoring
Hello all,
I noticed rosetta creates 23 tables in MySQL when using it as output backend in the RELAX application.
For my specific research I only need to hold the per-residue energies and the total energies... I don't need the atoms, topology or PDB data to be saved after each cycle, as it consumes a lot of memory.
Post Situation:
## Output "per-residue" energy score to database
Category:
Scoring
Hello all.
I noticed that when I output a relax cycle to a PDB, the residue scores are in the bottom of the PDB.
When I output the results to a database (MySQL), the table structure_scores holds the energy scores of the whole PDB, but I can't find the per-residue values anywhere.
Post Situation:
## BUG REPORT: MySql column protocol.protocol_id must have the AUTO_INCREMENT flag set
Category:
Scoring
I tried running relax compiled with MPI and MYSQL.
MySQL was configured properly. A test run without MPI successfully created tables and wrote data to MySQL.
The protocol file @relax.flags was created with the following options:
-list pdblist.txt
-relax:script relax.script
-relax:bb_move false
-score:output_residue_energies
-score:weights res2015
Post Situation:
## Relaxation vs minimisation
Category:
Structure prediction
Scoring
PyRosetta
Hi all,
I am very new to the computational structural biology community and I have tried to model a structure by using a software which runs MODELLER on the background. However, my result shows a number of steric clashes and a very high fa_rep when I calculate it with PyRosetta. I am therefore trying to improve the structure before moving on with the rest of my analysis.
Post Situation:
## dump_scored_pdb
Category:
Scoring
I am new to Pyrosetta and trying to dump the modified pose along with its total energy.
I have used pose.dump_score_pdb('model.pdb', scorefxn) to have the total score printed out with the PDB file.
However, this method prints out way more information. What would be the equivalent way of having the total score printed out (as the fileters work in the xml file at rosetta scripts)?
Post Situation:
## Distorted metal coordination geometry after relaxation (SetupMetalMover was used, fold tree and constraints were set manually)
Category:
Design
Scoring
Constraints
PyRosetta
Hello,
I am trying to relax Zn containing peptides like zinc fingers, but always got distorted geometries of the coordination site and much higher scores after the relax. Still, the rest of the peptide looks nice.
Post Situation:
## Working through tutorials: expected output scorces differ from calculated results
Category:
Scoring
Hi,
I'm new to Rosetta and currently working through the tutorials.
I often recognized that the the score values produced by the recent version "Rosetta 2019.47" on my computer differ from the ones cited in the tutorials and given expected_output.
e.g.:
in the Scoring Tutorial (subchapter "Changing the Score Function") the command
ROSETTA3/bin/score_jd2.linuxgccrelease @flag_docking
with flags
Post Situation:
## Compaing interface socre of different ligands with the same protein structure
Category:
Scoring
I am trying to dock molecules with the same parent core but different substitution groups into the same PDB structure using ROISE.
I tried to read through some published papers on ResettaLigand but still was not able to find out the answer for:
Since we are only allowed to include on ligand in each run, can we submit a few different runs and compare the interface score directly?
Post Situation:
## Ddg calculation for a metalloprotein using APBS
Category:
Scoring
Hi all,
I am trying to calculate Ddg for a protein complex, which is made of 3 chains: A, B and C.
The chains B and C have a zinc-finger each, which are a zinc ion complexed by four CYS side-chains.
To calculate the Ddg I am using the SetupPoissonBoltzmannPotential mover followed by the Ddg filter, in order to separate chain B from chain A+C and calculate the binding energy.
Post Situation:
|
# What are the products of the following reaction? 2Al(s) + 6H^+ (aq) ->
Jul 2, 2016
Aluminium cations and hydrogen gas.
#### Explanation:
You were given the reactants in the net ionic equation that describes the reaction between aluminium metal, $\text{Al}$, and hydrochloric acid, $\text{HCl}$.
Hydrochloric acid is a strong acid, which means that it dissociates completely in aqueous solution to form hydrogen ions, ${\text{H}}^{+}$, which you'll very often see referred to hydronium cations, ${\text{H"_3"O}}^{+}$, and chloride anions, ${\text{Cl}}^{-}$.
${\text{HCl"_ ((aq)) -> "H"_ ((aq))^(+) + "Cl}}_{\left(a q\right)}^{-}$
You can thus rewrite the equation as
2"Al"_ ((s)) + 6"H"_ ((aq))^(+) + 6"Cl"_ ((aq))^(-) -> ?
Remember, the stoichiometric coefficient of the hydrogen ions must also be distributed to the chloride anions, since
$6 {\text{HCl"_ ((aq)) -> 6"H"_ ((aq))^(+) + 6"Cl}}_{\left(a q\right)}^{-}$
Now, when aluminium reacts with hydrochloric acid, it gets oxidized to aluminium cations, ${\text{Al}}^{3 +}$. At the same time, the hydrogen ions get reduced to hydrogen gas, ${\text{H}}_{2}$.
$2 {\text{Al"_ ((s)) + 6"H"_ ((aq))^(+) + color(red)(cancel(color(black)(6"Cl"_ ((aq))^(-)))) -> "Al"_ ((aq))^(3+) + color(red)(cancel(color(black)(6"Cl"_ ((aq))^(-)))) + "H}}_{2 \left(g\right)} \uparrow$
As you can see ,the chloride anions are spectator ions, which is why the initial equation didn't include them
$2 {\text{Al"_ ((s)) + 6"H"_ ((aq))^(+) -> "Al"_ ((aq))^(3+) + "H}}_{2 \left(g\right)} \uparrow$
Now all you have to do is balance the aluminium and hydrogen atoms
$\textcolor{g r e e n}{| \overline{\underline{\textcolor{w h i t e}{\frac{a}{a}} \textcolor{b l a c k}{2 {\text{Al"_ ((s)) + 6"H"_ ((aq))^(+) -> 2"Al"_ ((aq))^(3+) + 3"H}}_{2 \left(g\right)} \uparrow} \textcolor{w h i t e}{\frac{a}{a}} |}}}$
The products of the reaction will thus be aqueous aluminium cations, ${\text{Al}}^{3 +}$, and hydrogen gas, ${\text{H}}_{2}$.
If you want, you can add in the chloride anions to get
$2 {\text{Al"_ ((s)) + 6"H"_ ((aq))^(+) + 6"Cl"_ ((aq))^(-) -> 2"Al"_ ((aq))^(3+) + 6"Cl"_ ((aq))^(-) + 3"H}}_{2 \left(g\right)} \uparrow$
This is equivalent to
$2 {\text{Al"_ ((s)) + 6"HCl"_ ((aq)) -> 2"AlCl"_ (3(aq)) + 3"H}}_{2 \left(g\right)} \uparrow$
The single replacement reaction reaction between aluminium metal and hydrochloric acid produces aqueous aluminium chloride and hydrogen gas.
|
Skip to main content
# 8.3: A Single Population Mean using the Student t-Distribution
In practice, we rarely know the population standard deviation. In the past, when the sample size was large, this did not present a problem to statisticians. They used the sample standard deviation $$s$$ as an estimate for $$\sigma$$ and proceeded as before to calculate a confidence interval with close enough results. However, statisticians ran into problems when the sample size was small. A small sample size caused inaccuracies in the confidence interval.
William S. Goset (1876–1937) of the Guinness brewery in Dublin, Ireland ran into this problem. His experiments with hops and barley produced very few samples. Just replacing $$\sigma$$ with $$s$$ did not produce accurate results when he tried to calculate a confidence interval. He realized that he could not use a normal distribution for the calculation; he found that the actual distribution depends on the sample size. This problem led him to "discover" what is called the Student's t-distribution. The name comes from the fact that Gosset wrote under the pen name "Student."
Up until the mid-1970s, some statisticians used the normal distribution approximation for large sample sizes and only used the Student's $$t$$-distribution only for sample sizes of at most 30. With graphing calculators and computers, the practice now is to use the Student's t-distribution whenever $$s$$ is used as an estimate for $$\sigma$$. If you draw a simple random sample of size $$n$$ from a population that has an approximately a normal distribution with mean $$\mu$$ and unknown population standard deviation $$\sigma$$ and calculate the $$t$$-score
$t = \dfrac{\bar{x} - \mu}{\left(\dfrac{s}{\sqrt{n}}\right)},$
then the $$t$$-scores follow a Student's t-distribution with $$n – 1$$ degrees of freedom. The $$t$$-score has the same interpretation as the z-score. It measures how far $$\bar{x}$$ is from its mean $$\mu$$. For each sample size $$n$$, there is a different Student's t-distribution.
The degrees of freedom, $$n – 1$$, come from the calculation of the sample standard deviation $$s$$. In [link], we used $$n$$ deviations ($$x - \bar{x}$$ values) to calculate $$s$$. Because the sum of the deviations is zero, we can find the last deviation once we know the other $$n – 1$$ deviations. The other $$n – 1$$ deviations can change or vary freely. We call the number $$n – 1$$ the degrees of freedom (df).
For each sample size $$n$$, there is a different Student's t-distribution.
Properties of the Student's $$t$$-Distribution
• The graph for the Student's $$t$$-distribution is similar to the standard normal curve.
• The mean for the Student's $$t$$-distribution is zero and the distribution is symmetric about zero.
• The Student's $$t$$-distribution has more probability in its tails than the standard normal distribution because the spread of the $$t$$-distribution is greater than the spread of the standard normal. So the graph of the Student's $$t$$-distribution will be thicker in the tails and shorter in the center than the graph of the standard normal distribution.
• The exact shape of the Student's $$t$$-distribution depends on the degrees of freedom. As the degrees of freedom increases, the graph of Student's $$t$$-distribution becomes more like the graph of the standard normal distribution.
• The underlying population of individual observations is assumed to be normally distributed with unknown population mean $$\mu$$ and unknown population standard deviation $$\sigma$$. The size of the underlying population is generally not relevant unless it is very small. If it is bell shaped (normal) then the assumption is met and doesn't need discussion. Random sampling is assumed, but that is a completely separate assumption from normality.
Calculators and computers can easily calculate any Student's $$t$$-probabilities. The TI-83,83+, and 84+ have a tcdf function to find the probability for given values of $$t$$. The grammar for the tcdf command is tcdf(lower bound, upper bound, degrees of freedom). However for confidence intervals, we need to use inverse probability to find the value of t when we know the probability.
For the TI-84+ you can use the invT command on the DISTRibution menu. The invT command works similarly to the invnorm. The invT command requires two inputs: invT(area to the left, degrees of freedom) The output is the t-score that corresponds to the area we specified.
The TI-83 and 83+ do not have the invT command. (The TI-89 has an inverse T command.)
A probability table for the Student's $$t$$-distribution can also be used. The table gives $$t$$-scores that correspond to the confidence level (column) and degrees of freedom (row). (The TI-86 does not have an invT program or command, so if you are using that calculator, you need to use a probability table for the Student's $$t$$-Distribution.) When using a $$t$$-table, note that some tables are formatted to show the confidence level in the column headings, while the column headings in some tables may show only corresponding area in one or both tails.
A Student's $$t$$-table (See [link]) gives $$t$$-scores given the degrees of freedom and the right-tailed probability. The table is very limited. Calculators and computers can easily calculate any Student's $$t$$-probabilities.
The notation for the Student's t-distribution (using T as the random variable) is:
• $$T \sim t_{df}$$ where $$df = n – 1$$.
• For example, if we have a sample of size $$n = 20$$ items, then we calculate the degrees of freedom as $$df = n - 1 = 20 - 1 = 19$$ and we write the distribution as $$T \sim t_{19}$$.
If the population standard deviation is not known, the error bound for a population mean is:
• $$EBM = \left(t_{\frac{\alpha}{2}}\right)\left(\frac{s}{\sqrt{n}}\right)$$,
• $$t_{\frac{\alpha}{2}}$$ is the $$t$$-score with area to the right equal to $$\frac{\alpha}{2}$$,
• use $$df = n – 1$$ degrees of freedom, and
• $$s =$$ sample standard deviation.
The format for the confidence interval is:
$(\bar{x} - EBM, \bar{x} + EBM).$
To calculate the confidence interval directly:
Press STAT.
Arrow over to TESTS.
Arrow down to 8:TInterval and press ENTER (or just press 8).
Example $$\PageIndex{1}$$: Acupuncture
Suppose you do a study of acupuncture to determine how effective it is in relieving pain. You measure sensory rates for 15 subjects with the results given. Use the sample data to construct a 95% confidence interval for the mean sensory rate for the population (assumed normal) from which you took the data.
The solution is shown step-by-step and by using the TI-83, 83+, or 84+ calculators.
8.6; 9.4; 7.9; 6.8; 8.3; 7.3; 9.2; 9.6; 8.7; 11.4; 10.3; 5.4; 8.1; 5.5; 6.9
Answer
• The first solution is step-by-step (Solution A).
• The second solution uses the TI-83+ and TI-84 calculators (Solution B).
Solution A
To find the confidence interval, you need the sample mean, $$\bar{x}$$, and the $$EBM$$.
$$\bar{x} = 8.2267 \(s$$ = 1.6722 n = 15\)
$$df = 15 – 1 = 14 CL so \alpha = 1 – CL = 1 – 0.95 = 0.05$$
$$\frac{\alpha}{2} = 0.025 t_{\frac{\alpha}{2}} = t_{0.025}$$
The area to the right of $$t_{0.025}$$ is 0.025, and the area to the left of $$t_{0.025}$$ is 1 – 0.025 = 0.975
$$t_{\frac{\alpha}{2}} = t_{0.025} = 2.14$$ using invT(.975,14) on the TI-84+ calculator.
$$EBM = \left(t_{\frac{\alpha}{2}}\right)\left(\frac{s}{\sqrt{n}}\right)$$
$$EBM = (2.14)\left(\frac{1.6722}{\sqrt{15}}\right) = 0.924$$
$$\bar{x} – EBM = 8.2267 – 0.9240 = 7.3$$
$$\bar{x} + EBM = 8.2267 + 0.9240 = 9.15$$
The 95% confidence interval is (7.30, 9.15).
We estimate with 95% confidence that the true population mean sensory rate is between 7.30 and 9.15.
Solution B
Press STAT and arrow over to TESTS.
Arrow down to 8:TInterval and press ENTER (or you can just press 8).
Arrow to Data and press ENTER.
Arrow down to List and enter the list name where you put the data.
There should be a 1 after Freq.
Arrow down to C-level and enter 0.95
Arrow down to Calculate and press ENTER.
The 95% confidence interval is (7.3006, 9.1527)
When calculating the error bound, a probability table for the Student's t-distribution can also be used to find the value of $$t$$. The table gives $$t$$-scores that correspond to the confidence level (column) and degrees of freedom (row); the $$t$$-score is found where the row and column intersect in the table.
Exercise $$\PageIndex{2}$$
You do a study of hypnotherapy to determine how effective it is in increasing the number of hourse of sleep subjects get each night. You measure hours of sleep for 12 subjects with the following results. Construct a 95% confidence interval for the mean number of hours slept for the population (assumed normal) from which you took the data.
8.2; 9.1; 7.7; 8.6; 6.9; 11.2; 10.1; 9.9; 8.9; 9.2; 7.5; 10.5
Answer
(8.1634, 9.8032)
Example $$\PageIndex{2}$$: The Human Toxome Project
The Human Toxome Project (HTP) is working to understand the scope of industrial pollution in the human body. Industrial chemicals may enter the body through pollution or as ingredients in consumer products. In October 2008, the scientists at HTP tested cord blood samples for 20 newborn infants in the United States. The cord blood of the "In utero/newborn" group was tested for 430 industrial compounds, pollutants, and other chemicals, including chemicals linked to brain and nervous system toxicity, immune system toxicity, and reproductive toxicity, and fertility problems. There are health concerns about the effects of some chemicals on the brain and nervous system. Table shows how many of the targeted chemicals were found in each infant’s cord blood.
79 145 147 160 116 100 159 151 156 126 137 83 156 94 121 144 123 114 139 99
Use this sample data to construct a 90% confidence interval for the mean number of targeted industrial chemicals to be found in an in infant’s blood.
Solution A
From the sample, you can calculate $$\bar{x} = 127.45$$ and $$s = 25.965$$. There are 20 infants in the sample, so $$n = 20$$, and $$df = 20 – 1 = 19$$.
You are asked to calculate a 90% confidence interval: $$CL = 0.90$$, so
$\alpha = 1 – CL = 1 – 0.90 = 0.10 \frac{\alpha}{2} = 0.05, t_{\frac{\alpha}{2}} = t_{0.05}$
By definition, the area to the right of $$t_{0.05}$$ is 0.05 and so the area to the left of $$t_{0.05}$$ is $$1 – 0.05 = 0.95$$.
Use a table, calculator, or computer to find that $$t_{0.05} = 1.729$$.
$$EBM = t_{\frac{\alpha}{2}}\left(\frac{s}{\sqrt{n}}\right) = 1.729\left(\frac{25.965}{\sqrt{20}}\right) \approx 10.038$$
$$\bar{x} – EBM = 127.45 – 10.038 = 117.412$$
$$\bar{x} + EBM = 127.45 + 10.038 = 137.488$$
We estimate with 90% confidence that the mean number of all targeted industrial chemicals found in cord blood in the United States is between 117.412 and 137.488.
Solution B
Enter the data as a list.
Press STAT and arrow over to TESTS.
Arrow down to 8:TInterval and press ENTER (or you can just press 8). Arrow to Data and press ENTER.
Arrow down to List and enter the list name where you put the data.
Arrow down to Freq and enter 1.
Arrow down to C-level and enter 0.90
Arrow down to Calculate and press ENTER.
The 90% confidence interval is (117.41, 137.49).
Exercise $$\PageIndex{2}$$
A random sample of statistics students were asked to estimate the total number of hours they spend watching television in an average week. The responses are recorded in Table. Use this sample data to construct a 98% confidence interval for the mean number of hours statistics students will spend watching television in one week.
0 3 1 20 9 5 10 1 10 4 14 2 4 4 5
Solution A
$$\bar{x} = 6.133, \(s$$ = 5.514, n = 15\), and $$df = 15 – 1 = 14$$
$$CL = 0.98$$, so $$\alpha = 1 - CL = 1 - 0.98 = 0.02$$
$$\frac{\alpha}{2} = 0.01 t_{\frac{\alpha}{2}} = t_{0.01} 2.624$$
$$EBM = t_{\frac{\alpha}{2}}\left(\frac{s}{\sqrt{n}}\right) = 2.624\left(\frac{5.514}{\sqrt{15}}\right) - 3.736$$
$$\bar{x} – EBM = 6.133 – 3.736 = 2.397$$
$$\bar{x} + EBM = 6.133 + 3.736 = 9.869$$
We estimate with 98% confidence that the mean number of all hours that statistics students spend watching television in one week is between 2.397 and 9.869.
Solution B
Enter the data as a list.
Press STAT and arrow over to TESTS.
Arrow down to 8:TInterval.
Press ENTER.
Arrow to Data and press ENTER.
Arrow down and enter the name of the list where the data is stored.
Enter Freq: 1
Enter C-Level: 0.98
Arrow down to Calculate and press Enter.
The 98% confidence interval is (2.3965, 9,8702).
## Reference
1. “America’s Best Small Companies.” Forbes, 2013. Available online at http://www.forbes.com/best-small-companies/list/ (accessed July 2, 2013).
2. Data from Microsoft Bookshelf.
3. Data from http://www.businessweek.com/.
4. Data from http://www.forbes.com/.
5. “Disclosure Data Catalog: Leadership PAC and Sponsors Report, 2012.” Federal Election Commission. Available online at http://www.fec.gov/data/index.jsp (accessed July 2,2013).
6. “Human Toxome Project: Mapping the Pollution in People.” Environmental Working Group. Available online at http://www.ewg.org/sites/humantoxome...tero%2Fnewborn (accessed July 2, 2013).
7. “Metadata Description of Leadership PAC List.” Federal Election Commission. Available online at http://www.fec.gov/finance/disclosur...pPacList.shtml (accessed July 2, 2013).
## Glossary
Degrees of Freedom ($$df$$)
the number of objects in a sample that are free to vary
Normal Distribution
a continuous random variable (RV) with pdf $$f(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-(x-\mu)^2/2\sigma^{2}}$$, where $$\mu$$ is the mean of the distribution and $$\sigma$$ is the standard deviation, notation: $$X \sim N(\mu,\sigma)$$. If $$\mu = 0$$ and $$\sigma = 1$$, the RV is called the standard normal distribution.
Standard Deviation
a number that is equal to the square root of the variance and measures how far data values are from their mean; notation: $$s$$ for sample standard deviation and $$\sigma$$ for population standard deviation
Student's $$t$$-Distribution
investigated and reported by William S. Gossett in 1908 and published under the pseudonym Student; the major characteristics of the random variable (RV) are:
• It is continuous and assumes any real values.
• The pdf is symmetrical about its mean of zero. However, it is more spread out and flatter at the apex than the normal distribution.
• It approaches the standard normal distribution as $$n$$ get larger.
• There is a "family of $$t$$-distributions: each representative of the family is completely defined by the number of degrees of freedom, which is one less than the number of data.
## Contributors
• Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/[email protected].
|
Chapter 3, Application 38
The graph below gives the function
$$f_a(x) = ax(1-x)$$.
Manipulate the value of $$a$$ with the slider bar, and adjust the initial condition. Determine the value of $$a$$ when the fixed point at $$x=0$$ changes from attracting to repelling.
a:
|
Qwika Toolbar for IE and Firefox users!
## Home > English
Searching 21,964,380 articles in 1,158 wikis. Beta release. Any comments please contact us Press release (Feb 17): New search engine helps bridge the language gap in Wikipedia Press release (Apr 4): Qwika search engine now indexes 1158 wikis in 12 languages
Search wikis:
Results for Hydrodynamics 1 to 10 of 360
Hydrodynamics
Hydrodynamics Hydrodynamics (literally, "water motion") is fluid dynamics applied ... Leonhard Euler established the general equations of hydrodynamics. The practice was continued by Joseph Louis ... Reynolds, Poiseuille's law, potential flow. plume (hydrodynamics) entrainment (hydrodynamics)
http://en.wikipedia.org/wiki/Hydrodynamics - 2k - Cached - Similar pages
Hydrodynamics (translated from German)
Hydrodynamics Those Hydrodynamics (also: Fluid dynamics; from the Greek one ... also occupied Wasser . The fundamental equation of hydrodynamics is those Continuity equation $\left\{\partial \rho ...$
http://de.wikipedia.org/wiki/Hydrodynamik - 2k - Cached (German) - Wikipedia (German) - Similar pages
Hydrodynamics (translated from Dutch)
Hydrodynamics hydrodynamics the dynamics of fluïda has been applied ... this field in 17e century . The term ' hydrodynamics ' was for the first time used Daniel ... Bernoulli as a title of its work Hydrodynamics ( 1738 ). Bernoulli and Leonhard Euler developed the general comparisons of hydrodynamics. The work became verdergezet Joseph-Louis ...
http://nl.wikipedia.org/wiki/Hydrodynamica - 3k - Cached (Dutch) - Wikipedia (Dutch) - Similar pages
Hydrodynamics (translated from Russian)
Hydrodynamics Gidrodin?amika - the division physics of continuous ... motion. Content The main divisions of the hydrodynamics Ideal medium - is studied the behavior of ... incompressible medium, wave in the revolving medium. Hydrodynamics of the laminar flows Hydrodynamics the laminar flows the behavior of liquid ... of the case of the equation of hydrodynamics they take the sufficiently simple form ...
http://ru.wikipedia.org/wiki/Гидродинамика - 14k - Cached (Russian) - Wikipedia (Russian) - Similar pages
Entrainment (hydrodynamics)
Entrainment (hydrodynamics) Entrainment is the term used to the ...
http://en.wikipedia.org/wiki/Entrainment_(hydrodynamics) - 1k - Cached - Similar pages
Plume (hydrodynamics)
Plume (hydrodynamics) In hydrodynamics, a plume is a column of one ...
http://en.wikipedia.org/wiki/Plume_(hydrodynamics) - 6k - Cached - Similar pages
Talk:Hydrodynamics
Talk:Hydrodynamics Come help with Wikipedia:WikiProject Fluid dynamics ... UTC) This is only a history of hydrodynamics, not actually what it is. Unless it ...
http://en.wikipedia.org/wiki/Talk:Hydrodynamics - 0k - Cached - Similar pages
Quantum hydrodynamics
Quantum hydrodynamics Quantum hydrodynamics is more than the study of superfluidity ... Some of the main subjects in quantum hydrodynamics are quantum turbulence, quantized vortices, first , second ... Many famous scientists have worked in quantum hydrodynamics, including Richard Feynman, Lev Landau, and Pyotr ...
http://en.wikipedia.org/wiki/Quantum_hydrodynamics - 1k - Cached - Similar pages
Smoothed Particle Hydrodynamics
Smoothed Particle Hydrodynamics Smooth particle hydrodynamics is a Lagrangian technique for computational fluid ...
http://en.wikipedia.org/wiki/Smoothed_Particle_Hydrodynamics - 1k - Cached - Similar pages
Smoothed particle hydrodynamics
Smoothed particle hydrodynamics Smoothed Particle Hydrodynamics (SPH) is a computational method used for ... as the density. Method The smoothed particle hydrodynamics (SPH) method works by dividing the fluid ... model self-gravity in addition to pure hydrodynamics. The particle-based nature of SPH makes ... Astrophysics The adaptive resolution of smoothed particle hydrodynamics, combined with its ability to simulate ...
http://en.wikipedia.org/wiki/Smoothed_particle_hydrodynamics - 9k - Cached - Similar pages
Page:1 2 3 4 5 6 7 8 9 10 Next >>
Search wikis:
Search:
Try your search on: FactBites (sentence-based)
|
# zbMATH — the first resource for mathematics
Inclusions of von Neumann algebras, and quantum groupoïds. (English) Zbl 0974.46055
Authors’ abstract: From a depth 2 inclusion of von Neumann algebras $$M_0\subset M_1$$, which an operator-valued weight verifying a regularity condition, we construct a pseudo-multiplicative unitary, which leads to two structures of Hopf bimodules, dual to each other. Moreover, we construct an action of one of these structures on the algebra $$M_1$$ such that $$M_0$$ is the fixed point subalgebra, the algebra $$M_2$$ given by the basic construction being then isomorphic to the crossed-product. We construct on $$M_2$$ an action of the other structure, which can be considered as the dual action. If the inclusion $$M_0\subset M_1$$ is irreducible, we recover quantum groups, as proved in former papers. This construction generalizes the situation which occurs for actions (or co-actions) of groupoïds. Other examples of “quantum groupoïds” are given.
##### MSC:
46L89 Other “noncommutative” mathematics based on $$C^*$$-algebra theory 16W30 Hopf algebras (associative rings and algebras) (MSC2000) 46N50 Applications of functional analysis in quantum physics 46L60 Applications of selfadjoint operator algebras to physics 46L10 General theory of von Neumann algebras 81R50 Quantum groups and related algebraic methods applied to problems in quantum theory
Full Text:
##### References:
[1] Baaj, S.; Skandalis, G., Unitaires multiplicatifs et dualité pour LES produits croisés de $$C$$*-algèbres, Ann. sci. ENS, 26, 425-488, (1993) · Zbl 0804.46078 [2] Böhm, G.; Szlachányi, K., A coassociative $$C$$*-quantum group with non integral dimensions, Lett. math. phys., 38, 437-456, (1996) · Zbl 0872.16022 [3] Böhm, G.; Szlachányi, K., Weak $$C$$*-Hopf algebras: the coassociative symmetry of non-integral dimensions, in quantum groups and quantum spaces, Banach center publications, 40, 9-19, (1997) · Zbl 0894.16018 [4] Connes, A., On the spatial theory of von Neumann algebras, J. funct. analysis, 35, 153-164, (1980) · Zbl 0443.46042 [5] Connes, A., Non commutative geometry, (1994), Academic Press · Zbl 0933.46069 [6] Connes, A.; Skandalis, G., The longitudinal index theorem for foliations, Pub. R.I.M.S., 20, 1139-1183, (1984) · Zbl 0575.58030 [7] Enock, M., Inclusions irréductibles de facteurs et unitaires multiplicatifs II, J. funct. analysis, 154, 67-109, (1998) · Zbl 0921.46065 [8] Enock, M., Sous-facteurs intermédiaires et groupes quantiques mesurés, J. operator theory, 42, 305-330, (1999) · Zbl 0993.46031 [9] M. Enock, Inclusions of von Neumann algebras, and quantum groupoı́ds II, tirage préliminaire Nov. 99. [10] Enock, M.; Nest, R., Inclusions of factors, multiplicative unitaries and Kac algebras, J. funct. analysis, 137, 466-543, (1996) · Zbl 0847.22003 [11] Enock, M.; Schwartz, J.-M., Kac algebras and duality of locally compact groups, (1989), Springer-Verlag Berlin [12] Enock, M.; Vaınerman, L., Deformation of a Kac algebra by an abelian subgroup, Comm. math. phys., 178, 571-596, (1996) · Zbl 0876.46042 [13] Goodman, F.M.; de la Harpe, P.; Jones, V.F.R., Coxeter graphs and towers of algebras, MSRI publ. 14, (1989), Springer [14] Hilsum, M.; Skandalis, G., Stabilité des $$C$$*-algèbres de feuilletages, Ann. inst. Fourier, 33, 201-208, (1983) · Zbl 0505.46043 [15] Jones, V.; Takesaki, M., Actions of compact abelian groups on semi-finite injective factors, Acta math., 153, 213-258, (1984) · Zbl 0588.46042 [16] Masuda, T., Groupoid dynamical systems and crossed product I; the case of W**-dynamical systems, Publ. R.I.M.S. Kyoto, 20, 929-957, (1984) · Zbl 0584.46055 [17] Masuda, T.; Nakagami, Y., A von Neumann algebra framework for the duality of the quantum groups, Publ. RIMS Kyoto, 30, 799-850, (1994) · Zbl 0839.46055 [18] Monthubert, B.; Pierrot, F., Indice analytique et groupoı̈des de Lie, C.R. amer. math. soc. Paris, 325, 193-198, (1997) · Zbl 0955.22004 [19] Pimsner, M.; Popa, S., Iterating the basic construction, Trans. amer. math. soc., 310, 127-133, (1988) · Zbl 0706.46047 [20] J. Renault, A Groupoid Approach to $$C$$*-Algebras, Lecture Notes in Math, Vol, 793, Springer-Verlag. [21] Renault, J., The Fourier algebra of a measured groupoid and its multipliers, J. funct. analysis, 145, 455-490, (1997) · Zbl 0874.43003 [22] J.-L. Sauvageot, Produit tensoriel de Z-modules et applications, in Operator Algebras and their Connections with Topology and Ergodic Theory, Proceedings Buşteni, Romania, 1983, Lecture Notes in Math., Vol. 1132, pp. 468-485, Springer-Verlag. [23] Sauvageot, J.-L., Sur le produit tensoriel relatif d’espaces de Hilbert, J. operator theory, 9, 237-352, (1983) · Zbl 0517.46050 [24] Stratila, Ş., Modular theory in operator algebras, (1981), Abacus Press Turnbridge Wells · Zbl 0504.46043 [25] Szlachányi, K., Weak Hopf algebras, () · Zbl 1098.16504 [26] Vallin, J.-M., Bimodules de Hopf et poids opératoriels de Haar, J. operator theory, 35, 39-65, (1996) · Zbl 0849.22002 [27] J.-M. Vallin, Unitaire pseudo-multiplicatif associé à un groupoı̈de; applications à la moyennabilité, à parıtre au, J. Operator Theory. [28] J.-M. Vallin, Finite Dimensional Quantum Groupoı̈ds, in preparation. [29] Woronowicz, S.L., Tannaka – krein duality for compact matrix pseudogroups. twisted SU(N) group, Invent. math., 93, 35-76, (1988) · Zbl 0664.58044 [30] Woronowicz, S.L., Compact quantum group, Symétries quantiques (LES houches 1995),, (1998), North-Holland Amsterdam, p. 845-884 · Zbl 0997.46045 [31] S. L. Woronowicz, From multiplicative unitaries to quantum groups, preprint 1995. · Zbl 0876.46044 [32] Yamanouchi, T., Duality for actions and coactions of measured groupoids on von Neumann algebras, Mem. amer. math. soc., 101, 1-109, (1993) · Zbl 0822.46070 [33] Yamanouchi, T., Crossed products by groupoid actions and their smooth flow of weights, Publ. RIMS Kyoto, 28, 535-578, (1994) · Zbl 0824.46080 [34] Yamanouchi, T., Dual weights on crossed products by groupoid actions, Publ. RIMS Kyoto, 28, 653-678, (1994) · Zbl 0824.46081 [35] Yamanouchi, T., Duality for generalized Kac algebras and a characterization on finite groupoids algebras, J. algebra, 163, 9-50, (1994) · Zbl 0830.46047
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
# Page History
## Key
• This line was removed.
• Formatting was changed.
...
`<major>.<minor>.<revision>([{{<qualifier>]|[`<build>])}}
where:
• the qualifier section is optional (and is SNAPSHOT, alpha-1, alpha-2)
• the build section is optional (and increments starting at 1 if specified)
• any '0' build or revision elements can be omitted.
• only one of build and qualifier can be given (note that the timestamped qualifier includes a build number, but this is not the same)
• the build number is for those that repackage the original artifact (eg, as is often done with rpms)
...
nearest\farthest
compile
provided
runtime
system
test
compile
compile
compile
compile
compile
compile
provided
provided compile
provided provided
runtime
provided
provided
runtime
runtime compile
runtime
runtime
runtime
runtime
system
system compile
system
system
system
system
test
test compile
test test
runtime
test
test
|
# Are there any other twin primes with this property?
The twin primes 5 and 7 are such that one half their sum is a perfect number. Are there any other twin primes with this property?
It works for p=5. I think it should be of the form 1/2*(p+P+2). Is this true? How can I prove it?
Thx
|
# Operator Semigroups Simplified
How do I explain Operator Semigroups, in particular, positive operator semigroups to someone who hasn't studied math beyond high school?
I just want to give a vague idea/analogy to someone to let them know a bit about my a project I am working on.
• It depends on what you are doing with them. I don't think just explaining what they are is particularly motivating. – Don Thousand Jan 22 at 18:09
• @DonThousand Right now, I am just studying Positive Operator Semigroups. However, I just want to give someone from a non-math background a rough idea of what I'm studying. How do I do that without getting technical? – Mark Jan 22 at 18:12
• Wikipedia provides a good starting point. – Don Thousand Jan 22 at 18:19
• Were you likely to understand positive operator semigroups when you were in High School? – DisintegratingByParts Jan 23 at 7:30
• @DisintegratingByParts I don't really want them to completely understand. Something like a real-world application or analogy of sorts. Just give a really basic idea. – Mark Jan 23 at 10:12
How do I explain Operator Semigroups to someone who hasn't studied math beyond high school? I just want to give a vague idea/analogy. I don't really want them to completely understand.
Maybe, a possible analogy is the exponential function: you are studying a generalization of $$f(t)=e^{a t}$$ which allow ''matrix exponents''.
• But why would anyone want to study things like that?
Because, as we know that the said function is the solution of some important problems, we expect that the said generalization is the solution of some important generalized problems.
• What are these problems?
They are the functional equation $$f(x+y)=f(x)f(y)$$ and the differential equation $$f'(x)=af(x).$$ If we assume that $$f$$ is real-valued, then a solution is the exponential function $$f(t)=e^{at}$$. If we assume that $$f$$ is matrix-valued, then a solution will be given by a ''matrix exponential''. If we want go one step further (which have important applications), we will need semigroup of operators. Here is where your project starts.
|
Mathematics
OpenStudy (anonymous):
evaluate the function at the given values of x, if possible f(x)=6x^2-1/x^2 (a) x-=4 (b)x=-2
OpenStudy (anonymous):
is this $f(x)=\frac{6x^2-1}{x^2}$? if so then $f(-2)=\frac{6\times (-2)^2-1}{(-2)^2}=\frac{24-1}{4}$
OpenStudy (anonymous):
is a undefined
|
Tutorial:How to find the mapping percentages for data deposited in the Zaire ebolavirus bioproject from the 2014 outbreak
1
7
Entering edit mode
7.8 years ago
I was working on a material that I put together as an example of studying the Ebola data and strangely code that ran fine in the Fall now did not produce any results. Quite the head scratcher ...
I ended up troubleshooting it for a while only to realize that most of the new data deposited into the main Ebola project does not actually map to Ebola. At all. In fact 541 files map at rates below 1%!
Below is the code used to evaluate the mapping percentages for all data (891 files) in this project, see the results at the end). Hopefully it might save some time to someone else.
ebola bwa • 2.2k views
1
Entering edit mode
7.8 years ago
piet ★ 1.8k
0.25 bam/SRR1735115.bam 100 + 0 properly paired (0.25%:nan%)
I have repeated the mapping for a read-set you have found only 100 reads for. I mapped SRR1735115 to KR817241.1 with bwa mem. I found 7690 reads mapping. This is a 40-fold coverage in average (the Ebola genome is only 18000 nt in size). I inspected the mapping with Tablet. The reads show a very high error rate. I have never inspected RNA sequencing experiments before. Maybe that is normal? Nevertheless, I could clearly identify a few SNP with respect to the reference genome (which is from the 2014 outbreak).
In my opinion the challenge is sample preparation (as always). Ebola is a RNA virus. You have to prepare RNA from human blood serum, then reverse-transcribe it into DNA. Isolating RNA from blood is tricky and error-prone.
You stated that all samples were positive in qPCR (quantitative PCR). PCR is very sensitive. If PCR is positive, this does not mean that there is enough RNA in the serum for sequencing. With qPCR you can determine the viral load. The viral load varies by several orders of magnitude depending on the stage of the disease, but we currently know little about this for Ebola. It would be very interesting to compare the viral loads estimated from qPCR with the results from read mapping.
1
Entering edit mode
I was curious to find out where the more than 2.5 million reads not mapping to Ebola come from. First I mapped them to some human genes (dhfr, actb, 18S rRNA), but does not got any hits. The serum seams to be free of human RNA. Then I mapped to some bacterial rRNA operons from several phyla (proteobacteria, firmicutes, actinomycete). I got diffuse mapping with thousands of reads for all of them. Thus the major fraction of RNA in this sample is bacterial rRNA from a broad spectrum of species. Presumably the serum was stored a room temperature for an elevated period of time.
I also noted that all of the read sets with high amount of reads mapping to Ebola belong to the 99 fully assembled sequences submitted to Genbank along with the Science paper. The other read sets seam to be just the trash of that great study.
0
Entering edit mode
That's pretty cool. Thanks for sharing, I wish more studies had posts like this were we can talks about it.
Also since we are talking about it, here is the original post that I was trying to reproduce: Mission Impossible: you have 1 minute to analyze the Ebola Genome As it turns out this time one needs to pick the right data.
0
Entering edit mode
I've selected the first 20K (paired) reads from the samples so that I could map all samples (891). I wonder was the mapping percentage 0.25% a good approximation?. But you are right in that this may be sufficient. I have also noted low sequencing accuracy for the samples with low coverage.
0
Entering edit mode
The ratio of mapped reads is 0.3 % in my mapping, which is very close to your result obtained with only the first 20K.
|
Definitions
# Charts on SO(3)
In mathematics, the special orthogonal group in three dimensions, otherwise known as the rotation group SO(3), is a naturally occurring example of a manifold. The various charts on SO(3) set up rival coordinate systems: in this case there cannot be said to be a preferred set of parameters describing a rotation. There are three degrees of freedom, so that the dimension of SO(3) is three. In numerous applications one or other coordinate system is used, and the question arises how to convert from a given system to another.
The candidates include:
There are problems in using these as more than local charts, to do with their multiple-valued nature, and singularities. That is, one must be careful above all to work only with diffeomorphisms in the definition of chart. Problems of this sort are inevitable, since SO(3) is diffeomorphic to real projective space RP3, which is a quotient of S3 by identifying antipodal points, and charts try to model a manifold using R3.
This explains why, for example, the Euler angles appear to give a variable in the 3-torus, and the unit quaternions in a 3-sphere. The uniqueness of the representation by Euler angles breaks down at some points (cf. gimbal lock), while the quaternion representation is always a double cover, with q and −q giving the same rotation.
If we use a skew-symmetric matrix, every 3×3 skew-symmetric matrix is determined by 3 parameters, and so at first glance, the parameter space is R3. Exponentiating such a matrix results in an orthogonal 3×3 matrix of determinant 1--in other words, a rotation matrix, but this is a many-to-one map. It is possible to restrict these matrices to a ball around the origin in R3 so that rotations do not exceed 180 degrees, and this will be one-to-one, except for rotations by 180 degrees, which correspond to the boundary S2, and these identify antipodal points. The 3-ball with this identification of the boundary is RP3. A similar situation holds for applying a Cayley transform to the skew-symmetric matrix.
Axis angle gives parameters in S2×S1; if we replace the unit vector by the actual axis of rotation, so that n and −n give the same axis line, the set of axis becomes RP2, the real projective plane. But since rotations around n and −n are parameterized by opposite values of θ, the result is an S1 bundle over RP2, which turns out to be RP3.
Fractional linear transformations use four complex parameters, a, b, c, and d, with the condition that ad-bc is non-zero. Since multiplying all four parameters by the same complex number does not change the parameter, we can insist that ad-bc=1. This suggests writing (a,b,c,d) as a 2×2 complex matrix of determinant 1, that is, as an element of the special linear group SL(2,C). But not all such matrices produce rotations: conformal maps on S2 are also included. To only get rotations we insist that d is the complex conjugate of a, and c is the negative of the complex conjugate of b. Then we have two complex numbers, a and b, subject to |a|2+|b|2=1. If we write a+bj, this is a quaternion of unit length.
Ultimately, since R3 is not RP3, there will be a problem with each of these approaches. In some cases, we need to remember that certain parameter values result in the same rotation, and to remove this issue, boundaries must be set up, but then a path through this region in R3 must then suddenly jump to a different region when it crosses a boundary. Gimbal lock is a problem when the derivative of the map is not full rank, which occurs with Euler angles and Tait-Bryan angles, but not for the other choices. The quaternion representation has none of these problems (being a two-to-one mapping everywhere), but it has 4 parameters with a condition (unit length), which sometimes makes it harder to see the three degrees of freedom available.
One area in which these considerations, in some form, become inevitable, is the kinematics of a rigid body. One can take as definition the idea of a curve in the Euclidean group E(3) of three-dimensional Euclidean space, starting at the identity (initial position). The translation subgroup T of E(3) is a normal subgroup, with quotient SO(3) if we look at the subgroup E+(3) of direct isometries only (which is reasonable in kinematics). The translational part can be decoupled from the rotational part in standard Newtonian kinematics by considering the motion of the center of mass, and rotations of the rigid body about the center of mass. Therefore any rigid body movement leads directly to SO(3), when we factor out the translational part.
|
317 views
Have a look at the series : $2,1,(1/2),(1/4),\dots$ What number should come next?
1. $(1/3)$
2. $(1/8)$
3. $(2/8)$
4. $(1/16)$
## 1 Answer
Method1: It is a series of numbers where the next number is half of its previous number.
2/2 = 1
1/2 = 1/2
(1/2)/2 = 1/4
(1/4)/2 = 1/8
option B is correct.
Method 2:The above sequence is a GP (geometric progression) with the first term 2, and the common ratio 1/2.
$n^th$ term of GP = $a r^(n−1)$
So, fifth term of GP = $2*(1/2)^4$
=$2 * (1/16)$
= $1/8$
So, next term of GP = $1/8$
4.8k points
Answer:
1 vote
1 answer
1
438 views
1 vote
1 answer
2
284 views
1 vote
1 answer
3
1,164 views
1 vote
1 answer
4
1 vote
1 answer
5
|
It is currently 19 Oct 2017, 11:34
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Events & Promotions
Events & Promotions in June
Open Detailed Calendar
M26-30
Author Message
TAGS:
Hide Tags
Math Expert
Joined: 02 Sep 2009
Posts: 41892
Kudos [?]: 128888 [0], given: 12183
Show Tags
16 Sep 2014, 01:26
Expert's post
9
This post was
BOOKMARKED
00:00
Difficulty:
95% (hard)
Question Stats:
27% (01:14) correct 73% (01:24) wrong based on 37 sessions
HideShow timer Statistics
If $$x$$, $$y$$, and $$z$$ are positive integers and $$xyz=2,700$$. Is $$\sqrt{x}$$ an integer?
(1) $$y$$ is an even perfect square and $$z$$ is an odd perfect cube.
(2) $$\sqrt{z}$$ is not an integer.
[Reveal] Spoiler: OA
_________________
Kudos [?]: 128888 [0], given: 12183
Math Expert
Joined: 02 Sep 2009
Posts: 41892
Kudos [?]: 128888 [0], given: 12183
Show Tags
16 Sep 2014, 01:26
Expert's post
1
This post was
BOOKMARKED
Official Solution:
Note: a perfect square, is an integer that can be written as the square of some other integer. For example $$16=4^2$$, is a perfect square. Similarly, a perfect cube is an integer that can be written as the cube of some other integer. For example, $$27=3^3$$ is a perfect cube.
Make prime factorization of 2,700: $$xyz=2,700=2^2*3^3*5^2$$.
(1) $$y$$ is an even perfect square and $$z$$ is an odd perfect cube. If $$y$$ is either $$2^2$$ or $$2^2*5^2$$ and $$z=3^3= \text{odd perfect square}$$ then $$x$$ must be a perfect square which makes $$\sqrt{x}$$ an integer: $$x=5^2$$ or $$x=1$$. But if $$z=1^3= \text{odd perfect cube}$$ then $$x$$ could be $$3^3$$ which makes $$\sqrt{x}$$ not an integer. Not sufficient.
(2) $$\sqrt{z}$$ is not an integer. Clearly insufficient.
(1)+(2) As from (1) $$\sqrt{z} \ne integer$$ then $$z \ne 1$$, therefore it must be $$3^3$$ (from 1), so $$x$$ must be a perfect square which makes $$\sqrt{x}$$ an integer: $$x=5^2$$ or $$x=1$$. Sufficient.
_________________
Kudos [?]: 128888 [0], given: 12183
Intern
Joined: 11 Aug 2014
Posts: 5
Kudos [?]: 2 [0], given: 31
Show Tags
16 Nov 2014, 00:21
1
This post was
BOOKMARKED
Bunuel wrote:
Official Solution:
Note: a perfect square, is an integer that can be written as the square of some other integer. For example $$16=4^2$$, is a perfect square. Similarly, a perfect cube is an integer that can be written as the cube of some other integer. For example, $$27=3^3$$ is a perfect cube.
Make prime factorization of 2,700: $$xyz=2,700=2^2*3^3*5^2$$.
(1) $$y$$ is an even perfect square and $$z$$ is an odd perfect cube. If $$y$$ is either $$2^2$$ or $$2^2*5^2$$ and $$z=3^3= \text{odd perfect square}$$ then $$x$$ must be a perfect square which makes $$\sqrt{x}$$ an integer: $$x=5^2$$ or $$x=1$$. But if $$z=1^3= \text{odd perfect cube}$$ then $$x$$ could be $$3^3$$ which makes $$\sqrt{x}$$ not an integer. Not sufficient.
(2) $$\sqrt{z}$$ is not an integer. Clearly insufficient.
(1)+(2) As from (1) $$\sqrt{z} \ne integer$$ then $$z \ne 1$$, therefore it must be $$3^3$$ (from 1), so $$x$$ must be a perfect square which makes $$\sqrt{x}$$ an integer: $$x=5^2$$ or $$x=1$$. Sufficient.
can you please explain 2) \sqrt{z} is not an integer. Clearly insufficient. and sqrt{z} ne integer then z ne 1
Kudos [?]: 2 [0], given: 31
Math Expert
Joined: 02 Sep 2009
Posts: 41892
Kudos [?]: 128888 [0], given: 12183
Show Tags
16 Nov 2014, 06:06
Bunuel wrote:
Official Solution:
Note: a perfect square, is an integer that can be written as the square of some other integer. For example $$16=4^2$$, is a perfect square. Similarly, a perfect cube is an integer that can be written as the cube of some other integer. For example, $$27=3^3$$ is a perfect cube.
Make prime factorization of 2,700: $$xyz=2,700=2^2*3^3*5^2$$.
(1) $$y$$ is an even perfect square and $$z$$ is an odd perfect cube. If $$y$$ is either $$2^2$$ or $$2^2*5^2$$ and $$z=3^3= \text{odd perfect square}$$ then $$x$$ must be a perfect square which makes $$\sqrt{x}$$ an integer: $$x=5^2$$ or $$x=1$$. But if $$z=1^3= \text{odd perfect cube}$$ then $$x$$ could be $$3^3$$ which makes $$\sqrt{x}$$ not an integer. Not sufficient.
(2) $$\sqrt{z}$$ is not an integer. Clearly insufficient.
(1)+(2) As from (1) $$\sqrt{z} \ne integer$$ then $$z \ne 1$$, therefore it must be $$3^3$$ (from 1), so $$x$$ must be a perfect square which makes $$\sqrt{x}$$ an integer: $$x=5^2$$ or $$x=1$$. Sufficient.
can you please explain 2) \sqrt{z} is not an integer. Clearly insufficient. and sqrt{z} ne integer then z ne 1
$$z \ne 1$$ because if it were 1, then $$\sqrt{z}$$ would be an integer and that would violate the second statement.
_________________
Kudos [?]: 128888 [0], given: 12183
Intern
Joined: 30 Jun 2012
Posts: 13
Kudos [?]: 3 [0], given: 0
Show Tags
07 Dec 2014, 16:21
Why is the answer not D) Why is 2) not sufficient, if the square root of Z is not an integer that means that Z must be 3^ 3 and x must be either 5^2 or 2^2 and the square root of either of those number is an integer.
Make prime factorization of 2,700: xyz=2,700=22∗33∗52xyz=2,700=2^2*3^3*5^2.
Kudos [?]: 3 [0], given: 0
CEO
Joined: 17 Jul 2014
Posts: 2604
Kudos [?]: 394 [0], given: 184
Location: United States (IL)
Concentration: Finance, Economics
GMAT 1: 650 Q49 V30
GPA: 3.92
WE: General Management (Transportation)
Show Tags
07 Dec 2014, 16:57
rsamant wrote:
Why is the answer not D) Why is 2) not sufficient, if the square root of Z is not an integer that means that Z must be 3^ 3 and x must be either 5^2 or 2^2 and the square root of either of those number is an integer.
Make prime factorization of 2,700: xyz=2,700=22∗33∗52xyz=2,700=2^2*3^3*5^2.
would like as well answer to this question.
Kudos [?]: 394 [0], given: 184
Math Expert
Joined: 02 Sep 2009
Posts: 41892
Kudos [?]: 128888 [0], given: 12183
Show Tags
08 Dec 2014, 05:10
mvictor wrote:
rsamant wrote:
Why is the answer not D) Why is 2) not sufficient, if the square root of Z is not an integer that means that Z must be 3^ 3 and x must be either 5^2 or 2^2 and the square root of either of those number is an integer.
Make prime factorization of 2,700: xyz=2,700=22∗33∗52xyz=2,700=2^2*3^3*5^2.
would like as well answer to this question.
There are other cases possible. For example, z=3, y=2*3*5^2 and x=2.
_________________
Kudos [?]: 128888 [0], given: 12183
Intern
Joined: 23 Apr 2016
Posts: 22
Kudos [?]: 9 [0], given: 39
Location: Finland
GPA: 3.65
Show Tags
07 Nov 2016, 10:24
Really nice question Bunuel !
Kudos [?]: 9 [0], given: 39
Intern
Joined: 04 Jan 2017
Posts: 5
Kudos [?]: [0], given: 0
Show Tags
10 Apr 2017, 09:19
Awesome question!
Kudos [?]: [0], given: 0
Intern
Joined: 24 Jun 2017
Posts: 15
Kudos [?]: 0 [0], given: 22
Show Tags
30 Sep 2017, 09:21
Wow what a question! Very difficult without blatantly trying to trick you.
This one took me a while, but I'm very proud to say I got it right!
Kudos [?]: 0 [0], given: 22
Re: M26-30 [#permalink] 30 Sep 2017, 09:21
Display posts from previous: Sort by
M26-30
Moderators: Bunuel, Vyshak
Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
|
Proceedings of the Yerevan State University, series Physical and Mathematical Sciences
RUS ENG JOURNALS PEOPLE ORGANISATIONS CONFERENCES SEMINARS VIDEO LIBRARY PACKAGE AMSBIB
General information Latest issue Archive Search papers Search references RSS Latest issue Current issues Archive issues What is RSS
Proceedings of the YSU, Physical and Mathematical Sciences: Year: Volume: Issue: Page: Find
Proceedings of the YSU, Physical and Mathematical Sciences, 2014, Issue 1, Pages 26–34 (Mi uzeru47)
Mathematics
On the representation of $\langle\rho_j, W_j\rangle$ absolute monotone functions
B. A. Sahakyan
Yerevan State University
Abstract: In the paper [1] the notion of the $\langle\rho_j, W_j\rangle$ absolutely monotone function was introduced. In the present paper we give some examples of sequences $\{W_j (x)\}^{\infty}_0$ consider the corresponding classes of $\langle\rho_j, W_j\rangle$ absolute monotone functions and study the problems of thei representation.
Keywords: operators of Riemann-Liouville type, $\langle\rho_j, W_j\rangle$ absolutely monotone functions.
Full text: PDF file (163 kB)
References: PDF file HTML file
MSC: 30H05
Accepted:23.01.2014
Language:
Citation: B. A. Sahakyan, “On the representation of $\langle\rho_j, W_j\rangle$ absolute monotone functions”, Proceedings of the YSU, Physical and Mathematical Sciences, 2014, no. 1, 26–34
Citation in format AMSBIB
\Bibitem{Sah14} \by B.~A.~Sahakyan \paper On the representation of $\langle\rho_j, W_j\rangle$ absolute monotone functions \jour Proceedings of the YSU, Physical and Mathematical Sciences \yr 2014 \issue 1 \pages 26--34 \mathnet{http://mi.mathnet.ru/uzeru47}
• http://mi.mathnet.ru/eng/uzeru47
• http://mi.mathnet.ru/eng/uzeru/y2014/i1/p26
SHARE:
Citing articles on Google Scholar: Russian citations, English citations
Related articles on Google Scholar: Russian articles, English articles
Cycle of papers
This publication is cited in the following articles:
1. B. A. Sahakyan, “On the representation of $\langle\rho_j, W_j\rangle$ absolute monotone functions (p. 2)”, Proceedings of the YSU, Physics & Mathematics, 2014, no. 2, 30–38
2. B. A. Sahakyan, “On a generalized formula of Taylor–Maclaurin type on the generalized completely monotone functions”, Uch. zapiski EGU, ser. Fizika i Matematika, 52:3 (2018), 172–179
3. B. A. Sahakyan, “On the $\langle\rho_j, W_j\rangle$ generalized completely monotone functions”, Uch. zapiski EGU, ser. Fizika i Matematika, 54:1 (2020), 35–43
• Number of views: This page: 55 Full text: 16 References: 44
|
Thread: Integration by limit of sum
1. Integration by limit of sum
Hello...
I need to calculate this integral using the limit of the summation.
The period [0,T] is split into N intervals and the value point is at the left ot the interval.
$\displaystyle \int_0^{T} x^3 (t)\ dX(t)\ = \lim_{N\to \infty}\sum_{i=0}^{N-1} X_{i}^{3}(X_{i+1} - X_{i})$
$\displaystyle \int_0^{T} 2X(t)\ dX(t)\ = \lim_{N\to \infty}\sum_{i=0}^{N-1} 2X_i (X_{i+1} - X_{i}) = X^2(T) - T$
|
# Find the Jordan form of the matrix
let $A\in F^{8x8}$ , $A^3=0$,$A^2\neq0$ Find its Jordan forms that possible.
solution was : characteristic polynomial $P_A(x)=x^8$and $q_8=m_A(x)=x^3,q_7=x^3,q_6=x^2,q_5=1,q_4=1,q_3=1,q_2=1,q_1=1$
so $e_1(x)=x^3,e_2(x)=x^3,e_3(x)=x^2$
I dont understand how he write $x^8$ and also writing of $q_i$. I know $m_A(x)=x^3$ and $q_1.q_2...q_8=P_A(x)$ how do we know $P_A(x)=|xI-A|$ in form of \begin{pmatrix} x & 0 & 0 & \cdots & 0 \\ a & x & 0 & \cdots & 0 \\ b & c & x & \cdots & 0 \\ \vdots & \vdots& \vdots & \ddots & \vdots \\ h & j & k & \cdots & x \end{pmatrix} thanks
similar question : $A\in F^{7x7},A^2=A$ find its Jordan which is possible
since $A^2-A=0$ $q_7=x^2-x,q_6=x^2-x,q_5=x^2-x,q_4=x,q_3=1,q_2=1,q_1=1$
The only eigenvalue for the matrix is $0$ because $A^{3}$ is the minimal polynommial. Jordan canonical form allows you to arrange the blocks starting with the largest in the upper left, and in order of descending block size. All of the blocks have $0$'s along the diagonal and $1$'s on the diagonal above the main diagonal, unless the block size is $1$. A block $B_{n}$ with $n$ $0$'s on the diagonal and $n-1$ one's one the superdiagonal has order $n$, meaning that $B_{n}^{n}=0$ but $B_{n}^{n-1} \ne 0$. The extreme case is $B_{1}^{1}=0$.
The block sizes of all blocks must sum to $8$. The largest block size is $3$, and there must be at least one such block because $A^{3}=0$, but $A^{2}\ne 0$. That leaves $5$ for the total of the remaining block sizes. So these are the possible Jordan block sizes $$(3,3,2),(3,3,1,1), (3,2,2,1), (3,2,1,1,1), (3,1,1,1,1,1).$$
• thanks. lets say $A\in F^{8x8} A^5=0,A^4\neq 0$ so $q_8=x^5$ since minimal polynommial $m_A(x)=x^5$ we can choose the sizes (5,2,1),(5,1,1,1)... right? and is $P_A(x)=x^8$ arbitrary? can it be something else? – lyme May 29 '14 at 20:24
• If $A$ is an $8\times 8$ matrix with $A^{8}=0$ and $A^{7}\ne 0$, then there's one $8$ block, and that's the whole matrix. – DisintegratingByParts May 29 '14 at 20:28
• my last question, what can we say about charac. polynomial of A if $A\in F^{7x7},A^2=A$ aside from $(x^2-x)|_{P_A(x)}$? – lyme May 29 '14 at 20:50
• So you have $x(x-1)=0$. That means that the blocks with $0$ in the diagonal are simple and the blocks with $1$'s in the diagonal are also simple. So, the Jordan form is diagonal with $1$'s and $0$'s along the diagonal. There is a theorem: The minimal polynomial has distinct factors iff the matrix is diagonalizable. – DisintegratingByParts May 29 '14 at 22:10
• Added remark: for the case mentioned $A^{2}-A=0$, it can be that $A-I=0$ or $A=0$ if $x^{2}-x$ is not the minimal polynomial. So you might have (a) all simple blocks (size 1) with $0$'s on the diagonal or (b) all simple blocks with $1$'s on the diagonal. – DisintegratingByParts May 29 '14 at 22:18
Notice that this sequence $(\ker A^k)_k$ is strictly increasing and stationary:
$$\{0\}\subset\ker A\subset\ker A^2\subset\ker A^3=\Bbb R^8$$ moreover it's well known that the sequence $$(\dim\ker A^{k+1}-\dim\ker A^k)_k$$ is decreasing hence using the Young tableau we have these possibilities:
and each row of Young tableau is a jordan block so for example for the last tableau we have two blocks with size $3$ and a block with size 2 hence the Jordan matrix is $(3,3,2)$.
Note that $A^3=\mathbf0$ implies $A$ is nilpotent. It follows that the characteristic polynomial of $A$ is $$\DeclareMathOperator{char}{char}\char_A(\lambda)=\lambda^8$$ Furthermore, since $A^2\neq\mathbf0$, the minimal polynomial of $A$ is $$m_A(\lambda)=\lambda^3$$ This information tells us that the Jordan form $J$ of $A$ has
1. Diagonal consisting only of $0$'s
2. The size of the largest Jordan block is $\deg m_A(\lambda)=3$
A few of the possible Jordan forms are $$\begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$ Can you find the rest?
• Thanks. appriciated. in second example that I wrote. A isnt nilpotent right? so characteristic polynomial doesnt have to be $char_A(λ)=λ^7$ ? – lyme May 29 '14 at 20:34
|
MathSciNet bibliographic data MR348216 34G05 (35K30) Levine, Howard A. Some nonexistence and instability theorems for solutions of formally parabolic equations of the form \$Pu\sb{t}=-Au+{\scr F}(u)\$$Pu\sb{t}=-Au+{\scr F}(u)$. Arch. Rational Mech. Anal. 51 (1973), 371–386. Article
For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
|
# Thursday Night
Paul Betts’s personal website / blog / what-have-you
# Convert a .NET 2.0 DLL to 4.0 (VS2010) without source
## Converting .NET DLLs to 4.0 by hand is too much work
One of the blockers for upgrading a project to .NET 4.0 is that your old .NET 2.0/3.0/3.5 DLLs will have some difficulty running in .NET 4.0. I’m far too lazy to track down all the 3rd party DLLs like Moq or log4net, download the source, switch the project to 4.0, then recompile the DLL.
## Hackery makes life easier
Instead of doing this, for a lot of DLLs, you can get away with roundtripping the DLL using ildasm/ilasm; the only tricky part is changing the assembly references so that they point to .NET 4.0 DLLs instead of the old and busted 2.0 System.* assemblies.
The good news is, I’ve written an IronRuby (or MRI, or JRuby, etc) script to handle this automagically. Here’s how to use it:
## How to use:
1. Download IronRuby from CodePlex (You can use the .NET 4.0 release too, doesn’t matter), and copy it to C:\IronRuby
3. Launch the VS Command Prompt (2010) – don’t launch the 2008 one by accident!
4. path=%path%;C:\ironruby\bin
5. mkdir v4dlls
6. ir dotnet4ify_dll.rb C:\path\to\an\old\assembly.dll .\v4dlls\assembly.dll
## Some caveats
• While I’ve tested this on some pretty complex DLLs (DotNetOpenAuth, ParallelFramework_3_5.dll), it’s definitely in the “Works on My Machine” class of software; in particular, C++/CLI DLLs will probably not work. Embedded resources do still get embedded in the new binary
• .NET Assemblies get upset if you rename them, so you can’t do something like “ir dotnet4ify_dll.rb foo.dll foo_v4.dll” – just put all your v4 assemblies in a separate directory
Worked for me, but I make no guarantees it won’t replace DLLs with a lolcat
Written by Paul Betts
December 3rd, 2009 at 12:36 am
Posted in Mono / .NET
|
Principles of Economics
# 7.2The Structure of Costs in the Short Run
Principles of Economics7.2 The Structure of Costs in the Short Run
By the end of this section, you will be able to:
• Analyze short-run costs as influenced by total cost, fixed cost, variable cost, marginal cost, and average cost.
• Calculate average profit
• Evaluate patterns of costs to determine potential profit
The cost of producing a firm’s output depends on how much labor and physical capital the firm uses. A list of the costs involved in producing cars will look very different from the costs involved in producing computer software or haircuts or fast-food meals. However, the cost structure of all firms can be broken down into some common underlying patterns. When a firm looks at its total costs of production in the short run, a useful starting point is to divide total costs into two categories: fixed costs that cannot be changed in the short run and variable costs that can be changed.
### Fixed and Variable Costs
Fixed costs are expenditures that do not change regardless of the level of production, at least not in the short term. Whether you produce a lot or a little, the fixed costs are the same. One example is the rent on a factory or a retail space. Once you sign the lease, the rent is the same regardless of how much you produce, at least until the lease runs out. Fixed costs can take many other forms: for example, the cost of machinery or equipment to produce the product, research and development costs to develop new products, even an expense like advertising to popularize a brand name. The level of fixed costs varies according to the specific line of business: for instance, manufacturing computer chips requires an expensive factory, but a local moving and hauling business can get by with almost no fixed costs at all if it rents trucks by the day when needed.
Variable costs, on the other hand, are incurred in the act of producing—the more you produce, the greater the variable cost. Labor is treated as a variable cost, since producing a greater quantity of a good or service typically requires more workers or more work hours. Variable costs would also include raw materials.
As a concrete example of fixed and variable costs, consider the barber shop called “The Clip Joint” shown in Figure 7.3. The data for output and costs are shown in Table 7.2. The fixed costs of operating the barber shop, including the space and equipment, are $160 per day. The variable costs are the costs of hiring barbers, which in our example is$80 per barber each day. The first two columns of the table show the quantity of haircuts the barbershop can produce as it hires additional barbers. The third column shows the fixed costs, which do not change regardless of the level of production. The fourth column shows the variable costs at each level of output. These are calculated by taking the amount of labor hired and multiplying by the wage. For example, two barbers cost: 2 × $80 =$160. Adding together the fixed costs in the third column and the variable costs in the fourth column produces the total costs in the fifth column. So, for example, with two barbers the total cost is: $160 +$160 = $320. Labor Quantity Fixed Cost Variable Cost Total Cost 1 16$160 $80$240
2 40 $160$160 $320 3 60$160 $240$400
4 72 $160$320 $480 5 80$160 $400$560
6 84 $160$480 $640 7 82$160 $560$720
Table 7.2 Output and Total Costs
Figure 7.3 How Output Affects Total Costs At zero production, the fixed costs of $160 are still present. As production increases, variable costs are added to fixed costs, and the total cost is the sum of the two. The relationship between the quantity of output being produced and the cost of producing that output is shown graphically in the figure. The fixed costs are always shown as the vertical intercept of the total cost curve; that is, they are the costs incurred when output is zero so there are no variable costs. You can see from the graph that once production starts, total costs and variable costs rise. While variable costs may initially increase at a decreasing rate, at some point they begin increasing at an increasing rate. This is caused by diminishing marginal returns, discussed in the chapter on Choice in a World of Scarcity, which is easiest to see with an example. As the number of barbers increases from zero to one in the table, output increases from 0 to 16 for a marginal gain of 16; as the number rises from one to two barbers, output increases from 16 to 40, a marginal gain of 24. From that point on, though, the marginal gain in output diminishes as each additional barber is added. For example, as the number of barbers rises from two to three, the marginal output gain is only 20; and as the number rises from three to four, the marginal gain is only 12. To understand the reason behind this pattern, consider that a one-man barber shop is a very busy operation. The single barber needs to do everything: say hello to people entering, answer the phone, cut hair, sweep up, and run the cash register. A second barber reduces the level of disruption from jumping back and forth between these tasks, and allows a greater division of labor and specialization. The result can be greater increasing marginal returns. However, as other barbers are added, the advantage of each additional barber is less, since the specialization of labor can only go so far. The addition of a sixth or seventh or eighth barber just to greet people at the door will have less impact than the second one did. This is the pattern of diminishing marginal returns. As a result, the total costs of production will begin to rise more rapidly as output increases. At some point, you may even see negative returns as the additional barbers begin bumping elbows and getting in each other’s way. In this case, the addition of still more barbers would actually cause output to decrease, as shown in the last row of Table 7.2. This pattern of diminishing marginal returns is common in production. As another example, consider the problem of irrigating a crop on a farmer’s field. The plot of land is the fixed factor of production, while the water that can be added to the land is the key variable cost. As the farmer adds water to the land, output increases. But adding more and more water brings smaller and smaller increases in output, until at some point the water floods the field and actually reduces output. Diminishing marginal returns occur because, at a given level of fixed costs, each additional input contributes less and less to overall production. ### Average Total Cost, Average Variable Cost, Marginal Cost The breakdown of total costs into fixed and variable costs can provide a basis for other insights as well. The first five columns of Table 7.3 duplicate the previous table, but the last three columns show average total costs, average variable costs, and marginal costs. These new measures analyze costs on a per-unit (rather than a total) basis and are reflected in the curves shown in Figure 7.4. Figure 7.4 Cost Curves at the Clip Joint The information on total costs, fixed cost, and variable cost can also be presented on a per-unit basis. Average total cost (ATC) is calculated by dividing total cost by the total quantity produced. The average total cost curve is typically U-shaped. Average variable cost (AVC) is calculated by dividing variable cost by the quantity produced. The average variable cost curve lies below the average total cost curve and is typically U-shaped or upward-sloping. Marginal cost (MC) is calculated by taking the change in total cost between two levels of output and dividing by the change in output. The marginal cost curve is upward-sloping. Labor Quantity Fixed Cost Variable Cost Total Cost Marginal Cost Average Total Cost Average Variable Cost 1 16$160 $80$240 $5.00$15.00 $5.00 2 40$160 $160$320 $3.30$8.00 $4.00 3 60$160 $240$400 $4.00$6.60 $4.00 4 72$160 $320$480 $6.60$6.60 $4.40 5 80$160 $400$560 $10.00$7.00 $5.00 6 84$160 $480$640 $20.00$7.60 $5.70 Table 7.3 Different Types of Costs Average total cost (sometimes referred to simply as average cost) is total cost divided by the quantity of output. Since the total cost of producing 40 haircuts is$320, the average total cost for producing each of 40 haircuts is $320/40, or$8 per haircut. Average cost curves are typically U-shaped, as Figure 7.4 shows. Average total cost starts off relatively high, because at low levels of output total costs are dominated by the fixed cost; mathematically, the denominator is so small that average total cost is large. Average total cost then declines, as the fixed costs are spread over an increasing quantity of output. In the average cost calculation, the rise in the numerator of total costs is relatively small compared to the rise in the denominator of quantity produced. But as output expands still further, the average cost begins to rise. At the right side of the average cost curve, total costs begin rising more rapidly as diminishing returns kick in.
Average variable cost obtained when variable cost is divided by quantity of output. For example, the variable cost of producing 80 haircuts is $400, so the average variable cost is$400/80, or $5 per haircut. Note that at any level of output, the average variable cost curve will always lie below the curve for average total cost, as shown in Figure 7.4. The reason is that average total cost includes average variable cost and average fixed cost. Thus, for Q = 80 haircuts, the average total cost is$8 per haircut, while the average variable cost is $5 per haircut. However, as output grows, fixed costs become relatively less important (since they do not rise with output), so average variable cost sneaks closer to average cost. Average total and variable costs measure the average costs of producing some quantity of output. Marginal cost is somewhat different. Marginal cost is the additional cost of producing one more unit of output. So it is not the cost per unit of all units being produced, but only the next one (or next few). Marginal cost can be calculated by taking the change in total cost and dividing it by the change in quantity. For example, as quantity produced increases from 40 to 60 haircuts, total costs rise by 400 – 320, or 80. Thus, the marginal cost for each of those marginal 20 units will be 80/20, or$4 per haircut. The marginal cost curve is generally upward-sloping, because diminishing marginal returns implies that additional units are more costly to produce. A small range of increasing marginal returns can be seen in the figure as a dip in the marginal cost curve before it starts rising. There is a point at which marginal and average costs meet, as the following Clear it Up feature discusses.
### Clear It Up
The marginal cost line intersects the average cost line exactly at the bottom of the average cost curve—which occurs at a quantity of 72 and cost of $6.60 in Figure 7.4. The reason why the intersection occurs at this point is built into the economic meaning of marginal and average costs. If the marginal cost of production is below the average cost for producing previous units, as it is for the points to the left of where MC crosses ATC, then producing one more additional unit will reduce average costs overall—and the ATC curve will be downward-sloping in this zone. Conversely, if the marginal cost of production for producing an additional unit is above the average cost for producing the earlier units, as it is for points to the right of where MC crosses ATC, then producing a marginal unit will increase average costs overall—and the ATC curve must be upward-sloping in this zone. The point of transition, between where MC is pulling ATC down and where it is pulling it up, must occur at the minimum point of the ATC curve. This idea of the marginal cost “pulling down” the average cost or “pulling up” the average cost may sound abstract, but think about it in terms of your own grades. If the score on the most recent quiz you take is lower than your average score on previous quizzes, then the marginal quiz pulls down your average. If your score on the most recent quiz is higher than the average on previous quizzes, the marginal quiz pulls up your average. In this same way, low marginal costs of production first pull down average costs and then higher marginal costs pull them up. The numerical calculations behind average cost, average variable cost, and marginal cost will change from firm to firm. However, the general patterns of these curves, and the relationships and economic intuition behind them, will not change. ### Lessons from Alternative Measures of Costs Breaking down total costs into fixed cost, marginal cost, average total cost, and average variable cost is useful because each statistic offers its own insights for the firm. Whatever the firm’s quantity of production, total revenue must exceed total costs if it is to earn a profit. As explored in the chapter Choice in a World of Scarcity, fixed costs are often sunk costs that cannot be recouped. In thinking about what to do next, sunk costs should typically be ignored, since this spending has already been made and cannot be changed. However, variable costs can be changed, so they convey information about the firm’s ability to cut costs in the present and the extent to which costs will increase if production rises. ### Clear It Up #### Why are total cost and average cost not on the same graph? Total cost, fixed cost, and variable cost each reflect different aspects of the cost of production over the entire quantity of output being produced. These costs are measured in dollars. In contrast, marginal cost, average cost, and average variable cost are costs per unit. In the previous example, they are measured as cost per haircut. Thus, it would not make sense to put all of these numbers on the same graph, since they are measured in different units ($ versus $per unit of output). It would be as if the vertical axis measured two different things. In addition, as a practical matter, if they were on the same graph, the lines for marginal cost, average cost, and average variable cost would appear almost flat against the horizontal axis, compared to the values for total cost, fixed cost, and variable cost. Using the figures from the previous example, the total cost of producing 40 haircuts is$320. But the average cost is $320/40, or$8. If you graphed both total and average cost on the same axes, the average cost would hardly show.
Average cost tells a firm whether it can earn profits given the current price in the market. If we divide profit by the quantity of output produced we get average profit, also known as the firm’s profit margin. Expanding the equation for profit gives:
But note that:
Thus:
This is the firm’s profit margin. This definition implies that if the market price is above average cost, average profit, and thus total profit, will be positive; if price is below average cost, then profits will be negative.
The marginal cost of producing an additional unit can be compared with the marginal revenue gained by selling that additional unit to reveal whether the additional unit is adding to total profit—or not. Thus, marginal cost helps producers understand how profits would be affected by increasing or decreasing production.
### A Variety of Cost Patterns
The pattern of costs varies among industries and even among firms in the same industry. Some businesses have high fixed costs, but low marginal costs. Consider, for example, an Internet company that provides medical advice to customers. Such a company might be paid by consumers directly, or perhaps hospitals or healthcare practices might subscribe on behalf of their patients. Setting up the website, collecting the information, writing the content, and buying or leasing the computer space to handle the web traffic are all fixed costs that must be undertaken before the site can work. However, when the website is up and running, it can provide a high quantity of service with relatively low variable costs, like the cost of monitoring the system and updating the information. In this case, the total cost curve might start at a high level, because of the high fixed costs, but then might appear close to flat, up to a large quantity of output, reflecting the low variable costs of operation. If the website is popular, however, a large rise in the number of visitors will overwhelm the website, and increasing output further could require a purchase of additional computer space.
For other firms, fixed costs may be relatively low. For example, consider firms that rake leaves in the fall or shovel snow off sidewalks and driveways in the winter. For fixed costs, such firms may need little more than a car to transport workers to homes of customers and some rakes and shovels. Still other firms may find that diminishing marginal returns set in quite sharply. If a manufacturing plant tried to run 24 hours a day, seven days a week, little time remains for routine maintenance of the equipment, and marginal costs can increase dramatically as the firm struggles to repair and replace overworked equipment.
Every firm can gain insight into its task of earning profits by dividing its total costs into fixed and variable costs, and then using these calculations as a basis for average total cost, average variable cost, and marginal cost. However, making a final decision about the profit-maximizing quantity to produce and the price to charge will require combining these perspectives on cost with an analysis of sales and revenue, which in turn requires looking at the market structure in which the firm finds itself. Before we turn to the analysis of market structure in other chapters, we will analyze the firm’s cost structure from a long-run perspective.
Order a print copy
As an Amazon Associate we earn from qualifying purchases.
|
It is currently 15 Jun 2019, 23:08
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# If a, b, and c are integers
Author Message
TAGS:
Intern
Joined: 20 May 2019
Posts: 31
Followers: 0
Kudos [?]: 2 [0], given: 0
If a, b, and c are integers [#permalink] 08 Jun 2019, 04:41
00:00
Question Stats:
0% (00:00) correct 100% (01:13) wrong based on 3 sessions
If a, b, and c are integers and $$\frac{ab^2}{c}$$ is a positive even integer, which of the following must be true?
I. ab is even
II. ab > 0
III. c is even
A. I only
B. II only
C. I and II
D. I and III
E. I, II, and III
[Reveal] Spoiler: OA
GRE Instructor
Joined: 10 Apr 2015
Posts: 1958
Followers: 60
Kudos [?]: 1790 [0], given: 9
Re: If a, b, and c are integers [#permalink] 10 Jun 2019, 08:49
Expert's post
dvk007 wrote:
If a, b, and c are integers and $$\frac{ab^2}{c}$$ is a positive even integer, which of the following must be true?
I. ab is even
II. ab > 0
III. c is even
A. I only
B. II only
C. I and II
D. I and III
E. I, II, and III
Statement I. ab is even
GIVEN: $$\frac{ab^2}{c}$$ is an even integer
This means we can say that $$\frac{ab^2}{c}$$ = 2k (for some integer k)
Multiply both sides by c to get: $$ab^2 = 2kc$$
We can see that 2kc must be EVEN, which means ab^2 must be EVEN.
If ab^2 is EVEN, then either a or b must be EVEN, which means ab must be EVEN
So statement I is true
---------------------------
Statement II. ab > 0
Notice that, regardless of the value of b, we know that b² is POSITIVE (for all non-zero values of b)
This leads me to test some possible values...
If $$\frac{ab^2}{c}$$ is a positive even integer, then it COULD be the case that a = 2, b = -1 and c = 1
Notice that $$\frac{ab^2}{c}=\frac{(2)(-1)^2}{1}=2$$, which is a positive even integer
In this case, ab = (2)(-1) = -2
So, it is NOT true that ab > 0
So statement II is NOT true
Check the answer choices....ELIMINATE C and E
-------------------------------
Statement III. c is even
Notice that we can reuse the values we used above (a = 2, b = -1 and c = 1)
If c = 1, then c is NOT even
So statement III is NOT true
Cheers,
Brent
_________________
Brent Hanneson – Creator of greenlighttestprep.com
Re: If a, b, and c are integers [#permalink] 10 Jun 2019, 08:49
Display posts from previous: Sort by
|
# Calculate the sum with floor function.
Let $$a$$ be a positive number. Calculate the sum $$\sum_{1\le n\le x}\left\lfloor \sqrt{n^{2}+a} \right\rfloor$$
I tried to calculate first $$\left\lfloor \sqrt{n^{2}+a} \right\rfloor-n$$. But probably it won't help. Maybe you have some ideas how to solve it?
|
# Normalizing the free particle wave function
One way to normalize the free particle wave function
"is to replace the the boundary condition $\psi(\pm{\frac{a}{2}}) = 0$ [for the infinite well] by periodic boundary conditions expressed in the form $\psi(x)=\psi(x+a)$" -- Quantum Physics, S. Gasiorowicz
How does this work? What does this mean physically? Or more precisely, why does this approximation suffice?
I understand that this makes the wavefunction square-integrable (when integrated from $x=0$ to $x=a$) hence normalizable.
Thanks.
-
## migrated from math.stackexchange.comAug 23 '11 at 23:09
This question came from our site for people studying math at any level and professionals in related fields.
The physical idea is that you'll let $a$ go to infinity for a truly free particle, and if you take this limit, then the specific details of the boundary conditions should be irrelevant, because the boundaries are so far away anyway.
Therefore, you are welcome to choose convenient boundary conditions, and the periodic ones are convenient, because then you have just plain waves $e^{ikx}$, with the admitted $k$-values determined by $e^{ika} = 1$, so $ka = 2\pi n$, and $n \in \mathbb{Z}$.
You want boundary conditions that conserve the total probability within your finite box. The probability current is proportional to, roughly, $J \sim \psi \partial \psi$. Setting $\psi=0$ at the boundary forces $J=0$ at the boundary (particles get reflected) so probability is conserved (once can see that Dirichlet boundary condition $\partial\psi = 0$ does the same). Setting $\psi(x+a)=\psi(x)$ forces $J(x+a) = J(x)$ so the current going out of your box on the left is equal to that coming in on the right (i.e. your box is really a torus), and probability is again conserved.
|
1. ## Finding G(X) within a limit, need help please!
We want to calculate the limit limx→∞ 4x^2/Sqrt3x^4+2 (square root is for the whole bottom function)
Rewrite The limit as as 4/sqrt3+g(x) (square root is for the whole bottom function)where g(x)=?
2. ## Re: Finding G(X) within a limit, need help please!
I know what the limit is when it goes towards infinity but just pointing out G(x) is confusing
3. ## Re: Finding G(X) within a limit, need help please!
$\dfrac{4x^2}{\sqrt{3x^4+2}} \cdot \dfrac{x^{-2}}{\sqrt{x^{-4}}}= \dfrac{4}{\sqrt{3+2x^{-4}}}$
4. ## Re: Finding G(X) within a limit, need help please!
Thank you! I see why I kept getting the wrong answer now, appreciate it!
|
missing '=' etcd when defining service file
I'm struggling while following Kelsey Hightower's "Kubernetes the Hard Way" tutorial. I've gone off script, because I'm trying to bootstrap k8s on a local server.
I've got the point where I'm bootstrapping etcd, however, when I'm creating the service I'm getting an error:
``````Failed to start etcd.service: Unit is not loaded properly: Bad message.
See system logs and 'systemctl status etcd.service' for details.
``````
Checking the logs and I get:
``````Jun 21 20:16:49 controller-0 systemd[1]: [/etc/systemd/system/etcd.service:9] Missing '='.
Jun 21 20:16:49 controller-0 systemd[1]: [/etc/systemd/system/etcd.service:9] Missing '='.
Jun 21 20:17:25 controller-0 systemd[1]: [/etc/systemd/system/etcd.service:9] Missing '='.
``````
Here's the etcd.service file:
``````[Unit]
Description=etcd service
Documentation=https://github.com/coreos/etcd
[Service]
User=etcd
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name \${ETCD_NAME} \\
--data-dir /var/lib/etcd \\
--listen-peer-urls http://\${ETCD_HOST_IP}:2380 \\
--listen-client-urls http://\${ETCD_HOST_IP}:2379,http://127.0.0.1:2379 \\
--initial-cluster-token etcd-cluster-1 \\
--initial-cluster etcd-1=http://192.168.0.7:2380 \\
--initial-cluster-state new \\
--heartbeat-interval 1000 \\
--election-timeout 5000
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
``````
• Are you sure that is your unit file? Some lines appear to be missing, and there appear to be extra characters at the end of some lines. – Michael Hampton Jun 22 '19 at 3:59
• you're right, I missed the first line which is [Unit]. I added it. The \\ are because the one line is extremely long – Baron Jun 22 '19 at 5:15
• You should have one backslash, not two. – Michael Hampton Jun 22 '19 at 6:15
• Thanks this was it! – Baron Jun 22 '19 at 20:59
• Hello @Baron, Could you add your solution as an answer and mark it as approved? It will make your solution more visible if anyone will be searching for similar issues. – PjoterS Jun 25 '19 at 14:32
The answer was pointed out by @Michael Hampton. The two backslashes were because the code was supposed to be written from the terminal (in the guide). In the etcd.service file, lines should be broken with a single .
``````[Unit]
Description=etcd service
Documentation=https://github.com/coreos/etcd
[Service]
User=etcd
Type=notify
ExecStart=/usr/local/bin/etcd \
--name \${ETCD_NAME} \
--data-dir /var/lib/etcd \
--listen-peer-urls http://\${ETCD_HOST_IP}:2380 \
--listen-client-urls http://\${ETCD_HOST_IP}:2379,http://127.0.0.1:2379 \
|
3. Définir les types d'extension : divers sujets¶
This section aims to give a quick fly-by on the various type methods you can implement and what they do.
Here is the definition of PyTypeObject, with some fields only used in debug builds omitted:
typedef struct _typeobject {
const char *tp_name; /* For printing, in format "<module>.<name>" */
Py_ssize_t tp_basicsize, tp_itemsize; /* For allocation */
/* Methods to implement standard operations */
destructor tp_dealloc;
Py_ssize_t tp_vectorcall_offset;
getattrfunc tp_getattr;
setattrfunc tp_setattr;
PyAsyncMethods *tp_as_async; /* formerly known as tp_compare (Python 2)
or tp_reserved (Python 3) */
reprfunc tp_repr;
/* Method suites for standard classes */
PyNumberMethods *tp_as_number;
PySequenceMethods *tp_as_sequence;
PyMappingMethods *tp_as_mapping;
/* More standard operations (here for binary compatibility) */
hashfunc tp_hash;
ternaryfunc tp_call;
reprfunc tp_str;
getattrofunc tp_getattro;
setattrofunc tp_setattro;
/* Functions to access object as input/output buffer */
PyBufferProcs *tp_as_buffer;
/* Flags to define presence of optional/expanded features */
unsigned long tp_flags;
const char *tp_doc; /* Documentation string */
/* Assigned meaning in release 2.0 */
/* call function for all accessible objects */
traverseproc tp_traverse;
/* delete references to contained objects */
inquiry tp_clear;
/* Assigned meaning in release 2.1 */
/* rich comparisons */
richcmpfunc tp_richcompare;
/* weak reference enabler */
Py_ssize_t tp_weaklistoffset;
/* Iterators */
getiterfunc tp_iter;
iternextfunc tp_iternext;
/* Attribute descriptor and subclassing stuff */
struct PyMethodDef *tp_methods;
struct PyMemberDef *tp_members;
struct PyGetSetDef *tp_getset;
// Strong reference on a heap type, borrowed reference on a static type
struct _typeobject *tp_base;
PyObject *tp_dict;
descrgetfunc tp_descr_get;
descrsetfunc tp_descr_set;
Py_ssize_t tp_dictoffset;
initproc tp_init;
allocfunc tp_alloc;
newfunc tp_new;
freefunc tp_free; /* Low-level free-memory routine */
inquiry tp_is_gc; /* For PyObject_IS_GC */
PyObject *tp_bases;
PyObject *tp_mro; /* method resolution order */
PyObject *tp_cache;
PyObject *tp_subclasses;
PyObject *tp_weaklist;
destructor tp_del;
/* Type attribute cache version tag. Added in version 2.6 */
unsigned int tp_version_tag;
destructor tp_finalize;
vectorcallfunc tp_vectorcall;
} PyTypeObject;
Now that's a lot of methods. Don't worry too much though -- if you have a type you want to define, the chances are very good that you will only implement a handful of these.
As you probably expect by now, we're going to go over this and give more information about the various handlers. We won't go in the order they are defined in the structure, because there is a lot of historical baggage that impacts the ordering of the fields. It's often easiest to find an example that includes the fields you need and then change the values to suit your new type.
const char *tp_name; /* For printing */
The name of the type -- as mentioned in the previous chapter, this will appear in various places, almost entirely for diagnostic purposes. Try to choose something that will be helpful in such a situation!
Py_ssize_t tp_basicsize, tp_itemsize; /* For allocation */
These fields tell the runtime how much memory to allocate when new objects of this type are created. Python has some built-in support for variable length structures (think: strings, tuples) which is where the tp_itemsize field comes in. This will be dealt with later.
const char *tp_doc;
Ici vous pouvez mettre une chaîne (ou son adresse) que vous voulez renvoyer lorsque le script Python référence obj.__doc__ pour récupérer le docstring.
Nous en arrivons maintenant aux méthodes de type basiques -- celles que la plupart des types d'extension mettront en œuvre.
3.1. Finalisation et de-allocation¶
destructor tp_dealloc;
This function is called when the reference count of the instance of your type is reduced to zero and the Python interpreter wants to reclaim it. If your type has memory to free or other clean-up to perform, you can put it here. The object itself needs to be freed here as well. Here is an example of this function:
static void
newdatatype_dealloc(newdatatypeobject *obj)
{
free(obj->obj_UnderlyingDatatypePtr);
Py_TYPE(obj)->tp_free((PyObject *)obj);
}
If your type supports garbage collection, the destructor should call PyObject_GC_UnTrack() before clearing any member fields:
static void
newdatatype_dealloc(newdatatypeobject *obj)
{
PyObject_GC_UnTrack(obj);
Py_CLEAR(obj->other_obj);
...
Py_TYPE(obj)->tp_free((PyObject *)obj);
}
One important requirement of the deallocator function is that it leaves any pending exceptions alone. This is important since deallocators are frequently called as the interpreter unwinds the Python stack; when the stack is unwound due to an exception (rather than normal returns), nothing is done to protect the deallocators from seeing that an exception has already been set. Any actions which a deallocator performs which may cause additional Python code to be executed may detect that an exception has been set. This can lead to misleading errors from the interpreter. The proper way to protect against this is to save a pending exception before performing the unsafe action, and restoring it when done. This can be done using the PyErr_Fetch() and PyErr_Restore() functions:
static void
my_dealloc(PyObject *obj)
{
MyObject *self = (MyObject *) obj;
PyObject *cbresult;
if (self->my_callback != NULL) {
PyObject *err_type, *err_value, *err_traceback;
/* This saves the current exception state */
PyErr_Fetch(&err_type, &err_value, &err_traceback);
cbresult = PyObject_CallNoArgs(self->my_callback);
if (cbresult == NULL)
PyErr_WriteUnraisable(self->my_callback);
else
Py_DECREF(cbresult);
/* This restores the saved exception state */
PyErr_Restore(err_type, err_value, err_traceback);
Py_DECREF(self->my_callback);
}
Py_TYPE(obj)->tp_free((PyObject*)self);
}
Note
There are limitations to what you can safely do in a deallocator function. First, if your type supports garbage collection (using tp_traverse and/or tp_clear), some of the object's members can have been cleared or finalized by the time tp_dealloc is called. Second, in tp_dealloc, your object is in an unstable state: its reference count is equal to zero. Any call to a non-trivial object or API (as in the example above) might end up calling tp_dealloc again, causing a double free and a crash.
Starting with Python 3.4, it is recommended not to put any complex finalization code in tp_dealloc, and instead use the new tp_finalize type method.
Voir aussi
PEP 442 explique le nouveau schéma de finalisation.
3.2. Présentation de l'objet¶
In Python, there are two ways to generate a textual representation of an object: the repr() function, and the str() function. (The print() function just calls str().) These handlers are both optional.
reprfunc tp_repr;
reprfunc tp_str;
The tp_repr handler should return a string object containing a representation of the instance for which it is called. Here is a simple example:
static PyObject *
newdatatype_repr(newdatatypeobject * obj)
{
return PyUnicode_FromFormat("Repr-ified_newdatatype{{size:%d}}",
obj->obj_UnderlyingDatatypePtr->size);
}
If no tp_repr handler is specified, the interpreter will supply a representation that uses the type's tp_name and a uniquely-identifying value for the object.
The tp_str handler is to str() what the tp_repr handler described above is to repr(); that is, it is called when Python code calls str() on an instance of your object. Its implementation is very similar to the tp_repr function, but the resulting string is intended for human consumption. If tp_str is not specified, the tp_repr handler is used instead.
Voici un exemple simple :
static PyObject *
newdatatype_str(newdatatypeobject * obj)
{
return PyUnicode_FromFormat("Stringified_newdatatype{{size:%d}}",
obj->obj_UnderlyingDatatypePtr->size);
}
3.3. Gestion des attributs¶
For every object which can support attributes, the corresponding type must provide the functions that control how the attributes are resolved. There needs to be a function which can retrieve attributes (if any are defined), and another to set attributes (if setting attributes is allowed). Removing an attribute is a special case, for which the new value passed to the handler is NULL.
Python supports two pairs of attribute handlers; a type that supports attributes only needs to implement the functions for one pair. The difference is that one pair takes the name of the attribute as a char*, while the other accepts a . Each type can use whichever pair makes more sense for the implementation's convenience.
getattrfunc tp_getattr; /* char * version */
setattrfunc tp_setattr;
/* ... */
getattrofunc tp_getattro; /* PyObject * version */
setattrofunc tp_setattro;
If accessing attributes of an object is always a simple operation (this will be explained shortly), there are generic implementations which can be used to provide the version of the attribute management functions. The actual need for type-specific attribute handlers almost completely disappeared starting with Python 2.2, though there are many examples which have not been updated to use some of the new generic mechanism that is available.
3.3.1. Gestion des attributs génériques¶
Most extension types only use simple attributes. So, what makes the attributes simple? There are only a couple of conditions that must be met:
1. Le nom des attributs doivent être déjà connus lorsqu'on lance PyType_Ready().
2. No special processing is needed to record that an attribute was looked up or set, nor do actions need to be taken based on the value.
Note that this list does not place any restrictions on the values of the attributes, when the values are computed, or how relevant data is stored.
When PyType_Ready() is called, it uses three tables referenced by the type object to create descriptors which are placed in the dictionary of the type object. Each descriptor controls access to one attribute of the instance object. Each of the tables is optional; if all three are NULL, instances of the type will only have attributes that are inherited from their base type, and should leave the tp_getattro and tp_setattro fields NULL as well, allowing the base type to handle attributes.
Les tables sont déclarées sous la forme de trois champs de type objet :
struct PyMethodDef *tp_methods;
struct PyMemberDef *tp_members;
struct PyGetSetDef *tp_getset;
If tp_methods is not NULL, it must refer to an array of PyMethodDef structures. Each entry in the table is an instance of this structure:
typedef struct PyMethodDef {
const char *ml_name; /* method name */
PyCFunction ml_meth; /* implementation function */
int ml_flags; /* flags */
const char *ml_doc; /* docstring */
} PyMethodDef;
One entry should be defined for each method provided by the type; no entries are needed for methods inherited from a base type. One additional entry is needed at the end; it is a sentinel that marks the end of the array. The ml_name field of the sentinel must be NULL.
The second table is used to define attributes which map directly to data stored in the instance. A variety of primitive C types are supported, and access may be read-only or read-write. The structures in the table are defined as:
typedef struct PyMemberDef {
const char *name;
int type;
int offset;
int flags;
const char *doc;
} PyMemberDef;
For each entry in the table, a descriptor will be constructed and added to the type which will be able to extract a value from the instance structure. The type field should contain one of the type codes defined in the structmember.h header; the value will be used to determine how to convert Python values to and from C values. The flags field is used to store flags which control how the attribute can be accessed.
The following flag constants are defined in structmember.h; they may be combined using bitwise-OR.
Constante
Signification
READONLY
Jamais disponible en écriture.
PY_AUDIT_READ
Emit an object.__getattr__ audit events before reading.
Modifié dans la version 3.10: RESTRICTED, READ_RESTRICTED and WRITE_RESTRICTED are deprecated. However, READ_RESTRICTED is an alias for PY_AUDIT_READ, so fields that specify either RESTRICTED or READ_RESTRICTED will also raise an audit event.
Un avantage intéressant de l'utilisation de la table tp_members pour construire les descripteurs qui sont utilisés à l'exécution, est que à tout attribut défini de cette façon on peut associer un docstring, en écrivant simplement le texte dans la table. Une application peut utiliser l'API d'introspection pour récupérer le descripteur de l'objet de classe, et utiliser son attribut __doc__ pour renvoyer le docstring.
As with the tp_methods table, a sentinel entry with a name value of NULL is required.
3.3.2. Gestion des attributs de type spécifiques¶
For simplicity, only the char* version will be demonstrated here; the type of the name parameter is the only difference between the char* and flavors of the interface. This example effectively does the same thing as the generic example above, but does not use the generic support added in Python 2.2. It explains how the handler functions are called, so that if you do need to extend their functionality, you'll understand what needs to be done.
The tp_getattr handler is called when the object requires an attribute look-up. It is called in the same situations where the __getattr__() method of a class would be called.
Voici un exemple :
static PyObject *
newdatatype_getattr(newdatatypeobject *obj, char *name)
{
if (strcmp(name, "data") == 0)
{
return PyLong_FromLong(obj->data);
}
PyErr_Format(PyExc_AttributeError,
"'%.50s' object has no attribute '%.400s'",
tp->tp_name, name);
return NULL;
}
The tp_setattr handler is called when the __setattr__() or __delattr__() method of a class instance would be called. When an attribute should be deleted, the third parameter will be NULL. Here is an example that simply raises an exception; if this were really all you wanted, the tp_setattr handler should be set to NULL.
static int
newdatatype_setattr(newdatatypeobject *obj, char *name, PyObject *v)
{
return -1;
}
3.4. Comparaison des objets¶
richcmpfunc tp_richcompare;
The tp_richcompare handler is called when comparisons are needed. It is analogous to the rich comparison methods, like __lt__(), and also called by PyObject_RichCompare() and PyObject_RichCompareBool().
This function is called with two Python objects and the operator as arguments, where the operator is one of Py_EQ, Py_NE, Py_LE, Py_GE, Py_LT or Py_GT. It should compare the two objects with respect to the specified operator and return Py_True or Py_False if the comparison is successful, Py_NotImplemented to indicate that comparison is not implemented and the other object's comparison method should be tried, or NULL if an exception was set.
Here is a sample implementation, for a datatype that is considered equal if the size of an internal pointer is equal:
static PyObject *
newdatatype_richcmp(PyObject *obj1, PyObject *obj2, int op)
{
PyObject *result;
int c, size1, size2;
/* code to make sure that both arguments are of type
newdatatype omitted */
size1 = obj1->obj_UnderlyingDatatypePtr->size;
size2 = obj2->obj_UnderlyingDatatypePtr->size;
switch (op) {
case Py_LT: c = size1 < size2; break;
case Py_LE: c = size1 <= size2; break;
case Py_EQ: c = size1 == size2; break;
case Py_NE: c = size1 != size2; break;
case Py_GT: c = size1 > size2; break;
case Py_GE: c = size1 >= size2; break;
}
result = c ? Py_True : Py_False;
Py_INCREF(result);
return result;
}
3.5. Support pour le protocole abstrait¶
Python supports a variety of abstract 'protocols;' the specific interfaces provided to use these interfaces are documented in Couche d'abstraction des objets.
A number of these abstract interfaces were defined early in the development of the Python implementation. In particular, the number, mapping, and sequence protocols have been part of Python since the beginning. Other protocols have been added over time. For protocols which depend on several handler routines from the type implementation, the older protocols have been defined as optional blocks of handlers referenced by the type object. For newer protocols there are additional slots in the main type object, with a flag bit being set to indicate that the slots are present and should be checked by the interpreter. (The flag bit does not indicate that the slot values are non-NULL. The flag may be set to indicate the presence of a slot, but a slot may still be unfilled.)
PyNumberMethods *tp_as_number;
PySequenceMethods *tp_as_sequence;
PyMappingMethods *tp_as_mapping;
If you wish your object to be able to act like a number, a sequence, or a mapping object, then you place the address of a structure that implements the C type PyNumberMethods, PySequenceMethods, or PyMappingMethods, respectively. It is up to you to fill in this structure with appropriate values. You can find examples of the use of each of these in the Objects directory of the Python source distribution.
hashfunc tp_hash;
This function, if you choose to provide it, should return a hash number for an instance of your data type. Here is a simple example:
static Py_hash_t
newdatatype_hash(newdatatypeobject *obj)
{
Py_hash_t result;
result = obj->some_size + 32767 * obj->some_number;
if (result == -1)
result = -2;
return result;
}
Py_hash_t is a signed integer type with a platform-varying width. Returning -1 from tp_hash indicates an error, which is why you should be careful to avoid returning it when hash computation is successful, as seen above.
ternaryfunc tp_call;
This function is called when an instance of your data type is "called", for example, if obj1 is an instance of your data type and the Python script contains obj1('hello'), the tp_call handler is invoked.
Cette fonction prend trois arguments :
1. self is the instance of the data type which is the subject of the call. If the call is obj1('hello'), then self is obj1.
2. args is a tuple containing the arguments to the call. You can use PyArg_ParseTuple() to extract the arguments.
3. kwds is a dictionary of keyword arguments that were passed. If this is non-NULL and you support keyword arguments, use PyArg_ParseTupleAndKeywords() to extract the arguments. If you do not want to support keyword arguments and this is non-NULL, raise a TypeError with a message saying that keyword arguments are not supported.
Ceci est une implémentation tp_call très simple :
static PyObject *
newdatatype_call(newdatatypeobject *self, PyObject *args, PyObject *kwds)
{
PyObject *result;
const char *arg1;
const char *arg2;
const char *arg3;
if (!PyArg_ParseTuple(args, "sss:call", &arg1, &arg2, &arg3)) {
return NULL;
}
result = PyUnicode_FromFormat(
"Returning -- value: [%d] arg1: [%s] arg2: [%s] arg3: [%s]\n",
obj->obj_UnderlyingDatatypePtr->size,
arg1, arg2, arg3);
return result;
}
/* Iterators */
getiterfunc tp_iter;
iternextfunc tp_iternext;
These functions provide support for the iterator protocol. Both handlers take exactly one parameter, the instance for which they are being called, and return a new reference. In the case of an error, they should set an exception and return NULL. tp_iter corresponds to the Python __iter__() method, while tp_iternext corresponds to the Python __next__() method.
Tout objet iterable doit implémenter le gestionnaire tp_iter, qui doit renvoyer un objet de type iterator. Ici, les mêmes directives s'appliquent de la même façon que pour les classes Python :
• Pour les collections (telles que les listes et les n-uplets) qui peuvent implémenter plusieurs itérateurs indépendants, un nouvel itérateur doit être créé et renvoyé par chaque appel de type tp_iter.
• Objects which can only be iterated over once (usually due to side effects of iteration, such as file objects) can implement tp_iter by returning a new reference to themselves -- and should also therefore implement the tp_iternext handler.
Any iterator object should implement both tp_iter and tp_iternext. An iterator's tp_iter handler should return a new reference to the iterator. Its tp_iternext handler should return a new reference to the next object in the iteration, if there is one. If the iteration has reached the end, tp_iternext may return NULL without setting an exception, or it may set StopIteration in addition to returning NULL; avoiding the exception can yield slightly better performance. If an actual error occurs, tp_iternext should always set an exception and return NULL.
3.6. Prise en charge de la référence faible¶
L'un des objectifs de l'implémentation de la référence faible de Python est de permettre à tout type d'objet de participer au mécanisme de référence faible sans avoir à supporter le surcoût de la performance critique des certains objets, tels que les nombres.
Voir aussi
Documentation pour le module weakref.
Pour qu'un objet soit faiblement référençable, le type d'extension doit faire deux choses :
1. Inclure un champ dans la structure d'objet C dédiée au mécanisme de référence faible. Le constructeur de l'objet doit le laisser à la valeur NULL (ce qui est automatique lorsque l'on utilise le champ par défaut tp_alloc).
2. Définissez le membre de type tp_weaklistoffset à la valeur de décalage (offset) du champ susmentionné dans la structure de l'objet C, afin que l'interpréteur sache comment accéder à ce champ et le modifier.
Concrètement, voici comment une structure d'objet simple serait complétée par le champ requis :
typedef struct {
PyObject *weakreflist; /* List of weak references */
} TrivialObject;
Et le membre correspondant dans l'objet de type déclaré statiquement :
static PyTypeObject TrivialType = {
/* ... other members omitted for brevity ... */
.tp_weaklistoffset = offsetof(TrivialObject, weakreflist),
};
Le seul ajout supplémentaire est que tp_dealloc doit effacer toute référence faible (en appelant PyObject_ClearWeakRefs()) si le champ est non NULL :
static void
Trivial_dealloc(TrivialObject *self)
{
/* Clear weakrefs first before calling any destructors */
if (self->weakreflist != NULL)
PyObject_ClearWeakRefs((PyObject *) self);
/* ... remainder of destruction code omitted for brevity ... */
Py_TYPE(self)->tp_free((PyObject *) self);
}
3.7. Plus de suggestions¶
Pour savoir comment mettre en œuvre une méthode spécifique pour votre nouveau type de données, téléchargez le code source CPython. Allez dans le répertoire Objects, puis cherchez dans les fichiers sources C la fonction tp_ plus la fonction que vous voulez (par exemple, tp_richcompare). Vous trouverez des exemples de la fonction que vous voulez implémenter.
Lorsque vous avez besoin de vérifier qu'un objet est une instance concrète du type que vous implémentez, utilisez la fonction PyObject_TypeCheck(). Voici un exemple de son utilisation :
if (!PyObject_TypeCheck(some_object, &MyType)) {
PyErr_SetString(PyExc_TypeError, "arg #1 not a mything");
return NULL;
}
Voir aussi
Télécharger les versions sources de CPython.
|
Planes and wheat
Alignments to Content Standards: A-CED.A.1
A government buys $x$ fighter planes at $z$ dollars each, and $y$ tons of wheat at $w$ dollars each. It spends a total of $B$ dollars, where $B = xz + yw$. In (a)–(c), write an equation whose solution is the given quantity.
1. The number of tons of wheat the government can afford to buy if it spends a total of \$100 million, wheat costs \$300 per ton, and it must buy 5 fighter planes at \$15 million each. 2. The price of fighter planes if the government bought 3 of them, in addition to$10,\!000$tons of wheat at \$500 a ton, for a total of \$50 million. 3. The price of a ton of wheat, given that a fighter plane costs$100,\!000$times as much as a ton of wheat, and that the government bought 20 fighter planes and$15,\!000$tons of wheat for a total cost of \$90 million.
IM Commentary
This is a simple exercise in creating equations from a situation with many variables. By giving three different scenarios, the problem requires students to keep going back to the definitions of the variables, thus emphasizing the importance of defining variables when you write an equation. In order to reinforce this aspect of the problem, the variables have not been given names that remind the student of what they stand for. The emphasis here is on setting up equations, not solving them.
Solution
1. We want to find the value of $y$. We are given $B = 100,\!000,\!000$, $w = 300$, $x = 5$, and $z = 15,\!000,\!000$. So the equation is $$100,\!000,\!000 = 5 \cdot 15,\!000,\!000 + 300 y,$$ or $$100,\!000,\!000 = 75,\!000,\!000 + 300 y.$$
2. We want to find the value of $z$. We are given that $x = 3$, $y = 10,\!000$, $w = 500$, and $B = 50,\!000,\!000$. So the equation is $$50,\!000,\!000 = 3z + 10,\!000\cdot 500,$$ or $$50,\!000,\!000 = 3z + 5,\!000,\!000.$$
3. We want to find the value of $w$. We are given that $x = 20$ and $y = 15,\!000$, $B = 90,\!000,\!000$, and $z = 100,\!000w$. So the equation is $$90,\!000,\!000 = 20 (100,\!000 w) + 15,\!000 w,$$ which simplifies to $$90,\!000,\!000 = 2,\!015,\!000 w.$$
|
# Real Analysis
American Mathematical Soc., 2005 - 151 y[W
This book is written by award-winning author, Frank Morgan. It offers a simple and sophisticated point of view, reflecting Morgan's insightful teaching, lecturing, and writing style. Intended for undergraduates studying real analysis, this book builds the theory behind calculus directly from the basic concepts of real numbers, limits, and open and closed sets in $\mathbf{R n$. It gives the three characterizations of continuity: via epsilon-delta, sequences, and open sets. It givesthe three characterizations of compactness: as "closed and bounded," via sequences, and via open covers. Topics include Fourier series, the Gamma function, metric spaces, and Ascoli's Theorem. This concise text not only provides efficient proofs, but also shows students how to derive them. Theexcellent exercises are accompanied at the back of the book by select solutions. Ideally suited as an undergraduate textbook, this complete book on real analysis will fit comfortably into one semester. Frank Morgan received the first national Haimo teaching award from the Mathematical Association of America. He has also garnered top teaching awards from Rice University (Houston, TX) and MIT (Cambridge, MA).
### r[ -r[
#### Review: Real Analysis
You'd think Cauchy and other mathematicians wasted their lives away with the amount of space this book gives to some of their ideas. Indeed you wonder if someone could ever learn anything with such a ...
#### Review: Real Analysis
This book contains most of the important part of the analysis mathematic and you can review all you knowledge. It has 4 parts. First part is Real Number and limits which is about the pre-calculus ...
### ڎ
III 3 IV 9 V 13 VI 21 VII 25 VIII 27 IX 33 X 35
XXII 81 XXIII 85 XXIV 89 XXV 93 XXVI 99 XXVII 105 XXVIII 109 XXIX 111
XI 37 XII 41 XIII 45 XIV 47 XV 49 XVI 53 XVII 59 XVIII 61 XIX 65 XX 71 XXI 75
XXX 115 XXXI 119 XXXII 121 XXXIII 125 XXXIV 129 XXXV 133 XXXVI 137 XXXVII 147 XXXVIII 149 쌠
### EFuy[W̎Q
Frank Morgan: Real Analysis
Real Analysis Frank Morgan Publication Year: 2005 ISBN-10: 0-8218-3670-6 ISBN-13: 978-0-8218-3670-5. This page is maintained by the author. ...
www.ams.org/ bookpages/ real/
Real analysis | scitech Book News | Find Articles at BNET.com
Real analysis from scitech Book News in Reference provided free by Find Articles.
findarticles.com/ p/ articles/ mi_m0QLT/ is_2005_Sept/ ai_n15782606
Real Analysis and Applications - Including Fourier Series and the ...
scitech Book News - Real Analysis and Applications - Including Fourier Series and the Calculus of Variations.(Brief Article)(Book Review) - From the ...
www.highbeam.com/ doc/ 1G1-142886798.html
A.
Real Analysis and Applications starts with a streamlined, but complete, ... With this applied version of his Real Analysis text, Morgan brings his famous ...
www.yurinsha.com/ 388/ p7.htm
ɏ A₢Ȃǂ͂܂ e-mail:kamesan ...
00292 Al-Gawaiz, ma/Elsanousi, sa F Elements of Real Analysis. 420 p. 2006 Chapman & Hall/CRC * 1-58488-661-7 hardcover 10514 M ...
www.kamedabook.com/ zaikolist_a.html
Published SMALL papers
Christopher Cox*, Lisa Harrison*, Meg Tilton*, The shortest enclosure of three connected areas in R2, Real Analysis Exchange 20 (1994/95), 313-335. ...
www.williams.edu/ go/ math/ published.html
Naturvitenskap > Matematikk > Matematisk analyse > Reell analyse ...
"Real Analysis" is the third volume in the "Princeton Lectures in Analysis", ... This is a course in real analysis directed at advanced undergraduates and ...
www.bokkilden.no/ SamboWeb/ emneliste.do?emnekode=1.2.8.1& rom=MP
arxiv:math.DG/0412020 v3 2 Nov 2006
on gReal Analysish and gReal Analysis and Applications.h Bronfman Science Center, Williams College, Williamstown, MA 01267 USA. [email protected] ...
arxiv.org/ pdf/ math/ 0412020
m}200/01
24, Real analysis and probability / rm Dudley. -- : hbk, : pbk. ..... 144, Real analysis and applications : including Fourier series and the calculus of ...
www.math.nagoya-u.ac.jp/ library/ y2006-01.html
Xin Wei Sha - Whitehead's Poetical Mathematics - Configurations 13:1
For a thorough introduction to the relevant notions of completeness, accumulation, and limit, see Halsey L. Royden, Real Analysis, 3rd ed. ...
muse.jhu.edu/ journals/ configurations/ v013/ 13.1wei.html
### ҂ɂ (2005)
Frank Morgan, M.D., is the author of Ruth and Esther: Shadows of our Future, and the soon coming book, The Watchmen's Cry. He is a family physician with a busy practice in Northern Colorado, and is married with five children. After heartbreaking personal failures left him broken emotionally and spiritually, Frank had a life-changing encounter with the God of Israel. Realizing that blindly following traditional church doctrines had not protected him from his own fallen nature, the author began to earnestly search out the ways of God. For the past several years, Frank has carefully examined matters of law and grace and has developed a keen interest in eschatology. It was through his study of the end times that Frank came to the startling realization of our identity as Believers in Messiah Yeshua. This realization has led to a deeper love for the God of Israel and for His instruction, the Torah. Morgan's desire to better understand the end-times is unique in that he writes with an understanding of both the houses of Israel, Ephraim and Judah (Isa 8:14). His works issue a a clarion call for those of Ephraim to arise, to love their brother, Judah, and to respond to their high call in the Messiah. One also hears a heart cry for the restoration of the whole house of Israel.
|
Accurate and online-efficient evaluation of the a posteriori error bound in the reduced basis method
ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Volume 48 (2014) no. 1, p. 207-229
The full text of recent articles is available to journal subscribers only. See the article on the journal's website
The reduced basis method is a model reduction technique yielding substantial savings of computational time when a solution to a parametrized equation has to be computed for many values of the parameter. Certification of the approximation is possible by means of an a posteriori error bound. Under appropriate assumptions, this error bound is computed with an algorithm of complexity independent of the size of the full problem. In practice, the evaluation of the error bound can become very sensitive to round-off errors. We propose herein an explanation of this fact. A first remedy has been proposed in [F. Casenave, Accurate a posteriori error evaluation in the reduced basis method. C. R. Math. Acad. Sci. Paris 350 (2012) 539-542.]. Herein, we improve this remedy by proposing a new approximation of the error bound using the empirical interpolation method (EIM). This method achieves higher levels of accuracy and requires potentially less precomputations than the usual formula. A version of the EIM stabilized with respect to round-off errors is also derived. The method is illustrated on a simple one-dimensional diffusion problem and a three-dimensional acoustic scattering problem solved by a boundary element method.
DOI : https://doi.org/10.1051/m2an/2013097
Classification: 65N15, 65D05, 68W25, 76Q05
Keywords: reduced basis method, a posteriori error bound, round-off errors, boundary element method, empirical interpolation method, acoustics
@article{M2AN_2014__48_1_207_0,
author = {Casenave, Fabien and Ern, Alexandre and Leli\evre, Tony},
title = {Accurate and online-efficient evaluation of the a posteriori error bound in the reduced basis method},
journal = {ESAIM: Mathematical Modelling and Numerical Analysis - Mod\'elisation Math\'ematique et Analyse Num\'erique},
publisher = {EDP-Sciences},
volume = {48},
number = {1},
year = {2014},
pages = {207-229},
doi = {10.1051/m2an/2013097},
zbl = {1288.65157},
language = {en},
url = {http://www.numdam.org/item/M2AN_2014__48_1_207_0}
}
Casenave, Fabien; Ern, Alexandre; Lelièvre, Tony. Accurate and online-efficient evaluation of the a posteriori error bound in the reduced basis method. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Volume 48 (2014) no. 1, pp. 207-229. doi : 10.1051/m2an/2013097. http://www.numdam.org/item/M2AN_2014__48_1_207_0/`
[1] Z. Bai and D. Skoogh, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems. Appl. Numer. Math. 43 (2002) 9-44. | MR 1936100 | Zbl 1012.65136
[2] M.A. Bahayou, Sur le problème de Helmholtz. Rendiconti del Seminario matematico della Università e Politecnico di Torino (2007) 427-450. | MR 2402854 | Zbl 1187.35028
[3] M. Barrault, Y. Maday, N.C. Nguyen and A.T. Patera, An ‘empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations. C. R. Math. Acad. Sci. Paris 339 (2004) 667-672. | MR 2103208 | Zbl 1061.65118
[4] A. Björck and C.C. Paige, Loss and recapture of orthogonality in the modified Gram-Schmidt algorithm. SIAM J. Matrix Anal. Appl. 13 (1992) 176-190. | MR 1146660 | Zbl 0747.65026
[5] S. Boyaval, Mathematical modelling and numerical simulation in materials science. Ph.D. thesis, Université Paris-Est (2009).
[6] A. Buffa and R. Hiptmair, Regularized combined field integral equations. Numer. Math. 100 (2005) 1-19. | MR 2129699 | Zbl 1067.65137
[7] R.L. Burden and J.D. Faires, Numerical Analysis. PWS Publishing Company (1993). | Zbl 0788.65001
[8] E. Cancès, V. Ehrlacher and T. Lelièvre, Convergence of a greedy algorithm for high-dimensional convex nonlinear problems. Math. Models Methods Appl. Sci. 21 (2011) 2433-2467. | MR 2864637 | Zbl 1259.65098
[9] F. Casenave, Accurate a posteriori error evaluation in the reduced basis method. C. R. Math. Acad. Sci. Paris 350 (2012) 539-542. | MR 2929064 | Zbl 1245.65105
[10] F. Casenave, Ph.D. thesis, in preparation (2013).
[11] F. Casenave, M. Ghattassi and R. Joubaud, A multiscale problem in thermal science. ESAIM: Proceedings 38 (2012) 202-219.
[12] A. Chatterjee, An introduction to the proper orthogonal decomposition. Curr. Sci. 78 (2000) 808-817.
[13] Y. Chen, J.S. Hesthaven, Y. Maday, J. Rodriguez and X. Zhu, Certified reduced basis method for electromagnetic scattering and radar cross section estimation. Technical Report 2011-28, Scientific Computing Group, Brown University, Providence, RI, USA (2011). | MR 2924023 | Zbl 1253.78045
[14] Y. Chen, J.S. Hesthaven, Y. Maday and J. Rodríguez, Improved successive constraint method based a posteriori error estimate for reduced basis approximation of 2D Maxwell's problem. ESAIM: M2AN 43 (2009) 1099-1116. | Numdam | MR 2588434 | Zbl 1181.78019
[15] F. Chinesta, P. Ladeveze and C. Elías, A short review on model order reduction based on proper generalized decomposition. Arch. Comput. Methods Eng. 18 (2011) 395-404.
[16] A. Delnevo, I. Terrasse, Code Acti3S harmonique : Justifications Mathématiques : Partie I. Technical report, EADS CCR (2001).
[17] A. Delnevo, I. Terrasse, Code Acti3S, Justifications Mathématiques : Partie II, présence d'un écoulement uniforme. Technical report, EADS CCR (2002).
[18] A. Ern and J.L. Guermond, Theory and Practice of Finite Elements, in vol. 159 of Applied Mathematical Sciences. Springer (2004). | MR 2050138 | Zbl 1059.65103
[19] M. Fares, J.S. Hesthaven, Y. Maday and B. Stamm, The reduced basis method for the electric field integral equation. J. Comput. Phys. 230 (2011) 5532-5555. | MR 2799523 | Zbl 1220.78045
[20] L. Giraud and J. Langou, When modified Gram-Schmidt generates a well-conditioned set of vectors. IMA J. Numer. Anal. 22 (2002) 521-528. | MR 1936517 | Zbl 1027.65050
[21] D. Goldberg, What every computer scientist should know about floating point arithmetic. ACM Computing Surveys 23 (1991) 5-48.
[22] G.H. Golub and C.F. Van Loan, Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press (1996). | MR 1417720 | Zbl 1268.65037
[23] R.J. Guyan, Reduction of stiffness and mass matrices. AIAA J. 3 (1965) 380.
[24] R. Hiptmair, Coercive combined field integral equations. J. Numer. Math. 11 (2003) 115-134. | MR 1987591 | Zbl 1115.76356
[25] R. Hiptmair and P. Meury, Stable FEM-BEM Coupling for Helmholtz Transmission Problems. ETH, Seminar für Angewandte Mathematik (2005). | MR 2263042 | Zbl 1221.65308
[26] G.C. Hsiao and W.L. Wendland, Boundary Element Methods: Foundation and Error Analysis. John Wiley & Sons, Ltd (2004).
[27] D.B.P. Huynh, G. Rozza, S. Sen and A.T. Patera, A successive constraint linear optimization method for lower bounds of parametric coercivity and inf-sup stability constants. C. R. Math. Acad. Sci. Paris 345 (2007) 473-478. | MR 2367928 | Zbl 1127.65086
[28] P. Langlois, S. Graillat and N. Louvet, Compensated Horner scheme. Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2006).
[29] L. Machiels, Y. Maday, I.B. Oliveira, A.T. Patera and D.V. Rovas, Output bounds for reduced-basis approximations of symmetric positive definite eigenvalue problems. C. R. Math. Acad. Sci. Paris 331 (2000) 153-158. | MR 1781533 | Zbl 0960.65063
[30] Y. Maday, N.C. Nguyen, A.T. Patera and S. Pau, A general multipurpose interpolation procedure: the magic points. Commun. Pure Appl. Anal. 8 (2008) 383-404. | Zbl 1184.65020
[31] W.C.H. Mclean, Strongly Elliptic Systems and Boundary Integral Equations. Cambridge University Press (2000). | MR 1742312 | Zbl 0948.35001
[32] A. Nouy and O.P. Le Maître, Generalized spectral decomposition for stochastic nonlinear problems. J. Comput. Phys. 228 (2009) 202-235. | MR 2464076 | Zbl 1157.65009
[33] A.T. Patera, Private communication (2012).
[34] A.T. Patera and G. Rozza, Reduced Basis Approximation and A Posteriori Error Estimation for Parametrized Partial Differential Equations. MIT Pappalardo Graduate Monographs in Mechanical Engineering (2007). | Zbl pre05344486
[35] M. Paz, Dynamic condensation. AIAA J. 22 (1984) 724-727.
[36] C. Prud'Homme, D.V. Rovas, K. Veroy, L. Machiels, Y. Maday, A.T. Patera and G. Turinici, Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods. J. Fluids Eng. 124 (2002) 70-80.
[37] S.A. Sauter and C. Schwab, Boundary Element Methods. Springer Series in Computational Mathematics. Springer (2010). | MR 2743235 | Zbl 1215.65183
[38] I.E. Shparlinski, Sparse polynomial approximation in finite fields. In Proceedings of the thirty-third annual ACM symposium on Theory of computing, STOC '01. ACM, New York, USA (2001) 209-215. | MR 2120317
[39] K. Veroy and A.T. Patera, Certified real-time solution of the parametrized steady incompressible Navier-Stokes equations: rigorous reduced-basis a posteriori error bounds. Int. J. Numer. Methods Fluids 47 (2005) 773-788. | MR 2123791 | Zbl 1134.76326
[40] K. Veroy, C. Prud'Homme and A.T. Patera, Reduced-basis approximation of the viscous Burgers equation: rigorous a posteriori error bounds. C. R. Math. Acad. Sci. Paris 337 (2003) 619-624. | MR 2017737 | Zbl 1036.65075
|
# Prove that the square of any positive integer is of the form 3m or,
Question:
Prove that the square of any positive integer is of the form 3m or, 3m + 1 but not of the form 3m + 2.
Solution:
To Prove: that the square of an positive integer is of the form 3m or 3m + 1 but not of the form 3m + 2.
Proof: Since positive integer n is of the form of 3q, 3q + 1 and 3q + 2
If n = 3q
$\Rightarrow n^{2}=(3 q)^{2}$
$\Rightarrow n^{2}=9 q^{2}$
$\Rightarrow n^{2}=3\left(3 q^{2}\right)$
$\Rightarrow n^{2}=3 m\left(m=3 q^{2}\right)$
If n = 3q + 1
Then, $n^{2}=(3 q+1)^{2}$
$\Rightarrow n^{2}=\left(3 q^{2}\right)+6 q+1$
$\Rightarrow n^{2}=9 q^{2}+6 q+1$
$\Rightarrow n^{2}=3 q(3 q+1)+1$
$\Rightarrow n^{2}=3 m+1($ where $m=(3 q+2))$
If $n=3 q+2$
Then, $n^{2}=(3 q+2)^{2}$
$\Rightarrow n^{2}=\left(3 q^{2}\right)+12 q+4$
$\Rightarrow n^{2}=9 q^{2}+12 q+4$
$\Rightarrow n^{2}=3(3 q+4 q+1)+1$
$\Rightarrow n^{2}=3 m+1$ (where $q=(3 q+4 q+1)$ )
Hence n2 integer is of the form 3m, 3m + 1 but not of the form 3m + 2.
|
# Kimchi
• This document specifies kimchi, a zero-knowledge proof system that’s a variant of PLONK.
• This document does not specify how circuits are created or executed, but only how to convert a circuit and its execution into a proof.
Table of content:
## Overview
There are three main algorithms to kimchi:
• Setup: takes a circuit and produces a prover index, and a verifier index.
• Proof creation: takes the prover index, and the execution trace of the circuit to produce a proof.
• Proof verification: takes the verifier index and a proof to verify.
As part of these algorithms, a number of tables are created (and then converted into polynomials) to create a proof.
### Tables used to describe a circuit
The following tables are created to describe the circuit:
Gates. A circuit is described by a series of gates, that we list in a table. The columns of the tables list the gates, while the rows are the length of the circuit. For each row, only a single gate can take a value while all other gates take the value .
01000000000
10100000000
Coefficients. The coefficient table has 15 columns, and is used to tweak the gates. Currently, only the Generic and the Poseidon gates use it (refer to their own sections to see how). All other gates set their values to .
row01234567891011121314
0///////////////
Wiring (or Permutation, or sigmas). For gates to take the outputs of other gates as inputs, we use a wiring table to wire registers together. To learn about registers, see the next section. It is defined at every row, but only for the first registers. Each cell specifies a (row, column) tuple that it should be wired to. Cells that are not connected to another cell are wired to themselves. Note that if three or more registers are wired together, they must form a cycle. For example, if register (0, 4) is wired to both registers (80, 6) and (90, 0) then you would have the following table:
row0123456
00,00,10,20,380,60,50,6
8080,080,180,280,380,480,590,0
900,490,190,290,390,490,590,6
The lookup feature is currently optional, as it can add some overhead to the protocol. In the case where you would want to use lookups, the following tables would be needed:
Lookup Tables. The different lookup tables that are used in the circuit. For example, the XOR lookup table:
lro
101
011
110
000
Lookup selectors. A lookup selector is used to perform a number of queries in different lookup tables. Any gate can advertise its use of a lookup selector (so a lookup selector can be associated to several gates), and on which rows they want to use them (current and/or next). In cases where a gate need to use lookups in its current row only, and is the only one performing a specific combination of queries, then its gate selector can be used in place of a lookup selector. As with gates, lookup selectors (including gates used as lookup selectors) are mutually exclusives (only one can be used on a given row).
We currently have two lookup selectors:
rowChaChaQueryChaChaFinalQuery
000
110
Where each apply 4 queries. A query is a table describing which lookup table it queries, and the linear combination of the witness to use in the query. For example, the following table describes a query into the XOR table made out of linear combinations of registers (checking that ):
table_idlro
XOR1, r01, r22, r1
### Tables produced during proof creation
The following tables are created by the prover at runtime:
Registers (or Witness). Registers are also defined at every row, and are split into two types: the IO registers from to usually contain input or output of the gates (note that a gate can output a value on the next row as well). I/O registers can be wired to each other (they’ll be forced to have the same value), no matter what row they’re on (for example, the register at row:0, col:4 can be wired to the register at row:80, col:6). The rest of the registers, through , are called advice registers as they can store values that useful only for the row’s active gate. Think of them as intermediary or temporary values needed in the computation when the prover executes a circuit.
row01234567891011121314
0///////////////
Wiring (Permutation) trace. You can think of the permutation trace as an extra register that is used to enforce the wiring specified in the wiring table. It is a single column that applies on all the rows as well, which the prover computes as part of a proof.
rowpt
0/
Queries trace. These are the actual values made by queries, calculated by the prover at runtime, and used to construct the proof.
Table trace. Represents the concatenation of all the lookup tables, combined into a single column at runtime by both the prover and the verifier.
Sorted trace. Represents the processed (see the lookup section) concatenation of the queries trace and the table trace. It is produced at runtime by the prover. The sorted trace is long enough that it is split in several columns.
Lookup (aggregation, or permutation) trace. This is a one column table that is similar to the wiring (permutation) trace we talked above. It is produced at runtime by the prover.
## Dependencies
To specify kimchi, we rely on a number of primitives that are specified outside of this specification. In this section we list these specifications, as well as the interfaces we make use of in this specification.
### Polynomial Commitments
Refer to the specification on polynomial commitments. We make use of the following functions from that specification:
• PolyCom.non_hiding_commit(poly) -> PolyCom::NonHidingCommitment
• PolyCom.commit(poly) -> PolyCom::HidingCommitment
• PolyCom.evaluation_proof(poly, commitment, point) -> EvaluationProof
• PolyCom.verify(commitment, point, evaluation, evaluation_proof) -> bool
### Poseidon hash function
Refer to the specification on Poseidon. We make use of the following functions from that specification:
• Poseidon.init(params) -> FqSponge
• Poseidon.update(field_elem)
• Poseidon.finalize() -> FieldElem
specify the following functions on top:
• Poseidon.produce_challenge() (TODO: uses the endomorphism)
• Poseidon.to_fr_sponge() -> state_of_fq_sponge_before_eval, FrSponge
With the current parameters:
### Pasta
Kimchi is made to work on cycles of curves, so the protocol switch between two fields Fq and Fr, where Fq represents the base field and Fr represents the scalar field.
See the Pasta curves specification.
## Constraints
Kimchi enforces the correct execution of a circuit by creating a number of constraints and combining them together. In this section, we describe all the constraints that make up the main polynomial once combined.
We define the following functions:
• combine_constraints(range_alpha, constraints), which takes a range of contiguous powers of alpha and a number of constraints. It returns the sum of all the constraints, where each constraint has been multiplied by a power of alpha. In other words it returns:
The different ranges of alpha are described as follows:
• gates. Offset starts at 0 and 21 powers of are used
• Permutation. Offset starts at 21 and 3 powers of are used
Note
As gates are mutually exclusive (a single gate is used on each row), we can reuse the same range of powers of alpha across all the gates.
TODO: linearization
### Permutation
The permutation constraints are the following 4 constraints:
The two sides of the coin (with ):
and
the initialization of the accumulator:
and the accumulator’s final value:
You can read more about why it looks like that in this post.
The quotient contribution of the permutation is split into two parts and . They will be used by the prover.
and bnd:
The linearization:
where is computed as:
To compute the permutation aggregation polynomial, the prover interpolates the polynomial that has the following evaluations. The first evaluation represents the initial value of the accumulator: For , where is the size of the domain, evaluations are computed as:
with
and
If computed correctly, we should have .
Finally, randomize the last EVAL_POINTS evaluations and , in order to add zero-knowledge to the protocol.
### Lookup
Lookups in kimchi allows you to check if a single value, or a series of values, are part of a table. The first case is useful to check for checking if a value belongs to a range (from 0 to 1,000, for example), whereas the second case is useful to check truth tables (for example, checking that three values can be found in the rows of an XOR table) or write and read from a memory vector (where one column is an index, and the other is the value stored at that index).
Note
Similarly to the generic gate, each values taking part in a lookup can be scaled with a fixed field element.
The lookup functionality is an opt-in feature of kimchi that can be used by custom gates. From the user’s perspective, not using any gates that make use of lookups means that the feature will be disabled and there will be no overhead to the protocol.
Note
For now, the Chacha gates are the only gates making use of lookups.
Refer to the lookup RFC for an overview of the lookup feature.
In this section, we describe the tables kimchi supports, as well as the different lookup selectors (and their associated queries)
#### The Lookup Tables
Kimchi currently supports a single lookup table:
/// The table ID associated with the XOR lookup table.
pub const XOR_TABLE_ID: i32 = 0;
/// The range check table ID.
pub const RANGE_CHECK_TABLE_ID: i32 = 1;
XOR. The lookup table for 4-bit xor. Note that it is constructed so that (0, 0, 0) is the last position in the table.
This is because tables are extended to the full size of a column (essentially) by padding them with their final value. And, having the value (0, 0, 0) here means that when we commit to this table and use the dummy value in the lookup_sorted columns, those entries that have the dummy value of
will translate into a scalar multiplication by 0, which is free.
#### The Lookup Selectors
XorSelector. Performs 4 queries to the XOR lookup table.
lro-lro-lro-lro
1, r31, r71, r11-1, r41, r81, r12-1, r51, r91, r13-1, r61, r101, r14
ChaChaFinalSelector. Performs 4 different queries to the XOR lookup table. (TODO: specify the layout)
#### Producing the sorted table as the prover
Because of our ZK-rows, we can’t do the trick in the plookup paper of wrapping around to enforce consistency between the sorted lookup columns.
Instead, we arrange the LookupSorted table into columns in a snake-shape.
Like so,
_ _
| | | | |
| | | | |
|_| |_| |
or, imagining the full sorted array is [ s0, ..., s8 ], like
s0 s4 s4 s8
s1 s3 s5 s7
s2 s2 s6 s6
So the direction (“increasing” or “decreasing” (relative to LookupTable) is
if i % 2 = 0 { Increasing } else { Decreasing }
Then, for each i < max_lookups_per_row, if i % 2 = 0, we enforce that the last element of LookupSorted(i) = last element of LookupSorted(i + 1), and if i % 2 = 1, we enforce that the first element of LookupSorted(i) = first element of LookupSorted(i + 1).
### Gates
A circuit is described as a series of gates. In this section we describe the different gates currently supported by kimchi, the constraints associated to them, and the way the register table, coefficient table, and permutation can be used in conjunction.
TODO: for each gate describe how to create it?
#### Double Generic Gate
The double generic gate contains two generic gates.
A generic gate is simply the 2-fan in gate specified in the vanilla PLONK protocol that allows us to do operations like:
• addition of two registers (into an output register)
• or multiplication of two registers
• equality of a register with a constant
More generally, the generic gate controls the coefficients in the equation:
The layout of the gate is the following:
01234567891011121314
l1r1o1l2r2o2
where l1, r1, and o1 (resp. l2, r2, o2) are the left, right, and output registers of the first (resp. second) generic gate.
The selectors are stored in the coefficient table as:
01234567891011121314
l1r1o1m1c1l2r2o2m2c2
with m1 (resp. m2) the mul selector for the first (resp. second) gate, and c1 (resp. c2) the constant selector for the first (resp. second) gate.
The constraints:
where the are the coefficients.
#### Poseidon
The poseidon gate encodes 5 rounds of the poseidon permutation. A state is represents by 3 field elements. For example, the first state is represented by (s0, s0, s0), and the next state, after permutation, is represented by (s1, s1, s1).
Below is how we store each state in the register table:
01234567891011121314
s0s0s0s4s4s4s1s1s1s2s2s2s3s3s3
s5s5s5
The last state is stored on the next row. This last state is either used:
• with another Poseidon gate on that next row, representing the next 5 rounds.
• or with a Zero gate, and a permutation to use the output elsewhere in the circuit.
• or with another gate expecting an input of 3 field elements in its first registers.
Note
As some of the poseidon hash variants might not use rounds (for some ), the result of the 4-th round is stored directly after the initial state. This makes that state accessible to the permutation.
We define as the MDS matrix at row and column .
We define the S-box operation as for the SPONGE_BOX constant.
We store the 15 round constants required for the 5 rounds (3 per round) in the coefficient table:
01234567891011121314
r0r1r2r3r4r5r6r7r8r9r10r11r12r13r14
The initial state, stored in the first three registers, are not constrained. The following 4 states (of 3 field elements), including 1 in the next row, are constrained to represent the 5 rounds of permutation. Each of the associated 15 registers is associated to a constraint, calculated as:
first round:
second round:
third round:
fourth round:
fifth round:
where is the polynomial which points to the next row.
#### Chacha
There are four chacha constraint types, corresponding to the four lines in each quarter round.
a += b; d ^= a; d <<<= 16;
c += d; b ^= c; b <<<= 12;
a += b; d ^= a; d <<<= 8;
c += d; b ^= c; b <<<= 7;
or, written without mutation, (and where + is mod ),
a' = a + b ; d' = (d ⊕ a') <<< 16;
c' = c + d'; b' = (b ⊕ c') <<< 12;
a'' = a' + b'; d'' = (d' ⊕ a') <<< 8;
c'' = c' + d''; b'' = (c'' ⊕ b') <<< 7;
We lay each line as two rows.
Each line has the form
x += z; y ^= x; y <<<= k
or without mutation,
x' = x + z; y' = (y ⊕ x') <<< k
which we abbreviate as
L(x, x’, y, y’, z, k)
In general, such a line will be laid out as the two rows
01234567891011121314
xyz(y^x’)_0(y^x’)_1(y^x’)_2(y^x’)_3(x+z)_0(x+z)_1(x+z)_2(x+z)_3y_0y_1y_2y_3
x’y’(x+z)_8(y^x’)_4(y^x’)_5(y^x’)_6(y^x’)_7(x+z)_4(x+z)_5(x+z)_6(x+z)_7y_4y_5y_6y_7
where A_i indicates the i^th nybble (four-bit chunk) of the value A.
is special, since we know it is actually at most 1 bit (representing the overflow bit of x + z).
So the first line L(a, a', d, d', b, 8) for example becomes the two rows
01234567891011121314
a’d’(a+b)_8(d^a’)_4(d^a’)_5(d^a’)_6(d^a’)_7(a+b)_4(a+b)_5(a+b)_6(a+b)_7d_4d_5d_6d_7
along with the equations
• (booleanity check)
The rotates the nybbles left by 4, which means bit-rotating by as desired.
The final line is a bit more complicated as we have to rotate by 7, which is not a multiple of 4. We accomplish this as follows.
Let’s say we want to rotate the nybbles left by 7. First we’ll rotate left by 4 to get
Rename these as
We now want to left-rotate each by 3.
Let be the low bit of . Then, the low 3 bits of are .
The result will thus be
or re-writing in terms of our original nybbles ,
For neatness, letting , the first 2 rows for the final line will be:
01234567891011121314
xyz(y^x’)_0(y^x’)_1(y^x’)_2(y^x’)_3(x+z)_0(x+z)_1(x+z)_2(x+z)_3y_0y_1y_2y_3
x’_(x+z)_8(y^x’)_4(y^x’)_5(y^x’)_6(y^x’)_7(x+z)_4(x+z)_5(x+z)_6(x+z)_7y_4y_5y_6y_7
but then we also need to perform the bit-rotate by 1.
For this we’ll add an additional 2 rows. It’s probably possible to do it with just 1, but I think we’d have to change our plookup setup somehow, or maybe expand the number of columns, or allow access to the previous row.
Let be the low bit of the nybble n. The 2 rows will be
01234567891011121314
y’(y^x’)_0(y^x’)_1(y^x’)_2(y^x’)_3lo((y^x’)_0)lo((y^x’)_1)lo((y^x’)_2)lo((y^x’)_3)
_(y^x’)_4(y^x’)_5(y^x’)_6(y^x’)_7lo((y^x’)_4)lo((y^x’)_5)lo((y^x’)_6)lo((y^x’)_7)
On each of them we’ll do the plookups
((cols[1] - cols[5])/2, (cols[1] - cols[5])/2, 0) in XOR
((cols[2] - cols[6])/2, (cols[2] - cols[6])/2, 0) in XOR
((cols[3] - cols[7])/2, (cols[3] - cols[7])/2, 0) in XOR
((cols[4] - cols[8])/2, (cols[4] - cols[8])/2, 0) in XOR
which checks that is a nybble, which guarantees that the low bit is computed correctly.
There is no need to check nybbleness of (y^x’)_i because those will be constrained to be equal to the copies of those values from previous rows, which have already been constrained for nybbleness (by the lookup in the XOR table).
And we’ll check that y’ is the sum of the shifted nybbles.
The layout is
012345678910
x1y1x2y2x3y3infsame_xsinf_zx21_inv
where
• (x1, y1), (x2, y2) are the inputs and (x3, y3) the output.
• inf is a boolean that is true iff the result (x3, y3) is the point at infinity.
The rest of the values are inaccessible from the permutation argument, but
• same_x is a boolean that is true iff x1 == x2.
The following constraints are generated:
constraint 1:
constraint 2:
constraint 3:
constraint 4:
constraint 5:
constraint 6:
constraint 7:
#### Endo Scalar
We give constraints for the endomul scalar computation.
Each row corresponds to 8 iterations of the inner loop in “Algorithm 2” on page 29 of the Halo paper.
The state of the algorithm that’s updated across iterations of the loop is (a, b). It’s clear from that description of the algorithm that an iteration of the loop can be written as
(a, b, i) ->
( 2 * a + c_func(r_{2 * i}, r_{2 * i + 1}),
2 * b + d_func(r_{2 * i}, r_{2 * i + 1}) )
for some functions c_func and d_func. If one works out what these functions are on every input (thinking of a two-bit input as a number in ), one finds they are given by
c_func(x), defined by
• c_func(0) = 0
• c_func(1) = 0
• c_func(2) = -1
• c_func(3) = 1
d_func(x), defined by
• d_func(0) = -1
• d_func(1) = 1
• d_func(2) = 0
• d_func(3) = 0
One can then interpolate to find polynomials that implement these functions on .
You can use sage, as
R = PolynomialRing(QQ, 'x')
c_func = R.lagrange_polynomial([(0, 0), (1, 0), (2, -1), (3, 1)])
d_func = R.lagrange_polynomial([(0, -1), (1, 1), (2, 0), (3, 0)])
Then, c_func is given by
2/3 * x^3 - 5/2 * x^2 + 11/6 * x
and d_func is given by
2/3 * x^3 - 7/2 * x^2 + 29/6 * x - 1 <=> c_func + (-x^2 + 3x - 1)
We lay it out the witness as
01234567891011121314Type
n0n8a0b0a8b8x0x1x2x3x4x5x6x7ENDO
where each xi is a two-bit “crumb”.
We also use a polynomial to check that each xi is indeed in , which can be done by checking that each is a root of the polyunomial below:
crumb(x)
= x (x - 1) (x - 2) (x - 3)
= x^4 - 6*x^3 + 11*x^2 - 6*x
= x *(x^3 - 6*x^2 + 11*x - 6)
Each iteration performs the following computations
• Update :
• Update :
• Update :
Then, after the 8 iterations, we compute expected values of the above operations as:
• expected_n8 := 2 * ( 2 * ( 2 * ( 2 * ( 2 * ( 2 * ( 2 * (2 * n0 + x0) + x1 ) + x2 ) + x3 ) + x4 ) + x5 ) + x6 ) + x7
• expected_a8 := 2 * ( 2 * ( 2 * ( 2 * ( 2 * ( 2 * ( 2 * (2 * a0 + c0) + c1 ) + c2 ) + c3 ) + c4 ) + c5 ) + c6 ) + c7
• expected_b8 := 2 * ( 2 * ( 2 * ( 2 * ( 2 * ( 2 * ( 2 * (2 * b0 + d0) + d1 ) + d2 ) + d3 ) + d4 ) + d5 ) + d6 ) + d7
Putting together all of the above, these are the 11 constraints for this gate
• Checking values after the 8 iterations:
• Constrain : 0 = expected_n8 - n8
• Constrain : 0 = expected_a8 - a8
• Constrain : 0 = expected_b8 - b8
• Checking the crumbs, meaning each is indeed in the range :
• Constrain : 0 = x0 * ( x0^3 - 6 * x0^2 + 11 * x0 - 6 )
• Constrain : 0 = x1 * ( x1^3 - 6 * x1^2 + 11 * x1 - 6 )
• Constrain : 0 = x2 * ( x2^3 - 6 * x2^2 + 11 * x2 - 6 )
• Constrain : 0 = x3 * ( x3^3 - 6 * x3^2 + 11 * x3 - 6 )
• Constrain : 0 = x4 * ( x4^3 - 6 * x4^2 + 11 * x4 - 6 )
• Constrain : 0 = x5 * ( x5^3 - 6 * x5^2 + 11 * x5 - 6 )
• Constrain : 0 = x6 * ( x6^3 - 6 * x6^2 + 11 * x6 - 6 )
• Constrain : 0 = x7 * ( x7^3 - 6 * x7^2 + 11 * x7 - 6 )
#### Endo Scalar Multiplication
We implement custom gate constraints for short Weierstrass curve endomorphism optimised variable base scalar multiplication.
Given a finite field of order , if the order is not a multiple of 2 nor 3, then an elliptic curve over in short Weierstrass form is represented by the set of points that satisfy the following equation with and : If and are two points in the curve , the goal of this operation is to perform the operation efficiently as .
S = (P + (b ? T : −T)) + P
The same algorithm can be used to perform other scalar multiplications, meaning it is not restricted to the case , but it can be used for any arbitrary . This is done by decomposing the scalar into its binary representation. Moreover, for every step, there will be a one-bit constraint meant to differentiate between addition and subtraction for the operation :
In particular, the constraints of this gate take care of 4 bits of the scalar within a single EVBSM row. When the scalar is longer (which will usually be the case), multiple EVBSM rows will be concatenated.
Row01234567891011121314Type
ixTyTØØxPyPnxRyRs1s3b1b2b3b4EVBSM
i+1==xSySn’xR’yR’s1’s3’b1’b2’b3’b4’EVBSM
The layout of this gate (and the next row) allows for this chained behavior where the output point of the current row gets accumulated as one of the inputs of the following row, becoming in the next constraints. Similarly, the scalar is decomposed into binary form and ( respectively) will store the current accumulated value and the next one for the check.
For readability, we define the following variables for the constraints:
• endo EndoCoefficient
• xq1 endo
• xq2 endo
• yq1
• yq2
These are the 11 constraints that correspond to each EVBSM gate, which take care of 4 bits of the scalar within a single EVBSM row:
• First block:
• (xq1 - xp) * s1 = yq1 - yp
• (2 * xp – s1^2 + xq1) * ((xp – xr) * s1 + yr + yp) = (xp – xr) * 2 * yp
• (yr + yp)^2 = (xp – xr)^2 * (s1^2 – xq1 + xr)
• Second block:
• (xq2 - xr) * s3 = yq2 - yr
• (2*xr – s3^2 + xq2) * ((xr – xs) * s3 + ys + yr) = (xr – xs) * 2 * yr
• (ys + yr)^2 = (xr – xs)^2 * (s3^2 – xq2 + xs)
• Booleanity checks:
• Bit flag : 0 = b1 * (b1 - 1)
• Bit flag : 0 = b2 * (b2 - 1)
• Bit flag : 0 = b3 * (b3 - 1)
• Bit flag : 0 = b4 * (b4 - 1)
• Binary decomposition:
• Accumulated scalar: n_next = 16 * n + 8 * b1 + 4 * b2 + 2 * b3 + b4
The constraints above are derived from the following EC Affine arithmetic equations:
• (1) =>
• (2&3) =>
• (2) =>
• <=>
• (3) =>
• <=>
• (4) =>
• (5&6) =>
• (5) =>
• <=>
• (6) =>
• <=>
Defining and as
Gives the following equations when substituting the values of and :
1. (xq1 - xp) * s1 = (2 * b1 - 1) * yt - yp
2. (2 * xp – s1^2 + xq1) * ((xp – xr) * s1 + yr + yp) = (xp – xr) * 2 * yp
3. (yr + yp)^2 = (xp – xr)^2 * (s1^2 – xq1 + xr)
1. (xq2 - xr) * s3 = (2 * b2 - 1) * yt - yr
2. (2 * xr – s3^2 + xq2) * ((xr – xs) * s3 + ys + yr) = (xr – xs) * 2 * yr
3. (ys + yr)^2 = (xr – xs)^2 * (s3^2 – xq2 + xs)
#### Scalar Multiplication
We implement custom Plonk constraints for short Weierstrass curve variable base scalar multiplication.
Given a finite field of order , if the order is not a multiple of 2 nor 3, then an elliptic curve over in short Weierstrass form is represented by the set of points that satisfy the following equation with and : If and are two points in the curve , the algorithm we represent here computes the operation (point doubling and point addition) as .
Info
Point has nothing to do with the order of the field .
The original algorithm that is being used can be found in the Section 3.1 of https://arxiv.org/pdf/math/0208038.pdf, which can perform the above operation using 1 multiplication, 2 squarings and 2 divisions (one more squaring) if ), thanks to the fact that computing the -coordinate of the intermediate addition is not required. This is more efficient to the standard algorithm that requires 1 more multiplication, 3 squarings in total and 2 divisions.
Moreover, this algorithm can be applied not only to the operation , but any other scalar multiplication . This can be done by expressing the scalar in biwise form and performing a double-and-add approach. Nonetheless, this requires conditionals to differentiate from . For that reason, we will implement the following pseudocode from https://github.com/zcash/zcash/issues/3924 (where instead, they give a variant of the above efficient algorithm for Montgomery curves ).
Acc := [2]T
for i = n-1 ... 0:
Q := (r_i == 1) ? T : -T
Acc := Acc + (Q + Acc)
return (d_0 == 0) ? Q - P : Q
The layout of the witness requires 2 rows. The i-th row will be a VBSM gate whereas the next row will be a ZERO gate.
Row01234567891011121314Type
ixTyTx0y0nn’x1y1x2y2x3y3x4y4VBSM
i+1x5y5b0b1b2b3b4s0s1s2s3s4ZERO
The gate constraints take care of 5 bits of the scalar multiplication. Each single bit consists of 4 constraints. There is one additional constraint imposed on the final number. Thus, the VarBaseMul gate argument requires 21 constraints.
For every bit, there will be one constraint meant to differentiate between addition and subtraction for the operation :
S = (P + (b ? T : −T)) + P
• If the bit is positive, the sign should be a subtraction
• If the bit is negative, the sign should be an addition
Then, paraphrasing the above, we will represent this behavior as:
S = (P - (2 * b - 1) * T ) + P
Let us call Input the point with coordinates (xI, yI) and Target is the point being added with coordinates (xT, yT). Then Output will be the point with coordinates (xO, yO) resulting from O = ( I ± T ) + I
Info
Do not confuse our Output point (xO, yO) with the point at infinity that is normally represented as .
In each step of the algorithm, we consider the following elliptic curves affine arithmetic equations:
For readability, we define the following 3 variables in such a way that can be expressed as u / t:
• rx
• t rx
• u t
Next, for each bit in the algorithm, we create the following 4 constraints that derive from the above:
• Booleanity check on the bit : 0 = b * b - b
• Constrain : (xI - xT) * s1 = yI – (2b - 1) * yT
• Constrain Output -coordinate and : 0 = u^2 - t^2 * (xO - xT + s1^2)
• Constrain Output -coordinate and : 0 = (yO + yI) * t - (xI - xO) * u
When applied to the 5 bits, the value of the Target point (xT, yT) is maintained, whereas the values for the Input and Output points form the chain:
[(x0, y0) -> (x1, y1) -> (x2, y2) -> (x3, y3) -> (x4, y4) -> (x5, y5)]
Similarly, 5 different s0..s4 are required, just like the 5 bits b0..b4.
Finally, the additional constraint makes sure that the scalar is being correctly expressed into its binary form (using the double-and-add decomposition) as: This equation is translated as the constraint:
• Binary decomposition: 0 = n' - (b4 + 2 * (b3 + 2 * (b2 + 2 * (b1 + 2 * (b0 + 2*n)))))
#### Range Check
The multi range check gadget is comprised of three circuit gates (RangeCheck0, RangeCheck1 and Zero) and can perform range checks on three values ( and ) of up to 88 bits each.
Values can be copied as inputs to the multi range check gadget in two ways.
• [Standard mode] With 3 copies, by copying and to the first cells of the first 3 rows of the gadget. In this mode the first gate coefficient is set to 0.
• [Compact mode] With 2 copies, by copying to the first cell of the first row and copying to the 2nd cell of row 2. In this mode the first gate coefficient is set to 1.
The RangeCheck0 gate can also be used on its own to perform 64-bit range checks by constraining witness cells 1-2 to zero.
Byte-order:
• Each cell value is in little-endian byte order
• Limbs are mapped to columns in big-endian order (i.e. the lowest columns contain the highest bits)
• We also have the highest bits covered by copy constraints and plookups, so that we can copy the highest two constraints to zero and get a 64-bit lookup, which are envisioned to be a common case
The values are decomposed into limbs as follows.
• L is a 12-bit lookup (or copy) limb,
• C is a 2-bit “crumb” limb (we call half a nybble a crumb).
<----6----> <------8------>
v0 = L L L L L L C C C C C C C C
v1 = L L L L L L C C C C C C C C
<2> <--4--> <---------------18---------------->
v2 = C C L L L L C C C C C C C C C C C C C C C C C C
Witness structure:
RowContents
0
1
2
3
• The first 2 rows contain and and their respective decompositions into 12-bit and 2-bit limbs
• The 3rd row contains and part of its decomposition: four 12-bit limbs and the 1st 10 crumbs
• The final row contains ’s and ’s 5th and 6th 12-bit limbs as well as the remaining 10 crumbs of
Note
Because we are constrained to 4 lookups per row, we are forced to postpone some lookups of v0 and v1 to the final row.
Constraints:
For efficiency, the limbs are constrained differently according to their type.
• 12-bit limbs are constrained with plookups
• 2-bit crumbs are constrained with degree-4 constraints
Layout:
This is how the three 88-bit inputs and are laid out and constrained.
• vipj is the jth 12-bit limb of value
• vicj is the jth 2-bit crumb limb of value
GatesRangeCheck0RangeCheck0RangeCheck1Zero
Rows0123
Cols
0v0v1v2crumb v2c9
MS:1copy v0p0copy v1p0optional v12crumb v2c10
2copy v0p1copy v1p1crumb v2c0crumb v2c11
3plookup v0p2plookup v1p2plookup v2p0plookup v0p0
4plookup v0p3plookup v1p3plookup v2p1plookup v0p1
5plookup v0p4plookup v1p4plookup v2p2plookup v1p0
6plookup v0p5plookup v1p5plookup v2p3plookup v1p1
7crumb v0c0crumb v1c0crumb v2c1crumb v2c12
8crumb v0c1crumb v1c1crumb v2c2crumb v2c13
9crumb v0c2crumb v1c2crumb v2c3crumb v2c14
10crumb v0c3crumb v1c3crumb v2c4crumb v2c15
11crumb v0c4crumb v1c4crumb v2c5crumb v2c16
12crumb v0c5crumb v1c5crumb v2c6crumb v2c17
13crumb v0c6crumb v1c6crumb v2c7crumb v2c18
LS:14crumb v0c7crumb v1c7crumb v2c8crumb v2c19
The 12-bit chunks are constrained with plookups and the 2-bit crumbs are constrained with degree-4 constraints of the form .
Note that copy denotes a plookup that is deferred to the 4th gate (i.e. Zero). This is because of the limitation that we have at most 4 lookups per row. The copies are constrained using the permutation argument.
Gate types:
Different rows are constrained using different CircuitGate types
RowCircuitGatePurpose
0RangeCheck0Partially constrain
1RangeCheck0Partially constrain
2RangeCheck1Fully constrain (and trigger plookups constraints on row 3)
3ZeroComplete the constraining of and using lookups
Note
Each CircuitGate type corresponds to a unique polynomial and thus is assigned its own unique powers of alpha
RangeCheck0 - Range check constraints
• This circuit gate is used to partially constrain values and
• Optionally, it can be used on its own as a single 64-bit range check by constraining columns 1 and 2 to zero
• The rest of and are constrained by the lookups in the Zero gate row
• This gate operates on the Curr row
It uses three different types of constraints
• copy - copy to another cell (12-bits)
• plookup - plookup (12-bits)
• crumb - degree-4 constraint (2-bits)
Given value v the layout looks like this
ColumnCurr
0v
1copy vp0
2copy vp1
3plookup vp2
4plookup vp3
5plookup vp4
6plookup vp5
7crumb vc0
8crumb vc1
9crumb vc2
10crumb vc3
11crumb vc4
12crumb vc5
13crumb vc6
14crumb vc7
where the notation vpi and vci defined in the “Layout” section above.
RangeCheck1 - Range check constraints
• This circuit gate is used to fully constrain
• It operates on the Curr and Next rows
It uses two different types of constraints
• plookup - plookup (12-bits)
• crumb - degree-4 constraint (2-bits)
Given value v2 the layout looks like this
ColumnCurrNext
0v2crumb v2c9
1optional v12crumb v2c10
2crumb v2c0crumb v2c11
3plookup v2p0(ignored)
4plookup v2p1(ignored)
5plookup v2p2(ignored)
6plookup v2p3(ignored)
7crumb v2c1crumb v2c12
8crumb v2c2crumb v2c13
9crumb v2c3crumb v2c14
10crumb v2c4crumb v2c15
11crumb v2c5crumb v2c16
12crumb v2c6crumb v2c17
13crumb v2c7crumb v2c18
14crumb v2c8crumb v2c19
where the notation v2ci and v2pi defined in the “Layout” section above.
These circuit gates are used to constrain that
left_input +/- right_input = field_overflow * foreign_modulus + result
##### Mapping
To make things clearer, the following mapping between the variable names used in the code and those of the RFC document can be helpful.
left_input_lo -> a0 right_input_lo -> b0 result_lo -> r0 bound_lo -> u0
left_input_mi -> a1 right_input_mi -> b1 result_mi -> r1 bound_mi -> u1
left_input_hi -> a2 right_input_hi -> b2 result_hi -> r2 bound_hi -> u2
field_overflow -> q
sign -> s
carry_lo -> c0
carry_mi -> c1
bound_carry_lo -> k0
bound_carry_mi -> k1
Note: Our limbs are 88-bit long. We denote with:
• lo the least significant limb (in little-endian, this is from 0 to 87)
• mi the middle limb (in little-endian, this is from 88 to 175)
• hi the most significant limb (in little-endian, this is from 176 to 263)
Let left_input_lo, left_input_mi, left_input_hi be 88-bit limbs of the left element
Let right_input_lo, right_input_mi, right_input_hi be 88-bit limbs of the right element
Let foreign_modulus_lo, foreign_modulus_mi, foreign_modulus_hi be 88-bit limbs of the foreign modulus
Then the limbs of the result are
• result_lo = left_input_lo +/- right_input_lo - field_overflow * foreign_modulus_lo - 2^{88} * result_carry_lo
• result_mi = left_input_mi +/- right_input_mi - field_overflow * foreign_modulus_mi - 2^{88} * result_carry_mi + result_carry_lo
• result_hi = left_input_hi +/- right_input_hi - field_overflow * foreign_modulus_hi + result_carry_mi
field_overflow or or handles overflows in the field
result_carry_i are auxiliary variables that handle carries between limbs
Apart from the range checks of the chained inputs, we need to do an additional range check for a final bound to make sure that the result is less than the modulus, by adding 2^{3*88} - foreign_modulus to it. (This can be computed easily from the limbs of the modulus) Note that 2^{264} as limbs represents: (0, 0, 0, 1) then:
The upper-bound check can be calculated as
• bound_lo = result_lo - foreign_modulus_lo - bound_carry_lo * 2^{88}
• bound_mi = result_mi - foreign_modulus_mi - bound_carry_mi * 2^{88} + bound_carry_lo
• bound_hi = result_hi - foreign_modulus_hi + 2^{88} + bound_carry_mi
Which is equivalent to another foreign field addition with right input 2^{264}, q = 1 and s = 1
• bound_lo = result_lo + s * 0 - q * foreign_modulus_lo - bound_carry_lo * 2^{88}
• bound_mi = result_mi + s * 0 - q * foreign_modulus_mi - bound_carry_mi * 2^{88} + bound_carry_lo
• bound_hi = result_hi + s * 2^{88} - q * foreign_modulus_hi + bound_carry_mi
bound_carry_i or or are auxiliary variables that handle carries between limbs
The range check of bound can be skipped until the end of the operations and result is an intermediate value that is unused elsewhere (since the final result must have had the right amount of moduli subtracted along the way, meaning a multiple of the modulus). In other words, intermediate results could potentially give a valid witness that satisfies the constraints but where the result is larger than the modulus (yet smaller than 2^{264}). The reason that we have a final bound check is to make sure that the final result (min_result) is indeed the minimum one (meaning less than the modulus).
A more optimized version of these constraints is able to reduce by 2 the number of constraints and by 1 the number of witness cells needed. The idea is to condense the low and middle limbs in one longer limb of 176 bits (which fits inside our native field) and getting rid of the low carry flag. With this idea in mind, the sole carry flag we need is the one located between the middle and the high limbs.
##### Layout
The sign of the operation (whether it is an addition or a subtraction) is stored in the fourth coefficient as a value +1 (for addition) or -1 (for subtraction). The first 3 coefficients are the 3 limbs of the foreign modulus. One could lay this out as a double-width gate for chained foreign additions and a final row, e.g.:
colForeignFieldAddchain ForeignFieldAddfinal ForeignFieldAddfinal Zero
0left_input_lo (copy)result_lo (copy)min_result_lo (copy)bound_lo (copy)
1left_input_mi (copy)result_mi (copy)min_result_mi (copy)bound_mi (copy)
2left_input_hi (copy)result_hi (copy)min_result_hi (copy)bound_hi (copy)
3right_input_lo (copy)0 (check)
4right_input_mi (copy)0 (check)
5right_input_hi (copy)2^88 (check)
6field_overflow (copy?)1 (check)
7carrybound_carry
8
9
10
11
12
13
14
We reuse the foreign field addition gate for the final bound check since this is an addition with a specific parameter structure. Checking that the correct right input, overflow, and overflow are used shall be done by copy constraining these values with a public input value. One could have a specific gate for just this check requiring less constrains, but the cost of adding one more selector gate outweights the savings of one row and a few constraints of difference.
##### Integration
• Copy final overflow bit from public input containing value 1 - Range check the final bound
#### Foreign Field Multiplication
This gadget is used to constrain that
left_input * right_input = quotient * foreign_field_modulus + remainder
##### Documentation
For more details please see the Foreign Field Multiplication RFC
##### Notations
For clarity, we use more descriptive variable names in the code than in the RFC, which uses mathematical notations.
In order to relate the two documents, the following mapping between the variable names used in the code and those of the RFC can be helpful.
left_input0 => a0 right_input0 => b0 quotient0 => q0 remainder0 => r0
left_input1 => a1 right_input1 => b1 quotient1 => q1 remainder1 => r1
left_input2 => a2 right_input2 => b2 quotient2 => q2 remainder2 => r2
product1_lo => p10 product1_hi_0 => p110 product1_hi_1 => p111
carry0 => v0 carry1_lo => v10 carry1_hi => v11
quotient_bound0 => q'0 quotient_bound12 => q'12
quotient_bound_carry => q'_carry01
##### Suffixes
The variable names in this code uses descriptive suffixes to convey information about the positions of the bits referred to. When a word is split into up to n parts we use: 0, 1n (where n is the most significant). For example, if we split word x into three limbs, we’d name them x0, x1 and x2 or x[0], x[1] and x[2].
Continuing in this fashion, when one of those words is subsequently split in half, then we add the suffixes _lo and _hi, where hi corresponds to the most significant bits. For our running example, x1 would become x1_lo and x1_hi. If we are splitting into more than two things, then we pick meaningful names for each.
So far we’ve explained our conventions for a splitting depth of up to 2. For splitting deeper than two, we simply cycle back to our depth 1 suffixes again. So for example, x1_lo would be split into x1_lo_0 and x1_lo_1.
##### Parameters
• foreign_field_modulus := foreign field modulus $f$ (stored in gate coefficients 0-2)
• neg_foreign_field_modulus := negated foreign field modulus $f’$ (stored in gate coefficients 3-5)
• n := the native field modulus is obtainable from F, the native field’s trait bound
##### Witness
• left_input := left foreign field element multiplicand $~\in F_f$
• right_input := right foreign field element multiplicand $~\in F_f$
• quotient := foreign field quotient $~\in F_f$
• remainder := foreign field remainder $~\in F_f$
• carry0 := 2 bit carry
• carry1_lo := low 88 bits of carry1
• carry1_hi := high 3 bits of carry1
• product1_lo := lowest 88 bits of middle intermediate product
• product1_hi_0 := lowest 88 bits of middle intermediate product’s highest 88 + 2 bits
• product1_hi_1 := highest 2 bits of middle intermediate product
• quotient_bound := quotient bound for checking q < f
• quotient_bound_carry := quotient bound addition carry bit
##### Layout
The foreign field multiplication gate’s rows are laid out like this
colForeignFieldMulZero
0left_input0 (copy)remainder0 (copy)
1left_input1 (copy)remainder1 (copy)
2left_input2 (copy)remainder2 (copy)
3right_input0 (copy)quotient_bound01 (copy)
4right_input1 (copy)quotient_bound2 (copy)
5right_input2 (copy)product1_lo (copy)
6carry1_lo (copy)product1_hi_0 (copy)
7carry1_hi (plookup)
8carry0
9quotient0
10quotient1
11quotient2
12quotient_bound_carry
13product1_hi_1
14
#### Xor
Xor16 - Chainable XOR constraints for words of multiples of 16 bits.
• This circuit gate is used to constrain that in1 xored with in2 equals out
• The length of in1, in2 and out must be the same and a multiple of 16bits.
• This gate operates on the Curr and Next rows.
It uses three different types of constraints
• copy - copy to another cell (32-bits)
• plookup - xor-table plookup (4-bits)
• decomposition - the constraints inside the gate
The 4-bit nybbles are assumed to be laid out with 0 column being the least significant nybble. Given values in1, in2 and out, the layout looks like this:
ColumnCurrNext
0copy in1copy in1'
1copy in2copy in2'
2copy outcopy out'
3plookup0 in1_0
4plookup1 in1_1
5plookup2 in1_2
6plookup3 in1_3
7plookup0 in2_0
8plookup1 in2_1
9plookup2 in2_2
10plookup3 in2_3
11plookup0 out_0
12plookup1 out_1
13plookup2 out_2
14plookup3 out_3
One single gate with next values of in1', in2' and out' being zero can be used to check that the original in1, in2 and out had 16-bits. We can chain this gate 4 times as follows to obtain a gadget for 64-bit words XOR:
RowCircuitGatePurpose
0Xor16Xor 2 least significant bytes of the words
1Xor16Xor next 2 bytes of the words
2Xor16Xor next 2 bytes of the words
3Xor16Xor 2 most significant bytes of the words
4ZeroZero values, can be reused as generic gate
Info
We could halve the number of rows of the 64-bit XOR gadget by having lookups for 8 bits at a time, but for now we will use the 4-bit XOR table that we have. Rough computations show that if we run 8 or more Keccaks in one circuit we should use the 8-bit XOR table.
#### Not
We implement NOT, i.e. bitwise negation, as a gadget in two different ways, needing no new gate type for it. Instead, it reuses the XOR gadget and the Generic gate.
The first version of the NOT gadget reuses Xor16 by making the following observation: the bitwise NOT operation is equivalent to the bitwise XOR operation with the all one words of a certain length. In other words, $$\neg x = x \oplus 1^*$$ where $1^$ denotes a bitstring of all ones of length $|x|$. Let $x_i$ be the $i$-th bit of $x$, the intuition is that if $x_i = 0$ then XOR with $1$ outputs $1$, thus negating $x_i$. Similarly, if $x_i = 1$ then XOR with 1 outputs 0, again negating $x_i$. Thus, bitwise XOR with $1^$ is equivalent to bitwise negation (i.e. NOT).
Then, if we take the XOR gadget with a second input to be the all one word of the same length, that gives us the NOT gadget. The correct length can be imposed by having a public input containing the 2^bits - 1 value and wiring it to the second input of the XOR gate. This approach needs as many rows as an XOR would need, for a single negation, but it comes with the advantage of making sure the input is of a certain length.
The other approach can be more efficient if we already know the length of the inputs. For example, the input may be the input of a range check gate, or the output of a previous XOR gadget (which will be the case in our Keccak usecase). In this case, we simply perform the negation as a subtraction of the input word from the all one word (which again can be copied from a public input). This comes with the advantage of holding up to 2 word negations per row (an eight-times improvement over the XOR approach), but it requires the user to know the length of the input.
** NOT Layout using XOR **
Here we show the layout of the NOT gadget using the XOR approach. The gadget needs a row with a public input containing the all-one word of the given length. Then, a number of XORs follow, and a final Zero row is needed. In this case, the NOT gadget needs $\ceil(\frac{|x|}{16})$ Xor16 gates, that means one XOR row for every 16 bits of the input word.
RowCircuitGatePurpose
pubGenericLeading row with the public $1^*$ value
i…i+n-1Xor16Negate every 4 nybbles of the word, from least to most significant
i+nZeroConstrain that the final row is all zeros for correctness of Xor gate
** NOT Layout using Generic gates **
Here we show the layout of the NOT gadget using the Generic approach. The gadget needs a row with a public input containing the all-one word of the given length, exactly as above. Then, one Generic gate reusing the all-one word as left inputs can be used to negate up to two words per row. This approach requires that the input word is known (or constrained) to have a given length.
RowCircuitGatePurpose
pubGenericLeading row with the public $1^*$ value
iGenericNegate one or two words of the length given by the length of the all-one word
##### And
We implement the AND gadget making use of the XOR gadget and the Generic gate. A new gate type is not needed, but we could potentially add an And16 gate type reusing the same ideas of Xor16 so as to save one final generic gate, at the cost of one additional AND lookup table that would have the same size as that of the Xor. For now, we are willing to pay this small overhead and produce AND gadget as follows:
We observe that we can express bitwise addition as follows: $$A + B = (A \oplus B) + 2 \cdot (A & B)$$ where $\oplus$ is the bitwise XOR operation, $&$ is the bitwise AND operation, and $+$ is the addition operation. In other words, the value of the addition is nothing but the XOR of its operands, plus the carry bit if both operands are 1. Thus, we can rewrite the above equation to obtain a definition of the AND operation as follows: $$A & B = \frac{A + B - (A \oplus B)}{2}$$ Let us define the following operations for better readability:
a + b = sum
a ^ b = xor
a & b = and
Then, we can rewrite the above equation as follows: $$2 \cdot and = sum - xor$$ which can be expressed as a double generic gate.
Then, our AND gadget for $n$ bytes looks as follows:
• $n/8$ Xor16 gates
• 1 (single) Generic gate to check that the final row of the XOR chain is all zeros.
• 1 (double) Generic gate to check sum $a + b = sum$ and the conjunction equation $2\cdot and = sum - xor$.
Finally, we connect the wires in the following positions (apart from the ones already connected for the XOR gates):
• Column 2 of the first Xor16 row (the output of the XOR operation) is connected to the right input of the second generic operation of the last row.
• Column 2 of the first generic operation of the last row is connected to the left input of the second generic operation of the last row. Meaning,
• the xor in a ^ b = xor is connected to the xor in 2 \cdot and = sum - xor
• the sum in a + b = sum is connected to the sum in 2 \cdot and = sum - xor
## Setup
In this section we specify the setup that goes into creating two indexes from a circuit:
Note
The circuit creation part is not specified in this document. It might be specified in a separate document, or we might want to specify how to create the circuit description tables.
As such, the transformation of a circuit into these two indexes can be seen as a compilation step. Note that the prover still needs access to the original circuit to create proofs, as they need to execute it to create the witness (register table).
### Common Index
In this section we describe data that both the prover and the verifier index share.
URS (Uniform Reference String) The URS is a set of parameters that is generated once, and shared between the prover and the verifier. It is used for polynomial commitments, so refer to the poly-commitment specification for more details.
Note
Kimchi currently generates the URS based on the circuit, and attach it to the index. So each circuit can potentially be accompanied with a different URS. On the other hand, Mina reuses the same URS for multiple circuits (see zkapps for more details).
Domain. A domain large enough to contain the circuit and the zero-knowledge rows (used to provide zero-knowledge to the protocol). Specifically, the smallest subgroup in our field that has order greater or equal to n + ZK_ROWS, with n is the number of gates in the circuit. TODO: what if the domain is larger than the URS?
Ordering of elements in the domain
Note that in this specification we always assume that the first element of a domain is $1$.
Shifts. As part of the permutation, we need to create PERMUTS shifts. To do that, the following logic is followed (in pseudo code): (TODO: move shift creation within the permutation section?)
shifts[0] = 1 # first shift is identity
for i in 0..7: # generate 7 shifts
i = 7
shift, i = sample(domain, i)
while shifts.contains(shift) do:
shift, i = sample(domain, i)
shift[i] = shift
def sample(domain, i):
i += 1
shift = Field(Blake2b512(to_be_bytes(i)))
i += 1
shift = Field(Blake2b512(to_be_bytes(i)))
return shift, i
Public. This variable simply contains the number of public inputs. (TODO: actually, it’s not contained in the verifier index)
The compilation steps to create the common index are as follow:
1. If the circuit is less than 2 gates, abort.
2. Create a domain for the circuit. That is, compute the smallest subgroup of the field that has order greater or equal to n + ZK_ROWS elements.
3. Pad the circuit: add zero gates to reach the domain size.
4. sample the PERMUTS shifts.
### Lookup Index
If lookup is used, the following values are added to the common index:
LookupSelectors. The list of lookup selectors used. In practice, this tells you which lookup tables are used.
TableIds. This is a list of table ids used by the Lookup gate.
MaxJointSize. This is the maximum number of columns appearing in the lookup tables used by the lookup selectors. For example, the XOR lookup has 3 columns.
To create the index, follow these steps:
1. If no lookup is used in the circuit, do not create a lookup index
2. Get the lookup selectors and lookup tables (TODO: how?)
3. Concatenate runtime lookup tables with the ones used by gates
4. Get the highest number of columns max_table_width that a lookup table can have.
5. Create the concatenated table of all the fixed lookup tables. It will be of height the size of the domain, and of width the maximum width of any of the lookup tables. In addition, create an additional column to store all the tables’ table IDs.
For example, if you have a table with ID 0
123
567
000
and another table with ID 1
89
the concatenated table in a domain of size 5 looks like this:
123
567
000
890
000
with the table id vector:
table id
0
0
0
1
0
To do this, for each table:
• Update the corresponding entries in a table id vector (of size the domain as well) with the table ID of the table.
• Copy the entries from the table to new rows in the corresponding columns of the concatenated table.
• Fill in any unused columns with 0 (to match the dummy value)
6. Pad the end of the concatened table with the dummy value.
7. Pad the end of the table id vector with 0s.
8. pre-compute polynomial and evaluation form for the look up tables
9. pre-compute polynomial and evaluation form for the table IDs, only if a table with an ID different from zero was used.
### Prover Index
Both the prover and the verifier index, besides the common parts described above, are made out of pre-computations which can be used to speed up the protocol. These pre-computations are optimizations, in the context of normal proofs, but they are necessary for recursion.
pub struct ProverIndex<G: KimchiCurve> {
/// constraints system polynomials
#[serde(bound = "ConstraintSystem<G::ScalarField>: Serialize + DeserializeOwned")]
pub cs: ConstraintSystem<G::ScalarField>,
/// The symbolic linearization of our circuit, which can compile to concrete types once certain values are learned in the protocol.
#[serde(skip)]
pub linearization: Linearization<Vec<PolishToken<G::ScalarField>>>,
/// The mapping between powers of alpha and constraints
#[serde(skip)]
pub powers_of_alpha: Alphas<G::ScalarField>,
/// polynomial commitment keys
#[serde(skip)]
pub srs: Arc<SRS<G>>,
/// maximal size of polynomial section
pub max_poly_size: usize,
#[serde(bound = "ColumnEvaluations<G::ScalarField>: Serialize + DeserializeOwned")]
pub column_evaluations: ColumnEvaluations<G::ScalarField>,
/// The verifier index corresponding to this prover index
#[serde(skip)]
pub verifier_index: Option<VerifierIndex<G>>,
/// The verifier index digest corresponding to this prover index
#[serde_as(as = "Option<o1_utils::serialization::SerdeAs>")]
pub verifier_index_digest: Option<G::BaseField>,
}
### Verifier Index
Same as the prover index, we have a number of pre-computations as part of the verifier index.
#[serde_as]
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct LookupVerifierIndex<G: CommitmentCurve> {
pub joint_lookup_used: bool,
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
pub lookup_table: Vec<PolyComm<G>>,
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
pub lookup_selectors: LookupSelectors<PolyComm<G>>,
/// Table IDs for the lookup values.
/// This may be None if all lookups originate from table 0.
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
pub table_ids: Option<PolyComm<G>>,
/// Information about the specific lookups used
pub lookup_info: LookupInfo,
/// An optional selector polynomial for runtime tables
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
pub runtime_tables_selector: Option<PolyComm<G>>,
}
#[serde_as]
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct VerifierIndex<G: KimchiCurve> {
/// evaluation domain
#[serde_as(as = "o1_utils::serialization::SerdeAs")]
pub domain: D<G::ScalarField>,
/// maximal size of polynomial section
pub max_poly_size: usize,
/// polynomial commitment keys
#[serde(skip)]
pub srs: OnceCell<Arc<SRS<G>>>,
/// number of public inputs
pub public: usize,
/// number of previous evaluation challenges, for recursive proving
pub prev_challenges: usize,
// index polynomial commitments
/// permutation commitment array
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
pub sigma_comm: [PolyComm<G>; PERMUTS],
/// coefficient commitment array
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
pub coefficients_comm: [PolyComm<G>; COLUMNS],
/// coefficient commitment array
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
pub generic_comm: PolyComm<G>,
// poseidon polynomial commitments
/// poseidon constraint selector polynomial commitment
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
pub psm_comm: PolyComm<G>,
// ECC arithmetic polynomial commitments
/// EC addition selector polynomial commitment
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
/// EC variable base scalar multiplication selector polynomial commitment
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
pub mul_comm: PolyComm<G>,
/// endoscalar multiplication selector polynomial commitment
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
pub emul_comm: PolyComm<G>,
/// endoscalar multiplication scalar computation selector polynomial commitment
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
pub endomul_scalar_comm: PolyComm<G>,
/// Chacha polynomial commitments
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
pub chacha_comm: Option<[PolyComm<G>; 4]>,
/// Range check polynomial commitments
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
/// Foreign field addition gates polynomial commitments
#[serde(bound = "Option<PolyComm<G>>: Serialize + DeserializeOwned")]
/// Foreign field multiplication gates polynomial commitments
#[serde(bound = "Option<PolyComm<G>>: Serialize + DeserializeOwned")]
pub foreign_field_mul_comm: Option<PolyComm<G>>,
/// Xor commitments
#[serde(bound = "Option<PolyComm<G>>: Serialize + DeserializeOwned")]
pub xor_comm: Option<PolyComm<G>>,
/// Rot commitments
#[serde(bound = "Option<PolyComm<G>>: Serialize + DeserializeOwned")]
pub rot_comm: Option<PolyComm<G>>,
/// wire coordinate shifts
#[serde_as(as = "[o1_utils::serialization::SerdeAs; PERMUTS]")]
pub shift: [G::ScalarField; PERMUTS],
/// zero-knowledge polynomial
#[serde(skip)]
pub zkpm: OnceCell<DensePolynomial<G::ScalarField>>,
// TODO(mimoo): isn't this redundant with domain.d1.group_gen ?
/// domain offset for zero-knowledge
#[serde(skip)]
pub w: OnceCell<G::ScalarField>,
/// endoscalar coefficient
#[serde(skip)]
pub endo: G::ScalarField,
#[serde(bound = "PolyComm<G>: Serialize + DeserializeOwned")]
pub lookup_index: Option<LookupVerifierIndex<G>>,
#[serde(skip)]
pub linearization: Linearization<Vec<PolishToken<G::ScalarField>>>,
/// The mapping between powers of alpha and constraints
#[serde(skip)]
pub powers_of_alpha: Alphas<G::ScalarField>,
}
## Proof Construction & Verification
Originally, kimchi is based on an interactive protocol that was transformed into a non-interactive one using the Fiat-Shamir transform. For this reason, it can be useful to visualize the high-level interactive protocol before the transformation:
sequenceDiagram
participant Prover
participant Verifier
Note over Prover,Verifier: Prover produces commitments to secret polynomials
Prover->>Verifier: public input & witness commitment
Verifier->>Prover: beta & gamma
Prover->>Verifier: permutation commitment
opt lookup
Prover->>Verifier: sorted
Prover->>Verifier: aggreg
end
Note over Prover,Verifier: Prover produces commitment to quotient polynomial
Verifier->>Prover: alpha
Prover->>Verifier: quotient commitment
Note over Prover,Verifier: Verifier produces an evaluation point
Verifier->>Prover: zeta
Note over Prover,Verifier: Prover provides helper evaluations
Prover->>Verifier: the generic selector gen(zeta) & gen(zeta * omega)
Prover->>Verifier: the poseidon selector pos(zeta) & pos(zeta * omega)
Prover->>Verifier: negated public input p(zeta) & p(zeta * omega)
Note over Prover,Verifier: Prover provides needed evaluations for the linearization
Note over Verifier: change of verifier (change of sponge)
Prover->>Verifier: permutation poly z(zeta) & z(zeta * omega)
Prover->>Verifier: the 15 registers w_i(zeta) & w_i(zeta * omega)
Prover->>Verifier: the 6 sigmas s_i(zeta) & s_i(zeta * omega)
Prover->>Verifier: ft(zeta * omega)
opt lookup
Prover->>Verifier: sorted(zeta) & sorted(zeta * omega)
Prover->>Verifier: aggreg(zeta) & aggreg(zeta * omega)
Prover->>Verifier: table(zeta) & table(zeta * omega)
end
Note over Prover,Verifier: Batch verification of evaluation proofs
Verifier->>Prover: u, v
Note over Verifier: change of verifier (change of sponge)
Prover->>Verifier: aggregated evaluation proof (involves more interaction)
The Fiat-Shamir transform simulates the verifier messages via a hash function that hashes the transcript of the protocol so far before outputing verifier messages. You can find these operations under the proof creation and proof verification algorithms as absorption and squeezing of values with the sponge.
### Proof Structure
A proof consists of the following data structures:
/// Evaluations of a polynomial at 2 points
#[serde_as]
#[derive(Copy, Clone, Serialize, Deserialize, Default, Debug)]
#[cfg_attr(
feature = "ocaml_types",
derive(ocaml::IntoValue, ocaml::FromValue, ocaml_gen::Struct)
)]
#[serde(bound(
serialize = "Vec<o1_utils::serialization::SerdeAs>: serde_with::SerializeAs<Evals>",
deserialize = "Vec<o1_utils::serialization::SerdeAs>: serde_with::DeserializeAs<'de, Evals>"
))]
pub struct PointEvaluations<Evals> {
/// Evaluation at the challenge point zeta.
#[serde_as(as = "Vec<o1_utils::serialization::SerdeAs>")]
pub zeta: Evals,
/// Evaluation at zeta . omega, the product of the challenge point and the group generator.
#[serde_as(as = "Vec<o1_utils::serialization::SerdeAs>")]
pub zeta_omega: Evals,
}
/// Evaluations of lookup polynomials
#[serde_as]
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LookupEvaluations<Evals> {
/// sorted lookup table polynomial
pub sorted: Vec<Evals>,
/// lookup aggregation polynomial
pub aggreg: Evals,
// TODO: May be possible to optimize this away?
/// lookup table polynomial
pub table: Evals,
/// Optionally, a runtime table polynomial.
pub runtime: Option<Evals>,
}
// TODO: this should really be vectors here, perhaps create another type for chunked evaluations?
/// Polynomial evaluations contained in a ProverProof.
/// - **Chunked evaluations** Field is instantiated with vectors with a length that equals the length of the chunk
/// - **Non chunked evaluations** Field is instantiated with a field, so they are single-sized#[serde_as]
#[serde_as]
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ProofEvaluations<Evals> {
/// witness polynomials
pub w: [Evals; COLUMNS],
/// permutation polynomial
pub z: Evals,
/// permutation polynomials
/// (PERMUTS-1 evaluations because the last permutation is only used in commitment form)
pub s: [Evals; PERMUTS - 1],
/// coefficient polynomials
pub coefficients: [Evals; COLUMNS],
/// lookup-related evaluations
pub lookup: Option<LookupEvaluations<Evals>>,
/// evaluation of the generic selector polynomial
pub generic_selector: Evals,
/// evaluation of the poseidon selector polynomial
pub poseidon_selector: Evals,
}
/// Commitments linked to the lookup feature
#[serde_as]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(bound = "G: ark_serialize::CanonicalDeserialize + ark_serialize::CanonicalSerialize")]
pub struct LookupCommitments<G: AffineCurve> {
/// Commitments to the sorted lookup table polynomial (may have chunks)
pub sorted: Vec<PolyComm<G>>,
/// Commitment to the lookup aggregation polynomial
pub aggreg: PolyComm<G>,
/// Optional commitment to concatenated runtime tables
pub runtime: Option<PolyComm<G>>,
}
/// All the commitments that the prover creates as part of the proof.
#[serde_as]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(bound = "G: ark_serialize::CanonicalDeserialize + ark_serialize::CanonicalSerialize")]
pub struct ProverCommitments<G: AffineCurve> {
/// The commitments to the witness (execution trace)
pub w_comm: [PolyComm<G>; COLUMNS],
/// The commitment to the permutation polynomial
pub z_comm: PolyComm<G>,
/// The commitment to the quotient polynomial
pub t_comm: PolyComm<G>,
/// Commitments related to the lookup argument
pub lookup: Option<LookupCommitments<G>>,
}
/// The proof that the prover creates from a [ProverIndex](super::prover_index::ProverIndex) and a witness.
#[serde_as]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(bound = "G: ark_serialize::CanonicalDeserialize + ark_serialize::CanonicalSerialize")]
pub struct ProverProof<G: AffineCurve> {
/// All the polynomial commitments required in the proof
pub commitments: ProverCommitments<G>,
/// batched commitment opening proof
pub proof: OpeningProof<G>,
/// Two evaluations over a number of committed polynomials
pub evals: ProofEvaluations<PointEvaluations<Vec<G::ScalarField>>>,
/// Required evaluation for [Maller's optimization](https://o1-labs.github.io/mina-book/crypto/plonk/maller_15.html#the-evaluation-of-l)
#[serde_as(as = "o1_utils::serialization::SerdeAs")]
pub ft_eval1: G::ScalarField,
/// The public input
#[serde_as(as = "Vec<o1_utils::serialization::SerdeAs>")]
pub public: Vec<G::ScalarField>,
/// The challenges underlying the optional polynomials folded into the proof
pub prev_challenges: Vec<RecursionChallenge<G>>,
}
/// A struct to store the challenges inside a ProverProof
#[serde_as]
#[derive(Debug, Clone, Deserialize, Serialize)]
#[serde(bound = "G: ark_serialize::CanonicalDeserialize + ark_serialize::CanonicalSerialize")]
pub struct RecursionChallenge<G>
where
G: AffineCurve,
{
/// Vector of scalar field elements
#[serde_as(as = "Vec<o1_utils::serialization::SerdeAs>")]
pub chals: Vec<G::ScalarField>,
/// Polynomial commitment
pub comm: PolyComm<G>,
}
The following sections specify how a prover creates a proof, and how a verifier validates a number of proofs.
### Proof Creation
To create a proof, the prover expects:
• A prover index, containing a representation of the circuit (and optionaly pre-computed values to be used in the proof creation).
• The (filled) registers table, representing parts of the execution trace of the circuit.
Note
The public input is expected to be passed in the first Public rows of the registers table.
The following constants are set:
• EVAL_POINTS = 2. This is the number of points that the prover has to evaluate their polynomials at. ($\zeta$ and $\zeta\omega$ where $\zeta$ will be deterministically generated.)
• ZK_ROWS = 3. This is the number of rows that will be randomized to provide zero-knowledgeness. Note that it only needs to be greater or equal to the number of evaluations (2) in the protocol. Yet, it contains one extra row to take into account the last constraint (final value of the permutation accumulator). (TODO: treat the final constraint separately so that ZK_ROWS = 2)
The prover then follows the following steps to create the proof:
1. Ensure we have room in the witness for the zero-knowledge rows. We currently expect the witness not to be of the same length as the domain, but instead be of the length of the (smaller) circuit. If we cannot add ZK_ROWS rows to the columns of the witness before reaching the size of the domain, abort.
2. Pad the witness columns with Zero gates to make them the same length as the domain. Then, randomize the last ZK_ROWS of each columns.
3. Setup the Fq-Sponge.
4. Absorb the digest of the VerifierIndex.
5. Absorb the commitments of the previous challenges with the Fq-sponge.
6. Compute the negated public input polynomial as the polynomial that evaluates to $-p_i$ for the first public_input_size values of the domain, and $0$ for the rest.
7. Commit (non-hiding) to the negated public input polynomial.
8. Absorb the commitment to the public polynomial with the Fq-Sponge.
Note: unlike the original PLONK protocol, the prover also provides evaluations of the public polynomial to help the verifier circuit. This is why we need to absorb the commitment to the public polynomial at this point.
9. Commit to the witness columns by creating COLUMNS hidding commitments.
Note: since the witness is in evaluation form, we can use the commit_evaluation optimization.
10. Absorb the witness commitments with the Fq-Sponge.
11. Compute the witness polynomials by interpolating each COLUMNS of the witness. As mentioned above, we commit using the evaluations form rather than the coefficients form so we can take advantage of the sparsity of the evaluations (i.e., there are many 0 entries and entries that have less-than-full-size field elemnts.)
12. If using lookup:
• if using runtime table:
• check that all the provided runtime tables have length and IDs that match the runtime table configuration of the index we expect the given runtime tables to be sorted as configured, this makes it easier afterwards
• calculate the contribution to the second column of the lookup table (the runtime vector)
• If queries involve a lookup table with multiple columns then squeeze the Fq-Sponge to obtain the joint combiner challenge $j’$, otherwise set the joint combiner challenge $j’$ to $0$.
• Derive the scalar joint combiner $j$ from $j’$ using the endomorphism (TOOD: specify)
• If multiple lookup tables are involved, set the table_id_combiner as the $j^i$ with $i$ the maximum width of any used table. Essentially, this is to add a last column of table ids to the concatenated lookup tables.
• Compute the dummy lookup value as the combination of the last entry of the XOR table (so (0, 0, 0)). Warning: This assumes that we always use the XOR table when using lookups.
• Compute the lookup table values as the combination of the lookup table entries.
• Compute the sorted evaluations.
• Randomize the last EVALS rows in each of the sorted polynomials in order to add zero-knowledge to the protocol.
• Commit each of the sorted polynomials.
• Absorb each commitments to the sorted polynomials.
13. Sample $\beta$ with the Fq-Sponge.
14. Sample $\gamma$ with the Fq-Sponge.
15. If using lookup:
• Compute the lookup aggregation polynomial.
• Commit to the aggregation polynomial.
• Absorb the commitment to the aggregation polynomial with the Fq-Sponge.
16. Compute the permutation aggregation polynomial $z$.
17. Commit (hidding) to the permutation aggregation polynomial $z$.
18. Absorb the permutation aggregation polynomial $z$ with the Fq-Sponge.
19. Sample $\alpha’$ with the Fq-Sponge.
20. Derive $\alpha$ from $\alpha’$ using the endomorphism (TODO: details)
21. TODO: instantiate alpha?
22. Compute the quotient polynomial (the $t$ in $f = Z_H \cdot t$). The quotient polynomial is computed by adding all these polynomials together:
• the combined constraints for all the gates
• the combined constraints for the permutation
• TODO: lookup
• the negated public polynomial and by then dividing the resulting polynomial with the vanishing polynomial $Z_H$. TODO: specify the split of the permutation polynomial into perm and bnd?
23. commit (hiding) to the quotient polynomial $t$ TODO: specify the dummies
24. Absorb the the commitment of the quotient polynomial with the Fq-Sponge.
25. Sample $\zeta’$ with the Fq-Sponge.
26. Derive $\zeta$ from $\zeta’$ using the endomorphism (TODO: specify)
27. If lookup is used, evaluate the following polynomials at $\zeta$ and $\zeta \omega$:
• the aggregation polynomial
• the sorted polynomials
• the table polynonial
28. Chunk evaluate the following polynomials at both $\zeta$ and $\zeta \omega$:
• $s_i$
• $w_i$
• $z$
• lookup (TODO)
• generic selector
• poseidon selector
By “chunk evaluate” we mean that the evaluation of each polynomial can potentially be a vector of values. This is because the index’s max_poly_size parameter dictates the maximum size of a polynomial in the protocol. If a polynomial $f$ exceeds this size, it must be split into several polynomials like so: $$f(x) = f_0(x) + x^n f_1(x) + x^{2n} f_2(x) + \cdots$$
And the evaluation of such a polynomial is the following list for $x \in {\zeta, \zeta\omega}$:
$$(f_0(x), f_1(x), f_2(x), \ldots)$$
TODO: do we want to specify more on that? It seems unecessary except for the t polynomial (or if for some reason someone sets that to a low value)
29. Evaluate the same polynomials without chunking them (so that each polynomial should correspond to a single value this time).
30. Compute the ft polynomial. This is to implement Maller’s optimization.
31. construct the blinding part of the ft polynomial commitment see https://o1-labs.github.io/mina-book/crypto/plonk/maller_15.html#evaluation-proof-and-blinding-factors
32. Evaluate the ft polynomial at $\zeta\omega$ only.
33. Setup the Fr-Sponge
34. Squeeze the Fq-sponge and absorb the result with the Fr-Sponge.
35. Absorb the previous recursion challenges.
36. Compute evaluations for the previous recursion challenges.
37. Evaluate the negated public polynomial (if present) at $\zeta$ and $\zeta\omega$.
38. Absorb the unique evaluation of ft: $ft(\zeta\omega)$.
39. Absorb all the polynomial evaluations in $\zeta$ and $\zeta\omega$:
• the public polynomial
• z
• generic selector
• poseidon selector
• the 15 register/witness
• 6 sigmas evaluations (the last one is not evaluated)
40. Sample $v’$ with the Fr-Sponge
41. Derive $v$ from $v’$ using the endomorphism (TODO: specify)
42. Sample $u’$ with the Fr-Sponge
43. Derive $u$ from $u’$ using the endomorphism (TODO: specify)
44. Create a list of all polynomials that will require evaluations (and evaluation proofs) in the protocol. First, include the previous challenges, in case we are in a recursive prover.
45. Then, include:
• the negated public polynomial
• the ft polynomial
• the permutation aggregation polynomial z polynomial
• the generic selector
• the poseidon selector
• the 15 registers/witness columns
• the 6 sigmas
• optionally, the runtime table
46. if using lookup:
• add the lookup sorted polynomials
• add the lookup aggreg polynomial
• add the combined table polynomial
• if present, add the runtime table polynomial
47. Create an aggregated evaluation proof for all of these polynomials at $\zeta$ and $\zeta\omega$ using $u$ and $v$.
### Proof Verification
TODO: we talk about batch verification, but is there an actual batch operation? It seems like we’re just verifying an aggregated opening proof
We define two helper algorithms below, used in the batch verification of proofs.
#### Fiat-Shamir argument
We run the following algorithm:
1. Setup the Fq-Sponge.
2. Absorb the digest of the VerifierIndex.
3. Absorb the commitments of the previous challenges with the Fq-sponge.
4. Absorb the commitment of the public input polynomial with the Fq-Sponge.
5. Absorb the commitments to the registers / witness columns with the Fq-Sponge.
6. If lookup is used:
• If it involves queries to a multiple-column lookup table, then squeeze the Fq-Sponge to obtain the joint combiner challenge $j’$, otherwise set the joint combiner challenge $j’$ to $0$.
• Derive the scalar joint combiner challenge $j$ from $j’$ using the endomorphism. (TODO: specify endomorphism)
• absorb the commitments to the sorted polynomials.
7. Sample $\beta$ with the Fq-Sponge.
8. Sample $\gamma$ with the Fq-Sponge.
9. If using lookup, absorb the commitment to the aggregation lookup polynomial.
10. Absorb the commitment to the permutation trace with the Fq-Sponge.
11. Sample $\alpha’$ with the Fq-Sponge.
12. Derive $\alpha$ from $\alpha’$ using the endomorphism (TODO: details).
13. Enforce that the length of the $t$ commitment is of size PERMUTS.
14. Absorb the commitment to the quotient polynomial $t$ into the argument.
15. Sample $\zeta’$ with the Fq-Sponge.
16. Derive $\zeta$ from $\zeta’$ using the endomorphism (TODO: specify).
17. Setup the Fr-Sponge.
18. Squeeze the Fq-sponge and absorb the result with the Fr-Sponge.
19. Absorb the previous recursion challenges.
20. Compute evaluations for the previous recursion challenges.
21. Evaluate the negated public polynomial (if present) at $\zeta$ and $\zeta\omega$.
NOTE: this works only in the case when the poly segment size is not smaller than that of the domain.
22. Absorb the unique evaluation of ft: $ft(\zeta\omega)$.
23. Absorb all the polynomial evaluations in $\zeta$ and $\zeta\omega$:
• the public polynomial
• z
• generic selector
• poseidon selector
• the 15 register/witness
• 6 sigmas evaluations (the last one is not evaluated)
24. Sample $v’$ with the Fr-Sponge.
25. Derive $v$ from $v’$ using the endomorphism (TODO: specify).
26. Sample $u’$ with the Fr-Sponge.
27. Derive $u$ from $u’$ using the endomorphism (TODO: specify).
28. Create a list of all polynomials that have an evaluation proof.
29. Compute the evaluation of $ft(\zeta)$.
#### Partial verification
For every proof we want to verify, we defer the proof opening to the very end. This allows us to potentially batch verify a number of partially verified proofs. Essentially, this steps verifies that $f(\zeta) = t(\zeta) * Z_H(\zeta)$.
1. Commit to the negated public input polynomial.
2. Run the Fiat-Shamir argument.
3. Combine the chunked polynomials’ evaluations (TODO: most likely only the quotient polynomial is chunked) with the right powers of $\zeta^n$ and $(\zeta * \omega)^n$.
4. Compute the commitment to the linearized polynomial $f$. To do this, add the constraints of all of the gates, of the permutation, and optionally of the lookup. (See the separate sections in the constraints section.) Any polynomial should be replaced by its associated commitment, contained in the verifier index or in the proof, unless a polynomial has its evaluation provided by the proof in which case the evaluation should be used in place of the commitment.
5. Compute the (chuncked) commitment of $ft$ (see Maller’s optimization).
6. List the polynomial commitments, and their associated evaluations, that are associated to the aggregated evaluation proof in the proof:
• recursion
• public input commitment
• ft commitment (chunks of it)
• permutation commitment
• index commitments that use the coefficients
• witness commitments
• coefficient commitments
• sigma commitments
• lookup commitments
#### Batch verification of proofs
Below, we define the steps to verify a number of proofs (each associated to a verifier index). You can, of course, use it to verify a single proof.
1. If there’s no proof to verify, the proof validates trivially.
2. Ensure that all the proof’s verifier index have a URS of the same length. (TODO: do they have to be the same URS though? should we check for that?)
3. Validate each proof separately following the partial verification steps.
4. Use the PolyCom.verify to verify the partially evaluated proofs.
## Optimizations
• commit_evaluation: TODO
TODO
|
# Tag Info
6
Ok, Xenapior and Reynolds together have the right idea. But the explanation is a bit lacking so here is a image to explain it all and some further musings. First let us start by drawing an image (yes i know that is what they say in school for you to do but nobody does it). From the image we can see that there are 2 equal right triangles $V_2, A, C$ and \$...
6
An alternative way to formulate the problem is to define a function that gives the distance between points on the two curves, as a function of the curves' parameters. Then attempt to find the global minimum of this function. If the curves intersect, the minimum will be zero; otherwise the minimum will be some positive distance. To be explicit, given a pair ...
5
[Disclaimer: I think the following should work but have not actually coded it myself] I couldn't think of a "trivial" method of producing a yes/no answer but the following would be a reasonable approach to a practical solution to the question. Let's assume our curves are A(s) and B(t) with control points {A0, A1..An} and {B0,..Bm} respectively. It seems ...
4
Depends on the CAD program. But yes basically they do just what you describe they triangulate the model into a mesh/lines on demand and then display that. How they chose to do this depends on the CAD, most modernized codebases probably use buffers. But older cads may in fact use the old immediate mode draw calls. This said it is possible that the ...
3
Its called a geometric constraints solver (a good primer on subject). You can find a open source solver as part of Open Cascade but its a bit convoluted to get going. A simpler solution for just solving, but also 3D solver capable, is geosolver. Making your own (algebraic solution being easiest to write) is also not that hard, just a bit of work to make it ...
2
The cut length from the vertex is x*ctan(t/2), where t is the angle at this vertex.
2
I suppose you want an arc of C0 and C1 continuity between the line and an arc. As illustrated above, you already have a vertex A which is the intersection of an edge and an arc of which the center positioned at O and radius equal to R. The question is thus pure mathematical: given A,O,R, edge direction BA, and a corner radius r, find C,B, and T. For ...
2
Since you have a limited set of tools you are not actually doing a classical fitting. What you have is a discrete problem. And since you are looking for a somewhat easily drawn fit, no more than twice segmented for example. One way to approach this is to find all the points that match your curvature requirements. Then find the point x units away from point ...
1
The relative size of the spacing of knots is irrelevant for the NURBS curve. The only thing that matters is that they keep the relation. Note this may not be wise as parametrization may have other uses behind the scenes. Image 1: 3 differently parametrized knots result in same curve if knot values are relatively the same. So you can scale and offset knot ...
1
Since you're working on CAD software, you probably want some precise results. Here an algorithm that could work: For each side: Compute the segment's equation. Compute each round corner's circle equation. Compute the intersections between the segment and each circle. The 2 intersection points are the new endpoints for the line segment. This doesn't handle ...
1
There is no general algorithm for packing problems. Only some of the special cases have known, and optimal, solutions. If you are packing one shape then finding a reasonable solution is possible. Like the known cases of hexagonal packing etc. However, if you have multiple diffenently sized objects then easy just flew out of the door. Some heurestics have ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
High-energy pulsar light curves in an offset polar cap B-field geometry
# High-energy pulsar light curves in an offset polar cap B-field geometry
## Abstract
The light curves and spectral properties of more than 200 -ray pulsars have been measured in unsurpassed detail in the eight years since the launch of the hugely successful Fermi Large Area Telescope (LAT) -ray mission. We performed geometric pulsar light curve modelling using static, retarded vacuum, and offset polar cap (PC) dipole -fields (the latter is characterized by a parameter ), in conjunction with standard two-pole caustic (TPC) and outer gap (OG) emission geometries. In addition to constant-emissivity geometric models, we also considered a slot gap (SG) -field associated with the offset-PC dipole -field and found that its inclusion leads to qualitatively different light curves. We therefore find that the assumed -field and especially the -field structure, as well as the emission geometry (magnetic inclination and observer angles), have a great impact on the pulsar’s visibility and its high-energy pulse shape. We compared our model light curves to the superior-quality -ray light curve of the Vela pulsar (for energies MeV). Our overall optimal light curve fit (with the lowest value) is for the retarded vacuum dipole field and OG model. We found that smaller values of are favoured for the offset-PC dipole field when assuming constant emissivity, and larger values are favoured for variable emissivity, but not significantly so. When we increased the relatively low SG -fields we found improved light curve fits, with the inferred pulsar geometry being closer to best fits from independent studies in this case. In particular, we found that such a larger SG -field (leading to variable emissivity) gives a second overall best fit. This and other indications point to the fact that the actual -field may be larger than predicted by the SG model.
\ShortTitle
High-energy pulsar light curves in an offset-PC -field geometry \FullConference4th Annual Conference on High Energy Astrophysics in Southern Africa
25-27 August, 2016
Cape Town, South Africa
## 1 Introduction
The field of -ray pulsars has been revolutionised by the launch of the Fermi Large Area Telescope (LAT; [3]). Over the past eight years, Fermi has detected over 200 -ray pulsars and has furthermore measured their light curves and spectral characteristics in unprecedented detail. Fermi’s Second Pulsar Catalog (2PC; [2]) describes the properties of some 117 of these pulsars in the energy range 100 MeV100 GeV. In this paper, we will focus on the GeV band light curves of the Vela pulsar [1], the brightest persistent source in the -ray sky.
Physical emission models such as the slot gap (SG; [32]) and outer gap (OG; [10, 38]) fall short of fully explaining (global) magnetospheric characteristics, e.g., the particle acceleration and pair production, current closure, and radiation of a complex multi-wavelength spectrum. More recent developments include global magnetospheric models such as the force-free (FF) inside and dissipative outside (FIDO) model [24, 25], the wind models of, e.g., [36], and particle-in-cell simulations (PIC; [8, 9]). Although much progress has been made using these physical (or emission) models, geometric light curve modeling [16, 41, 42, 22, 37] still presents a crucial avenue for probing the pulsar magnetosphere in the context of traditional pulsar models. The most commonly used emission geometries include the two-pole caustic (TPC; the SG model may be its physical representation; [15]) and OG models and may be used to constrain the pulsar geometry (i.e., magnetic inclination angle and the observer viewing angle with respect to the spin axis ), as well as the -ray emission region’s location and extent. This may provide vital insight into the boundary conditions and help constrain the accelerator geometry of next-generation full radiation models.
The assumed -field structure is essential for predicting the light curves seen by the observer using geometric models, since photons are expected to be emitted tangentially to the local -field lines in the corotating pulsar frame [12]. Even a small difference in the magnetospheric structure will therefore have an impact on the light curve predictions. Additionally, we have also incorporated an SG -field associated with the offset-PC dipole -field (making this latter case an emission model), which allows us to calculate the emissivity in the acceleration region in the corotating frame from first principles.
In this paper, we investigate the impact of different magnetospheric structures (i.e., the static dipole [18], retarded vacuum dipole (RVD; [14]), and an offset-PC dipole -field solution [19, 20]), as well as the SG -field on the pulsar visibility and -ray pulse shape. In combination with the different -field solutions mentioned above, we assume standard TPC and OG emission geometries. In Section 2 we briefly describe the offset-PC dipole -field and its corresponding SG -field implemented in our code [16, 4]. We also investigate the effect of increasing the -field by a factor of a 100. In Section 3, we present our phase plots and model light curves for the Vela pulsar, and we compare our results to previous multi-wavelength studies. Our conclusions follow in Section 4.
## 2 The Offset-PC Magnetosphere
### 2.1 B-field structure
Several -field structures have been studied in pulsar models, including the static dipole, the RVD (a rotating vacuum magnetosphere which can in principle accelerate particles but do not contain any charges or currents), the FF (filled with charges and currents, but unable to accelerate particles since the accelerating -field is screened everywhere; [11]), and the offset-PC dipole. The offset-PC dipole solution analytically mimics deviations from the static dipole near the stellar surface and is azimuthally asymmetric, with field lines having a smaller curvature radius over half of the PC (in the direction of the PC offset) compared to those of the other half [19, 20]. Such small distortions in the -field structure can be due to retardation and asymmetric currents, thereby shifting the PCs by small amounts in different directions. A more realistic pulsar magnetosphere, i.e., a dissipative solution [29, 26, 28, 39, 27], would be one that is intermediate between the RVD and the FF fields.
The symmetric case involves an offset of both PCs, with respect to the magnetic () axis in the same direction and applies to neutron stars with some interior current distortions that produce multipolar components near the stellar surface [19, 20]. We study the effect of this simpler symmetric case on predicted light curves. The general expression for a symmetric offset-PC dipole -field in spherical coordinates in the magnetic frame (indicated by the primed coordinates, where ) is [20]
B′OPCs ≈ μ′r′3[cosθ′^r′+12(1+a)sinθ′^θ′−ϵsinθ′cosθ′sin(ϕ′−ϕ′0)^ϕ′], (1)
where the symbols have the same meaning as before [19, 20]. We choose the offset direction to be in the plane. The -field lines are distorted in all directions, with the distortion depending on parameters (related to the magnitude of the shift of the PC from the magnetic axis) and (we choose in what follows, with the offset being in the direction). If we set the symmetric case reduces to a symmetric static dipole.
The difference between our offset-PC field and a dipole field that is offset with respect to the stellar centre can be most clearly seen by performing a multipolar expansion of these respective fields. An offset dipolar field may be expressed (to lowest order) as the sum of a centred dipole and quadropolar terms ). Conversely, our offset-PC field may be written as
B′OPCs(r′,θ′,ϕ′) ≈ B′dip(r′,θ′)+O(ϵr′3). (2)
Therefore, we can see that our offset-PC model (Eq. [2]) consists of a centred dipole plus terms of order or . Since and , the latter terms present perturbations (e.g., poloidal and toroidal effects) to the centred dipole. These perturbed components of the distorted magnetic field were derived under the solenoidality condition [19, 20].
### 2.2 Incorporating a corresponding SG E-field
It is important to take the accelerating -field (-field parallel to the local -field,) into account when such expressions are available, since this will modulate the emissivity in the gap as opposed to geometric models where we assume constant per unit length in the corotating frame. For the SG case we implement the full -field in the rotational frame corrected for general relativistic (GR) effects (e.g., [32, 33]).
The low-altitude solution is given by (A.K. Harding 2015, private communication)
E∥,low ≈ −3E0νSGxa{κη4e1Acosα+14θ1+aPCη[e2AcosϕPC (3) +14ϵκe3A(2cosϕ′0−cos(2ϕPC−ϕ′0))]sinα}(1−ξ2∗),
where the symbols in Eq. (3) have the same meaning as in previous works [31, 32, 33, 5, 7, 4]. We choose the negative -axis toward to coincide with , labeling the “favourably curved” -field lines.
We approximate the high-altitude SG -field by [33]
E∥,high ≈ −38(ΩRc)3B0f(1)νSGxa{[1+13κ(5−8η3c)+2ηηLC]cosα (4) Missing or unrecognized delimiter for \Big
The critical scaled radius is where the high-altitude and low-altitude -field solutions are matched, with the critical radius, the stellar radius, , and the light cylinder radius (where the corotation speed equals the speed of light).
To obtain a general -field valid from to we use ([33]; Equation [59]):
E∥,SG≃E∥,lowexp[−(η−1)/(ηc−1)]+E∥,high. (5)
We matched the low-altitude and high-altitude -field solutions by solving on each -field line, where is the period and its time derivative [4].
### 2.3 Increasing the relatively low E-field
In the curvature radiation reaction (CRR, where the energy gain rate equals the CR loss rate) limit, we can determine the CR cutoff of the CR photon spectrum as follows [40]
ECR∼4E3/4∥,4ρ1/2curv,8GeV, (6)
with cm the curvature radius of the -field line and statvolt cm. Since the SG -field (see Section 2.2) is low (implying a CR cutoff around a few MeV), the phase plots for emission MeV display small caustics (Section 3.1) which result in “missing structure”. Therefore, we investigate the effect on the light curves of the offset-PC dipole -field and SG model combination when we increase the -field. As a test we multiply Eq. (5) by a factor 100. Using the above expression the estimated cutoff energy for our increased SG -field is now GeV, which is in the energy range of Fermi ( MeV).
## 3 Results
### 3.1 Phase plots and light curves
As an example we show phase plots and their corresponding light curves for the offset-PC dipole, for both the TPC (assuming uniform ) and SG (assuming variable ) models. Figure 1 is for the TPC model for . For larger values of the caustics extend over a larger range in , with the emission forming a “closed loop,” which is also a feature of the static dipole -field at . The TPC model is visible at nearly all angle combinations, since some emission occurs below the null charge surface (the geometric surface across which the charge density changes sign; [17]) for this model, in contrast to the OG model. However, for and below no light curves are visible, i.e., no emission is observed due to the “closed loop” structure of the caustics. The TPC light curves exhibit relatively more off-pulse emission than the OG ones. In the TPC model, emission is visible from both magnetic poles, forming double peaks in some cases, whereas in the OG model emission is visible from a single pole. One does obtain double peaks in the OG case, however, when the line of sight crosses the caustic at two different phases.
If we compare Figure 1 with the static dipole case (for ; not shown), we notice that a larger PC offset results in qualitatively different phase plots and light curves, e.g., modulation at small . Also, the caustics occupy a slightly larger region of phase space and seem more pronounced for larger and values. The light curve shapes are also slightly different.
Figure 2 is for the offset-PC dipole -field and , for a variable due to using an SG -field solution (with CR the dominating process for emitting -rays; see Sections 2.2). The caustic structure and resulting light curves are qualitatively different for various compared to the constant case. The caustics appear smaller and less pronounced for larger values (since becomes lower as increases), and extend over a smaller range in . If we compare Figure 2 with the case for (for variable ; not shown) we note a new emission structure close to the PCs for small values of and . This reflects the boosted -field on the “favourably curved” -field lines. In Figure 2 a smaller region in phase space filled. The light curves generally display only one broad peak with less off-peak emission compared to Figure 1. As and increase, more peaks become visible, with emission still visible from both poles as seen for larger and values, e.g., and .
If we compare Figures 1 with 2, we notice that when we take into account, the phase plots and light curves change considerably. For example, for in the constant case, a “closed loop” emission pattern is visible in the phase plot, which is different compared to the small “wing-like” emission pattern in the variable case. Therefore, we see that both the -field and -field have an impact on the predicted light curves. This small “wing-like” caustic pattern is due to the fact that we only included photons in the phase plot with energies MeV. Given the relatively low -field, there are only a few photons with energies exceeding MeV.
In Figure 3 we present the phase plots and light curves for the SG -field (increased by a factor 100) for the offset-PC dipole and SG model solution, with . If we compare Figure 3 with Figure 2 we notice that more phase space is filled by caustics, especially at larger . At the visibility is again enhanced. The caustic structure becomes wider and more pronounced, with extra emission features arising as seen at larger and values. This leads to small changes in the light curve shapes. At smaller values, the emission around the PC forms a circular pattern that becomes smaller as increases. These rings around the PCs become visible since the low -field is boosted, leading to an increase in bridge (region between the first and second peak of a light curve) emission as well as higher signal-to-noise ratio. At low the background becomes feature-rich, but not at significant intensities, however.
### 3.2 Comparison of best-fit parameters for different models
We next follow the same approach as a previous study [37] to compare the various optimal solutions of the different models. We determine the difference between the scaled of the optimal model, , and the other models () using
Δξ2 =ξ2−ξ2opt=Ndof(χ2/χ2opt−1). (7)
with degrees of freedom . We considered two approaches: we found the best fit (i) per -field and model combination (), and (ii) overall (for all -field and model combinations, )1.
In Figure 4 we label the different -field structures assumed in the various models as well as the overall comparison along the -axis, and plot and on the -axis. We represent the TPC geometry with a circle, the OG with a square, and for the offset-PC dipole field we represent the various values for constant by different coloured stars, for variable by different coloured left pointing triangles, and for the case of by different coloured upright triangles, as indicated in the legend. The dashed horizontal lines indicate the confidence levels we obtained for degrees of freedom. These confidence levels are used as indicators of when to reject or accept an alternative fit compared to the optimum fit.
For the static dipole field the TPC model gives the optimum fit and the OG model lies within of this fit, implying that the OG geometry may provide an acceptable alternative fit to the data in this case. For the RVD field the TPC model is significantly rejected beyond the level (not shown on plot), and the OG model is preferred. We show three cases for the offset-PC dipole field, including the TPC model assuming constant , the SG model assuming variable , and the latter is for an -field multiplied by a factor of . The optimal fits for the offset-PC dipole field and TPC model reveal that a smaller offset is generally preferred for constant , while a larger offset is preferred for variable (but not significantly), with all alternative fits falling within of these. However, when we increase -field, a smaller offset is preferred for the SG and variable case. When we compare all model and -field combinations with the overall best fit (i.e., rescaling the values of all combinations using the optimal fit involving the RVD -field and OG model), we notice that the static dipole and TPC model falls within , whereas the static OG model lies within . We also note that the usual offset-PC dipole -field and TPC model combination (for all values) is above (with some fits ), but the offset-PC dipole -field and SG model combination (for all values) is significantly rejected (). However, the case of the offset-PC dipole field and a higher SG -field for all values leads to a recovery, since all the fits fall within or and delivers an overall optimal fit for , second only to the RVD and OG model fit.
Several multi-wavelength studies have been performed for Vela, using the radio, X-ray, and -ray data, in order to find constraints on and . We graphically summarise the best-fit and , with errors, from this and other works in Figure 5. We notice that the best fits generally prefer a large or or both. It is encouraging that many of the best-fit solutions lie near the inferred from the pulsar wind nebula (PWN) torus fitting [35], notably for the RVD -field. A significant fraction of fits furthermore lie near the diagonal, i.e., they prefer a small impact angle, most probably due to radio visibility constraints [22]. For an isotropic distribution of pulsar viewing angles, one expects values to be distributed as between , i.e., large values are much more likely than small values, which seems to agree with the large best-fit values we obtain. There seems to be a reasonable correspondence between our results obtained for geometric models and those of other authors, but less so for the offset-PC dipole -field, and in particular for the SG -field case. The lone fit near may be explained by the fact that a very similar fit, but one with slightly worse , is found at . If we discard the non-optimal TPC / SG fits, we see that the optimal fits will cluster near the other fits at large and . Although our best fits for the offset-PC dipole -field are clustered, it seems that increasing leads to a marginal decrease in for the TPC model (light green) and opposite for SG (dark green), but not significantly. For our increased SG -field case (brown) we note that the fits now cluster inside the gray area above the fits for the static dipole and TPC, and offset-PC dipole for both the TPC and SG geometries.
## 4 Conclusions
We investigated the impact of different magnetospheric structures (i.e., static dipole, RVD, and a symmetric offset-PC dipole fields) on predicted -ray pulsar light curve characteristics. For the offset-PC dipole field we only considered the TPC (assuming uniform ) and SG (modulating the using the -field which is corrected for GR effects up to high altitudes) models. We concluded that the magnetospheric structure and emission geometry have an important effect on the predicted -ray pulsar light curves. However, the presence of an -field may have an even greater effect than small changes in the -field and emission geometries.
We fit our model light curves to the observed Fermi-measured Vela light curve for each -field and geometric model combination. We found that the RVD field and OG model combination fit the observed light curve the best for and an unscaled . As seen in Figure 4, for the RVD field an OG model is significantly preferred over the TPC model, given the characteristically low off-peak emission. For the other field and model combinations there was no significantly preferred model (per -field), since all the alternative models may provide an acceptable alternative fit to the data, within . The offset-PC dipole field for constant favoured smaller values of , and for variable larger values, but not significantly so (). When comparing all cases (i.e., all -fields), we noted that the offset-PC dipole field for variable was significantly rejected ().
Since we wanted to compare our model light curves to Fermi data we increased the usual low SG -field by a factor of 1002 (leading to a spectral cutoff GeV). The increased -field also had a great impact on the phase plots, e.g., extended caustic structures and new emission features as well as different light curve shapes emerged. We noted that a smaller was again (as in the TPC case) preferred, although not significantly (). When we compared this case to the other -field and model combinations, we found statistically better fits for all values, with an optimal fit at and for being second in quality only to the RVD and OG model fit.
We found reasonable correspondence between our results obtained for geometric models and those of other independent studies. We noted that the optimal fits generally clustered near the other fits at large and . For our increased SG -field and offset-PC dipole combination, we noted that these fits now clustered at larger and .
There have been several indications that the SG -field may be larger than initially thought, as confirmed by this study. (i) Population synthesis studies found that the SG -ray luminosity may be too low, pointing to an increased -field and / or particle current through the gap, e.g., [37]. (ii) If the -field is too low, one is not able to reproduce the observed spectral cutoffs of a few GeV (Section 2.3; [2]). (iii) A larger -field (increased by a factor of 100) led to statistically improved fits with respect to the light curves. (iv) The inferred best-fit and parameters for this -field clustered near the best fits of independent studies. (v) A larger SG -field also increased the particle energy gain rates leading to CRR being reached close to the stellar surface.
Independent multi-wavelength studies have considered many other pulsars, in addition to the Vela pulsar. For example, Ng & Romani [34, 35] used torus and jet fitting to constrain of a few X-ray pulsars, and obtained a consistent value of . Johnson et al. [22] and Pierbattista et al. [37] fitted the radio and -ray light curves of millisecond and younger pulsar populations respectively using standard geometric models. DeCesar et al. [13] constrained the and angles of a handful of pulsars using standard emission geometries coupled with the FF -field. Overall, there seems to be reasonable consistency between the best-fit geometries derived using the various models.
A number of studies have lastly considered signatures in the polarisation domain for different -field geometries, radiation mechanisms, and emission sites, e.g., [16, 9, 21]. This avenue may well prove very important in future to aid in differentiating between the various pulsar models, in addition to spectral and light curve measurements.
###### Acknowledgements.
We thank Marco Pierbattista, Tyrel Johnson, Lucas Guillemot, and Bertie Seyffert for fruitful discussions. This work is based on the research supported wholly / in part by the National Research Foundation of South Africa (NRF; Grant Numbers 87613, 90822, 92860, 93278, and 99072). The Grantholder acknowledges that opinions, findings and conclusions or recommendations expressed in any publication generated by the NRF supported research is that of the author(s), and that the NRF accepts no liability whatsoever in this regard. A.K.H. acknowledges the support from the NASA Astrophysics Theory Program. C.V. and A.K.H. acknowledge support from the Fermi Guest Investigator Program.
### Footnotes
1. We therefore first scale the values using the optimal value obtained for a particular -field, and second we scale these using the overall optimal value irrespective of -field.
2. This number is not unreasonable, especially in light of the observed spectral high-energy spectral cutoffs. Since pulsars have high local -field strengths and we expect that , such high -fields are realistic. A larger gap width is also likely, and this will further increase .
### References
1. A. A. Abdo, M. Ackermann, W. B. Atwood et al., Fermi Large Area Telescope Observations of the Vela Pulsar, ApJ 696 1084 (2009).
2. A. A. Abdo, M. Ajello, A. Allafort et al., The Second Fermi Large Area Telescope Catalog of Gamma-Ray Pulsars, ApJS 208 17 (2013).
3. W. B. Atwood, A. A. Abdo, M. Ackermann et al., The Large Area Telescope on the Fermi Gamma-Ray Space Telescope Mission, ApJ 697 1071 (2009).
4. M. Barnard, C. Venter, and A. K. Harding, The Effect of an Offset Polar Cap Dipolar Magnetic Field on the Modeling of the Vela Pulsar’s Gamma-Ray Light Curves, ApJ 832 107 (2016).
5. M. Breed, C. Venter, A. K. Harding, and T. J. Johnson, Implementation of an Offset-dipole Magnetic Field in a Pulsar Modelling Code, in proceedings of SAIP2013: the 58 Ann. Conf. of the SA Institute of Physics ed. R. Botha and T. Jili, 350 (2014).
6. M. Breed, C. Venter, A. K. Harding, and T. J. Johnson, The Effect of Different Magnetospheric Structures on Predictions of Gamma-Ray Pulsar Light Curves, in proceeding of SAIP2012: the 57 Ann. Conf. of the SA Institute of Physics ed. J. Janse van Rensburg, 316 (2015).
7. M. Breed, C. Venter, A. K. Harding, and T. J. Johnson, The Effect of an Offset-dipole Magnetic Field on the Vela Pulsar’s Gamma-Ray Light Curves, in proceedings of SAIP2014: the 59 Ann. Conf. of the SA Institute of Physics ed. C. Engelbrecht and S. Karataglidis, 311 (2015).
8. B. Cerutti, A. A. Philippov, and A. Spitkovsky, Modelling High-Energy Pulsar Light Curves from First Principles, MNRAS 457 2401 (2016).
9. B. Cerutti, J. Mortier, and A. A. Philippov, Polarized Synchrotron Emission from the Equatorial Current Sheet in Gamma-Ray Pulsars, MNRAS 463 L89 (2016).
10. K. S. Cheng, C. Ho, and M. Ruderman, Energetic Radiation from Rapidly Spinning Pulsars. I Outer Magnetosphere Gaps. II VELA and Crab, ApJ 300 500 (1986).
11. I. Contopoulos, D. Kazanas, and C. Fendt, The Axisymmetric Pulsar Magnetosphere, ApJ 511 351 (1999).
12. J. K. Daugherty, and A. K. Harding, Electromagnetic Cascades in Pulsars, ApJ 252 337 (1982).
13. M. E. DeCesar, Using Fermi Large Area Telescope Observations to Constrain the Emission and Field Geometries of Young Gamma-Ray Pulsars and to Guide Millisecond Pulsar Searches, PhD thesis, Univ. of Maryland, College Park (2013).
14. A. J. Deutsch, The Electromagnetic Field of an Idealized Star in Rigid Rotation in Vacuo, AnAp 18 1 (1955).
15. J. Dyks, and B. Rudak, Two-Pole Caustic Model for High-Energy Light Curves of Pulsars, ApJ 598 1201 (2003).
16. J. Dyks, A. K. Harding, and B. Rudak, Relativistic Effects and Polarization in Three High-Energy Pulsar Models, ApJ 606 1125 (2004).
17. P. Goldreich, and W. H. Julian, Pulsar Electrodynamics, ApJ 157 869 (1969).
18. D. J. Griffiths, Introduction to Electrodynamics, ed.; San Francisco: Pearson Benjamin Cummings (1995).
19. A. K. Harding, and A. G. Muslimov, Pulsar Pair Cascades in a Distorted Magnetic Dipole Field, ApJL 726 L10 (2011).
20. A. K. Harding, and A. G. Muslimov, Pulsar Pair Cascades in Magnetic Fields with Offset Polar Caps, ApJ 743 181 (2011).
21. A. K. Harding, and C. Kalapotharakos, in preparation, (2017).
22. T. J. Johnson, C. Venter, A. K. Harding et al., Constraints on the Emission Geometries and Spin Evolution of Gamma-Ray Millisecond Pulsars, ApJS 213 6 (2014).
23. S. Johnston, G. Hobbs, S. Vigeland et al., Evidence For Alignment of the Rotation and Velocity Vectors in Pulsars, MNRAS 364 1397 (2005).
24. C. Kalapotharakos, and I. Contopoulos, Three-dimensional Numerical Simulations of the Pulsar Magnetosphere: Preliminary Results, A&A 496 495 (2009).
25. C. Kalapotharakos, A. K. Harding, and D. Kazanas, Gamma-Ray Emission in Dissipative Pulsar Magnetospheres: From Theory to Fermi Observations, ApJ 793 97 (2014).
26. C. Kalapotharakos, D. Kazanas, A. K. Harding, and I. Contopoulos, Toward a Realistic Pulsar Magnetosphere, ApJ 749 2 (2012).
27. J. G. Li, Electromagnetic and Radiative Properties of Neutron Star Magnetospheres, PhD thesis, Princeton Univ., New Jersey (2014).
28. J. Li, A. Spitkovsky, and A. Tchekhovskoy, Resistive Solutions for Pulsar Magnetospheres, ApJ 746 60 (2012).
29. A. Lichnerowicz, Relativistic Hydrodynamics and Magnetohydrodynamics, ed.; New York: Benjamin, Inc. (1967).
30. W. Lowrie, A Student’s Guide to Geophysical Equations, ed.; Cambridge University Press (2011).
31. A. G. Muslimov, and A. K. Harding, Toward the Quasi-Steady State Electrodynamics of a Neutron Star, ApJ 485 735 (1997).
32. A. G. Muslimov, and A. K. Harding, Extended Acceleration in Slot Gaps and Pulsar High-Energy Emission, ApJ 588 430 (2003).
33. A. G. Muslimov, and A. K. Harding, High-Altitude Particle Acceleration and Radiation in Pulsar Slot Gaps, ApJ 606 1143 (2004).
34. C.-Y. Ng, and R. W. Romani, Fitting Pulsar Wind Tori., ApJ 601 479 (2004).
35. C.-Y. Ng, and R. W. Romani, Fitting Pulsar Wind Tori. II. Error Analysis and Applications, ApJ 673 411 (2008).
36. J. Pétri, and G. Dubus, Implication of the Striped Pulsar Wind Model for Gamma-Ray Binaries, MNRAS 417 532 (2011).
37. M. Pierbattista, A. K. Harding, I. A. Grenier et al., Light-curve Modelling Constraints on the Obliquities and Aspect Angles of the Young Fermi Pulsars, A&A 575 A3 (2015).
38. R. W. Romani, and I.-A. Yadigaroglu, Gamma-Ray Pulsars: Emission Zones and Viewing Geometries, ApJ 438 314 (1995).
39. A. Tchekhovskoy, A. Spitkovsky, and J. G. Li, Time-dependent 3D Magnetohydrodynamic Pulsar Magnetospheres: Oblique Rotators, MNRAS 435 L1 (2013).
40. C. Venter, and O. C. de Jager, Accelerating High-energy Pulsar Radiation Codes, ApJ 725 1903 (2010).
41. C. Venter, A. K. Harding, and L. Guillemot, Probing Millisecond Pulsar Emission Geometry Using Light Curves from the Fermi/Large Area Telescope, ApJ 707 800 (2009).
42. K. P. Watters, R. W. Romani, P. Weltevrede, and S. Johnston, An Atlas for Interpreting Gamma-Ray Pulsar Light Curves, ApJ 695 1289 (2009).
You are adding the first comment!
How to quickly get a good reply:
• Give credit where it’s due by listing out the positive aspects of a paper before getting into which changes should be made.
• Be specific in your critique, and provide supporting evidence with appropriate references to substantiate general statements.
• Your comment should inspire ideas to flow and help the author improves the paper.
The better we are at sharing our knowledge with each other, the faster we move forward.
The feedback must be of minumum 40 characters
|
Note: this page contains legacy resources that are no longer supported. You are free to continue using these materials but we can only support our current worksheets, available as part of our membership offering.
# Teaching division to your child
This section is a brief overview of math division. It covers the concept of sharing in equal amounts, the basic division operation and long division. The sections most relevant to you will depend on your child’s level. Use the information and resources to help review and practice what your child’s teacher will have covered in the classroom.
## Introducing division
When you start teaching division to your child you should introduce division as being a sharing operation where objects are shared (or divided) into a number of groups of equal number.
Once you have built an understanding of the concept of division you can try using these division worksheets. When teaching early division you should also discuss that division has an opposite. Discuss how division is about separating sets, while the opposite type of math, called multiplication is about combining sets. Explore this relationship with your child as it will be important when recalling basic facts to solve division problems. Introduce fact families (e.g. 5 x 3 =15, 3 x 5 = 15, 15 ÷ 3 = 5, 15 ÷ 5 = 3).
## Dividing numbers
After your child grasps the concept of dividing and the relationship with multiplication you can start working with numbers. Be sure your child is familiar with the format and signs for division
With the concept grasped, teaching division will become more about guided practice to help your child to become familiar with the division operation (although it’s really going to be a different type of multiplication practice.) Start by practicing division by 1, 2 and 3 and then gradually move up to 9. Use the worksheets to help.
## Division with remainders
Your child will most likely come across or ask about situations where division “does not work.” These can be explained with the introduction of the remainder. It is an important idea to understand as the division of larger numbers will require the “carrying” of this remainder.
## Teaching division with larger numbers
There are a number of methods for dividing larger numbers. One of these is shown below:
These printable worksheets will provide practice with similar types of division problems.
## Long Division
There are different methods for dividing multi-digit numbers (long division). One way is a combination of estimation/ trial and error and multiplication. There is also a commonly used algorithmic method that is well explained and illustrated here at mathisfun.com.The example below shows the same algorithmic steps alongside place value blocks to help show what is actually happening during the division process.
Watch an animated mini-lesson showing how to do long division . Note. It shows the same steps as below.
Divide 368 by 16. In other words, we take 368 and share it with 16 equal groups.
Start with the hundreds. There are 3 hundreds. We cannot share 3 equally with 16 groups.
We need to break the hundreds into tens. 3 hundreds equal 30 tens. So with the 6 tens we started with we now have 36 tens. We can start sharing. We can share 2 tens with each of the 16 groups.
We have used up 32 of our tens. We still have four tens to share.
We need to break the tens into ones. 4 tens equal 40 ones. So with the 8 ones we started with we now have 48 ones. We can share 3 ones with each of the 16 groups.
Once you have worked through the steps above with your children try a “hands on” division exercise using money. For example, share $2.38 equally with 14 groups (238 ÷ 14). Start with the dollars; you cannot share two dollars equally so you will need to change them into twenty$0.10 coins. Next share the tens; you can put one ten (a $0.10 coin) into each of the fourteen groups. You will have nine$0.10 coins left which you will need to change into ninety $0.01 coins. This leaves ninety-eight$0.01 coins which can be shared with 7 in each of the groups.
Repeat with different amounts of money and numbers of groups. Write the algorithmic steps as you go.
You can practice long division with this worksheet generator or you can try this Multiplication/ Division Worksheet Generator. It also .provides limitless division questions that can be printed out..
## Recap
This brief overview should highlight the close relationship between division and multiplication.
|
# Conjecture, parametrization and data distribution
Up front warning: This is a very inside baseball post, and I’m the only person who plays this particular variant of the game. This blog post is mostly a mix of notes to self to and sharing my working.
I’m in the process of trying to rewrite the Hypothesis backend to use the Conjecture approach.
At this point the thing I was originally worried was intractable – shrinking of data – is basically solved. Conjecture shrinks as well as or better than Hypothesis. There are a few quirks to still pay attention to – the shrinking can always be improved, and I’m still on the fence as to whether some of the work I have with explicit costing and output based shrink control is useful (I think it’s probably not), but basically I could ship what I have today for shrinking and it would be fine.
However I’m discovering another problem: The other major innovative area of Hypothesis is its parametrized approach to data generation. More generally, I’m finding that getting great quality initial data out of Conjecture is hard.
This manifests in two major ways:
1. It can be difficult to get good data when you also have good shrinking because you want to try nasty distributions. e.g. just generating 8 bytes and converting it to an IEEE 754 binary float representation produces great shrinking, but a fairly sub-par distribution – e.g. the probability of generating NaN is 1 in 2048 (actually very slightly lower).
2. The big important feature of Hypothesis’s parametrization is correlated output. e.g. you can’t feasibly generate a list of 100 positive integers by chance if you’re generating each element independently. Correlated output is good for finding bugs.
1 is relatively easily solved by letting data generators participate in the initial distribution: Instead of having the signature draw_bytes(self, n) you have the signature draw_bytes(self, n, distribution=uniform). So you can let the floating point generator specify an alternative distribution that is good at hitting special case floating point numbers without worrying about how it affects distributions. Then, you run the tests in two modes: The first where you’re building the data as you go and use the provided distributions, the second where you’re drawing from a pre-allocated block of data and ignore the distribution entirely.
This is a bit low-level unfortunately, but I think it’s mostly a very low level problem. I’m still hoping for a better solution. Watch this space.
For the second part… I think I can just steal Hypothesis’s solution to some degree. Instead of the current case where strategies expose a single function draw_value(self, data) they can now expose functions draw_parameter(self, data) and draw_value(self, data, parameter). A normal draw call then just does strategy.draw_value(data, strategy.draw_parameter(data)), but you can use alternate calls to induce correlation.
There are a couple problems with this:
1. It significantly complicates the usage pattern: I think the parametrization is one of the bits of Hypothesis people who look at the internals least understand, and one of the selling points of Conjecture was “You just write functions”. On the other hand I’m increasingly not sold on “You just write functions” as a good thing: A lot of the value of Hypothesis is the strategies library, and having a slightly more structured data type there is quite useful. It’s still easy to go from a function from testdata to a value to a strategy, so this isn’t a major loss.
2. It’s much less language agnostic. In statically typed languages you need some way to encode different strategies having different parameter types, ideally without this being exposed in the strategy (because then strategies don’t form a monad, or even an applicative). You can solve this problem a bit by making parameters an opaque identifier and keeping track of them in some sort of state dictionary on the strategy, but that’s a bit gross.
3. Much more care with parameter design is needed than in Hypothesis because the parameter affects the shrinking. As long as shrinking of the parameter works sensibly this should be OK, but this can become much more complicated. An example of where this gets complicated later.
4. I currently have no good ideas how parameters should work for flatmap, and only some bad ones. This isn’t a major problem because you can fall back to a slightly worse distribution but it’s annoying because Conjecture previously had the property that the monadic and applicative interfaces were equivalently good.
Here’s an example of where parametrization can be a bit tricky:
Suppose you have the strategy one_of(s1, …, sn) – that is, you have n strategies and you want to pick a random one and then draw from that.
One natural way to parametrize this is as follows: Pick a random non-empty subset of {1, .., n}. Those are the enabled alternatives. Now pick a parameter for each of these options. Drawing a value is then picking a random one of the enabled alternatives and feeding it its parameter.
There are a couple major problems with this, but the main one is that it shrinks terribly.
First off: The general approach to shrinking directions Hypothesis takes for alternation is that earlier branches are preserved. e.g. if I do integers() | text() we’ll prefer integers. If I do text() | integers() we’ll prefer text. This generally works quite well. Conjecture’s preference for things that consume less data slightly ruins this (e.g. The integer 1 will always be preferred to the string “antidisestablishmentarianism” regardless of the order), but not to an intolerable degree, and it would be nice to preserve this property.
More generally, we don’t want a bad initial parameter draw to screw things up for us. So for example if we have just(None) | something_really_complicated() and we happen to draw a parameter which only allows the second, but it turns out this value doesn’t matter at all, we really want to be able to simplify to None.
So what we need is a parameter that shrinks in a way that makes it more permissive. The way to do this is to:
1. Draw n bits.
2. Invert those n bits.
3. If the result is zero, try again.
4. Else, return a parameter that allows all set bits.
The reason for this is that the initially drawn n bits will shrink towards zero, so as you shrink, the parameter will have more set bits.
This then presents two further problems that need solving.
The next problem is that if we pick options through choice(enabled_parameters) then this will change as we enable more things. This may sometimes work, but in general will require difficult to manage simultaneous shrinks to work well. We want to be able to shrink the parameter and the elements independently if at all possible.
So what we do is rejection sampling: We generate a random number from one to n, then if that bit is set we accept it, if not we start again. If the number of set bits is very low this can be horrendously inefficient, but we can short-circuit that problem by using the control over the distribution of bytes suggested above!
The nice thing about doing it this way is that we can mark the intermediate draws as deletable, so they get discarded and if you pay no attention to the instrumentation behind the curtain it looks like our rejection sampling magically always draws the right thing on its first draw. We can then try bytewise shrinking of the parameter, which leads to a more permissive set of options (that could then later allow us to shrink this), and the previously chosen option remains stable.
This then leads to the final problem: If we draw all the parameters up front, adding in more bits will cause us to read more data because we’ll have. This is to draw parameters for them. This is forbidden: Conjecture requires shrinks to read no more data than the example you started from (for good reason – this both helps guarantee the termination of the shrink process and keeps you in areas where shrinking is fast).
The solution here is to generate parameters lazily. When you pick alternative i, you first check if you’ve already generated a parameter for it. If you have you use that, if not you generate a new one there and then. This keeps the number and location of generated parameters relatively stable.
In writing this, a natural generalization occurred to me. It’s a little weird, but it nicely solves this problem in a way that also generates to monadic bind:
1. parameters are generated from data.new_parameter(). All this is in an integer counter.
2. There is a function data.parameter_value(parameter, strategy) which does the same lazy calculation keyed off the parameter ID: If we already have a parameter value for this ID and strategy, use that. If we don’t, draw a new one and store that.
3. Before drawing from it, all strategies are interned. That is, replaced with an equivalent strategy we’ve previously seen in this test run. This means that if you have something like booleans().flatmap(lambda b: lists(just(b))), both lists(just(False)) and lists(just(True)) will be replaced with stable strategies from a pool when drawing. This means that parameters get reused.
I think this might be a good idea. It’s actually a better API, because it becomes much harder to use the wrong parameter value, and there’s no worry about leaking values or state on strategy objects, because the life cycle is fairly sharply confined to that of the test. It doesn’t solve the problem with typing this well, but it solves the problem of using it incorrectly well enough that an unsafe cast is probably fine if you’re unable to do so.
Anyway, brain dump over. I’m not sure this made sense to anyone but me, but it helped me think through the problems quite a lot.
This entry was posted in Hypothesis on by .
# Let Hypothesis making your choices for you
I had a moment of weakness this morning and did some feature development on Hypothesis despite promising not to. The result is Hypothesis 1.14.0.
This adds a bunch of interesting new strategies to the list. One I’d like to talk about in particular is the new choices strategy.
What does it do?ell, it gives you something that behaves like random.choice only under Hypothesis’s control and subject to minimization. This more or less solves the problem I had a long and complicated post about a while ago for picking elements from a list. You can now do something like:
from hypothesis import given, strategies as st @given(st.lists(st.integers(), min_size=1), st.choices()) def test_deletion(values, choice): v = choice(values) values.remove(v) assert v not in values
Then running this will print something like:
_____________________________________________ test_deletion ______________________________________________
test_del.py:4: in test_deletion
def test_deletion(values, choice):
src/hypothesis/core.py:583: in wrapped_test
print_example=True, is_final=True
src/hypothesis/executors/executors.py:25: in default_executor
return function()
src/hypothesis/core.py:365: in run
return test(*args, **kwargs)
test_del.py:7: in test_deletion
assert v not in values
E assert 0 not in [0]
----------------------------------------------- Hypothesis -----------------------------------------------
Falsifying example: test_deletion(values=[0, 0], choice=choice)
Choice #1: 0
===================
Note that the choices are printed as they are made. This was one of the major obstacles to implementing something like this in the past: The lack of the ability to display the results from within the test. The new note API offers a solution to this.
This entry was posted in Hypothesis, Python on by .
# New improved development experience for Hypothesis
As part of my drive to make Hypothesis more of a community project, one of the things I need to do is to ensure it’s easy for new people to pick up, and easy for people who have a different environment to use.
There are a couple major consistent sources of issues people have with Hypothesis development:
1. It requires the availability of a lot of different versions of Python. I use pyenv heavily, so this hasn’t been a major problem for me, but other people don’t so are less likely to have, say, both Python 3.5 and Python 3.4 installed (because of reasons some build tasks require one, some the other).
2. A full test run of Hypothesis takes a very long time. If you don’t parallelise it it’s in the region of 2 hours.
3. Some of the build steps are very counterintuitive in their behaviour – e.g. “tox -e lint” runs a mix of linting and formatting operations and then errors if you have a git diff. This is perfectly reasonable behaviour for running on a CI, but there’s no separate way of getting the formatter to fix your code for you.
Part of the problem in 3 is that tox is a test runner, not a general task runner, and there was a lack of good unified interface to the different tasks that you might reasonably want to run.
So I’ve introduced a new unified system which provides a much better developer experience, gives a single interface to all of the normal Hypothesis development tasks, and automates a lot of the issues around managing different versions of Python. Better yet, it’s based on a program which is widely deployed on most developers’ machines, so there’s no bootstrapping issue.
I am, of course, talking about a Makefile.
No, this isn’t some sort of sick joke.
Make is actually pretty great for development automation: It runs shell commands, checks if things are up to date, and expresses dependencies well. It does have some weird syntactic quirks, and writing portable shell isn’t exactly straightforward, but as an end user it’s pretty great.
In particular because the makefile can handle installing all of the relevant pythons for you (I shell out to pyenv‘s build plugin for this but don’t otherwise use pyenv for this) the juggling many pythons problem goes away.
Other highlights:
• ‘make check-fast’ for running a fast subset of the tests
• ‘make format’ for reformatting your code to the Hypothesis style
• ‘make check-django’ and ‘make check-pytest’ for testing their respective integrations (there’s also ‘make check-nose’ for checking Hypothesis works under nose and I never giggle when typing that at all).
You can see the full Makefile here, and the CONTRIBUTING.rst documents some of the other common operations.
Here’s an asciinema of it in action:
This entry was posted in Hypothesis, programming, Python on by .
# Future directions for Hypothesis
There’s something going on the Hypothesis project right now: There are currently three high quality pull requests open from people who aren’t me adding new functionality.
Additionally, Alexander Shorin (author of the characters strategy one) has a CouchDB backed implementation of the Hypothesis example database which I am encouraging him to try to merge into core.
When I did my big mic drop post it was very unclear whether this was going to happen. One possible outcome of feature freeze was simply that Hypothesis was going to stabilize at its current level of functionality except for when I occasionally couldn’t resist the urge to add a feature.
I’m really glad it has though. There’s a vast quantity of things I could do with Hypothesis, and particularly around data generation and integrations it’s more or less infinitely parallellisable and doesn’t require any deep knowledge of Hypothesis itself, so getting other people involved is great and I’m very grateful to everyone who has submitted work so far.
And I’d like to take this forward, so I’ve updated the documentation and generally made the situation more explicit:
Firstly, it now says in the documentation that I do not do unpaid Hypothesis feature development. I will happily take sponsorship for new features, but for the rest of it I will absolutely help you every step of the way in writing and designing the feature, but it’s up to the community to actually drive the work.
Secondly, I’ve now labelled all enhancements that I think are accessible for someone else to work on. Some of these are large-ish and people will need me (or, eventually, someone else!) to lend a hand with, but I think they all have the benefit of being relatively self-contained and approachable without requiring too deep an understanding of Hypothesis.
Will this work? Only time (and effort) will tell, but I think the current set of pull requests demonstrates that it can work, and the general level of interest I see from most people I introduce Hypothesis to seems to indicate that it’s got a pretty good fighting chance.
This entry was posted in Hypothesis, Python on by .
# Finding more bugs with less work
I was at PyCon UK this weekend, which was a great conference and I will definitely be attending next year.
Among the things that occurred at this conference is that I gave my talk, “Finding more bugs with less work”. The video is up, and you can see the slides here.
I may do a transcript at some point (like I did for my django talk), but I haven’t yet.
This entry was posted in Hypothesis, programming, Python on by .
|
# Convergence of a Normed Linear Space
I am considering the normed linear space $C[0,1]$ with norm $$\lVert x\rVert_{\infty} = \max\{\vert{x(t)}\vert: t\in [0,1]\}.$$
Now I want to show that if the sequence $\{x_n\}_{n\in \mathbb{N}}$ in $(C[0,1], \lVert\cdot\rVert_{\infty})$ converges to $x \in C[0,1]$, then it also converges pointwise to $x$.
Ideally I would like to understand how the solution is formed, I have seen other examples by my lecturer but these were specific ones and I am struggling to apply it to this general case. Any help is greatly appreciated!
• You have $|x(t)| \le \|x\|_\infty$ so pointwise convergence is immediate. In particular, $|x(t)-x_n(t)| \le \|x-x_n\|_\infty$. – copper.hat Feb 7 '18 at 18:44
Yes, it is true. Take $\varepsilon>0$. Now, take $p\in\mathbb N$ such that$$n\geqslant p\implies\|x-x_n\|<\varepsilon.$$Then, for each $t\in[0,1]$,$$n\geqslant p\implies\bigl|x(t)-x_n(t)\bigr|\leqslant\|x-x_n\|<\varepsilon.$$
|
# corPagel
From ape v5.4
0th
Percentile
##### Pagel's lambda'' Correlation Structure
The correlation structure from the present model is derived from the Brownian motion model by multiplying the off-diagonal elements (i.e., the covariances) by $\lambda$. The variances are thus the same than for a Brownian motion model.
Keywords
models
##### Usage
corPagel(value, phy, form = ~1, fixed = FALSE)
# S3 method for corPagel
corMatrix(object, covariate = getCovariate(object),
corr = TRUE, ...)
# S3 method for corPagel
coef(object, unconstrained = TRUE, …)
##### Arguments
value
the (initial) value of the parameter $\lambda$.
phy
an object of class "phylo".
form
a one sided formula of the form ~ t, or ~ t | g, specifying the taxa covariate t and, optionally, a grouping factor g. A covariate for this correlation structure must be character valued, with entries matching the tip labels in the phylogenetic tree. When a grouping factor is present in form, the correlation structure is assumed to apply only to observations within the same grouping level; observations with different grouping levels are assumed to be uncorrelated. Defaults to ~ 1, which corresponds to using the order of the observations in the data as a covariate, and no groups.
fixed
a logical specifying whether gls should estimate $\lambda$ (the default) or keep it fixed.
object
an (initialized) object of class "corPagel".
covariate
an optional covariate vector (matrix), or list of covariate vectors (matrices), at which values the correlation matrix, or list of correlation matrices, are to be evaluated. Defaults to getCovariate(object).
corr
a logical value specifying whether to return the correlation matrix (the default) or the variance-covariance matrix.
unconstrained
a logical value. If TRUE (the default), the coefficients are returned in unconstrained form (the same used in the optimization algorithm). If FALSE the coefficients are returned in natural'', possibly constrained, form.
further arguments passed to or from other methods.
##### Value
an object of class "corPagel", the coefficients from an object of this class, or the correlation matrix of an initialized object of this class. In most situations, only corPagel will be called by the user.
##### References
Freckleton, R. P., Harvey, P. H. and M. Pagel, M. (2002) Phylogenetic analysis and comparative data: a test and review of evidence. American Naturalist, 160, 712--726.
Pagel, M. (1999) Inferring the historical patterns of biological evolution. Nature, 401,877--884.
##### Aliases
• corPagel
• coef.corPagel
• corMatrix.corPagel
Documentation reproduced from package ape, version 5.4, License: GPL-2 | GPL-3
### Community examples
Looks like there are no examples yet.
|
## Precalculus (6th Edition) Blitzer
We know that the order in which the four actors are selected makes a difference as the all the actors would be cast for a different role. Since the order matters, we use permutations. The four officers are to be selected from a club of twenty members. So, $n=20,r=4$. Hence, \begin{align} & _{20}{{P}_{4}}=\frac{20!}{\left( 20-4 \right)!} \\ & =\frac{20!}{16!} \\ & =\frac{20\times 19\times 18\times 17\times 16!}{16!} \\ & =116,280 \end{align}
|
# Why does an affine transformation $A$ when constrained by $A^TA=\lambda^2I$ result in a similarity transformation?
Why does an affine transformation $$A$$ when constrained by $$A^TA=\lambda^2I$$ result in a similarity transformation?
I came across this when studying linear transformations in these notes which says:
The similarity group is obtained from the affine group by requiring that $$A$$ be orthogonal: $$A^TA=\lambda^2I$$
Can't seem to wrap my head around this one.
Let $$\mathbf {x, y} \in \mathbb{R}^n$$ be unit vectors.
Then, the angle $$\theta_1$$ between them is given by $$\cos \theta_1 = \mathbf{x^\top y}$$.
Also, angle between $$\mathbf{Ax}$$ and $$\mathbf{Ay}$$ is given by :
$$\cos \theta_2 = \dfrac{(\mathbf{Ax})^\top (\mathbf{Ay})}{\|\mathbf{Ax}\|\|\mathbf{Ay}\|}$$.
Now, $$\|\mathbf{Ax}\|_2^2 = (\mathbf{Ax})^\top(\mathbf{Ax}) = \mathbf{x^\top A^\top Ax} = \mathbf{\lambda ^2 x^\top x}$$.
Thus, $$\mathbf{\|Ax\|} = \mathbf{|\lambda|}$$.
$$\implies \cos \theta_2 = \mathbf{\dfrac{\lambda^2 x^ \top y}{\lambda^2}} = \mathbf{x^ \top y} \implies \theta _1 = \theta _2$$.
Thus, $$\mathbf{A}$$ preserves angles and is a similarity transformation.
Orthogonality is forced by requiring a scalar multiple of the identity; if any two columns $$a_i, a_j$$ were not orthogonal, there would be a nonzero entry in the corresponding entries $$b_{ij}, b_{ji}$$ of the product.
Orthogonality results in the absence of any shear/twist in the transformation, restricting it to only reflection, rotation and translation.
The eigenvalue squares as scalars allow for non-normality, as it is possible to scale the original space without affecting the angles between lines in it, provided all axes of the space are scaled by the same amount. If $$A$$ was instead orthonormal, you force a lack of scaling as well.
See also this question, for which answers explain a converse point, why similarity transformation is a subtype of affine transformation.
Let $$A=RU$$ be the polar decomposition of $$A$$, where $$R$$ is positive semi-definite and $$U$$ is unitary.
Then $$R^2=A^TA=\lambda^2I$$; being diagonalizable this forces $$R=|\lambda|I$$ and thus $$A=|\lambda|U$$, a similarity transformation.
|
# Multidimensional Balanced Allocation for Multiple Choice & $(1+\beta)$ Processes
Allocation of balls into bins is a well studied abstraction for load balancing problems. The literature hosts numerous results for sequential (single dimensional) allocation case when $m$ balls are thrown into $n$ bins; such as: for multiple choice paradigm the expected gap between the heaviest bin and the average load is $O(\frac{\log\log(n)}{\log(d)})$~\cite{petra-heavy-case}, $(1+\beta)$ choice paradigm with $O(\frac{\log(n)}{\beta})$ gap~\cite{kunal-beta} as well as for single choice paradigm having $O(\sqrt{\frac{m\log(n)}{n}})$ gap~\cite{mm-thesis}. However, for multidimensional balanced allocations very little is known. Mitzenmacher~\cite{md-mm} proved $O(\log\log(nD))$ gap for the multiple choice strategy and $O(\log(nD))$ gap for single choice paradigm (where $D$ is the total number of dimensions with each ball having exactly $f$ populated dimensions) under the assumption that for each ball $f$ dimensions are uniformly distributed over the $D$ dimensions. In this paper we study the symmetric multiple choice process for both unweighted and weighted balls as well as for both multidimensional and scalar modes. Additionally, we present the results on bounds on gap for the $(1+\beta)$ choice process with multidimensional balls and bins.
In the first part of this paper, we study multidimensional balanced allocations for the symmetric $d$ choice process with $m >> n$ unweighted balls and $n$ bins. We show that for the symmetric $d$ choice process and with $m = O(n)$, the upper bound (assuming uniform distribution of $f$ populated dimensions over $D$ total dimensions) on the gap is $O(\ln\ln(n))$ w.h.p.. This upper bound on the gap is within $D/f$ factor of the lower bound. This is the first such tight result along with detailed analysis for $d$ choice paradigm with multidimensional balls and bins. %Further, we provide upper bounds on the gap when $f$ is a random variable.
This improves upon the best known prior bound of $O(\log\log(nD))$~\cite{md-mm}. For the general case of $m >> n$ the expected gap is bounded by $O(\ln\ln(n))$. For variable $f$ and non-uniform distribution of the populated dimensions (using analysis for weighted balls), we obtain the upper bound on the expected gap as $O(\log(n))$.
Further, for the multiple round parallel balls and bins, using symmetric $d$-choice process in multidimensional mode, we show that the gap is also bounded by $O(\log\log(n))$ for $m = O(n)$. The same bound holds for the expected gap when $m >> n$.
Our analysis also has the following strong implications for the sequential scalar case. For the weighted balls and bins and general case $m >> n$, we show that the upper bound on the expected gap is $O(\log(n))$ (assuming $E[W] = 1$ and second moment of the weight distribution is finite) which improves upon the best prior bound of $n^c$ ($c$ depends on the weight distribution that has finite fourth moment) provided in~\cite{kunal-weighted}. Our analysis also provides a much easier and elegant proof technique (as compared to~\cite{petra-heavy-case}) for the $O(\log\log(n))$ upper bound on the gap for scalar unweighted $m >> n$ balls thrown into $n$ bins using the symmetric multiple choice process.
Moreover, we study multidimensional balanced allocations for the $(1+\beta)$ choice process and the multiple ($d$) choice process. We show that for the $(1+\beta)$ choice process and $m=O(n)$ the upper bound (assuming uniform distribution of $f$ populated dimensions over $D$ total dimensions) on the gap is \textbf{$O(\frac{\log(n)}{\beta})$}, which is within $D/f$ factor of the lower bound. For fixed $f$ with non-uniform distribution and for random $f$ with Binomial distribution the expected gap remains \textbf{$O(\frac{\log(n)}{\beta})$} and is independent of the total number of balls thrown, $m$. This is the first such tight result along with detailed analysis for $(1+\beta)$ paradigm with multidimensional balls and bins.
By: Ankur Narang, Sourav Dutta and Souvik Bhattacherjee
Published in: RI11018 in 2011
LIMITED DISTRIBUTION NOTICE:
This Research Report is available. This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties). I have read and understand this notice and am a member of the scientific community outside or inside of IBM seeking a single copy only.
md-bb.techrep.pdf
|
## Internet users (2014)
You can clearly see the higher shares of internet users are concrentrated in countries in the developed regions (e.g. USA, Canada, Western Europe, Australia). Contrariwise emerging and developing regions (e.g. Central Africa, Southeast Asia) can only show low shares of internet users. So there is a obvious correlation between the level of economic development and the access for people to the internet. But the significance of having access to the internet considering the globalization are getting more and more important nowadays. Therefore it can be assumed that in future…
|
# What is the vertex form of the equation of the parabola with a focus at (200,-150) and a directrix of y=135 ?
Nov 22, 2015
The directrix is above the focus , so this is a parabola that opens downward .
#### Explanation:
The x-coordinate of the focus is also the x-coordinate of the vertex . So, we know that $h = 200$.
Now, the y-coordinate of the vertex is halfway between the directrix and the focus:
$k = \left(\frac{1}{2}\right) \left[135 + \left(- 150\right)\right] = - 15$
vertex $= \left(h , k\right) = \left(200 , - 15\right)$
The distance $p$ between the directrix and the vertex is:
$p = 135 + 15 = 150$
Vertex form : $\left(\frac{1}{4 p}\right) {\left(x - h\right)}^{2} + k$
Inserting the values from above into the vertex form and remember that this is downward opening parabola so the sign is negative :
$y = - \left(\frac{1}{4 \times 150}\right) {\left(x - 200\right)}^{2} - 15$
$y = - \left(\frac{1}{600}\right) {\left(x - 200\right)}^{2} - 15$
Hope that helped
|
# how do I create a Line chart from a data frame?
#### Serphentelm
##### New Member
i'm quite the newbie with R and i want to create a line chart starting from this dataset:
where every line points out the number of the smell detected (so in my case case 11 for version 1.2; 11 for version 1.3 and so on) and on the x axis i want to put every version that i've under exam.
The problem is that i don't know how to keep trace of a "counter" in r, counting how many smells there are for each version
#### consuli
##### Member
It seems, you do not have a concrete imagination, how (line) charts are created in R.
Thus, here is an example using ggplot2 (the Standard)
Code:
library(ggplot2)
dat1 <- data.frame( sex = factor(c("Female","Female","Male","Male")), time = factor(c("Lunch","Dinner","Lunch","Dinner"), levels=c("Lunch","Dinner")), total_bill = c(13.53, 16.81, 16.24, 17.42) )
p <- ggplot(data=dat1, aes(x=time, y=total_bill, group=sex)) +
geom_line() +
geom_point()
#### Serphentelm
##### New Member
thanks for the reply, so i managed to get to this
|
# Interactive data visualization with cranvas
2012-10-27
One of the advantage of R over other popular statistical packages is that it now has "natural" support for interactive and dynamic data visualization. This is, for instance, something that is lacking with the Python ecosystem for scientific computing (Mayavi or Enthought Chaco are just too complex for what I have in mind).
Some time ago, I started drafting some tutors on interactive graphics with R. The idea was merely to give an overview of existing packages for interactive and dynamic plotting, and it was supposed to be a three-part document: first part presents basic capabilities like rgl, aplpack, and iplot (alias Acinonyx)--this actually ended up as a very coarse draft; second part should present ggobi and its R interface; third and last part would be about the Qt interface, with qtpaint and cranvas. I hope I will find some time to finish this project as it might provide useful complements to my introductory statistical course on data visualization and statistics with R.
I recently updated the Qt interface (during the summer I had some problems with the linking stage, probably because of external dependencies on the Qt framework, but it seems it has been solved in the meantime), and I'm really happy with what cranvas has to offer. On a Mac, the follwoing shortcuts are useful:
• Del/F5 to delete/undelete observations
• ? for identify mode
• S followed by Ctrl-click to vary brushing size; S then click to release and return to dynamic brushing
Of course, it does not necessarily compete with what can be achieved using D3 (but see mbostock’s blocks), although we soon reach the limit between data visualization and info visualization (joint newsletter of the Statistical Computing & Statistical Graphics Sections of the ASA, PDF). While browsing some of Mike Bostock demos, I came across this paper which suggest that animated transitions can significantly improve graphical perception:
Heer, J and Robertson, G (2007). Animated Transitions in Statistical Data Graphics. IEEE Information Visualization (InfoVis)
I'm not a big fan of such animated graphics, unless they intend to be displayed during a talk to emphasize a specific point. For day-to-day statistical stuff, we don't really need that level of sophistication. We just want to be able to link plots together and highlight observations according to, say, an auxiliary variable or the magnitude of resiudals from a regression model, and study multidimensional datasets with projection techniques and efficient multivariate visualization techniques. However, I should note that the r2d3 project (by Hadley Wickham) looks promising and it will probably allow more interactivity with data displays. It looks like Python aficionados are going into the same direction, e.g. IPython Notebook and d3.js Mashup, but see d3 Wiki (§ Interoperability).
## Articles with the same tag(s):
Multi-Group comparison in Partial Least Squares Path Models
Yet another gray theme for R base graphics
Writing a book
R, pipes and Co.
R Graphs Cookbook
Emacs Org-mode and literate programming
user2014
Reproducible research with R
Python for interactive scientific data visualization
Bar charts of counts or frequencies in Stata
|
# Does microsoft restrict our possibilities?
This topic is 3658 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Yes, that's the point: Does Microsoft restrict our possibilities as game programmers? I had wondered this question several times before, but I cannot get an answer. Everybody knows that DirectX9 programs needs an OBJECT, and a DEVICE. Once defined your object, you MUST initialize it, with the line:
pD3D = Direct3DCreate9( D3D_SDK_VERSION );
This line initializes our Direct3D object, and in between brackets, you MUST type "D3D_SDK_VERSION". That generates lots of question, but the main one is: "If there is a SDK VERSION, why not a COMMERCIAL VERSION only Microsoft can use?" I think it exists, so, in some way, we can say we are restricted by the SDK VERSION. What's your opinion? is there any commercial version? Does other company games use DirectX9 SDK VERSION? Finally, I would like to say that I am neither american nor british person, I am just from Spanish, and I am an English student. Thank you all, and sorry my English. Looking forward to hear about you, Skinner [Edited by - darkcube on December 6, 2008 1:28:02 PM]
##### Share on other sites
Quote:
Original post by darkcubeI think it exists
Why? What purpose would be served by Microsoft keeping game developers from making full use of its SDK?
##### Share on other sites
Are you mad? You think Microsoft has a super-advanced 'special' version of DirectX that they reserve for themselves? That would be an absolutely terrible idea, and I'm sure you'd see why if you thought about it for a while.
##### Share on other sites
Would be cool though :p
MS are just drip feeding us when in reality they have the technology to make games so immersive you can actually see that Lara Croft wears a padded bra :p Don't know where that last bit came from; weird mood today, my development PC is broke :|
##### Share on other sites
I wish I could say this is the most stunningly idiotic thread I've ever seen on GameDev, but I'm pretty sure it doesn't even make the top 25.
##### Share on other sites
Promit: be nice. Guthur: be normal.
##### Share on other sites
No, I'm sorry to say your conspiracy theory doesn't hold up.
You initialize the Direct3D system with D3D_SDK_VERSION, which is a macro defined in a direct3d header that you #include. It's just a number, representing which version of Direct3D you are compiling with.
Now when this line compiles, your exe file will always initialize Direct3D with that version number. If you copy the exe file to a system with a newer version of Direct3D, your exe file will use the new DirectX DLL, but will call that create function with the older version number. The new DirectX will know to use the old version's syntax.
Let me see if I can reword this... Let's suppose you've installed DirectX 9.0 on your computer along with your C++ compiler. You write your program with that line and compile. DirectX 9.0 has defined D3D_SDK_VERSION to be some number, let's say 900 (just for this example - this is not the real number). So the compiler writes in the EXE file, "dynamically link with whatever DirectX DLL is on the system" and then tells that DLL, "Direct3DCreate9( 900 );".
So if you move this EXE to a system with a newer DirectX version, 9.1, then it will be using a new DLL with different functions and such. But the program will use that DLL and give it that version number "900" and the DLL knows to pretend like it's actually a version 9.0 DLL.
So, in essence, it's to preserve backwards compatibility. It tells the DLL which version you compiled with, so that any new changed features will still work in the old ways as they were at the time of compilation.
Does that make sense? :)
~Ricket
##### Share on other sites
Quote:
Original post by hymermanAre you mad? You think Microsoft has a super-advanced 'special' version of DirectX that they reserve for themselves? That would be an absolutely terrible idea, and I'm sure you'd see why if you thought about it for a while.
Not for themselves, but for selected companies whose games would run faster if they would pay more to get better version :)
##### Share on other sites
Quote:
Original post by SneftelGuthur: be normal.
Roger that :) If i can bury my head in code again sometime soon people tend to not notice the odd idiosyncratic comment :p Anyway just finished downloading FX Composer, going to play with that for a bit.
##### Share on other sites
It is there as a compile-time constant to mark what version of directX the code was compiled against, while also allowing the same code to be compiled against future versions of the sdk. Actually, I'd greatly prefer this over developers having to actually dig up and explicitly say what precise version it is that they are working with. This way, it is always filled in for you, yet in a way explicitly stated so you don't have to go wading through various D3D releases to find out what is the version you want. This way, it just works.
It is not a super secret version of directx that is somewhow super duper powerful compared to your vanilla directX. Microsoft is not restringing your capacity to program by requiring you to actually say what version of the sdk you are using.
Yeah, this is a pretty absurd concept.
Actually, I'd say you owe microsoft for your game dev capacity, as right now they are using their girth to push technology forward in this respect, with the hardware community breaking their own backs in an attempt to keep up with bleeding edge specs like DirectX10 when it first came out. Microsoft goes through great effort at great expense to keep windows and transitively the PC in general as the absolutely high power gaming platform out there. Microsoft is big enough to say what goes, and they do it on your behalf, so that you can write a program on their technology, and take it anywhere and have it Just Work.
1. 1
2. 2
3. 3
Rutin
13
4. 4
5. 5
• 26
• 10
• 9
• 9
• 11
• ### Forum Statistics
• Total Topics
633696
• Total Posts
3013390
×
|
Find all School-related info fast with the new School-Specific MBA Forum
It is currently 23 May 2015, 07:08
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
Author Message
TAGS:
Current Student
Joined: 17 Mar 2010
Posts: 36
Followers: 1
Kudos [?]: 11 [3] , given: 6
3
KUDOS
1
This post was
BOOKMARKED
I found 'Essay Snark' (essaysnark.blogspot.com) to be the best consultant that I worked with. I have been following the blog (or blahg) from a long time. On reading his blog, I found that he has profound knowledge in the MBA admissions process and essay reviews. I purchased their Kellogg strategy guide as a starting point for Kellogg essays. I think that is a great resource and helps you structure your essays. Also he offered personalized advice even before I was a paid customer with him.
I took a comprehensive service from a "reputed" admissions consultant. After my consultant gave a green signal, I just sent my essays to ES for a sanity check before submission. ES literally ripped apart my essays -- in a good way though (that's what we expect from a consultant right?). He offered a page of constructive feedback on the essays and also in-line comments on the essays. After reading his comments there were many areas in my essays where I thought sshtt I should have caught this error or inconsistency (or even my admission consultant should have). Generally ES only offers one set of reviews. After his comments, I made significant changes in my essays and he was kind enough to do a second, third round reviews. He also reviewed my resume for free.
Extremely satisfied with ES, I also took his help with two other schools. I also ended up rewriting an essay completely and making significant changes to essays based on his comments (keep in mind the consultant that I worked with for these schools told these were "good-to-go"). After reading the comments/feedback from ES I was pretty sure that had I submitted the essays sent to ES for review, I would have received a direct ding.
I don't have any admits in hand right now, so will update you guys on how that goes. At least, I feel that I was able to put my best foot forward.
ES is an anonymous blogger/consultant. I never had any issue, though. His turnaround times were excellent. There were couple of instances where he stayed around late to review my essays.
PM me if you have any questions.
_________________
My MBA journey : mbafanatic.blogspot.com
Joined: 08 Oct 2011
Posts: 61
Followers: 3
Kudos [?]: 15 [3] , given: 0
3
KUDOS
Expert's post
I don't normally respond to reviews online, but in this case I feel it is necessary. Yes, I did email this client and asked him to remove his post, because it contained information that was either untrue or misleading.
Namely:
1. The idea that I would supply him with his career goals. I would never do this, as it is unethical. We coach people to get the best out of them (read any of our verified reviews and you will see this is a staple), we guide them towards their passion as those make for authentic and powerful essays, and we make sure that the goals they do arrive at are vetted and appropriate for the pursuit of an MBA. However, we do not create content. It was important for me to respond and say that I would never violate this ethical responsibility. Further...
2. The goal creation process was successful, as evidenced by the fact that this individual received an invitation to interview at Columbia. We prepare people for the interview process, but how they perform is up to them. He obviously did not perform well and as a result, he was not offered admission. The fact that this particular individual got the interview to Columbia was, to my mind, a huge win and something I remain proud of.
I'm not hiding behind an anonymous user name, so feel free to PM me or email me for any additional information. I'm sorry that I had to respond as I know it is a little bit lame when consultants get involved in the proceedings, but the risk of remaining cool and above the fray is totally offset by the risk of having our reputation for ethical work compromised.
Respectfully,
Current Student
Joined: 12 Oct 2011
Posts: 36
Schools: HBS '15 (M)
GMAT 1: 760 Q49 V45
GPA: 3.94
Followers: 6
Kudos [?]: 47 [3] , given: 0
3
KUDOS
After enjoying a relaxing holiday break where I could finally stop worrying about b-school admissions, I thought I’d share my experience with the admissions consultants with whom I interacted during my application process.
As background, I am an American with work experience in consulting and private equity. I applied to HBS, Stanford, and Kellogg, and was accepted at my top choice, HBS. I got dinged at Stanford and withdrew my app from Kellogg once I found out I got into HBS, so not sure if I would have been accepted there or not.
I’m sure some people here are wondering whether admissions consultants are worth their high fees in the first place. I was initially dubious as to the value of admissions consultants, as I assumed I could choose the “right” schools and write reasonable essays on my own. And since I was coming from a fairly traditional background (“a dime a dozen” in the words of Sandy Kreisberg over at Poets and Quants!), I didn’t think I’d need help “packaging” myself in my apps. But I figured I’d check them out just in case, and am glad I did.
I setup preliminary interviews with several consultants. Here are my thoughts:
Alex @ MBA Apply: his best quality is he’s a straight shooter – no BS. Some people may not react well to his bluntness, but if you’re the type of candidate who needs a kick-in-the-pants to get moving on your apps, he could be a good choice. Appreciated his preliminary thoughts on my profile, but ultimately didn’t choose him because I connected better with other consultants.
they were my least favorite. Instead of a phone call consultation, they wrote me this super-long email evaluating my profile and explaining their service offering. Despite its length, their evaluation of my profile was void of much content, and in some instances didn’t make a lot of sense. For example, they said my extracurricular activities were an “addressable vulnerability in my current profile,” and that they could “help [me] identify the gaps and present meaningful leadership roles that require only minimal time and won’t appear expedient.” First of all, my EC involvement is one of the strongest portions of my profile. Second, I find it difficult to imagine any new EC I could sign-up for literally 4 months before R1 applications are due that would enable me to secure “meaningful leadership roles” but “would not appear expedient.” Long story short, it was clear that they had a template email that they dropped my profile into without giving it much thought.
my preliminary consultation was with Angela Guido, who then referred me to Akiba Smith-Francis. Angela was great, but our conversation was fairly cursory (that is not a criticism – that was by design). My conversation with Akiba was great – she was extremely articulate and clearly understood the nuances of the various schools I was considering. She’s also a published author, which says something of her writing talents. mbaMission was my second choice, and I’m sure they would have done a great job had I chosen to work with them.
I ultimately chose to work with Adam Hoff from Amerasia. I’ll go into more detail below, but it was a fantastic experience overall. In terms of preliminary consultation, Adam differentiated himself by demonstrating a clear understanding of the unique aspects of various programs (btw, this is actually more important than I realized), brainstorming unique and interesting ways to tell my story (despite my very traditional background), and building a strong connection with me that led me to believe we’d work well together. So I pulled the trigger on a 12-hour package with Amerasia.
Like I said, I had a fantastic experience working with Adam, and would definitely work with him again. Here are some key highlights:
Knowledge of the schools: One thing I came to appreciate throughout the process was how much Adam knew about the different programs. My original plan was to write fairly similar essays for each program, but Adam taught me the unique nuances between each program that I’d never pick up on my own. In some cases I used different stories as a result, while in other cases I simply told the same story through a different lens in order to highlight a slightly different aspect of my personality/professional background. This is a key part of the application process that I didn’t think much about beforehand and would encourage all of you to consider.
Idea generation: I thought Adam did a wonderful job of helping me craft my “story” so that it best conveyed my prior accomplishments, yet also set the stage for what I wanted to accomplish both during and after b-school. He helped me brainstorm strong essay ideas that revealed unique aspects of my personality while also being sure to demonstrate strong evidence of professional achievement. Because schools keep reducing the number of essays/word counts, making each essay really count becomes even more difficult, so I really appreciated Adam’s skill in this area.
Technical writing skill: first of all, let me be clear that Adam will not (a) tell you what you should write about or (b) actually write anything for you. If that’s what you want, look elsewhere. But I would caution you that doing so is not only unethical, but adcoms are also very adept at identifying false or manipulated applications. Anyway, Adam is an incredible writer. A lawyer by training, he is very skilled at structuring essays and communicating through writing. His input was extremely valuable in helping me convey the ideas I wanted to convey through my essays. In other words, the idea was always in my head – he just helped me write it in the best way possible. He’s also great at picking up grammatical errors, odd syntax, etc., although as a native English speaker this wasn’t a huge issue for me.
Response time: I have read on the forum that many applicants have issues with their consultant’s response time. Adam was always great in this regard – he always got back to me within 2 business days, and usually within 12-24 hours. He keeps somewhat odd hours – usually working really late at night – but I typically had a response back from him the next morning after I sent him an email.
Motivation: I didn’t really need his help to push me to turn in my next draft of essays, etc. But he was great at really pushing me to make my essays the best they could possibly be. He wouldn’t stop after 1, 2, or 3 iterations – we worked the essays as much as they needed to be worked until they were done. Let’s just say I was sick of reading them by the end of the process! That said, he also always let me know when it was done … meaning, he’d tell me to stop editing it! You can drive yourself crazy by endlessly editing those stupid things, so getting his explicit stamp of approval that it was ready to submit was very reassuring.
Working relationship: an added bonus of working with Adam is that he’s a fun guy! We talked about sports, family, and other random topics throughout the process. Not only did this help Adam get to know me better (which enabled him to provide more value advice), but it also helped make what is a very stressful process slightly less stressful. I feel like he and I are friends now, even though we’ve never met in person.
Final note: while I loved working with Adam, we didn’t always agree on everything, and there were times I elected not to take his advice – I knew my application was ultimately my responsibility.
Earlier I mentioned that I was initially dubious of the value admissions consultants can provide. I hope this review illustrates that I completely changed my mind on that after working with Adam. There is no way my applications would have been as good (or even nearly as good) without his help. If you’re considering using a consultant, I highly recommend Amerasia, and Adam Hoff in particular.
Current Student
Status: I will not be denied. Or waitlisted.
Joined: 20 Mar 2011
Posts: 80
Location: India
Concentration: Strategy, Entrepreneurship
Schools: Tuck '15 (M)
GMAT 1: 770 Q49 V46
GMAT 2: Q V
GPA: 3.5
WE: Consulting (Computer Software)
Followers: 8
Kudos [?]: 64 [3] , given: 13
3
KUDOS
**This is a long and in-depth review**
Let's face it - US business schools, especially the elite ones that you and I aspire to be a part of, have some extremely stringent acceptance criteria with an almost blackbox-like process to boot.
Before admissions consultants became mainstream (mainly thanks to the penetration of the Internet), applying to business schools was a lonely, nerve-wracking activity - especially if you were from an over-represented demographic like India or China. Nerve-wracking because you had no idea of what constituted a good or bad response to an essay question, or how to go about structuring it; Lonely because at the most, you had a couple of friends who had even heard about the schools you were applying to, and even then, they wouldn't have enough in-depth knowledge to help you write compelling essays.
I began researching MBA programs in 2005. At the time, the only ‘consultants’ I had were Montauk (‘How To Get Into The Top MBA Programs’) and Strunk Jr and White (‘The Element Of Style'). By 2011, when I began the application process, I decided that in addition to my due diligence (attending events, chatting with alumni, professors, etc) I could use the expertise and perspective of a non-Indian who was on the pulse of the admissions process – someone who could look at my essays from the perspective of the admissions committee and help me paint a clear picture of myself. And thus began my hunt for the ‘Best’ admissions consultant.
After two months of initial consults with 10 different admissions consulting firms – Clear Admit, The MBA Exchange, Accepted.com, Stacy Blackman Consulting, Forster-Thomas, MBA Crystal Ball, MBAApply, VeritasPrep, MBAMission, and Amerasia – I finally decided to work with Paul Lanzillotti at Amerasia. Don’t get me wrong – I didn’t pick Paul because he painted the rosiest picture during the initial consults; he didn’t. I picked him because we connected, and we connected well. My reasoning was simple – I need to not only be comfortable sharing details of my personal and professional life with my consultant, I needed someone who would push me, probe further for details, and not pull any punches when critiquing my essays. I saw all this in Paul, and over the course of our engagement, my gut instinct was proven right.
This is not to say that Paul and I didn’t disagree on anything – in fact, we frequently disagreed. But therein lay the beauty; every disagreement would result in us coming up with something far better than what we started with. We had different perspectives, and kept hammering at each sentence in each essay till we were both satisfied with it. Sometimes, simply flipping the order of sentences in a paragraph would work miracles. Sometimes, we’d scratch everything and head back to the drawing board. Some essays were born ready (took a couple of edits to finish) whereas others had to be sculpted and refined multiple times (more than 8 edits).
Paul’s knowledge of various schools is extensive; more importantly, he has a skill for converting life experiences into strong essays – and this is where I knew I was getting my money’s worth. There were experiences that I did not think much about, but were potential essay responses in his perspective – today, some of those have actually made their way into my essays for multiple schools.
Additionally, Paul was always available on call or mail, and would respond within a few hours at the most. He was upfront about delays (if any), and made sure that I had his undivided attention when on a call.
Ultimately, we worked together for 4 schools, and I was waitlisted at 2 of them. Since two of the schools were R3, Paul honoured his R3 guarantee and helped me craft 2 R1 apps this year. Additionally, I retook the GMAT and took extra courses to boost my GPA. The result? I applied to 4 schools and have been invited to interview at all, and so far, accepted at Tuck (and looking forward to the rest).
So is Paul/Amerasia the best consultant/company?
I can’t say; that’s like asking whether HBS or Stanford is the best business school – its really down to personal choice. Yes, he was the best consultant for me. Ultimately, its your application, and your hard-earned money that will be spent on a consultant, and you need to take a well-thought out decision based on the initial consults. I did my research and chose Paul, and did not once regret it.
My experience with Paul was extremely good, and I would gladly recommend that you consider his services when scouting for an admissions consultant.
Cheers,
MBAWanderlust
PS – I’d be happy to have this review verified by moderators.
Current Student
Joined: 05 Apr 2010
Posts: 99
Followers: 9
Kudos [?]: 36 [2] , given: 3
2
KUDOS
1
This post was
BOOKMARKED
transatlantiker wrote:
Has anyone of you worked with Alex, MBA Exchange, ClearAdmit or MBA Mission and could give me some feedback (via PM if you don't want to post here)?
My background is IB and I'm targetting some of the Top 10 schools (R1). Have already done quite a lot of work on my own but want to maximise my chances of getting into a top program.
Thanks a lot.
As I didn't have someone with top b-school experience to have review my essays, I hired Alex to be my essay reviewer. His turnaround time and commentary have been exemplary. I have interviews currently lined up with INSEAD, which was the only school I applied to. But I know that based on what I had written previously, I wouldn't have made it this far. He had to snap me out of a rut and he was good at that and knew exactly what the school was looking for and how I fit in to their profile. One thing on top of that was, aside from grammar and structure, the writing was entirely mine. All my ideas and thoughts - which are extremely key when it comes to the interview later because they need to be a coherent story. He had to jolt me a few times and there were several painful wholesale rewrites, but they were very important in developing my story, and he coached me through the whole process. Good luck with your applications!
_________________
Here are a collection of what I thought were my more helpful posts:
PM me and let me know if these helped!
Last edited by HankScorpio on 09 Nov 2010, 19:12, edited 1 time in total.
Intern
Joined: 16 Mar 2011
Posts: 6
Schools: Kellogg (Accepted), Wharton (Interviewed)
WE 1: Manager in Valuation/IB
Followers: 0
Kudos [?]: 3 [2] , given: 0
2
KUDOS
Moderator's warning: User's first post. lower credibility.
Edited to add: I've been a long-time reader of this forum but didn't feel compelled to post until now. My review is legit. Feel free to PM me for more details. =)
-----------------------
I think an admissions consultant is invaluable in the MBA application process. First, my background. I'm 31, with 8 years of experience in finance, female, 3.7 GPA from a public university, and 720 GMAT. I'm looking to transition out of finance and into entrepreneurship.
In round 1, I applied to Harvard, Stanford, and Chicago without a consultant. I got dinged from H/S and interviewed at Chicago.
For the interview prep, I contacted Stacy Blackman Consulting, Sandy, and Amerasia. Sandy was booked up; I spoke to a woman at Stacy Blackman and Adam at Amerasia. The woman at Stacy Blackman was nice but I didn't feel we connected. When I spoke to Adam, he gave me pointers on Booth and didn't pressure me to sign up for his services. This was really nice, 'cause I had a million things going on and couldn't make a decision anyway.
Adam's pointers on Booth were spot on. Unfortunately, due to my crazy work schedule, I didn't adequately prepare for my interview (I worked until 3 AM the night before my interview). I didn't have time to get excited about the school and went into my interview with a lackluster attitude. Needless to say, I wasn't admitted.
In round 2, I hired Adam for his essay and interview services. I applied to Wharton and Kellogg, schools that he noted were friendlier to older applicants. Here's how the essay writing process went: I turned in a first draft for Wharton, he (very nicely and bluntly) told me it had to be completely re-worked. My essays painted me as unfocused, and my reasons for transitioning from finance to entrepreneurship weren't well-supported. It took about 3-4 drafts before I got it right. My essays pre-Adam and post-Adam were dramatically different; my friends commented that my post-Adam essays drastically improved. Here's what I consider to be Adam's biggest strengths in essay prep:
- He's very good at helping you organize your story and painting you in the best light. He knows immediately when something sounds wishy-washy and will help you re-work it. He understands the strongest aspect of your application and will help you highlight this.
- He provides solid advice on how to attack each essay, what to avoid, what to focus on, etc.
- Lastly, Adam is great to work with. His upbeat attitude and follow-up emails helped me stay on track during the grueling process.
On Monday, I found out I was admitted to Kellogg. I also received an interview invite from Wharton, with decision still pending. My only regret during this process is not hiring Adam earlier in round 1. I am so grateful to Adam, not only for his help in essay-writing and interview prep, but also for being a cheerleader, a great motivator, and for keeping me focused when I was being pulled in so many different directions. For anyone considering a consultant, I strongly recommend Adam.
Last edited by orangemba on 17 Mar 2011, 13:58, edited 1 time in total.
Senior Manager
Status: Current Student
Joined: 14 Oct 2009
Posts: 370
Schools: Chicago Booth 2013, Ross, Duke , Kellogg , Stanford, Haas
Followers: 13
Kudos [?]: 101 [2] , given: 53
2
KUDOS
m7 wrote:
No, I did not have a bad experience and hopefully I won't, with the help of fellows here. Just read some bad reviews, some good reviews and then (in the same blog) accusations that certain good ones are fake... So I do not know what to believe and just asking for honest opinions (bashing someone for the sole reason of not getting into top school is NOT an honest post.)
I think the moderators of this forum are doing a good job of labeling the posts that come from people who have not yet proven themselves as trustworthy, contributing members. That's not to say they are fake, it's just a heads up to take the review with a grain of salt. I will add to that I think any review, even those from long-time trusted members, should be taken with a grain of salt, because one person's experience may not be the same for all. Most of the companies offer a free consultation, try calling a few and see which ones you feel comfortable with.
_________________
Manager
Status: Berkeley Haas 2013
Joined: 23 Jul 2009
Posts: 192
Followers: 1
Kudos [?]: 32 [2] , given: 16
2
KUDOS
One more ....
mbastudio
Pluses-
Great book, helpful blogs, decent edits, 2 day return, to the point (no BS), great interview prep (Wharton), great guy to talk to for advise.
Minuses-
Paying for profiling may not be much value if you buy the book and do it yourself; not as flexible in editing terms and number of edits as other companies(my experience only), little pricey compared to other so called "boutique shops", not much ideas about stories or better way of putting things together (felt like mine was only pure edits)
Overall comment - I think you should do your profiling on your own and save on full package cost. Chose the 3 X edit package per school for edits (similar pricing to PE - maybe Avi will fit your needs better). Definitely worth every penny for interview prep (though I was dinged after W interview )
Intern
Joined: 27 Aug 2010
Posts: 4
Followers: 0
Kudos [?]: 2 [2] , given: 0
2
KUDOS
Hi,
I am recommending the services from Natalie and accepted. com. I get to know Natalie through this community and it has been a truly grateful, pleaseant experience to work with such seasoned professionals. She have helped me to get into one of the top 10 MBA in the nation and I truly appreciate for her caring atttiude, energy and professional serivices that she have prepared me in this journey.
Natalie basically tansformed me from a raw material into a presentable statue. My background is Ivy League, medium-ranked GPA, 720 GMAT, 2+ yrs I-bankng experience in New York. However, Natalie not only help me edit the essays but also teaches me how to strcuture the essays into a presentable format. I truly appreciate her help.
Manager
Joined: 27 Jun 2010
Posts: 130
Followers: 1
Kudos [?]: 6 [2] , given: 0
2
KUDOS
I have been looking for an admissions consultant for a while and from what I read on the GC and else where, I scheduled two calls, one with MBAMission - Jeremy and other one with Amerasia - Paul.
MBAMission. I had a 30 minute discussion with Jeremy where he basically commented on my profile, what could have been my strengths and weaknesses and he answered my questions. Then he asked me to send him one of my applications past year and we scheduled another call where he went through my old application, pointed out the week parts of my essays and potential improvement areas.
I like talking to Jeremy but if he told me that if I hire them I would working with another consultant from the company.
Amerasia: Before the talk, Paul had answered a lot of PM's from GC and he had given good advices / comments. So I decided to have a call with him too. before the call I send him one of my old applications and our call was similar to the one with MBAMission. Paul gave his comments on all of 4 essays on that application, and again pointed out the potential improvements in each of them. We had scheduled the call for 30 mins. but it took almost an hour and he answered all of my questions in detail. After the call he sent me a bunch of documents like sample essays , school guides, interview guides etc.
I am slightly inclined towards Amerasia (it is slightly cheaper and I felt very comfortable talking with Paul, I am not sure if I'll feel the same with the consultant MBAMission assigns me) currently but still not sure which one to go with. But based on my initial calls but guys seem great..
Joined: 26 Dec 2008
Posts: 2444
Location: Los Angeles, CA
Followers: 77
Kudos [?]: 541 [2] , given: 0
2
KUDOS
blahblahGMAT wrote:
Av has an excellent point.. Try the 1~2 school package.. There are two advantages.
1. You will know how the admission consultant works..
2. You might be able to save some by doing the others by yourself, using the information and research from the 2 schools you've applied to.. They pretty much ask similar information.
To be honest, that's what I encourage all my clients to do.
Signing up for a bunch of schools upfront (i.e. 4-6 schools) is a bit scary. As a consultant, I don't like it because it reveals expectations that the client may be seeing the consultant as a surrogate. Or a co-dependency problem. To me, I see admissions consulting simply as a resource - and not a surrogate for the applicant.
As a client, you should sign up 1-2 schools at a time -- and chances are, you won't need more than that. While the essays aren't exactly the same from one school to the next, most people are able to intelligently adapt essay drafts from one school to another. An applicant doesn't need a consultant on every school they are applying to.
My competitors can sell the 4+ school packages all they want (more power to them), but in my experience over the past 8 years, I really never saw the need for clients to sign up for more than 3 schools at most (and if there's additional help on the 4th or 5th school, it's literally piecemeal stuff that could be billed hourly). Again, most clients are happy with just 1-2 on a comprehensive basis.
Now, you may ask why I would convey something that on the surface would be against my own interest - but it's not. Quite the opposite. My reasoning actually makes a lot of business sense, but I won't say why because I don't want my competitors to follow suit
_________________
Intern
Joined: 18 Apr 2011
Posts: 29
Followers: 0
Kudos [?]: 12 [2] , given: 1
2
KUDOS
Hi All,
If anyone wants mor einformation on Sandy (in addition to what Redjam has posted), please free to PM me
Personally I would give Sandy 3 or 3.5 at the most - he is good but he sure does have his negatives.
Intern
Joined: 23 Mar 2011
Posts: 1
Followers: 0
Kudos [?]: 2 [2] , given: 29
2
KUDOS
Some moderators remove posts at their will and without explanation. Interesting....So sharing is not always possible here. I know this first hand. I posted my opinions about Sandy, Stacy, Amerasia and some other consultants and all was removed. Perhaps consultants have some saying in this? Very interesting.....
So what is the purpose of these forums exactly? Are we here for sugar coating, fake praises in a controlled environment or for an exchange of our honest opinions in order to help each other? We communicate through PMs anyway but new members do not have the advantage of reading genuine posts here.
Try the "beatthegmat" and see the difference.
Founder
Affiliations: AS - Gold, UA - Silver, HH-Diamond
Joined: 04 Dec 2002
Posts: 12770
Location: United States (WA)
GMAT 1: 750 Q49 V42
GPA: 3.5
WE: Information Technology (Hospitality and Tourism)
Followers: 2659
Kudos [?]: 12758 [2] , given: 3844
2
KUDOS
Expert's post
m7 wrote:
Some moderators remove posts at their will and without explanation. Interesting....So sharing is not always possible here. I know this first hand. I posted my opinions about Sandy, Stacy, Amerasia and some other consultants and all was removed. Perhaps consultants have some saying in this? Very interesting.....
So what is the purpose of these forums exactly? Are we here for sugar coating, fake praises in a controlled environment or for an exchange of our honest opinions in order to help each other? We communicate through PMs anyway but new members do not have the advantage of reading genuine posts here.
Try the "beatthegmat" and see the difference.
I am the evil one who has been secretly removing your posts and leading thousands of unsuspecting gmat club members to their peril!
This is off-topic but I feel it needs to be resolved. I thought we have covered this, but allow me to elaborate:
You have not actually used an admissions consultant to post a legit review and that was not made clear in several of your comments (I had to read through a potpourri of your posts to figure that out). All of your posts have been saved by the way and preserved for the posterity in a moderator-only-access forum where we move questionable posts like these. I have issued a warning to you then and explained the reasons behind it. I am issuing the second now (this will suspend your account).
I have no intention of removing any posts or fanning the flames one way or another - everyone one should read reviews and use their head. There are plenty of useless positive reviews around the internet. This is not the purpose of this thread. I have not started GMAT Club in 2002 to be policing threads and to try to twist the truth. I know it may look otherwise sometimes, but I have a life.
_________________
Founder of GMAT Club
Just starting out with GMAT? Start here... | Want to know your GMAT Score? Try GMAT Score Estimator
Need GMAT Book Recommendations? Best GMAT Books
Co-author of the GMAT Club tests
Have a blog? Feature it on GMAT Club!
GMAT Club Premium Membership - big benefits and savings
Founder
Affiliations: AS - Gold, UA - Silver, HH-Diamond
Joined: 04 Dec 2002
Posts: 12770
Location: United States (WA)
GMAT 1: 750 Q49 V42
GPA: 3.5
WE: Information Technology (Hospitality and Tourism)
Followers: 2659
Kudos [?]: 12758 [2] , given: 3844
2
KUDOS
Expert's post
Gryphon wrote:
BB,
Would you be willing to disclose whether consultants have to pay a fee to pitch their services on your site?
GMATClub is one of the best resources out there for MBA aspirants and admissions consultants are becoming a more pervasive element of the MBA landscape. For those of us that aren't bankers/management consultants, the costs associated with these MBA consultants are not insignificant. It's somewhat disconcerting that merely because some of us can't afford to use a consultant we may not gain entry into a program while somebody of greater means, but not necessarily a better candidate will get that spot because they could afford a professional editor. I guess that's life though. It seems like everybody with an HBS MBA is in the admissions consulting business. In the end, MBA consulting is a rather inefficient, opaque market and I'd be interested in knowing whether the consulting companies that are prevalent on the site pay a fee to be here? If so, would that represent a potential conflict of interest? Some of us can only afford it for 1-2 schools and we want to make an informed decision to give ourselves the best shot.
As it relates to M7, I didn't memorize each of his posts, but I don't recall M7 ever asserting that he had previously used any of the consultants and had a poor experience. If memory serves me correct, M7 disclosed his experiences with different consultants based on the free diagnostic sessions that they offer. M7 was particularly pointed in his comments with respect to Stacy Blackman by telling her to address problems in her organization rather than make apologies. Ultimately, I don't think that M7 was ever disingenous or misrepresented himself as someone who had actually employed a consultant's services. That said, you've got the posts and I don't so I'm open to being told that I'm wrong and M7 misrepresented himself, but I don't think that he did.
Thanks.
Thank you for bringing this up. Allow me to happily clarify. I can assure you that not a single post has been removed or that anything has been changed to benefit anyone. I hope you realize that GMAT Club has been around for 8 years and it would be quite a waste of all that effort? I guess think of google who would start inflating the organic search results of paid search sponsors - they would ruin their reputation and nobody would use google anymore. I don't think Google would want to do it and neither would I - the benefit is simply too immediate and too small to warrant such shortsighted and greedy move.
Historically GMAT Club has had a very strict policy back in the day when any time a new consultant other than approved (Accepted, Veritas, and MBA Apply) would show up, they would be removed/banned/etc. I don't know why or how it evolved but that's what several of the moderators felt was right and I was happy to support this.
We have since opened up the Ask Admissions Consultant forums to ALL admissions consultants. We do not charge anyone to post/participate. That seemed to make the most sense and has benefited the community, though it has made the "Ask Admissions Consultants" forum a bit of a zoo.
We do have some real estate on the site to advertise and we offer that space to vendors including Google. It is not performance based, it is just placement and if a vendor wants to have exposure, they can get it. If 500 people sign up or 1, there is no differences. This allows us to fund projects and keep the development going and pays for several people's time including my own but if you take into consideration how much time I have spent/invested and keep investing and how much I got paid back, my hourly rate is extremely low, hence I have a real job.
As to the posts of M7, see it reproduced below. Here is specifically what was posted and I don't think that was posted in the spirit of GMAT Club or in the spirit of helpfulness. I frankly thought this person has had a very negative experience with her but in reality, he/she never have even used her services. I felt this was an unfair post and added no value compared to the hate it contained - do you disagree? I think one way do this post would be to quote/provide sources and perhaps links to reviews (maybe fix 4 spelling errors) - that would be a fantastic post. I contacted M7 and upon the exchange, removed that post. (I have contacted a number of other users who have posted/reviewed consultants with clarifying questions or asking to add more color to their experience. Sometimes to validate their extra positive review or sometimes to understand if a certain consultant had poor practices. This was the only time I ever removed anyone's post in this thread outside of spam, which by the way, we do not delete, just move to a hidden forum).
m7 wrote:
Stacy or Sandy? Neither, based on my encounters. Stay away from both.
P.S. Stacy, instead of replying to countless negative feedbacks on different sites with "wow, I am upset, why, I am good!", go and make serious changes in your firm, bad reputaiton follows your name everywhere. Then you can show that you really care as you claim and you would not have to write looong replies here trying to reconvince past (very unhappy) clients and convince potential ones.
Now, if you feel that I have inaccurately represented Stacy's services, do a search on GMAT Club, there are plenty of posts of various degrees (positive and negative). Here is one. I think that's a legit post and i have no plans to do anything with it. Just for the background, I have a very involved full time job (I don't live off GMAT Club) and a family which keep me very busy. I don't have time to mess around with a guy who has nothing else to do but complain and be ridiculous. However, I will spend time to keep GMAT Club free of trolls and elitists that I have seen on another forum in 2002 when I started the GMAT Club. PM me if you have any detailed questions - I am not a fan of mass-CC emails.
_________________
Founder of GMAT Club
Just starting out with GMAT? Start here... | Want to know your GMAT Score? Try GMAT Score Estimator
Need GMAT Book Recommendations? Best GMAT Books
Co-author of the GMAT Club tests
Have a blog? Feature it on GMAT Club!
GMAT Club Premium Membership - big benefits and savings
Senior Manager
Status: Happy to join ROSS!
Joined: 29 Sep 2010
Posts: 279
Concentration: General Management, Strategy
Schools: Ross '14 (M)
Followers: 18
Kudos [?]: 106 [2] , given: 48
2
KUDOS
I consider using professional experience in my application process: just like in case of GMAT or TOEFL essays, I want to learn best practices for structure/tone. After reading this topic thread my consideration is:
- consultants are not standardized products such as iphone (xbox, snickers bar etc), therefore each of us will have a totally different experience working with them. Therefore client reviews are good, but it makes sense to schedule a trial session and get your own impression. You'll work with a person, not with a gadget, so chemisty matters!
- many companies sell their own admission books. It makes a lot of sense to read those to grasp an idea whether this company services fit you. Start from there - for USD 20-40 you get a first-hand impression about the author/company + some admission hints.
- do you prework! If you're GMAT is 600 and you target Wharton (or you chose a name from Top-20) and your consultant did not address that issue during the trial interview... 'Houston, we've got a problem'! I mentioned a stupid example, but you've got an idea. A good consultant shall provide some insights that you did not even consider before the call!
Last edited by AN225 on 10 Jun 2011, 13:47, edited 1 time in total.
Director
Joined: 02 Jan 2008
Posts: 597
Location: Detroit, MI
Followers: 3
Kudos [?]: 29 [2] , given: 0
2
KUDOS
Just finished up my MBA at Ross, so this review may be a little dated, but I thought I'd chip in with an opinion on MBA Exchange since it seems there are few opinions on here.
I worked directly with Dan, the founder of MBAEx and applied to 4 schools. I have a very non-traditional background (owned a painting company, and worked for a small niche IT firm). Had a 3.4 GPA from a Big Ten school, and a 730 GMAT. To say that I wasn't exactly lighting the world on fire with my resume is probably a pretty accurate statement.
My experience with MBAEx was truly fantastic. I'm not saying I wouldn't have gotten in without their help, but I am positive they substantially increased my chances of acceptance. From helping me accurately gauge to what schools to apply, to helping me craft my personal story throughout my essays, Dan was extremely helpful. I applied to 4 Top 10-15 schools, and decided to attend Michigan. I was extremely happy with my experience and would give MBA Exchange, and Dan specifically the highest of recommendations. I still maintain close contact with Dan, and he always answers emails or phone calls within 24 hours, and often much sooner. He was always there for me during the process to answer questions and concerns, and I am confident he will continue to be a close contact for years to come.
Please PM me or email me if you have any specific questions.
PS - Ross was awesome, and I couldn't have been happier with my final school choice.
Retired Moderator
Joined: 02 Feb 2009
Posts: 374
Concentration: General Management, Strategy
GMAT 1: 690 Q48 V35
GMAT 2: 730 Q49 V42
Followers: 45
Kudos [?]: 152 [2] , given: 27
2
KUDOS
2
This post was
BOOKMARKED
I want to echo Newton's comments above. I used EssaySnark for a free essay review. He (or she?) was very helpful - gave me excellent insights and also in depth analysis on what was right and what wasn't. Do note that a) there are no guarantees of each essay that is submitted getting reviewed and b) all essays are critiqued on the public blog, important information hidden of course.
Further, I have remained in touch with essaysnark over email for a few follow up questions around how to engage his services. He has been by far the MOST flexible person and someone I would have absolutely loved to work with. Alas, have already submitted all my applications for R1 and am unsure if I will be submitting anything in Round 2. Flexibility is evidenced in timelines of delivery, nature of engagement, enough details about what's inlcuded/what's not and then a LOT of follow up questions between us.
All in all, I have my money bet on EssaySnark as a more affordable, yet highly qualified admissions consultant. He is not a full consultant though, in the sense no phone calls. Only essay reviews although EXTREMELY well detailed - you would not need a phone call in that way.
_________________
Latest Blog Entry:
09-05-13 Its been too long ... Updates & the Tuck Loan
Senior Manager
Status: Happy to join ROSS!
Joined: 29 Sep 2010
Posts: 279
Concentration: General Management, Strategy
Schools: Ross '14 (M)
Followers: 18
Kudos [?]: 106 [2] , given: 48
2
KUDOS
MBAapply.com Alex Chu's review
----
I just received a school decision so I can contribute to the forum by sharing my feedback on working with MBAapply.com guru Alex Chu.
A year ago, when I started an MBA saga, I was reading all available materials on the MBA program and the application process. Thus I came across Alex’s book "The MBA Field Guide" that literary covers all aspects of MBA application (by depth/quality of material I can only compare it with Rhyme’s guides). Later on I read his posts on the forum and came across his video advices on who to blow an interview http://mbaapply.com/interviewdonts.htm, so Alex appeared on my radar. Once I dealt with the GMAT, I looked for a consultant to help shape my essays (I had plenty of ideas/material, but vague understanding of the structure/selling points).
So, why Alex?
Uprightness/Integrity
First, I read this thread, shortlisted some names and then asked for free profile evaluation with four or five other companies. I was looking for an expert who can assist rather than milk me. Thus I immideately dropped all the companies offering crazy sliding schemes (pay for eight - get ten etc) – IMHO that makes no sense! If you are looking for specific experience and opportunities, you probably can shorlist three-four schools MAXIMUM! If your school list is more than four names, you are going after a diploma, not after knowledge/experience. Also, gone were the companies that work with fixed hours allocation as I had no idea how much time I would actually spend. I also deselected all canned evaluation replies. That is not serious: the cost of consulting is USD 2-3-xk and in many cases the only 'test' option for us to judge consultants is a profile evaluation. With such approach, I narrowed down my list to Alex and Avi Gordon (who wrote an excellent book on building your profile). Since Avi was on vacation and I really liked Alex's replies, I started with Alex.
Individual approach
Probably the only canned stuff I received from Alex was questionnaire. Apart from it Alex’s replies were up to MY points, customized and written in my language (in his explanations he would refer to realities of Europe where I live, not those of California where he lives)
Quality and patience
It probably took AT LEAST A DOZEN of iterations to get right my goal essay (I could not shape my ideas well enough). Alex patiently checked my corrections and reverted with advices. Note he won’t write stuff for you, but will explain what you need to think about, what will be better selling points and why. Response time was like 12 hours or sometimes even 2-3hr (Alex, do you have a life?
Fun
Alex has a very smart and often funny way of communicating, so you won’t be bored with "one size fits all" vague advices (‘be yourself’). In addition to application questions, I asked him on a lot of other related topics (job prospects etc) and received EXCELLENT advices that answered my points and also triggered some thinking.
Things to consider:
I worked with Alex via email. I don’t know whether it is his policy, but eventually I understand it is probably better than Skype – I have all valuable info stored in my mailbox and can access it anytime.
Unlike other consultants who advertise they will load you with schedules and be pushy, Alex won’t be kicking your a… (or maybe I was too fast working weeks before deadlines). IMHO, that is a personal choice – I think MBA students are adults who can manage their lives and don’t need a babysitter reminding ‘’it’s time to drink your glass of milk and submit that Harvard application’.
Results: INSEAD - accepted (prepared essays, CV with Alex; interviews - myself). Ross - interviewed and waiting (all prepared myself using experience working with Alex).
Intern
Joined: 02 Feb 2012
Posts: 1
Followers: 0
Kudos [?]: 2 [2] , given: 0
2
KUDOS
1
This post was
BOOKMARKED
Forum Moderator - This is my first post. Please contact me if you need to verify anything. Please don’t label it with anything less than its true value.
Edit: Verified; long time & legit member
-BB
I am a longtime GMATclub user – well, a passive user. Today, I decided to take my first step in giving an honest feedback plus some suggestions to other GMATClubber because there are things that nobody is seriously talking about.
+ Contact potential admission consultant – I contacted Amerasia, Veritas, and Stacy Blackmann.
+ Prepare a list of questions that you want to cover on your very first call (or initial consultation). Usually, this call is free of charge. You can ask as many “relevant and pressing” questions as you desire. But refrain from obvious questions such as "what are my chances". Come on!! Seriously@#$@#$
+ Be clear on which colleges you are applying to. Some consultants are familiar with the colleges so they can give more pointers on your research. For example, only Adam ( from Amerasia Consulting) asked me to look into some categorically similar colleges in Europe while others just engaged in conversation.
+ Be clear on the payment and promotion.
+ Discuss your weakness upfront. No shame in showing your vulnerability. Acceptance of your weakness is a door to surprising opportunities. Trust me on this!!
+ Discuss your profile in brief ( such as GMAT, past attempts, current role, future goal, etc) and get a quick feedback on the list of colleges you are planning to apply to.
+ Many consultants will ask you to fill the detailed questionnaire. Do so only if you feel that you need to send across your message clearly. Not required if you have not decided on the consultant.
Result : After much research and deliberation, I went with Amerasia Consulting. I got interview calls from most of my selected colleges, and have been admitted to few of them. But finally, I confirmed my place in HKUST. HKUST was my second choice. My first choice was INSEAD, but I didn't get an interview call. Oh! Well…
Bottomline You can take help of admission consultant for smoothing the rough edges. And if you decide to do that, I can personally vouch for Amerasia Consulting.
If anyone has any questions, shoot pm me.
PS : While looking for a good admission consultant, I narrowed my list to Amerasia Consulting and Stacy Blackman. Stacy seemed to be very pricey, which I was not able to afford anyway. And Stacy company works in group, but I prefer one person looking at my essays. So my rejection of Stacy , by no means, states anything other than my preference. And I was not really impressed with Veritas. Period.
Last edited by bb on 10 Feb 2012, 09:34, edited 3 times in total.
verification
Go to page Previous 1 2 3 4 5 6 7 8 9 10 11 ... 26 Next [ 511 posts ]
Similar topics Replies Last post
Similar
Topics:
34 "Best" Admissions Consulting Companies - 2015 Season 35 16 Dec 2014, 13:16
29 "Best" Admissions Consulting Companies - 2014 32 12 Feb 2014, 16:20
Who are the Best Admission Consulting companies? 1 26 Nov 2012, 18:32
Who is the best company to use for admissions consulting? 5 09 Sep 2011, 07:33
Best Admission Consultant in Bangalore 11 07 Nov 2010, 09:23
Display posts from previous: Sort by
|
# zbMATH — the first resource for mathematics
On Benford’s law for continued fractions. (English) Zbl 0728.11036
A sequence of real numbers $$(q_ n)$$ is said to obey Benford’s law, if the decimal logarithms (lg $$q_ n)$$ are uniformly distributed modulo 1. H. Jager and P. Liardet [Indagationes Math. 50, 181-197 (1988; Zbl 0655.10045)] have proved Benford’s law for the sequence of denominators $$q_ n(\omega)$$ of the continued fraction expansion of a quadratic irrational $$\omega$$. In the present article this statement is generalized to regular Hurwitz continued fractions. Furthermore Benford’s law is obtained for almost all real numbers $$\omega$$.
Reviewer: R.F.Tichy (Graz)
##### MSC:
11K06 General theory of distribution modulo $$1$$ 11K55 Metric theory of other algorithms and expansions; measure and Hausdorff dimension
##### Keywords:
Benford’s law; regular Hurwitz continued fractions
Full Text:
##### References:
[1] Herrmann, Asymptotische Gleichverteilungseigenschaften von Summen schwach abhängiger Zufallsgrößen, Math. Nachr. 114 pp 263– (1983) · Zbl 0551.60008 · doi:10.1002/mana.19831140120 [2] Jager, Distributions arithmétiques des dénominators de convergents de fractions continues, Indag. Math. 91 pp 181– (1988) · doi:10.1016/S1385-7258(88)80026-X [3] Kanemitsu, Proc. of the 5-th Japan-USSR Symposium on Prob. Th. (Lecture notes in Mathematics 1299 pp 158– (1988) [4] A. Khintchine 1956 [5] Kuipers, Uniform distribution of sequences (1974) · Zbl 0281.10001 [6] O. Perron 1954 [7] Schatte, Zur Verteilung der Mantisse in der Gleitkommadarstellung einer Zufallsgröß, Zeituchr. f. Angew. Math. u. Mech. 58 pp 553– (1973) · Zbl 0267.60025 · doi:10.1002/zamm.19730530807 [8] Schatte, On sums modulo 2$$\deg$$ of independent random variables, Math. Nachr. 110 pp 243– (1983) · Zbl 0523.60016 · doi:10.1002/mana.19831100118 [9] Schatte, On mantissa distributions in computing and Benford’s law, J. Inf. Process. Cybern. EIK 24 pp 443– (1988) · Zbl 0662.65040
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
# The most general procedure for quantization
+ 8 like - 0 dislike
588 views
I recently read the following passage on page 137 in volume I of 'Quantum Fields and Strings: A course for Mathematicians' by Pierre Deligne and others (note that I am no mathematician and have not gotten too far into reading the book, so bear with me):
A physical system is usually described in terms of states and observable. In the Hamiltonian framework of classical mechanics, the states form a symplectic manifold $(M,\omega)$ and the observables are functions on $M$. The dynamics of a (time invariant) system is a one parameter group of symplectic diffeomorphisms; the generating function is the energy or Hamiltonian. The system is said to be free if $(M,\omega)$ is an affine symplectic space and the motion is by a one-parameter group of symplectic transformations. This general descriptions applies to any system that includes classical particles, fields, strings and other types of objects.
The last sentence, in particular, has really intrigued me. It implies a most general procedure for quantizing all systems encountered in physics. I haven't understood the part on symplectic diffeomorphisms or free systems. Here are my questions:
1. Given a constraint-free phase-space, equipped with the symplectic 2-form, we can construct a Hilbert space of states and a set of observables and start calculating expectation values and probability amplitudes. Since the passage says that this applies to point particles, fields and strings, I assume this is all there is to quantization of any system. Is this true?
2. What is the general procedure for such a construction, given $M$ and $\omega$?
3. For classical fields and strings what does this symplectic 2-form look like? (isn't it of infinite dimension?)
4. Also I assume for constrained systems like in loop quantum gravity, one needs to solve for the constraints and cast the system as a constraint-free before constructing the phase, am I correct?
5. I don't know what 'the one-parameter group of symplectic diffeomorphisms' are. How are the different from ordinary diffeomorphisms on a manifold? Since diffeomorphisms may be looked at as a tiny co-ordinate changes, are these diffeomorphisms canonical transformations? (is time or its equivalent the parameter mentioned above?)
6. What is meant by a 'free' system as given above?
7. By 'affine' I assume they mean that the connection on $M$ is flat and torsion free, what would this physically mean in the case of a one dimensional-oscillator or in the case of systems with strings and fields?
8. In systems that do not permit a Lagrangian description, how exactly do we define the cotangent bundle necessary for the conjugate momenta? If we can't, then how do we construct the symplectic 2-form? If we can't construct the symplectic 2-form, then how do we quantize the system?
Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the "link" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor) Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\varnothing$ in the following word:p$\hbar$ys$\varnothing$csOverflowThen drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register.
|
# Rati Gelashvili: Leader Election and Renaming with Optimal Message Complexity
Wednesday, April 23, 2014 - 5:00pm to 6:00pm
Location:
32-G575
Speaker:
Rati Gelashvili
Biography:
MIT
Abstract: Asynchronous message-passing system is a standard distributed model, where $n$ processors communicate over unreliable channels, controlled by a strong adaptive adversary. The asynchronous nature of the system and the fact that $t < n / 2$ processes may fail by crashing are the great obstacles for designing efficient algorithms.
\emph{Leader election (test-and-set)} and \emph{renaming} are two fundamental distributed tasks. We prove that both tasks can be solved using expected $O( n^2 )$ messages---the same asymptotic complexity as a single all-to-all broadcast---and that this message complexity is in fact optimal.
|
# Bolsig
BOLSIG+ is a user-friendly Windows application for the numerical solution of the Boltzmann equation for electrons in weakly ionized gases in uniform electric fields, conditions which typically appear in the bulk of collisional low-temperature plasmas. Under these conditions the electron distribution is determined by the balance between electric acceleration and momentum loss and energy loss in collisions with neutral gas particles. The main purpose of BOLSIG+ is to obtain the electron transport coefficients and collision rate coefficients from collision cross section data. The principles of BOLSIG+ can be summarized as follows: the Electric field and all collision probabilities are assumed to be uniform; the angular dependence of the electron distribution is approximated by the classical two-term expansion; the change in the electron number density due to ionization or attachment is accounted for by an exponential growth model; using the above assumptions, the Boltzmann equation reduces to a convection-diffusion continuity-equation with a non-local source term in energy space, which is discretized by an exponential scheme and solved for the electron energy distribution function by a standard matrix inversion technique.
## Keywords for this software
Anything in here will be replaced on browsers that support the canvas element
|
# Scientific Workflows with Zotero
I’d like to start a discussion on possible Logseq-Zotero workflows for scientific research.
The goal is to work towards a complete workflow that leverages the strengths of Zotero, Logseq, and a word processor. The first question is where to draw the boundaries between the different programs.
These are the typical steps that a researcher might follow:
1. Capture references
• Collect articles by searching Scopus, Google Scholar etc.
• Zotero is excellent, it is unlikely Logseq will be able to compete with the Zotero connector machinery
2. Manage references
• Maintain a database of articles
• Zotero seems to be the standard, even though others might prefer JabRef or similar
• This is not a space for Logseq to compete in
3. Annotate documents
• Zotero 6.0 added a great PDF reader and note editor, which still has some limitations
• Math formulas are not supported
• Only pdf is supported, no epub, html, djvu
• Code snippets are not supported
• There is no linking/referencing system that comes anywhere close to
Logseq’s capabilities
• This is nearly a draw between Zotero and Logseq, but Logseq has a slight edge:
• Zotero has the advantage of closer integration with the literature database
• Logseq has the edge with respect to annotation and information management
• Zotero is not very open, annotations are stored in a database and currently there is no easy way to export them
• If Logseq was to provide more formats (epub, html, djvu), it could be far superior
4. Assemble information
• Combine information extracted from multiple individual documents
• Logseq was designed for this and is vastly superior.
• It is highly unlikely Zotero will ever be competitive in this space
5. Outline new article
• Create an outline of a new article
• Similar to 4., but some differences
• Needs ability to easily reference external materials, own diagrams etc.
• Export of content to next stage needs to be seamless and not lose any information
• While Logseq is an amazing outliner, export is not perfect. Need an easy way to copy and paste outlines into Word, including images and references. Ideally Logseq would export a .docx file with the reference information stored in field codes (for Zotero bibliographies), or as \cite{} fields (for BibTeX).
• Candidates for outlining are Logseq and Word.
6. Write articles
• Currently most people are using Word and LaTeX
• Many constraints exist to fit into existing workflows (Templates from publishers, coworkers not used to other formats, need Word collaboration features etc.)
• While there are some attempts for scientific writing in Markdown (see e.g. Scientific Writing with Markdown | Jaan Tollander de Balsch), formatting requirements (footnotes, references, templates, typesetting) go beyond capabilities of basic Markdown
• For many fields, Word (or LaTeX) will remain the default option for a long time
## How to split workflow between Zotero and Logseq?
The first big question is where to switch from Zotero to Logseq in the workflow. Zotero is superior for collecting and managing references (1. and 2.) and Logseq is superior for annotation and information assembly (3. and 4.).
While Zotero now has a solid annotation feature, I think it makes sense to annotate in Logseq instead, as this allows to seamlessly include the annotation in other documents, which would not be possible in Zotero.
Has anyone done an in-depth comparison between Zotero and Logseq PDF annotation? Are there any downsides of Logseq?
## How to transfer data from Zotero to Logseq?
The next question is how to integrate Zotero and Logseq for a workflow that uses Zotero for collecting and managing references, and Logseq for annotating documents.
Options for integrating Logseq with Zotero and other reference managers:
• Loose integration through files: zotero writes a .bib or .csl-jason file and logseq opens these files for citing
• Simple, automatically updated export to files has already been implemented in BetterBibtex
• Loose coupling with Zotero, if Zotero is down everything still works
- Would also work with JabRef and other reference managers
• No automatic creation of back-links from Zotero
• Tight integration with a custom Zotero client plugin: A plugin that runs in the Zotero client provides direct access to the Zotero database through a local web server. The plugin could provide bidirectional coupling and Logseq could modify Zotero items.
- No need for .bib export
• Can automatically add a note to a Zotero item that links back to all Logseq pages that reference the item
• Zotero currently has no client-API
- Currently only option is to install local server into Zotero using the debug-bridge and then send js commands
• Overly tight integration with Zotero: if Zotero is down or there is a problem with the plug-in Logseq doesn’t work either.
• Integration using the Zotero web-API
• Not an option
• Expensive, needs unlimited Zotero subscription for any realistically-sized library
• No privacy, need to sync entire Zotero collection and annotations to cloud
• Doesn’t work when offline or when Zotero is down
• High latency, documents (potentially very large) and information not sourced locally
While it is tempting to try to set up a direct integration with the Zotero client, the lack of a supported client-API makes this approach somewhat sketchy. At the moment, the only realistic option is to use Zotero+BetterBibtex to write automatically updated .bib files, which can then be imported by Logseq. Probably BetterBibtex needs to export a more complete set of information for each file, including the item identifiers, so that Logseq can automatically add zotero://select links, but this is a minor issue.
Did I miss any options for Zotero integration?
## How to get outlines from Logseq into Word/TeX?
The third question is how to turn a Logseq outline into a complete article. Most likely, Word and LaTeX will stay with us for a while. While Logseq can export to html and hopefully soon pandoc, this process isn’t very robust and doesn’t seem to work well for e.g. images, formulas, and references. Realistically, one will need to manually re-enter all references, formulas, and images into the pasted text. It might be best to do the outlining directly in Word.
Has anyone any experience actually outlining an article in Logseq and transferring the content to Word?
Any thoughts on other workflows?
17 Likes
Thank you for the well-done right up! I can’t say that am a power-user in Zotero yet so perhaps others can fill in their experiences, but here are my thoughts on some of your questions:
### How to split workflow between Zotero and Logseq
Has anyone done an in-depth comparison between Zotero and Logseq PDF annotation? Are there any downsides of Logseq?
• Zotero 6 Annotation Pros:
• Text Search
• Can edit highlight annotations
• Highlight annotations have appropriate spacing between lines (there is an extra space between words at the end of a line and the first word in the next line which is missing from logseq highlight annotations).
• Can highlight images
• Can export annotations to Zotero’s new note format (at the cost of cloud space if you have image annotations), and then export to markdown (I have not tested how it works with image annotations)
• Very stable (no data loss or links breaking)
• MarkdownDBConnect plugin in Zotero can link to Obsidian, logseq and other software to add an icon on articles in the Zotero Database. This helps differentiate between articles I have created a note for in Logseq and others that I haven’t made a note for yet. It’s simple to setup especially if you’re using citekeys as your markdown file names.
• If I annotate the pdf file directly, the annotations show up in the sidebar similar to if I made the highlight in Zotero. However, I can’t edit the highlighted text.
• Zotero 6 Cons:
• As you mentioned, there is no note or article linking feature which logesq is best for.
• Without exporting annotations are stuck inside Zotero. However there are hyperlinks at the end of each annotation that can open local Zotero when we need to see the context.
• Logseq PDF Annotation Pros:
• Highlights text and images that can be easily referenced anywhere in logseq.
• Highlight annotations can be edited to include anything that can be rendered in logseq (mathjax, code, bold, italics, links…)
• Zotero settings in logseq allows for importing of links to pdfs from our Zotero database.
• Logseq PDF Annotation Cons:
• No pdf text search
• Image Highlights don’t work with Zotero PDFs with spaces in the name: github issue. There is a small fix for that currently in the issue comments, but requires file renaming with Zotfile.
• Current Zotero plugin in logseq isn’t customizable like the one in Obsidian (no customizable template for yaml properties) which results in creating too many pages for all the authors. The search option is also very slow compared to Obsidian and shows less information (missing the authors, year of publication). Otherwise it does what it needs to do.
• UI zoom scaling resets while editing or resizing the logseq window. When that happens the view also resets to the beginning of the file.
• If the pdf file has highlights already, they do show up in the logseq pdf viewer, but they don’t fill up the logesq annotation file, unlike in zotero.
The most stable and consistent workflow I would think is to take all my notes on Zotero, and then export the notes and images to markdown. Unfortunately I’m more used to taking notes and summarizing as I read which leads to me annotating in logseq more. The caveat here being I need to screenshot figures and diagrams instead of linking in image annotation since that is still buggy at the moment.
If the logseq team fixes zoom scaling bug, and image highlight bug with zotero pdfs, then I think the workflow where zotero is used to capture articles and logesq for annotation and linking would work well. Especially for those who work mostly with text and less with figures/ diagrams.
Note: I use Windows 10
4 Likes
### How to transfer data from Zotero to Logseq?
Did I miss any options for Zotero integration?
Another method I’ve seen floating about is to use Obsidian’s Zotero Integration Plugin to make the markdown file in Logseq with a custom template (see the post). It’s essentially the same as the ‘loose-integration’ approach you described.
### How to get outlines from Logseq into Word/Tex?
I can see the value of outlining in logseq itself because it’ll keep a record of where I used my ideas and and what new connections I can make. I find I like to make the outlines in the software where I will make my full draft. Every time I make an outline in Logseq, I end up rewriting it anyway (for the reasons you pointed out).
Then again I have less experience with the output part of the workflow, so perhaps someone else could chime in?
2 Likes
I think many of us are setting up Zotero with Zotfile and are able to use Zotero for free.
The set up is based on this article Zotero hacks: unlimited synced storage and its smooth use with rmarkdown • Ilya Kashnitsky with quite a bit of tweaking. The underlying mechanism is to use Zotero proprietary sync for everything except attachment files because it is free for this purpose, and all the attachments such as PDFs can be sync by using Zotfile and a 3rd party sync service (like Google Drive).
Setting up Zotero to play nicely with Logseq in order to preserve the annotations when moving the Logseq Graph around is another headache. There’re some effort to improve this UX in the work (see the Discord thread here) but no idea when this will be done.
I’m writing this just to argue that cost should not be a reason to not use “Integration using the Zotero web-API”. Your other 3 reasons are valid.
there’s a pull request to address this, but apparently there’s some incompatibility with the old implementation, and no idea when it will be done feat(pdf): fix formatting of copied text
Highlighting figures has been very stable for me and I do a lot of this. Maybe there’s something wrong in your setup. My issue with PDF annotations in Logseq is that there are many moving parts that can go wrong (usually in the file name). You can get help with that in Discord by others, or tag me at @Nhan.
3 Likes
You mentioned quite a few issues with doing annotations in Logseq that I wasn’t aware of. They are not unsolvable, so let’s hope that they will be fixed soon.
I’ve accumulated a lot of annotations in Zotero (using the old notes and now the new annotations), but it feels very limited. Having the ability to add block-level tags is quite nice.
I’ll need to have a closer look at the MarkdownDBConnect plugin, this type of plugin could solve the backlink issue for the loosely coupled approach via a bib file.
You are right about Zotero storage. I think it is also possible to directly sync the storage folder with syncthing or similar, just the database itself has to be sync’ed through the Zotero server.
I had a look at how the Zotero annotations are stored in the database: The annotations are stored individually in the sqlite file. Images are stored as regular Items in the storage folder. So most likely most users will be able to stay under the free tier if they sync the storage folder manually.
For me it is still not an option to upload all my database to the Zotero cloud due to privacy concerns, but it might be ok for some.
Personally, I’d like to move away from Zotero for anything beyond collecting and managing items. The architecture of Zotero is too closed for my taste. Moving items around is surprisingly difficult if not impossible, for example, moving items between libraries resets the created date, which would mess up my workflow. Also, Zotero’s tagging and filtering is lacking compared to Logseq, no hierarchies etc.
Hi there, a scientist is here. A heavy user of Zotero, Zettlr, etc. Very recently new Zotero plugin was announced, seems that the author is keeping it well updated. It is still not well known, but look promising for fast outlining and linking when working with PDF’s.
Thanks for interesting discussion!
5 Likes
Another card carrier here.
I’d agree with the previous responses: a well thought out writeup of issues surrounding what is potentially a very useful workflow.
I’d hesitate to call myself a power-user in any of the programs under review (LibreOffice / LaTeX / zotero / logseq), despite a reasonable amount of experience in all.
For me, tight integration between zotero & logseq would be ideal. It strikes me that a useful avenue to pursue might be along the lines of zotero plugins for Libre(MS)office, which appear to reference local storage.
An equally workable solution would be for logseq to be able to import .bib files, much like LaTeX’ bibliography. This would obviate the need to work with large bibliographies.
For me, logseq’s ability to directly reference PDF’s in notes is a game changer
It might also be worthwhile asking what you require of each component of your workflow. I don’t require much more from logseq other than concept linkage & and export of a few dot points. I don’t require much more from zotero than to store references for searching. Any writing that needs to be done, I’m doing in the end program (LibreOffice or LaTeX, as the case may be) so that I can leverage the strengths of each component. However, it is useful to export a series of dot points with notes and references through (eg) pandoc (such as Zettlr) to the end program.
\$0.02.
2 Likes
As far as integration goes, I found out that Zotero is not very open and that it is quite difficult to get access to the data locally.
I looked at the office integration a while ago, and it was very complex and limited protocol, that was also completely different between MS Word (COM-based, I think) and OO. There is also this protocol:
Overall, I am torn about the Zotero integration. I see that Zotero is developing quite slowly and I feel that relying on Zotero internals might be dangerous in the long run. My library has become very large, and the Zotero citation picker has become extremely slow, a problem shared by many users.
For some reason, Zotero does not provide a local API to access the database, so there is no official way to interact with a local Zotero instance (which is needed for privacy reasons and to work offline). Zotero also plans to switch to Electron, a switch which might or might not affect any plugins Logseq would rely on
For these reasons, I feel that the safest route is to go through .bib files (which would also open workflows with other reference managers).
An option for a tight integration could be to have a scanner that goes through the Logseq documents, finds any links to zotero, then opens Zotero and adds linked documents back to the markdown files. If the Zotero plugin goes down for whatever reason, it wouldn’t stop Logseq from working. I think this would be the best and most stable solution, short of an officially supported local API that exposes the full database (similar to Calibre’s API and the Content Server).
I agree with you that writing needs to be done in a word processor or LaTeX for the time being.
1 Like
I’m new to Zotero so I don’t know much about it. Is it that you feel the development is slow or is this relative to another reference manager? Do you have an alternative in mind?
Could you give a few links of example of workflow using .bib file? I don’t know anything and would like to learn about this.
Zotero also plans to switch to Electron
They’ve talked about it for 5 years and the latest is “won’t be […] anytime soon” ha ha.
Zotero is a great program and I don’t see anything coming even remotely close, but still I have the feeling that Zotero is starting to lag behind. I am sure many problems are due to technical debt from being tied to the browser platform, this also makes it difficult to interface with 3rd party software. If you compare Zotero to Calibre, the latter has a much more vibrant developer company that has created a huge amount of plugins.
Over the years, I have run into many limitations of Zotero, such as
• no easy way to transfer items between libraries while maintaining all information
• no way to support complex workflows
• search is very slow
• too much emphasis on cloud sync, which has privacy issues
• citation picker is very slow
• no supported local API
• tag system is primitive compared to how it should be.
• no way to automatically populate collections based on tags (search folders have to hierarchy)
• no automatic renaming of tags.
• Zotero notes are great, but they lack Logseq’s features for assembling the information into other documents. Can’t tag individual blocks in Zotero’s Notes, tags are per note.
• The new note support is great, but it still doesn’t support TeX, and currently there is no good way to export notes. Writing a note is a substantial investment (many hours per article), and I don’t like my notes to end up in a format that I can’t export properly. I don’t want to rely on a plugin either that might stop working in a few years when they move to Electron.
All of these issues could be addressed with a couple lines of Python, but the lack of a local API makes this difficult and one has to rely on the unofficial debug-bridge or write a Zotero plugin.
The Zotero development is also not very open, they have a mailing list, but no public roadmap.
I don’t want to be too critical of Zotero, like I said, it is a unique program, but I am still worried about putting too much of my intellectual work into the Zotero ecosystem.
There is a plugin for Better Bibtex that automatically writes a bib file and keeps it sync’ed. It still misses some information that would be useful (such as Zotero ID’s for zotero://select links, but probably the author would be willing to add those).
Logseq could then parse this file. This has some major advantages, it still works if Zotero is down and it doesn’t rely on the cloud, so no latency or privacy issues.
I wrote some more comments here.
That’s a good example for the lack of openness. Three years ago it was supposed to happen within half a year and now it has been postponed forever without much of an explanation. I don’t care about the GUI, but if the switch eventually happens it might break add-ons. I am also not very inclined to write add-ons for this reason.
3 Likes
I recommend a zotero plug-in called “Zotero IF pro max”. For highlighting content marked up by Zotero’s own PDF reader, it supports automatic generation and export of markdown files, with or without highlighting colors. The location of the exported file is the location of Logseq’s data. It is designed for Obsidian, but Logseq is also applicable.
The problem is that it’s a Chinese plugin, and that you have to pay for it. I’m not sure if it’s available in English. If you guys would like to try using translation software, I’m sure it would be very helpful. (Zotero IF Pro Max 首次使用须知
2 Likes
I just noticed GitHub - sawhney17/logseq-citation-manager — has anyone tried it?
1 Like
It works great! I have issues that Logseq doesn’t work with relative links (see Comprehensive Zotero Plugin - #42 by Luhmann ), but that is a Logseq bug.
It might be related due to me having the Zotero storage folder in a different location.
zotero-better-notes is great on this.
I take all my notes in zotero with zotero-better-notes, and then export markdown and sync them under Logseq folder.
Each note has a link to the reference pdf in zotero.
You can open the pdf from the note in Logseq with one click.
It work great.
1 Like
geo_fan also mentions zotero plugin zotero-better-notes above, Scientific Workflows with Zotero - #8 by geo_fan
1 Like
I’ve tried zotero-better-notes, but the markdown file exported to logset is like, all of my annotations are all in one block. It’s kind of annoying I have to say
@yangjincai what export settings do you use from zotero-better-notes? (see snapshot below).
I screenshot an arbitrary selection, but I feel like whatever combination I try, the links aren’t working within LogSeq. But this is an amazing project, I hope I can get it working.
Hi, @Flaunster , I use this export setting.
and if you want the [[bi-directional links]] work, you need to remove the random tag (avoid conflict) in export file names.
Zotero → Edit → Note Template Editor → ExportMDFileName:
related discussion in zotero-better-notes issue125.
1 Like
Thank you @yangjincai !!! You just saved me untold hours trying to figure that out.
Also, the Zotero-better-notes plug-in sync is unidirectional…so accidentally overwriting notes seems over (especially over time when you forget about the syn and revisit a paper).
It’s frustrating because this solution is SO close to working if it could just sync both ways. Do any developers out there have a sense of how much work this would require to develop bi-directional sync? Like would it be an arm and leg to hire a freelancer or just a leg?
|
## Matlab Wavelet Filter
• A UWT-based FDM is proposed in real-time to overcome the limitations of WT. Both wavelets have support of length 29. How to apply Average filter, Weighted filter and Median Filter to Noisy Image? How to Implement Bitplane slicing in MATLAB? How to apply DWT (Discrete Wavelet Transform) to Image? LSB Substitution Steganography MATLAB Implementation. The 1-D fBm is generated by scaling the modulus and randomizing the phase of gaussians in FFT, while the 2-D fBm is authored by Olivier Barriere. The toolbox also includes apps and functions for decimated and nondecimated discrete wavelet analysis of signals and images, including wavelet packets and dual-tree transforms. Web resources about - Wavelet domain wiener filter matlab implementation regarding - comp. Both linear and circular Gabor filters are studied and analyzed in this aspect and how these filters are better in comparison to linear filters is also analyzed. MATLAB implementation of Legendre wavelets. Finding Definite Integral Using MATLAB. Learn more about signal, signals, scaling, wavelet, wavelets, signal processing, digital signal processing, wavelet transform MATLAB. This video is unavailable. matlab_map, programs which illustrate the use of MATLAB's mapping toolbox to draw maps of the world, countries, the US, or individual states. Apps are included in many MATLAB products. Hi all, I've just started to do some programming on wavelet using MATLAB. Welcome to the home page of the Wavelet Tour book. matlab_kmeans, programs which illustrate the use of Matlab's kmeans() function for clustering N sets of M-dimensional data into K clusters. There is a definite tradeoff between de-blurring and de-noising. 11 Filter Banks and Wavelets with MATLAB. The orthogonal refinement filters have a simple analytical expression in the Fourier domain as a function of the order α, which may be non-integer. All figures and tables in the accompanying paper:. Filter by language. Preliminaries Haar wavelet compression is an efficient way to perform both lossless and lossy image compression. The basic idea behind wavelet denoising, or wavelet thresholding, is that the wavelet transform leads to a sparse representation for many real-world signals and images. DB4 WAVELET BUILT FROM FILTER POINTS Figure 1. Learn more about wavelet, image processing. In practical cases, the Gabor wavelet is used as the discrete wavelet transform with either continuous or discrete input signal, while there is an intrinsic disadvantage of the Gabor wavelets which makes this discrete case beyond the discrete wavelet constraints: the 1-D and 2-D Gabor wavelets do not have orthonormal bases. WAVELET, a MATLAB library which contains some utilities for computations involving wavelets. Then, compare the signs of the values when the qmf phase parameter is set to 0 or 1. Download Haar_wavelet_filter. This video gives the single level compression of an image using Haar wavelet in matlab Skip navigation Haar Wavelet Transform using Matlab DataGridView BindingSource Filter Part 1/2. Both wavelets have support of length 29. Ever wonder what a wavelet is or what the MathWorks’ “Wavelet Toolbox” is actually useful for? Check out Kirthi’s videos that describe the concept and uses for a wavelet. I run this code in matlab:. Try this code %Read Input Image Input_Image=imread(‘rose. If you plot the analysis and synthesis scaling functions and wavelets for the 'bior1. Daubechies 4 wavelet (db4) properties, filters and functions Wavelet Browser by PyWavelets. The reversed signs indicates a phase shift of π radians, which is the same as multiplying the DFT by e i π. Proceedings 2000 International Conference on Image Processing (Cat. Biorthogonal filter banks do have linear phase. This web page gathers materials to complement the third edition of the book A Wavelet Tour of Signal Processing, 3rd edition, The Sparse Way, of Stéphane Mallat. The first wavelet corresponds to the wavelet filter with center frequency equal to 200 Hz, and the last wavelet corresponds to the wavelet filter with center frequency equal to 50 Hz. We will now look at two types of wavelet transforms: the Continuous Wavelet Transform and the Discrete Wavelet. The toolbox includes algorithms for continuous wavelet analysis, wavelet coherence, synchrosqueezing, and data-adaptive time-frequency analysis. You can create a DWT filter bank and visualize wavelets and scaling functions in time and frequency. And these two templates dot get the final bilateral filter templates. zip) Unzip SWv1. The mother wavelet has $$\int \psi(t). The default wavelet used in the filter bank is the analytic Morse (3,60) wavelet. The normal ECG waveform. The input, x, is a double-precision real- or complex-valued vector, or a single-variable regularly sampled timetable and must have at least four samples. 2-D Discrete Wavelet Transform. All functions and Wavelet Analyzer app tools involving either the discrete wavelet transform (1-D and 2-D) or wavelet packet transform (1-D and 2-D), use the specified DWT extension mode. levels, boundary="periodic", fast=TRUE) Arguments X A univariate or multivariate time series. Legendre wavelets can be easily loaded into the MATLAB wavelet toolbox—The m-files to allow the computation of Legendre wavelet transform, details and filter are (freeware) available. Ortiz-Lima 1, J. 0 (April 24, 2014) ftc algorithm removed + bugs fixes + new functions. Try this code %Read Input Image Input_Image=imread('rose. implementing Isotropic Wavelets and Riesz Filter for multiscale phase analysis. filter-bank model, where the filters called Gabor filters are derived from Gabor elementary functions. Definite integrals can also be used in other situations, where the quantity required can be expressed as the limit of a. 2 De-Noising Audio Signals Using MATLAB Wavelets Toolbox Adrian E. Download with Google Download with Facebook. To avoid this, we perform a circular shift in both the analysis and synthesis filter banks. wavelist() list. For the same input, this dwt function and the DWT block in the DSP System Toolbox™ do not produce the same results. The same wavelet may therefore be referred to as "CDF 9/7" (based on the filter sizes) or "biorthogonal 4, 4" (based on the vanishing moments). 5Hz Daubechies 4 undecimated filter bank 7. Both wavelets have support of length 29. The wavelet coefficients d a,b are derived as follows: where k ε R+, l ε R. m , respectively. All you have to do filter the signal using a 1-D gabor filter or use 'dwt' for wavelet transform function in MALTAB. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes". FREQUENCY DOMAIN USING MATLAB®: A STUDY FOR BEGINNERS inria-00321613, version 1 - 15 Sep 2008. Orthogonal filters cannot have linear phase with the exception of the Haar wavelet filter. zip") In matlab, type "add_all_paths" INSIDE the SWv1. Use cwtfilterbank to create a continuous wavelet transform (CWT) filter bank. The application of the Morlet wavelet analysis is also used to discriminate abnormal heartbeat behavior in the electrocardiogram (ECG). Since it is a tight frame, it obeys the generalized form of Parseval's Equality: The vector-length (L2-norm). And these two templates dot get the final bilateral filter templates. OFDM SYSTEM. Two Properties of SVD and Its Application in Data Hiding. A choice of wavelets. In order to use a built-in wavelet the name parameter must be a valid wavelet name from the pywt. The toolbox is able to transform FIR filters into lifting scheme. Since the wavelets are biorthogonal, set the wavelet type to be 2. 1 is the default) and type "help wavelet" at the Matlab prompt for a list of available wavelet toolbox commands. PyWavelets is very easy to use and get started with. signal namespace, The coefficients for the FIR low-pass filter producing Daubechies wavelets. Use dwtfilterbank to create a discrete wavelet transform (DWT) filter bank. Use Wavelet Toolbox™ functions to analyze signals and images using decimated (downsampled) and nondecimated wavelet transforms. For continuous wavelets see pywt. Learn more about bandstop, wavelet, filters, transform Wavelet Toolbox. The output decomposition structure consists of the wavelet decomposition vector c and the bookkeeping vector l, which contains the number of coefficients by level. Highpass (wavelet) filters for the DWT filter bank, returned as an L-by-2 matrix. Launch Matlab (v 6. pdf), Text File (. 75 Hz e wavelet 7. The finite support width Legendre family is denoted by legd (short name). Advantages over analog image processing: - Allows a much wider range of algorithms to be applied to the input data - Avoid problems such as the build-up of noise and signal distortion during. FYI: This is a question I posted on the MATLAB answers site, but am not getting any real feedback or views, so I am reposting on Stack Overflow (Matlab Answers Link). Optimal performance is acquired with db10 wavelet. *FREE* shipping on qualifying offers. 2) is available as part of the Matlab installation on Athena. The decomposition is done with respect to either a particular wavelet (see wfilters for more information) or particular wavelet decomposition filters. These examples are accompanied by Matlab programs to illustrate how the DWT programs are used. Ask Question Asked 6 years, 8 months ago. The wavelet transform discretizes the scales using the specified number of wavelet filters. org 38 | Page Up to now many methods of removing the baseline wander are proposed. To use the wavelet transform for image processing we must implement a 2D version of the analysis and synthesis filter banks. Orthogonal wavelet transforms are not translation invariant. MATLAB training programs (bilateral filtering) MATLAB training programs (bilateral filtering) bilateral filter templates for two main template, first is the Gaussian profile, the second is based on gray level difference as generated by the coefficients of the function template. The wavelets are ordered in psi from the finest scale resolution to the coarsest scale resolution. Watch Queue Queue. The degrees of PHd and PHr are 4 and 6, respectively. image processing with wavelet transform. 3' wavelet, you see that lifting the Haar wavelet as in the previous example has essentially provided the 'bior1. 5 Scaling Function and Wavelets 22 1. This web page gathers materials to complement the third edition of the book A Wavelet Tour of Signal Processing, 3rd edition, The Sparse Way, of Stéphane Mallat. In the following image, the blurred image is corrupted by AWGN with variance 10. Gabor filter in matlab Gabor filter bank in matlab Gabor filtering on an image in matlab Gabor filter bank generator in matlab Improved 2d gabor filter in matlab Gabor wavelets in matlab 2d gabor filter(ver1,2,3) in matlab Gabor filetr gui in matlab Gabor wavelet filter for texture extraction in matlab Gabor function masks in matlab 2d and 3d. cfs = wt(fb,x) returns the continuous wavelet transform (CWT) coefficients of the signal x, using the CWT filter bank fb. These examples are accompanied by Matlab programs to illustrate how the DWT programs are used. DISCRETE WAVELET TRANSFORM USING MATLAB. 2 Ideal Filters, Shannon Sampling, Sine Wavelets 45. Wavelet packet transforms are also related to the discrete wavelet transform. A choice of wavelets. java to the plugins folder and compile it with the "Compile and Run" command. Hi Lorena, you have to determine the equivalent filter at each level of the wavelet tree. The mother wavelet has$$\int \psi(t). The first wavelet corresponds to the wavelet filter with center frequency equal to 200 Hz, and the last wavelet corresponds to the wavelet filter with center frequency equal to 50 Hz. Welcome to Vibrationdata Matlab Page Mean filter method for removing saturation with optimization. Embedded Zero-tree wavelet (EZW) coder is the first algorithm to show the full power of wavelet-based image compression. zip") In matlab, type "add_all_paths" INSIDE the SWv1. In biorthogonal wavelets, separate decomposition and reconstruction filters are defined. Emphasizes discrete and digital methods and utilizes MATLAB(r) to illustrate these concepts Combines traditional methods such as discrete Fourier transforms and discrete cosine transforms with more recent techniques such as filter banks and wavelet. The Morlet wavelet transform method is applied to music transcription. In addition to the block diagram shown in Fig. The latter is used mostly for image processing. The default wavelet used in the filter bank is the analytic Morse (3,60) wavelet. matlab_kmeans, programs which illustrate the use of Matlab's kmeans() function for clustering N sets of M-dimensional data into K clusters. This MATLAB function returns the four lowpass and highpass, decomposition and reconstruction filters associated with the orthogonal or biorthogonal wavelet wname. 62%, and 1216 bits respectively. 0 (April 24, 2014) ftc algorithm removed + bugs fixes + new functions. This example demonstrates that for a given support, the cumulative sum of the squared coefficients of a scaling filter increase more rapidly for an extremal phase wavelet than other wavelets. The Wavelet Toolbox provides functions and tools for experiments with signals and images. I'm working with ECG signals and am trying to use a wavelet technique to reduce some of the noise in various data sets. However, you can calculate corresponding frequencies (i. The reversed signs indicates a phase shift of π radians, which is the same as multiplying the DFT by e i π. Use Wavelet Toolbox™ functions to analyze signals and images using decimated (downsampled) and nondecimated wavelet transforms. Wavelet transform is applicable for stationary as well as non-stationary signals. MATLAB codes for generating 1-D and 2-D fractional Brownian motions are: MakeFBM. m , respectively. Continuous Wavelet Transform as a Bandpass Filter CWT as a Filtering Technique. signal namespace, The coefficients for the FIR low-pass filter producing Daubechies wavelets. • An intelligent method using undecimated wavelet transform (UWT) is proposed for fault detection. Learn more about bandstop, wavelet, filters, transform Wavelet Toolbox. The toolbox also includes apps and functions for decimated and nondecimated discrete wavelet analysis of signals and images, including wavelet packets and dual-tree transforms. Abstract:- This paper presents MATLAB programs for generating the coefficients of the lowpass analysis filter corresponding to orthonormal wavelet analyses. A Discrete Wavelet Transform is usually designed with one mother and father wavelet which are generated by a sequence of convolutions of discrete FIR filters. This example demonstrates that for a given support, the cumulative sum of the squared coefficients of a scaling filter increase more rapidly for an extremal phase wavelet than other wavelets. In this lecture we covered the practical use of Fourier analysis in the form of Gabor Filters. Wavelet-based Image Restoration. In this work, the simulation results were experimentally validated. The structure is organized as in this level-3 decomposition diagram. In the following image, the blurred image is corrupted by AWGN with variance 10. 5 filter will be small. m and MakeFBM2D. An orthogonal or biorthogonal wavelet filter is not a valid filter if you have a double-density, 'ddt' or dual-tree double-density, 'realdddt' or 'cplxdddt', filter bank. Daubechies 2 wavelet (db2) properties, filters and functions Wavelet Browser by PyWavelets. An orthogonal or biorthogonal wavelet filter is not a valid filter for complex dual-tree filter banks for stages greater than 1. You can create a DWT filter bank and visualize wavelets and scaling functions in time and frequency. gui-application speech-analysis matlab-realtime wavelet MATLAB Updated Jan 20, 2019. Find materials for this course in the pages linked along the left. The wavelets are generated from a single basic wavelet 5 (t), the so-called mother wavelet, by scaling and translation: −τ ψτ = ψ s t s s t 1, ( ). In each section below, the 2-channel filter banks are described first. levels, boundary="periodic", fast=TRUE) Arguments X A univariate or multivariate time series. Is there anyone know to compile it, please show me? Thanks,. Nirmala devi AP(SLG)/ECE KEC 2. Ten Lectures on Wavelets by Ingrid Daubechies. Download A_trous_wavelet_filter. The output decomposition structure consists of the wavelet decomposition vector c and the bookkeeping vector l, which contains the number of coefficients by level. Biorthogonal filter banks do have linear phase. Some of the resulting wavelet coefficients correspond to details in the data set (high frequency sub. For example, by doing using cameraman. The toolbox includes algorithms for continuous wavelet analysis, wavelet coherence, synchrosqueezing, and data-adaptive time-frequency analysis. The toolbox also includes apps and functions for decimated and nondecimated discrete wavelet analysis of signals and images, including wavelet packets and dual-tree transforms. The basic idea behind wavelet denoising, or wavelet thresholding, is that the wavelet transform leads to a sparse representation for many real-world signals and images. The essential scheme is the following: We have perfect reconstruction filter banks if the output of the diagram is the same as the input. L is an even positive integer. Obtain the continuous wavelet transform (CWT) of a signal or image, construct signal approximations with the inverse CWT, compare time-varying patterns in two signals using wavelet coherence, visualize wavelet bandpass filters, and obtain high resolution time-frequency representations using wavelet synchrosqueezing. Scale function coefficients (low pass filter in orthogonal filter banks) must satisfy following conditions (is length of filter). Download with Google Download with Facebook. • A hybrid DE/SVR – based model is a good predictive model of the phosphorus and chlorophyll presences. The wavelets are generated from a single basic wavelet 5 (t), the so-called mother wavelet, by scaling and translation: −τ ψτ = ψ s t s s t 1, ( ). For a demo of the wavelet toolbox type "wavedemo". Hello All, I need to ask if anybody had managed to successfully generate C code from MATLAB code which uses Wavelet toolbox functions like wptree. However, you can calculate corresponding frequencies (i. Description: This plugin's purpose is to remove noise from noisy images. Plot the magnitudes of the first and last wavelets contained in the output. In the analysis filter bank, the scaling low pass filter has 9 taps, the wavelet band pass filter has 7 taps. WAVELET is a FORTRAN77 library which contains some utilities for computations involving wavelets. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes". The 1-D fBm is generated by scaling the modulus and randomizing the phase of gaussians in FFT, while the 2-D fBm is authored by Olivier Barriere. We use definite integrals to find areas such as the area between a curve and the x-axis and the area between two curves. An orthogonal or biorthogonal wavelet filter is not a valid filter for complex dual-tree filter banks for stages greater than 1. ERROR: Wavelet + MATLAB Coder. A sparse matrix is a matrix in which a large portion of its entries are 0. ricker — Scipy function for a Ricker wavelet, which takes a scale parameter a = 1/2πf (I think) Mexican hat wavelet — Wikipedia article; Ryan, 1994. The rest of the paper is organized as follows. I want apply wavelet with haar filter on an image and then reconstruction the image with approximation coefficients. Continuous wavelet and short-time Fourier transforms 5. See Daubechies Wavelets: dbN for more detail. Finger-Knuckle-Print verification using Kekre's wavelet transform. The DWT block is designed for real-time implementation while Wavelet Toolbox™ software is designed for analysis, so the products handle boundary conditions and filter states differently. When choosing filters, a high-pass of no greater than 0. The quality factor for each filter bank is the number of wavelet filters per octave. To compute the real part of the complex wavelet, we set all coefficients to zero, except for one coefficient in the upper DWT, and then compute the inverse transform. matlab_map, programs which illustrate the use of MATLAB's mapping toolbox to draw maps of the world, countries, the US, or individual states. The files associated with the. To avoid this, we perform a circular shift in both the analysis and synthesis filter banks. 3' wavelet, you see that lifting the Haar wavelet as in the previous example has essentially provided the 'bior1. Does anybody have trials in this track ? - and i need to know if this kind of code generation is supported in the MATLAB (i mean the usage of the Wavelet toolbox functions). The decomposition is done with respect to either a particular wavelet (see wfilters for more information) or particular wavelet decomposition filters. You can type in help wfilters to see what filters are available. The toolbox includes algorithms for continuous wavelet analysis, wavelet coherence, synchrosqueezing, and data-adaptive time-frequency analysis. The toolbox further provides functions to denoise and compress signals and images. IMAGE PROCESSING IN. 1 is the default) and type "help wavelet" at the Matlab prompt for a list of available wavelet toolbox commands. [Wop, filters] = wavelet_factory_2d(size(x)); filters is a struct containing three fields filters. The Motivated Engineer 54,596. Orthogonal filter banks do not have linear phase. The mother wavelet has \int \psi(t). In addition to the block diagram shown in Fig. In practical cases, the Gabor wavelet is used as the discrete wavelet transform with either continuous or discrete input signal, while there is an intrinsic disadvantage of the Gabor wavelets which makes this discrete case beyond the discrete wavelet constraints: the 1-D and 2-D Gabor wavelets do not have orthonormal bases. The reversed signs indicates a phase shift of π radians, which is the same as multiplying the DFT by e i π. WavmatND: A MATLAB Package for Non-Decimated Wavelet Transform and its Applications. dotnet projects,2013 dotnet projects,ieee 2013 projects,2013 ieee projects,it projects,academic projects,engineering projects,cs projects,java projects,appli. The wavelet transform discretizes the scales using the specified number of wavelet filters. Nirmala devi AP(SLG)/ECE KEC 2. The degrees of PHd and PHr are 4 and 6, respectively. In this paper, the real-time boundary wavelet transform (RT-BWT) method is proposed for dc fault protection in multi-terminal high voltage dc (MTDC) grids. The circular shift is implemented with the Matlab function cshift. I have been trying to implement these two filters in MATLAB: Daubechies 4 undecimated wavelet 3. of improved speed discrete wavelet. Daubechies wavelets are a family of orthogonal wavelets named after Belgian physicist and mathematician Ingrid Daubechies. pdf - Free download as PDF File (. The toolbox is able to transform FIR filters into lifting scheme. • The wavelet coefficients measure how closely correlated the wavelet is with each section of the signal • For compact representation, choose a wavelet that matches the shape of the image components -Example: Haar wavelet for black and white drawings ³ f W M ( s,W) f ( x )\ s ,W ( x ) dx. algorithms refer to a FIR filter bank with low-pass filter h, high-pass filter g, and down sampling by a factor 2 at each stage of the filter bank. You can extract features from these processed signals (in case of Gabor filter. In particular you can download all the figures from the book and perform numerical experiments using Matlab, Scilab or Python. It provides implementations of various generalizations of Mallat's well-known algorithm (iterated filterbank) such that completely general filterbank trees, dual-tree complex wavelet transforms, and wavelet packets can be computed. dwt(X, filter="la8", n. Use cwtfilterbank to create a continuous wavelet transform (CWT) filter bank. levels, boundary="periodic", fast=TRUE) Arguments X A univariate or multivariate time series. matlab_kmeans, programs which illustrate the use of Matlab's kmeans() function for clustering N sets of M-dimensional data into K clusters. Wavelets come in different sizes and shapes. Sean's pick this week is Wavelet Tech Talks - MATLAB Code Files by Kirthi Devleker. I'm working with ECG signals and am trying to use a wavelet technique to reduce some of the noise in various data sets. 5 filter will "kill" polynomials up to degree 4 in the decomposition. In the 2D case, the 1D analysis filter bank is first applied to the columns of the image and then applied to the rows. When I first started working on wavelet transforms I have struggled for many hours and days to figure out what was going on in this mysterious world of wavelet transforms, due to the lack of introductory level. In this paper, four cascade iterations are used to generate the scaling function and wavelet functions from the filters of the filter bank. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes". wf specifies four filters, two for decomposition and two for reconstruction, or 3 × 4 filters (one quadruplet by direction). DFT-Based Continuous Wavelet Transform. *FREE* shipping on qualifying offers. Image Denoising. pdf - Free download as PDF File (. filter object, a character string indicating which wavelet filter to use in the decomposition, or a numeric vector of wavelet coefficients (not scaling. First, set the order to 15 and generate the scaling filter coefficients for the Daubechies wavelet and Symlet. Abstract We present a new family of 2D orthogonal wavelets which use quincunx sampling. If you carefully choose the wavelet family and wavelet order, you get a wavelet decomposition, that roughly corresponds to the desired ba. Notice also that the Marr-Hildreth not only has a lot more noise than the other methods, the low-pass filtering it uses distorts the actual position of the facial features. If the signal is approximated well locally by a polynomial up to order 4, then the wavelet coefficients obtained with the bior3. Now I'm left with calculating the energy of the wavelet coefficients. Find materials for this course in the pages linked along the left. Recently, working on Convex Rearrangements of wavelet filtered self-similar processes I looked at compactly supported orthogonal wavelet filters and for some "empirical evidence" could not find a proof. Numeric vectors, matrices and data frames are also accepted. Quantize all the wavelet coefficients created in Prob. The structure is organized as in this level-3 decomposition diagram. [c,l] = wavedec(x,n,wname) returns the wavelet decomposition of the signal x at level n using the wavelet wname. *FREE* shipping on qualifying offers. At each subsequent level, the approximation coefficients are divided into a coarser approximation (lowpass) and highpass (detail) part. If I have the wavelet $\psi_{s,u}$, which is represented by psi in MATLAB, and I want to compute the above integral. The CWT is obtained using the analytic Morse wavelet with the symmetry parameter (gamma) equal to 3 and the time-bandwidth. It provides implementations of various generalizations of Mallat's well-known algorithm (iterated filterbank) such that completely general filterbank trees, dual-tree complex wavelet transforms, and wavelet packets can be computed. Up till now, wavelets have been generally presented as a form of Applied Mathematics. This MATLAB function returns the scaling filter f associated with the Coiflet wavelet specified by wname. Orthogonal filter banks do not have linear phase. Introduction The WMTSA Wavelet Toolkit for MATLAB is a software package for the analysis of a data series using wavelet methods. ECG Signal Analysis Using Wavelet Transforms Figure 1. The default wavelet used in the filter bank is the analytic Morse (3,60) wavelet. If you carefully choose the wavelet family and wavelet order, you get a wavelet decomposition, that roughly corresponds to the desired ba. More examples of the various families can be found on the FirWav Page. and the wavelet vector [g 0, g 1, 0, 0, …. dwt(X, filter="la8", n. A classical method using high pass filter removes very low frequency component from ECG recording [2]. Use wavemngr to add the biorthogonal wavelet filters to the toolbox. 2-D Discrete Wavelet Transform. L is an even positive integer. The wavelet transform comes in several forms. I have been stuck in reconstructing the signal back to original. lifted wavelet transform in matlab. The paper presents the results of a research on the possibilities of improving the automatic classifiers of ASM 501, Bulgarian sorting machines for fruit and vegetables. Since it is a tight frame, it obeys the generalized form of Parseval's Equality: The vector-length (L2-norm). Sean‘s pick this week is Wavelet Tech Talks – MATLAB Code Files by Kirthi Devleker. The Motivated Engineer 54,596. So using the bior3. The default wavelet used in the filter bank is the analytic Morse (3,60) wavelet. We use definite integrals to find areas such as the area between a curve and the x-axis and the area between two curves. Nageswara Rao Abstract Image denoising involves the manipulation of the image data to produce a visually high quality image. This wavelet library implements the 1D wavelet transform of matlab wavelet toolbox with c++. Before 1985, Haar wavelet was the only orthogonal wavelet people know. Learn more about wavelet, image processing. The fractional wavelet filter overcomes these limitations through a novel fractional computation of the two-dimentsional image wavelet trasform. filter-bank model, where the filters called Gabor filters are derived from Gabor elementary functions. Biorthogonal filter banks do have linear phase. The wavelet analysis procedure is to adopt a wavelet prototype function, called an analyzing wavelet or mother wavelet. Daubechies wavelets are widely used in solving a broad range of problems, e. Sean‘s pick this week is Wavelet Tech Talks – MATLAB Code Files by Kirthi Devleker. Biorthogonal wavelets feature a pair of scaling functions and associated scaling filters — one for analysis and one for synthesis. % % The Coiflet filters are designed to give both the mother and father. The minimum even length filter that can accommodate the four filters has. One constraint in the use of Wiener filtering is that signal and noise should be gaussian processes for optimality and you should note that it is not always possible compute the Covariance matrix! However, in the end, both methods are superior to spectral subtraction!! you can find the matlab codes for both Waveler Shrinkage and Wiener Denoisers at. Dual-Tree Wavelet Transforms This Dual-Tree Wavelet Transforms shows how the dual-tree discrete wavelet transform (DWT) provides advantages over the critically sampled DWT for signal and image. This paper discusses image database indexing and querying by content for fine paintings using the multi-resolution property of wavelets and artificial neural networks to provide for invariant properties to contrast, brightness, blurring, scale,. Orthogonal and Biorthogonal Filter Banks Daubechies' extremal-phase wavelets, Daubechies' least-asymmetric wavelets, Fejer-Korovkin filters, coiflets, biorthogonal spline filters Orthogonal wavelet filter banks generate a single scaling function and wavelet, whereas biorthogonal wavelet filters generate one scaling function and wavelet for. ricker — Scipy function for a Ricker wavelet, which takes a scale parameter a = 1/2πf (I think) Mexican hat wavelet — Wikipedia article; Ryan, 1994. Daubechies Wavelet and Matlab. - user7116 Jul 8 '13 at 15:55. FDtools - fractional delay filter design FlexICA - for independent components analysis FMBPC - fuzzy model-based predictive control ForWaRD - Fourier-wavelet regularized deconvolution FracLab - fractal analysis for signal processing FSBOX - stepwise forward and backward selection of features using linear regression. Use the qmf function to obtain the decomposition low-pass filter for a wavelet. For example, by doing using cameraman. In order to use a built-in wavelet the name parameter must be a valid wavelet name from the pywt. The Wavelet Toolbox requires that filters associated with the wavelet have even equal length. Obtain the continuous wavelet transform (CWT) of a signal or image, construct signal approximations with the inverse CWT, compare time-varying patterns in two signals using wavelet coherence, visualize wavelet bandpass filters, and obtain high resolution time-frequency representations using wavelet synchrosqueezing. We also covered the relation of these filters to wavelets before starting continuous and discrete. An orthogonal wavelet is entirely defined by the scaling filter – a low-pass finite impulse response (FIR) filter of length 2N and sum 1. When I first started working on wavelet transforms I have struggled for many hours and days to figure out what was going on in this mysterious world of wavelet transforms, due to the lack of introductory level. wavelet transform) offer a huge variety of applications. 5 filter will be small. You can specify wavelet and scaling filters by the number of the vanishing moments, which allows you to remove or retain polynomial behavior in your data. Discrete Wavelet Transforms are usually designed with one mother and father wavelet which are generated by a sequence of convolutions of discrete FIR filters. I'm working with ECG signals and am trying to use a wavelet technique to reduce some of the noise in various data sets. m , respectively. 2-D Discrete Wavelet Transform. I just download toolbox_wavelets and try to use in MatLab 2015b. % % The Beylkin filter places roots for the frequency response function % close to the Nyquist frequency on the real axis. 5 Implementation Relevant wavelet theory. This web page gathers materials to complement the third edition of the book A Wavelet Tour of Signal Processing, 3rd edition, The Sparse Way, of Stéphane Mallat. 75 Hz e wavelet 7. 0 (July 21, 2015) bug fixes in the curvelet transform (crash due to odd image sizes + wrong high frequency filters for option 2) + new curvelet transform option 3 (scales per angular sectors) v2. First, set the order to 15 and generate the scaling filter coefficients for the Daubechies wavelet and Symlet. The basic idea behind wavelet denoising, or wavelet thresholding, is that the wavelet transform leads to a sparse representation for many real-world signals and images.
|
Composite Numbers with 1 Prime
What is the method for finding a long sequence of consecutive composite numbers that has only 1 prime? Specifically, how to find 2011 consecutive natural numbers, 1 of which is prime.
• Are you looking for at most one prime, or exactly one prime? If it's the former, the usual example is that $n!+2, n!+3, \ldots, n!+n$ are all composite. – MJD Apr 19 '14 at 5:44
• I'm looking for exactly one prime, @MJD – Jason Chen Apr 19 '14 at 5:44
• In the second line, I said that it needed to be 2011 numbers long. – Jason Chen Apr 19 '14 at 5:48
You can construct a sequence in the following manner:
1)Let us say you want the size of sequence to be n. Then find the largest prime less than $n!+2$.
2)Now all the numbers from $p$ to $n!+n$ are composite .So choose the sequence from this list and you are done.
Here is a good way to do this.
As MJD pointed out in the comments, it is easy to find 2011 consecutive numbers which are all composite; namely, you can take 2012! + 2, 2012! + 3, ..., up to 2012! + 2012. Now, let $p$ be the smallest prime larger than 2012! + 2012. Then $p-1, p-2, p-3, ..., p-2011$ are all composite (why?) so the sequence $p-2011, p-2010, p-2009, ..., p$ contains exactly 1 prime.
• Calculating the smallest prime above $2012!+2012$ is not so easy. I am not aware of any method for efficiently testing the primality of arbitrary 6,000-digit numbers. – MJD Apr 19 '14 at 5:54
• It's not clear that the question is asking for an computationally efficient method (just any construction). Also, there are fairly efficient primality testing algorithms, although even these would take a while (but not ridiculously long!) to check whether a 5000 digit number is prime. – jschnei Apr 19 '14 at 6:02
• Your answer was better than the other one, anyway. – MJD Apr 19 '14 at 6:03
• @MJD: PARI/GP can do this easily. – Charles Apr 22 '14 at 20:52
|
# Analytically find the domain of a logarithmic function?
I'm taking pre-calc and I'm already falling behind this semester.
I'm hoping someone could give me a simple explanation on how to solve these types of problems:
$$f(x) = \log_5(4-x^2)$$
I have the answer, but I don't know how to get to it exactly.
I think I factor whatever is inside of the log. But then what's the point of the log?
Here's a few more problems that are similar:
$$f(x) = \log(x^2 - 13x + 36)$$
$$f(x) = \ln|7 + 28x|$$
-
Logarithm is defined just on POSITIVE reals.. – uforoboa Sep 26 '12 at 0:45
this is just a clever ruse to make you do an inequality of some expression of x and then see what inequality that implies about x – binn Sep 26 '12 at 0:46
You wrote "these types of problems", followed by $f(x)=\log_5(4-x^2)$. But "$f(x)=\log_5(4-x^2)$" doesn't state any problem. If there's something above that that says "Simplify the following", then you'd have stated a math problem. If there's something above that that says "Find the domains of the following functions", then you'd have stated a DIFFERENT math problem. And if there's something above that says something else, you'd have yet another math problem. Before anyone can be sure what problem you're asking about, you need to give us that additional information. – Michael Hardy Sep 26 '12 at 1:54
Sorry about that Michael. I thought it was clear enough that the title explained what the problem was: "Analytically find the domain of a logarithmic function?". That's all the instructions on the paper. – An Alien Sep 26 '12 at 2:03
add comment
## 2 Answers
If $f : I \subset \mathbb{R} \to \mathbb{R}$ is a function that maps an interval of the real numbers to the real numbers and if you want to know the domain of $f$ and there's no restriction you assume that the domain is the maximum set where the function can be defined. In other words, the interval $I$ will be the maximum set where $f$ can be defined.
For instance, what means the function $\log_5$? This function, when applied to a number $x$ gives the number $y$ such that $5^y = x$. But $5$ raised to any power will always give a strictly positive number then the maximum set where it makes sense of talking about $\log_5$ is the set of positive real numbers.
However, you are composing $\log_5$ with $f : \mathbb{R} \to \mathbb{R}$ defined by $f(x) = 4-x^2$. The domain of the composition will be then the subset of the domain of $f$ that the function $f$ maps to the domain of the function $\log_5$, in other words, you want all real $x$ such that:
$4 - x^2 > 0$
Of course this implies $x^2 < 4$ and so $-2<x<2$. Then the domain of the function $\log_5(4-x^2)$ is the set $I = \{x\in \mathbb{R} \mid -2<x<2\}$.
Don't preocupate if you didn't understand what I meant by the domain of the composition and so on, try to understand this case and then study those topics deeper. I think this way you'll be fine.
I hope this answer helps you somehow. Good luck.
-
add comment
When we work with $\ln(f)$ in which $f=f(x)$ then we should care about this point that $f(x)>0$. Here, you get a function $$f(x)=\ln|7+28x|$$ Since there is an absolute value in $\ln(...)$ so, the only job we can do is to make $7+28x$ non-zero. This means that $x\mathbb R, ~~x\neq\frac{-7}{28}=\frac{-1}4$.
-
Nicely argued! +1 – amWhy Nov 18 at 14:33
add comment
|
# Variance of the reciprocal II
Background
Leo A. Goodman, On the Exact Variance of Products
Journal of the American Statistical Association
Vol. 55, No. 292 (Dec., 1960), pp. 708-713
from where I extract the following edited quotes (removed superfluous calculations and sentences)
Let $x$ and $y$ be two independent random variables. Let us denote the expected value of x by $E(x) = X$, the variance of $x$ by $V(x)$, ... A similar notation will be used for the random variable $y$.
...we have that the variance $V(xy)$ of the product $xy$ is equal to $$V(xy) = \ldots = X^2V(y) + Y^2V(x) + V(x)V(y)$$
... We shall now present an unbiased estimate of the variance $V(xy)$. ... we have that $$v(xy) = \ldots = x^2v(y) + y^2(x) - v(x)v(y)$$
is an unbiased estimate of $V(xy)$, where $v(x)$ is an unbiased estimate of $V(x)$ and $v(y)$ is an unbiased estimate of $V(y)$.
I have a relatively simple formula $P = w + xy/(1-z)$ where each of these (independent!) variables have been estimated by a statistical package, and supplied along with 95% confidence limits and standard errors (hence variances). In fact, each of $w,x,y,z$ are probabilities, and $z$ is bounded away from 1. (as an example of the magnitudes involved, one instance of the problem has $0.1 \lt w,x,y,z \lt 0.6$ and all standard errors about $3 \times 10^{-3}$)
Questions
I need to estimate some confidence limits on $P$, and my first idea was to use the confidence limits of $w,x,y,z$, but it looks tricky/inadvisable. My second idea was to work out the variance of $P$. This clearly boils down to finding the variance for $xy/(1-z)$.
Someone has told me that I should use the equation for $v(xy)$ in the context of my formula. That is all well and good, I can accept that. So now all I need to do is find the variance of $1/(1-z)$ and apply the result of the Goodman paper twice, or perhaps only find the variance of $y/(1-z)$ and use the Goodman result once. For argument's sake, let's do the former.
I found on the internet a rough set of notes which estimated the variance of a ratio $x/y$ to be (taking the special case of $x,y$ independent) $$Var(x/y) \approx \frac{E(y)^2 Var(x) + E(x)^2 Var(y)}{E(y)^4}$$ and for the case that I am interested in, I can take $x \sim Uniform(0,1)$ (i.e. '$1$') and so get $$Var(1/y) \approx \frac{Var(y)}{E(y)^4} \quad \quad (1)$$ Is this reliable/right? Even if it is, I now am faced with a small conundrum. What is the analogue in this instance for the formula for $v$?
I am happy to take all answers that address my original problem, the question of approximating $Var(1/(1-z))$, whether I use $Var$ as given in the approximation (1) or some "unbiased estimate" in terms of the data I do have, and lastly, what would this "unbiased estimate" be, given (1)?
• Does your stat package provide some form of "predict" function? These typically provide standard errors of the predicted values. If so, that might be a simpler way to go. – jbowman Dec 9 '11 at 0:39
• Well, it has given me standard errors for by w,x,y,z as output from a standard procedure (only discovered this facility this week), but don't think it will have a function that will work in my specific setting. I was just going to calculate P manually. – David Roberts Dec 9 '11 at 1:18
• Since you are using $x$ and $y$ to denote random variables, what is the meaning of $x$ and $y$ in $$v(xy) = \ldots = x^2v(y) + y^2(x) - v(x)v(y)?$$ I understand that $v(x)$ and $v(y)$ are the variances of $x$ and $y$, but what value(s) of $x$ and $y$ are to be used in this equation? – Dilip Sarwate Jan 10 '12 at 2:35
• @DilipSarwate - actually I don't know, and the paper doesn't say. The author goes on to discuss sample means $\bar{x},\bar{y}$ of $x$ and $y$, and $s^2(x)$ as the 'usual unbiased estimate of $V(x)$' and so on. – David Roberts Jan 10 '12 at 3:47
If you can't get a predictive accuracy out of the package, this may help.
1) A better approximation to $Var(x/y)$, which to some extent takes covariation into account, is:
$Var(x/y) \approx \left(\frac{E(x)}{E(y)}\right)^2 \left(\frac{Var(x)}{E(x)^2} + \frac{Var(y)}{E(y)^2} - 2 \frac{Cov(x,y)}{E(x)E(y)}\right)$
2) For approximating the variance of a transform of a random variate, the delta method Wikipedia sometimes, but not always, gives good results. In this case, it gives, corresponding to your formula (1):
$Var(1/(1-z)) \approx \frac{Var(z)}{(1-E(z))^4}$
So now you know where that comes from! Using more terms from the underlying Taylor expansion etc. gives a higher-order, although not necessarily better, approximation:
$Var(1/(1-z)) \approx \frac{Var(z)}{(1-E(z))^4} + 2\frac{E[(z-E(z))^3]}{(1-E(z))^5} + \frac{E[(z-E(z))^4]}{(1-E(z))^6}$
I tried this out via simulation using 10,000 $U(0.1,0.6)$ variates, mimicking the example range you provided in your question, and obtained the following results. The observed variance of $1/(1-z)$ was 0.149. The first-order delta approximation yielded a value of 0.117. The next delta approximation yielded a value of 0.128. 10,000 draws from a Beta(10,20) distribution gave results of similar relative accuracy; the observed variance of $1/(1-z)$ was 0.044 and the higher-order delta approximation gave a value of 0.039.
How you would get the third and fourth moments of your estimates I'm not sure. You could, if your sample sizes give you some confidence in being close to asymptotic normality for your estimates, just use those of the Normal distribution. A bootstrap is a possibility as well, if you can do it. Either way, with small samples you're probably better off with the one-term approximation.
Of course, I could simplify all this notation by just defining $z' = 1-z$ and using that, but I chose to stick with the original notation in the question.
• Well, the estimates are probabilities arising from a multinomial logistic model, so they are some sort of funky transformation of a (combination of) normally distributed variables, or variables I'm happy to assume are normal. (I do know what this is, sort of a multivariable inverse logit) I don't know if this helps much with moment estimation, though. – David Roberts Dec 9 '11 at 3:41
• I misplaced some brackets in the longer expression, fixed now. The Normal assumption allows you to use $E[(z-E(z))^3] = 0$ and $E[(z-E(z))^4] = 3Var(z)^2$, which may be helpful. – jbowman Dec 9 '11 at 14:35
• Thanks for this answer - it saved me a lot of time hunting down things. – David Roberts Dec 14 '11 at 4:32
• i am lacking the privileges to fix a type in the accepted answer: there should be a minus sign before the covariance term – eyaler Jan 10 '12 at 1:14
• The first solutions seems weird to me. If E(x)=0, the estimate of var(x/y) goes to zero. That cannot be right then. – user89706 Sep 17 '15 at 13:54
|
time-1.9.3: A time library
Data.Time.Calendar.Julian
Synopsis
# Year and day format
Convert from proleptic Julian year and day format. Invalid day numbers will be clipped to the correct range (1 to 365 or 366).
Convert from proleptic Julian year and day format. Invalid day numbers will return Nothing
Is this year a leap year according to the proleptic Julian calendar?
Show in proleptic Julian year and day format (yyyy-ddd)
Convert to proleptic Julian year and day format. First element of result is year (proleptic Julian calendar), second is the day of the year, with 1 for Jan 1, and 365 (or 366 in leap years) for Dec 31.
toJulian :: Day -> (Integer, Int, Int) Source #
Convert to proleptic Julian calendar. First element of result is year, second month number (1-12), third day (1-31).
Convert from proleptic Julian calendar. First argument is year, second month number (1-12), third day (1-31). Invalid values will be clipped to the correct range, month first, then day.
Convert from proleptic Julian calendar. First argument is year, second month number (1-12), third day (1-31). Invalid values will return Nothing.
Show in ISO 8601 format (yyyy-mm-dd)
The number of days in a given month according to the proleptic Julian calendar. First argument is year, second is month.
Add months, with days past the last day of the month clipped to the last day. For instance, 2005-01-30 + 1 month = 2005-02-28.
Add months, with days past the last day of the month rolling over to the next month. For instance, 2005-01-30 + 1 month = 2005-03-02.
Add years, matching month and day, with Feb 29th clipped to Feb 28th if necessary. For instance, 2004-02-29 + 2 years = 2006-02-28.
Add years, matching month and day, with Feb 29th rolled over to Mar 1st if necessary. For instance, 2004-02-29 + 2 years = 2006-03-01.
Calendrical difference, with as many whole months as possible. Same as diffJulianDurationClip for positive durations.
|
`ISRN OncologyVolume 2012 (2012), Article ID 349351, 9 pageshttp://dx.doi.org/10.5402/2012/349351`
Review Article
## The Potential Benefit by Application of Kinetic Analysis of PET in the Clinical Oncology
Nuclear Medicine Department, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
Received 4 November 2012; Accepted 25 November 2012
Academic Editors: S. Honoré and T. Yokoe
Copyright © 2012 Mustafa Takesh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. M. L. Macheda, S. Rogers, and J. D. Best, “Molecular and cellular regulation of glucose transporter (GLUT) proteins in cancer,” Journal of Cellular Physiology, vol. 202, no. 3, pp. 654–662, 2005.
2. C. Plathow and W. A. Weber, “Tumor cell metabolism imaging,” Journal of Nuclear Medicine, vol. 49, supplement 6, pp. 43S–63S, 2008.
3. E. Sutinen, M. Nurmi, A. Roivainen et al., “Kinetics of [11C]choline uptake in prostate cancer: a PET stydy,” European Journal of Nuclear Medicine and Molecular Imaging, vol. 31, no. 3, pp. 317–324, 2004.
4. K. Glunde and Z. M. Bhujwalla, “Choline kinase alpha in cancer prognosis and treatment,” Lancet Oncology, vol. 8, no. 10, pp. 855–857, 2007.
5. M. Beheshti, W. Langsteger, and I. Fogelman, “Prostate cancer: role of SPECT and PET in imaging bone metastases,” Seminars in Nuclear Medicine, vol. 39, no. 6, pp. 396–407, 2009.
6. A. J. Breeuwsma, J. Pruim, M. M. Jongen et al., “In vivo uptake of [11C]choline does not correlate with cell proliferation in human prostate cancer,” European Journal of Nuclear Medicine and Molecular Imaging, vol. 32, no. 6, pp. 668–673, 2005.
7. Q. H. Zheng, T. A. Gardner, S. Raikwar et al., “[11C]Choline as a PET biomarker for assessment of prostate cancer tumor models,” Bioorganic and Medicinal Chemistry, vol. 12, no. 11, pp. 2887–2893, 2004.
8. L. G. Strauss, L. Pan, C. Cheng, U. Haberkorn, and A. Dimitrakopoulou-Strauss, “Shortened acquisition protocols for the quantitative assessment of the 2-tissue-compartment model using dynamic PET/CT18F-FDG studies,” Journal of Nuclear Medicine, vol. 52, no. 3, pp. 379–385, 2011.
9. T. Ohtake, N. Kosaka, T. Watanabe et al., “Noninvasive method to obtain input function for measuring tissue glucose utilization of thoracic and abdominal organs,” Journal of Nuclear Medicine, vol. 32, no. 7, pp. 1432–1438, 1991.
10. C. Burger and A. Buck, “Requirements and implementation of a flexible kinetic modeling tool,” Journal of Nuclear Medicine, vol. 38, no. 11, pp. 1818–1823, 1997.
11. O. C. Hutchinson, D. R. Collingridge, H. Barthel, P. M. Price, and E. O. Aboagye, “Pharmacokinetics of radiolabelled anticancer drugs for positron emission tomography,” Current Pharmaceutical Design, vol. 9, no. 11, pp. 917–929, 2003.
12. A. M. Spence, M. Muzi, D. A. Mankoff et al., “18F-FDG PET of gliomas at delayed intervals: improved distinction between tumor and normal gray matter,” Journal of Nuclear Medicine, vol. 45, no. 10, pp. 1653–1659, 2004.
13. N. Kawai, Y. Nishiyama, K. Miyake, T. Tamiya, and S. Nagao, “Evaluation of tumor FDG transport and metabolism in primary central nervous system lymphoma using [18F]fluorodeoxyglucose (FDG) positron emission tomography (PET) kinetic analysis,” Annals of Nuclear Medicine, vol. 19, no. 8, pp. 685–690, 2005.
14. Y. Nishiyama, Y. Yamamoto, T. Monden et al., “Diagnostic value of kinetic analysis using dynamic FDG PET in immunocompetent patients with primary CNS lymphoma,” European Journal of Nuclear Medicine and Molecular Imaging, vol. 34, no. 1, pp. 78–86, 2007.
15. H. Schöder and H. W. D. Yeung, “Positron emission imaging of head and neck cancer, including thyroid carcinoma,” Seminars in Nuclear Medicine, vol. 34, no. 3, pp. 180–197, 2004.
16. B. Huang, P.-L. Khong, D. L.-W. Kwong, B. Hung, C.-S. Wong, and C.-Y. O. Wong, “Dynamic PET-CT studies for characterizing nasopharyngeal carcinoma metabolism: comparison of analytical methods,” Nuclear Medicine Communications, vol. 33, no. 2, pp. 191–197, 2012.
17. Y. Anzai, S. Minoshima, G. T. Wolf, and R. L. Wahl, “Head and neck cancer: detection of recurrence with three-dimensional principal components analysis at dynamic FDG PET,” Radiology, vol. 212, no. 1, pp. 285–290, 1999.
18. B. Huang, T. Chan, W. K. S. Chan, and P.-L. Khong, “Nasopharyngeal carcinoma: investigation of intratumoral heterogeneity with FDG PET/CT,” American Journal of Roentgenology, vol. 199, no. 1, pp. 169–174, 2012.
19. H. O. Peitgen, H. Juergens, and D. Saupe, Chaos and Fractals, Springer, New York, NY, USA, 1st edition, 1992.
20. M. Kleen, O. Habler, B. Zwissler, and K. Messmer, “Programs for assessment of spatial heterogeneity of regional organ blood flow,” Computer Methods and Programs in Biomedicine, vol. 55, no. 1, pp. 51–57, 1998.
21. P. Laverman, O. C. Boerman, F. H. M. Corstens, and W. J. G. Oyen, “Fluorinated amino acids for tumour imaging with positron emission tomography,” European Journal of Nuclear Medicine, vol. 29, no. 6, pp. 681–690, 2002.
22. W. A. Weber, H. J. Wester, A. L. Grosu et al., “O-(2-[18F]fluoroethyl)-L-tyrosine and L-[methyl-11C]methionine uptake in brain tumours: initial results of a comparative study,” European Journal of Nuclear Medicine, vol. 27, no. 5, pp. 542–549, 2000.
23. A. Becherer, G. Karanikas, M. Szabó et al., “Brain tumour imaging with PET: a comparison between [18F]fluorodopa and [11C]methionine,” European Journal of Nuclear Medicine and Molecular Imaging, vol. 30, no. 11, pp. 1561–1567, 2003.
24. W. Chen, D. H. S. Silverman, S. Delaloye et al., “18F-FDOPA PET imaging of brain tumors: comparison study with 18F-FDG PET and evaluation of diagnostic accuracy,” Journal of Nuclear Medicine, vol. 47, no. 6, pp. 904–911, 2006.
25. K. Ishiwata, K. Kubota, M. Murakami et al., “Re-evaluation of amino acid PET studies: can the protein synthesis rates in brain and tumor tissues be measured in vivo?” Journal of Nuclear Medicine, vol. 34, no. 11, pp. 1936–1943, 1993.
26. C. Schiepers, W. Chen, T. Cloughesy, M. Dahlbom, and S. C. Huang, “18F-FDOPA kinetics in brain tumors,” Journal of Nuclear Medicine, vol. 48, no. 10, pp. 1651–1661, 2007.
27. L. M. Cher, C. Murone, N. Lawrentschuk et al., “Correlation of hypoxic cell fraction and angiogenesis with glucose metabolic rate in gliomas using 18F-fluoromisonidazole, 18F-FDG PET, and immunohistochemical studies,” Journal of Nuclear Medicine, vol. 47, no. 3, pp. 410–418, 2006.
28. D. Thorwarth, S. M. Eschmann, F. Paulsen, and M. Alber, “A kinetic model for dynamic [18F]-Fmiso PET data to analyse tumour hypoxia,” Physics in Medicine and Biology, vol. 50, no. 10, pp. 2209–2224, 2005.
29. D. Thorwarth, S. M. Eschmann, J. Scheiderbauer, F. Paulsen, and M. Alber, “Kinetic analysis of dynamic 18F-fluoromisonidazole PET correlates with radiation treatment outcome in head-and-neck cancer,” BMC Cancer, vol. 5, article 152, 2005.
30. K. Shi, M. Souvatzoglou, S. T. Astner et al., “Quantitative assessment of hypoxia kinetic models by a cross-study of dynamic 18F-FAZA and 15O-H2O in patients with head and neck tumors,” Journal of Nuclear Medicine, vol. 51, no. 9, pp. 1386–1394, 2010.
31. Y. T. Hong, J. S. Beech, R. Smith, J. C. Baron, and T. D. Fryer, “Parametric mapping of 18 Ffluoromisonidazole positron emission tomography using basis functions,” Journal of Cerebral Blood Flow and Metabolism, vol. 31, no. 2, pp. 648–657, 2011.
32. W. Wang, N. Y. Lee, J. C. Georgi et al., “Pharmacokinetic analysis of hypoxia 18F-fluoromisonidazole dynamic PET in head and neck cancer,” Journal of Nuclear Medicine, vol. 51, no. 1, pp. 37–45, 2010.
33. M. Henze, J. Schuhmacher, P. Hipp et al., “PET imaging of somatostatin receptors using [68GA]DOTA-D-Phe1-Tyr3-Octreotide: first results in patients with meningiomas,” Journal of Nuclear Medicine, vol. 42, no. 7, pp. 1053–1056, 2001.
34. M. Henze, A. Dimitrakopoulou-Strauss, S. Milker-Zabel et al., “Characterization of 68Ga-DOTA-D-Phe1-Tyr 3-octreotide kinetics in patients with meningiomas,” Journal of Nuclear Medicine, vol. 46, no. 5, pp. 763–769, 2005.
35. A. H. Jacobs, A. Thomas, L. W. Kracht et al., “18F-fluoro-L-thymidine and 11C-methylmethionine as markers of increased transport and proliferation in brain tumors,” Journal of Nuclear Medicine, vol. 46, no. 12, pp. 1948–1958, 2005.
36. M. Muzi, A. M. Spence, F. O'Sullivan et al., “Kinetic analysis of 3′-deoxy-3′-18F-fluorothymidine in patients with gliomas,” Journal of Nuclear Medicine, vol. 47, no. 10, pp. 1612–1621, 2006.
37. C. Schiepers, W. Chen, M. Dahlbom, T. Cloughesy, C. K. Hoh, and S. C. Huang, “18F-fluorothymidine kinetics of malignant brain tumors,” European Journal of Nuclear Medicine and Molecular Imaging, vol. 34, no. 7, pp. 1003–1011, 2007.
38. M. S. Bradbury, D. Hambardzumyan, P. B. Zanzonico et al., “Dynamic small-animal PET imaging of tumor proliferation with 3′-deoxy-3′-18F-fluorothymidine in a genetically engineered mouse model of high-grade gliomas,” Journal of Nuclear Medicine, vol. 49, no. 3, pp. 422–429, 2008.
39. C. Schiepers, M. Dahlbom, W. Chen et al., “Kinetics of $\mathrm{3ʹ}$-deoxy-3ʹ-18F-fluorothymidine during treatment monitoring of recurrent high-grade glioma,” Journal of Nuclear Medicine, vol. 51, no. 5, pp. 720–727, 2010.
40. M. Wardak, C. Schiepers, M. Dahlbom et al., “Discriminant analysis of 18F-fluorothymidine kinetic parameters to predict survival in patients with recurrent high-grade glioma,” Clinical Cancer Research, vol. 17, no. 20, pp. 6553–6562, 2011.
41. T. Tsuchida, Y. Demura, M. Sasaki et al., “Differentiation of histological subtypes in lung cancer with 18F-FDG-PET 3-point imaging and kinetic analysis,” Hellenic Journal of Nuclear Medicine, vol. 14, no. 3, pp. 224–227, 2011.
42. T. Torizuka, K. R. Zasadny, B. Recker, and R. L. Wahl, “Untreated primary lung and breast cancers: correlation between F-18 FDG kinetic rate constants and findings of in vitro studies,” Radiology, vol. 207, no. 3, pp. 767–774, 1998.
43. C. Juhász, X. Lu, M. S. Jahania et al., “Quantification of tryptophan transport and metabolism in lung tumors using PET,” Journal of Nuclear Medicine, vol. 50, no. 3, pp. 356–363, 2009.
44. M. Muzi, H. Vesselle, J. R. Grierson et al., “Kinetic analysis of 3′-deoxy-3′-fluorothymidine PET studies: validation studies in patients with lung cancer,” Journal of Nuclear Medicine, vol. 46, no. 2, pp. 274–282, 2005.
45. A. A. M. Van Der Veldt, M. Lubberink, H. N. Greuter et al., “Absolute quantification of [11C]docetaxel kinetics in lung cancer patients using positron emission tomography,” Clinical Cancer Research, vol. 17, no. 14, pp. 4814–4824, 2011.
46. J. Trojan, O. Schroeder, J. Raedle et al., “Fluorine-18 FDG positron emission tomography for imaging of hepatocellular carcinoma,” American Journal of Gastroenterology, vol. 94, no. 11, pp. 3314–3319, 1999.
47. S. Okazumi, K. Isono, K. Enomoto et al., “Evaluation of liver tumors using fluorine-18-fluorodeoxyglucose PET: characterization of tumor and assessment of effect of treatment,” Journal of Nuclear Medicine, vol. 33, no. 3, pp. 333–339, 1992.
48. Y. Choi, R. A. Hawkins, S. C. Huang et al., “Evaluation of the effect of glucose ingestion and kinetic model configurations of FDG in the normal liver,” Journal of Nuclear Medicine, vol. 35, no. 5, pp. 818–823, 1994.
49. C. Messa, Y. Choi, C. K. Hoh et al., “Quantification of glucose utilization in liver metastases: parametric imaging of FDG uptake with PET,” Journal of Computer Assisted Tomography, vol. 16, no. 5, pp. 684–689, 1992.
50. S. Chen, C. Ho, D. Feng, and Z. Chi, “Tracer kinetic modeling of 11C-acetate applied in the liver with positron emission tomography,” IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp. 426–432, 2004.
51. S. Chen and D. Feng, “Noninvasive quantification of the differential portal and arterial contribution to the liver blood supply front PET measurements using the 11C-acetate kinetic model,” IEEE Transactions on Biomedical Engineering, vol. 51, no. 9, pp. 1579–1585, 2004.
52. L. G. Strauss, S. Klippel, L. Pan, K. Schönleben, U. Haberkorn, and A. Dimitrakopoulou-Strauss, “Assessment of quantitative FDG PET data in primary colorectal tumours: which parameters are important with respect to tumour detection?” European Journal of Nuclear Medicine and Molecular Imaging, vol. 34, no. 6, pp. 868–877, 2007.
53. L. G. Strauss, D. Koczan, S. Klippel et al., “Impact of angiogenesis-related gene expression on the tracer kinetics of 18F-FDG in colorectal tumors,” Journal of Nuclear Medicine, vol. 49, no. 8, pp. 1238–1244, 2008.
54. L. G. Strauss, D. Koczan, S. Klippel et al., “Impact of cell-proliferation-associated gene expression on 2-deoxy-2-[18F]fluoro-D-glucose (FDG) kinetics as measured by dynamic positron emission tomography (dPET) in Colorectal Tumors,” Molecular Imaging and Biology, vol. 13, no. 6, pp. 1290–1300, 2011.
55. J. Buijsen, J. Van Den Bogaard, M. H. M. Janssen et al., “FDG-PET provides the best correlation with the tumor specimen compared to MRI and CT in rectal cancer,” Radiotherapy and Oncology, vol. 98, no. 2, pp. 270–276, 2011.
56. E. C. Ford, P. E. Kinahan, L. Hanlon et al., “Tumor delineation using PET in head and neck cancers: threshold contouring and lesion volumes,” Medical Physics, vol. 33, no. 11, pp. 4280–4288, 2006.
57. M. H. M. Janssen, H. J. W. L. Aerts, M. C. Öllers et al., “Tumor delineation based on time-activity curve differences assessed with dynamic fluorodeoxyglucose positron emission tomography-computed tomography in rectal cancer patients,” International Journal of Radiation Oncology Biology Physics, vol. 73, no. 2, pp. 456–465, 2009.
58. J. R. Bading, P. B. Yoo, J. D. Fissekis, M. M. Alauddin, D. Z. D'Argenio, and P. S. Conti, “Kinetic modeling of 5-fluorouracil anabolism in colorectal adenocarcinoma: a positron emission tomography study in rats,” Cancer Research, vol. 63, no. 13, pp. 3667–3674, 2003.
59. L. G. Strauss, J. Hoffend, D. Koczan, L. Pan, U. Haberkorn, and A. Dimitrakopoulou-Strauss, “Early effects of FOLFOX treatment of colorectal tumour in an animal model: assessment of changes in gene expression and FDG kinetics,” European Journal of Nuclear Medicine and Molecular Imaging, vol. 36, no. 8, pp. 1226–1234, 2009.
60. A. Dimitrakopoulou-Strauss, L. G. Strauss, and J. Rudi, “PET-FDG as predictor of therapy response in patients with colorectal carcinoma,” Quarterly Journal of Nuclear Medicine, vol. 47, no. 1, pp. 8–13, 2003.
61. A. Dimitrakopoulou-Strauss, L. G. Strauss, C. Burger et al., “Prognostic aspects of 18F-FDG PET kinetics in patients with metastatic colorectal carcinoma receiving FOLFOX chemotherapy,” Journal of Nuclear Medicine, vol. 45, no. 9, pp. 1480–1487, 2004.
62. M. Schulte, D. Brecht-Krauss, B. Heymer et al., “Grading of tumors and tumorlike lesions of bone: evaluation by FDG PET,” Journal of Nuclear Medicine, vol. 41, no. 10, pp. 1695–1701, 2000.
63. A. C. Kole, O. E. Nieweg, H. J. Hoekstra, J. R. Van Horn, H. S. Koops, and W. Vaalburg, “Fluorine-18-fluorodeoxyglucose assessment of glucose metabolism in bone tumors,” Journal of Nuclear Medicine, vol. 39, no. 5, pp. 810–815, 1998.
64. H. Wu, A. Dimitrakopoulou-Strauss, T. O. Heichel et al., “Quantitative evaluation of skeletal tumours with dynamic FDG PET: SUV in comparison to Patlak analysis,” European Journal of Nuclear Medicine, vol. 28, no. 6, pp. 704–710, 2001.
65. R. Tian, M. Su, Y. Tian et al., “Dual-time point PET/CT with F-18 FDG for the differentiation of malignant and benign bone lesions,” Skeletal Radiology, vol. 38, no. 5, pp. 451–458, 2009.
66. A. Dimitrakopoulou-Strauss, L. G. Strauss, T. Heichel et al., “The role of quantitative 18F-FDG PET studies for the differentiation of malignant and benign bone lesions,” Journal of Nuclear Medicine, vol. 43, no. 4, pp. 510–518, 2002.
67. L. G. Strauss, A. Dimitrakopoulou-Strauss, D. Koczan et al., “18F-FDG kinetics and gene expression in giant cell tumors,” Journal of Nuclear Medicine, vol. 45, no. 9, pp. 1528–1535, 2004.
68. Y. Kawakami, T. Kunisada, S. Sugihara et al., “New approach for assessing vascular distribution within bone tumors using dynamic contrast-enhanced MRI,” Journal of Cancer Research and Clinical Oncology, vol. 133, no. 10, pp. 697–703, 2007.
69. A. Dimitrakopoulou-Strauss, L. G. Strauss, M. Schwarzbach et al., “Dynamic PET 18F-FDG studies in patients with primary and recurrent soft-tissue sarcomas: impact on diagnosis and correlation with grading,” Journal of Nuclear Medicine, vol. 42, no. 5, pp. 713–720, 2001.
70. S. Okazumi, A. Dimitrakopoulou-Strauss, M. H. M. Schwarzbach, and L. G. Strauss, “Quantitative, dynamic 18F-FDG-PET for the evaluation of soft tissue sarcomas: relation to differential diagnosis, tumor grading and prediction of prognosis,” Hellenic Journal of Nuclear Medicine, vol. 12, no. 3, pp. 223–307, 2009.
71. A. Dimitrakopoulou-Strauss, L. G. Strauss, G. Egerer et al., “Impact of dynamic 18F-FDG PET on the early prediction of therapy outcome in patients with high-risk soft-tissue sarcomas after neoadjuvant chemotherapy: a feasibility study,” Journal of Nuclear Medicine, vol. 51, no. 4, pp. 551–558, 2010.
72. V. Michel, Z. Yuan, S. Ramsubir, and M. Bakovic, “Choline transport for phospholipid synthesis,” Experimental Biology and Medicine, vol. 231, no. 5, pp. 490–504, 2006.
73. A. Bansal, W. Shuyan, T. Hara, R. A. Harris, and T. R. DeGrado, “Biodisposition and metabolism of [18F]fluorocholine in 9L glioma cells and 9L glioma-bearing fisher rats,” European Journal of Nuclear Medicine and Molecular Imaging, vol. 35, no. 6, pp. 1192–1203, 2008.
74. G. Henriksen, M. Herz, A. Hauser, M. Schwaiger, and H. J. Wester, “Synthesis and preclinical evaluation of the choline transport tracer deshydroxy-[18F]fluorocholine ([18F]dOC),” Nuclear Medicine and Biology, vol. 31, no. 7, pp. 851–858, 2004.
75. P. Shreve, P. C. Chiao, H. D. Humes, M. Schwaiger, and M. D. Gross, “Carbon-11-acetate PET imaging in renal disease,” Journal of Nuclear Medicine, vol. 36, no. 9, pp. 1595–1601, 1995.
76. J. Kotzerke, B. G. Volkmer, B. Neumaier, J. E. Gschwend, R. E. Hautmann, and S. N. Reske, “Carbon-11 acetate positron emission tomography can detect local recurrence of prostate cancer,” European Journal of Nuclear Medicine, vol. 29, no. 10, pp. 1380–1384, 2002.
77. A. L. Vavere, S. J. Kridel, F. B. Wheeler, and J. S. Lewis, “1-11C-acetate as a PET radiopharmaceutical for imaging fatty acid synthase expression in prostate cancer,” Journal of Nuclear Medicine, vol. 49, no. 2, pp. 327–334, 2008.
78. C. Schiepers, C. K. Hoh, J. Nuyts et al., “1-11C-acetate kinetics of prostate cancer,” Journal of Nuclear Medicine, vol. 49, no. 2, pp. 206–215, 2008.
|
# Page:PoyntingTransfer.djvu/6
But from the values of P', Q', R' in (5) we see that
${\displaystyle {\begin{array}{ll}{\frac {dQ'}{dz}}-{\frac {dR'}{dy}}&=-{\frac {d^{2}G}{dt\ dz}}-{\frac {d^{2}\psi }{dx\ dz}}+{\frac {d^{2}H}{dt\ dy}}-{\frac {d^{2}\psi }{dz\ dx}}\\\\&={\frac {d}{dt}}\left({\frac {dH}{dy}}-{\frac {dG}{dz}}\right)\\\\&={\frac {da}{dt}}=\mu {\frac {d\alpha }{dt}}\ (\mathrm {Maxwell,\ vol.\ 2,\ p} .\ 216)\end{array}}}$
similarly
${\displaystyle {\begin{array}{c}{\frac {dR'}{dx}}-{\frac {dP'}{dz}}={\frac {db}{dt}}=\mu {\frac {d\beta }{dt}},\\\\{\frac {dP'}{dy}}-{\frac {dQ'}{dx}}={\frac {dc}{dt}}=\mu {\frac {d\gamma }{dt}}\end{array}}}$
Whence the triple integral in (6) becomes
${\displaystyle -{\frac {\mu }{4\pi }}\iiint \left(\alpha {\frac {d\alpha }{dt}}+\beta {\frac {d\beta }{dt}}+\gamma {\frac {d\gamma }{dt}}\right)dx\ dy\ dz}$
Transposing it to the other side we obtain
${\displaystyle {\begin{array}{r}{\frac {K}{4\pi }}\iiint \left(P{\frac {dP}{dt}}+Q{\frac {dQ}{dt}}+R{\frac {dR}{dt}}\right)dx\ dy\ dz+{\frac {\mu }{4\pi }}\iiint \left(\alpha {\frac {d\alpha }{dt}}+\beta {\frac {d\beta }{dt}}+\gamma {\frac {d\gamma }{dt}}\right)dx\ dy\ dz\\\\+\iiint (X{\dot {x}}+Y{\dot {y}}+Z{\dot {z}})dx\ dy\ dz+\iiint (Pp+Qq+Rr)dx\ dy\ dz\\\\={\frac {1}{4\pi }}\iint \left\{l(R'\beta -Q'\gamma )+m(P'\gamma -R'\alpha )+n(Q'\alpha -P'\beta )\right\}dS\end{array}}}$ (7)
The first two terms of this express the gain per second in electric and magnetic energies as in (2). The third term expresses the work done per second by the electromagnetic forces, that is, the energy transformed by the motion of the matter in which currents exist. The fourth term expresses the energy transformed by the conductor into heat, chemical energy, and so on; for P, Q, R are by definition the components of the force acting at a point per unit of positive electricity, so that ${\displaystyle Ppdxdydz}$ or ${\displaystyle Pdxpdydz}$ is the work done per second by the current flowing parallel to the axis of ${\displaystyle x}$ through the element of volume ${\displaystyle dxdydz}$. So for the other two components. This is in general transformed into other forms of energy, heat due to resistance, thermal effects at thermoelectric surfaces, and so on.
The left side of (7) thus expresses the total gain in energy per second within the closed surface, and the equation asserts that this energy comes through the bounding surface, each element contributing the amount expressed by the right side.
This may be put in another form, for if ${\displaystyle {\mathfrak {E'}}}$ be the resultant of P', Q', R' and ${\displaystyle \theta }$ the
|
### SSC JE Mechanical Engineering 23rd Jan 2018 Shift-1 Question 2
Instructions
In the following question, select the related word pair from the given alternatives
Question 2
# Square : Four :: ? : ?
Solution
Square has 4 sides
Rectangle has 4 sides
Hexagon has 6 sides
Rhombus has 4 sides
Triangle has 3 sides
$$\therefore\$$Triangle and Three are related in the same way Square and Four are related.
Hence, the correct answer is Option D
|
# Page number appears twice in the page
I don't know why I am getting two page numbers on the same page, one in the center and one on the right:
\documentclass[a4paper,12pt]{report}
\voffset=-1in
\usepackage{fancyhdr}
\pagestyle{fancy}
\rfoot{\thepage}
\begin{document}
The aim of the application is to create a Sushi plate and show its details.
\end{document}
Why does the one in the center still appear?
-
If the given answer help you solve the problem, please kindly accept it by clicking the check mark button below its score. Clicking the check mark button makes it green and increase your acceptance. The higher acceptance you have, the higher chance people help your other questions. :-) – kiss my armpit Mar 11 '13 at 20:08
done, sorry for the delay – Noor Mar 11 '13 at 20:15
\cfoot{}
It's usually advised to clear the header/footer entirely. Hence the use of \fancyhf{}. – Werner Feb 18 '13 at 20:39
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.