https://mathhelpboards.com/threads/laplace-poisson-integral.6588/
# [SOLVED] Laplace/Poisson Integral

#### dwsmith

We have a two-dimensional plate whose hole has radius $$a$$ and $$T(a, \theta) = f(\theta)$$. Find an expression for the steady-state temperature profile $$T(r, \theta)$$ for $$r > a$$. I am pretty sure the solution below is correct, but if you want to glance over it, that would be fine. How do I find the form of the Poisson integral formula for this problem?

Laplace's equation is $$\frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial T}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 T}{\partial\theta^2} = 0$$. Let $$T(r, \theta)$$ be of the form $$T = R(r)\Theta(\theta)$$. Separating variables gives $\frac{r}{R}\frac{d}{dr}\left( r\frac{dR}{dr}\right) = - \frac{\Theta''}{\Theta} = \lambda^2$ Since we have perfect thermal contact, our periodic boundary conditions are \begin{align} \Theta(-\pi) &= \Theta(\pi)\\ \Theta'(-\pi) &= \Theta'(\pi) \end{align} When $$\lambda = 0$$, we have $$\Theta(\theta) = b$$ and $$R(r) = \alpha\ln(r) + \beta$$. Now suppose $$\lambda\neq 0$$. Periodicity forces $$\lambda = n$$ for a positive integer $$n$$, so $\Theta_n(\theta) = A_n\cos(n\theta) + B_n\sin(n\theta)$ Let's now look at the radial equation, $$r^2R'' + rR' - n^2R = 0$$, which is of the Cauchy-Euler type; its solutions are $$R_n(r) = C_nr^n + D_nr^{-n}$$. The general form of $$T(r, \theta)$$ is $T(r, \theta) = \alpha\ln(r) + \beta + \sum_{n = 1}^{\infty} \left(C_nr^n + D_nr^{-n}\right)\left(A_n\cos(n\theta) + B_n\sin(n\theta)\right).$ Since the domain extends out to infinity, $$r^n$$ and $$\ln(r)$$ would blow up at infinity, so those terms must be dropped. Therefore, $$T(r, \theta)$$ is of the form $T(r, \theta) = A_0 + \sum_{n = 1}^{\infty}\left(\frac{A_n}{r^n}\cos(n\theta) + \frac{B_n}{r^n}\sin(n\theta)\right).$ To solve for the Fourier coefficients, we use the boundary condition on the hole of radius $$a$$: \begin{alignat*}{2} T(a, \theta) &= A_0 + \sum_{n = 1}^{\infty}\left(\frac{A_n}{a^n}\cos(n\theta) + \frac{B_n}{a^n}\sin(n\theta)\right) &&{} =f(\theta)\\ A_0 &= \frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta)\,d\theta\\ A_n &= \frac{a^n}{\pi}\int_{-\pi}^{\pi}f(\theta)\cos(n\theta)\,d\theta\\ B_n &= \frac{a^n}{\pi}\int_{-\pi}^{\pi}f(\theta)\sin(n\theta)\,d\theta \end{alignat*}

---

We can re-write the solution as $T(r,\theta) = \sum_{n = -\infty}^{\infty}c_n\,r^{-\lvert n\rvert}e^{in\theta}.$ The Poisson kernel is $$P(r,\theta) = \frac{1}{2\pi}\sum\limits_{n = -\infty}^{\infty}r^{\lvert n\rvert}e^{in\theta}$$.

#### dwsmith

We can write $$T(r, \theta) = \sum\limits_{n = -\infty}^{\infty}\left(\frac{a}{r}\right)^{\lvert n\rvert}c_n\exp(in\theta)$$. Then $$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi}f(\varphi)\exp(-in\varphi)\,d\varphi$$, so $\sum\limits_{n = -\infty}^{\infty}\left(\frac{a}{r}\right)^{\lvert n\rvert} \left(\frac{1}{2\pi} \int_{-\pi}^{\pi}f(\varphi)\exp(-in\varphi)\,d\varphi\right)\exp(in\theta) = \int_{-\pi}^{\pi}f(\varphi)\left[\frac{1}{2\pi}\sum_{n = -\infty}^{\infty}\left(\frac{a}{r}\right)^{\lvert n\rvert} \exp(in(\theta - \varphi))\right]d\varphi.$ Poisson's kernel is $$P(r, \theta) = \frac{1}{2\pi}\sum\limits_{n = -\infty}^{\infty}r^{|n|}e^{in\theta}$$. In our case $$r > a$$, we have $P\left(\tfrac{a}{r}, \theta - \varphi\right) = \frac{1}{2\pi}\sum_{n = -\infty}^{\infty}\left(\frac{a}{r}\right)^{\lvert n\rvert} \exp(in(\theta - \varphi)).$ We can re-write the kernel as $P\left(\tfrac{a}{r}, \theta\right) = \frac{1}{2\pi}\left[\sum_{n = 0}^{\infty}\left(\frac{a}{r}\right)^{n}e^{in\theta} + \sum_{n = 1}^{\infty}\left(\frac{a}{r}\right)^{n}e^{-in\theta}\right]$ Let $$z = \frac{a}{r}\exp(i\theta)$$. Since $$|z| = a/r < 1$$, we have two convergent geometric series: $P\left(\tfrac{a}{r}, \theta\right) = \frac{1}{2\pi}\left[\frac{1}{1 - z} + \frac{\bar{z}}{1 - \bar{z}}\right]$ At the moment, I have to go, so I can't finish it yet. However, is this the correct idea?
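As a numerical sanity check of the exterior series solution above (a sketch that is not part of the original thread; the boundary profile $$f$$, the hole radius, and the truncation level are arbitrary choices):

```python
import numpy as np

# Evaluate the truncated exterior solution
#   T(r, theta) = A0 + sum_{n>=1} (a/r)^n (An cos(n theta) + Bn sin(n theta)),
# where An, Bn here absorb the a^n factor of the thread's coefficients.
a, N = 1.0, 50                            # hole radius, truncation (assumed)
phi = np.linspace(-np.pi, np.pi, 2001)
f = np.cos(phi) + 0.5 * np.sin(2 * phi)   # sample boundary data (assumed)

A0 = np.trapz(f, phi) / (2 * np.pi)

def T(r, theta):
    """Truncated series for the steady-state temperature at (r, theta), r >= a."""
    total = A0
    for n in range(1, N + 1):
        An = np.trapz(f * np.cos(n * phi), phi) / np.pi
        Bn = np.trapz(f * np.sin(n * phi), phi) / np.pi
        total += (a / r) ** n * (An * np.cos(n * theta) + Bn * np.sin(n * theta))
    return total

print(T(a, 0.3), np.cos(0.3) + 0.5 * np.sin(0.6))  # on r = a the series returns f
print(T(50 * a, 0.3), A0)                          # far away it decays to A0
```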
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997571110725403, "perplexity": 855.7397021167113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141205147.57/warc/CC-MAIN-20201130035203-20201130065203-00377.warc.gz"}
http://philpapers.org/s/Arnon%20Avron
## Works by Arnon Avron (62 found)

1. ...we also provide an efficient algorithm for recovering this data. We then illustrate the ideas in a diagnostic system for checking faulty circuits. The underlying formalism is (...)

2. In the research on paraconsistency, preferential systems were used for constructing logics which are paraconsistent but stronger than substructural paraconsistent logics. The preferences in these systems were defined in different ways. Some were based on checking which abnormal formulas hold in models. We show that these natural preferential systems, which were originally designed for paraconsistent reasoning, fulfill a key condition (stopperedness or smoothness) from the theoretical research of nonmonotonic reasoning. Consequently, the nonmonotonic consequence relations that they in- (...)

3. We present a new unified framework for formalizations of axiomatic set theories of different strength, from rudimentary set theory to full ZF. It allows the use of set terms, but provides a static check of their validity. Like the inconsistent "ideal calculus" for set theory, it is essentially based on just two set-theoretical principles: extensionality and comprehension (to which we add ∈-induction and optionally the axiom of choice). Comprehension is formulated as: x ∈ {x | ϕ} ↔ ϕ, where (...)

4. One of the most significant drawbacks of classical logic is its being useless in the presence of an inconsistency. Nevertheless, the classical calculus is a very convenient framework to work with. In this work we propose means for drawing conclusions from systems that are based on classical logic, although the information might be inconsistent. The idea is to detect those parts of the knowledge-base that "cause" the inconsistency, and isolate the parts that are "recoverable". We do this by temporarily switching into (...)

5. We suggest a new framework for the Weyl-Feferman predicativist program by constructing a formal predicative set theory PZF which resembles ZF, and is suitable for mechanization. The basic idea is that the predicatively acceptable instances of the comprehension schema are those which determine the collections they define in an absolute way, independent of the extension of the "surrounding universe". The language of PZF is type-free, and it reflects real mathematical practice in making an extensive use of statically (...)

6. The notion of a bilattice was first introduced by Ginsberg (see [Gin]) as a general framework for a diversity of applications (such as truth maintenance systems, default inferences and others). The notion was further investigated and applied for various purposes by Fitting (see [Fi1]–[Fi6]). The main idea behind bilattices is to use structures in which there are two (partial) order relations, having different interpretations. The two relations should, of course, be connected somehow in order for the mathematical structure (...)

7. We provide a constructive, direct, and simple proof of the completeness of the cut-free part of the hypersequential calculus for Gödel logic (thereby proving both completeness of the calculus for its standard semantics, and the admissibility of the cut rule in the full calculus). We then extend the results and proofs to derivations from assumptions, showing that such derivations can be confined to those in which cuts are made only on formulas which occur in the assumptions.

8. We develop a unified framework for dealing with constructibility and absoluteness in set theory, decidability of relations in effective structures (like the natural numbers), and domain independence of queries in database theory. Our framework and results suggest that domain-independence and absoluteness might be the key notions in a general theory of constructibility, predicativity, and computability.

9. We define the notions of a canonical inference rule and a canonical system in the framework of single-conclusion Gentzen-type systems (or, equivalently, natural deduction systems), and prove that such a canonical system is non-trivial iff it is coherent (where coherence is a constructive condition). Next we develop a general non-deterministic Kripke-style semantics for such systems, and show that every constructive canonical system (i.e. coherent canonical single-conclusion system) induces a class of non-deterministic Kripke-style frames for which it is strongly sound and (...)

10. Propositional canonical Gentzen-type systems, introduced in [2], are systems which in addition to the standard axioms and structural rules have only logical rules in which exactly one occurrence of a connective is introduced and no other connective is mentioned. [2] provides a constructive coherence criterion for the non-triviality of such systems and shows that a system of this kind admits cut-elimination iff it is coherent. The semantics of such systems is provided using two-valued non-deterministic matrices (2Nmatrices). [23] extends these results (...)

11. We present a four-valued approach for recovering consistent data from an inconsistent set of assertions. For a common family of knowledge-bases we also provide an efficient algorithm for doing so automatically. This method is particularly useful for making model-based diagnoses.

12. A formula A is said to have the contraction property in a logic L iff whenever A, A, Γ ⊢_L B (where Γ is a multiset), also A, Γ ⊢_L B. In MLL and in MALL without the additive constants, a formula has the contraction property iff it is a theorem. Adding the mix rule does not change this fact. In MALL (with or without mix) and in affine logic, A has the contraction property iff either A is provable or A is equivalent (...)

13. This paper has two goals. First, we develop frameworks for logical systems which are able to reflect not only nonmonotonic patterns of reasoning, but also paraconsistent reasoning. Our second goal is to have a better understanding of the conditions that a useful relation for nonmonotonic reasoning should satisfy. For this we consider a sequence of generalizations of the pioneering works of Gabbay, Kraus, Lehmann, Magidor and Makinson. These generalizations allow the use of monotonic nonclassical logics as the underlying logic (...)

14. In advanced books and courses on logic (e.g. [Sm], [BM]) Gentzen-type systems or their dual, tableaux, are described as techniques for showing validity of formulae which are more practical than the usual Hilbert-type formalisms. People who have learnt these methods often wonder why the Automated Reasoning community seems to ignore them and prefers instead the resolution method. Some of the classical books on AD (such as [CL], [Lo]) do not mention these methods at all. Others (such as [Ro]) do, but (...)

15. It is well known that every propositional logic which satisfies certain very natural conditions can be characterized semantically using a multi-valued matrix [Łoś and Suszko, 1958; Wójcicki, 1988; Urquhart, 2001]. However, there are many important decidable logics whose characteristic matrices necessarily consist of an infinite number of truth values. In such a case it might be quite difficult to find any of these matrices, or to use one when it is found. Even in case a logic does have a (...)

16. We construct a modular semantic framework for LFIs (logics of formal (in)consistency) which extends the framework developed in [1; 3], but includes Marco's schema too (and so practically all the axioms considered in [11] plus a few more). In addition, the paper provides another demonstration of the power of the idea of nondeterministic semantics, especially when it is combined with the idea of using truth-values to encode relevant data concerning propositions.

17. Non-deterministic matrices (Nmatrices) are multiple-valued structures in which the value assigned by a valuation to a complex formula can be chosen non-deterministically out of a certain nonempty set of options. We consider two different types of semantics which are based on Nmatrices: the dynamic one and the static one (the latter is new here). We use the Rasiowa-Sikorski (R-S) decomposition methodology to get sound and complete proof systems employing finite sets of mv-signed formulas for all propositional logics based on such (...)

18. A paraconsistent logic is a logic which allows non-trivial inconsistent theories. One of the oldest and best known approaches to the problem of designing useful paraconsistent logics is da Costa's approach, which seeks to allow the use of classical logic whenever it is safe to do so, but behaves completely differently when contradictions are involved. da Costa's approach has led to the family of Logics of Formal (In)consistency (LFIs). In this paper we provide non-deterministic semantics for a very large family (...)

19. A paraconsistent logic is a logic which allows non-trivial inconsistent theories. One of the oldest and best known approaches to the problem of designing useful paraconsistent logics is da Costa's approach, which seeks to allow the use of classical logic whenever it is safe to do so, but behaves completely differently when contradictions are involved. da Costa's approach has led to the family of Logics of Formal (In)consistency (LFIs). In this paper we provide non-deterministic semantics for a very large family (...)

20. We show by way of example how one can provide, in many cases, simple modular semantics for rules of inference, so that the semantics of a system is obtained by joining the semantics of its rules in the most straightforward way. Our main tool for this task is the use of finite Nmatrices, which are multi-valued structures in which the value assigned by a valuation to a complex formula can be chosen non-deterministically out of a certain nonempty set (...)

21. In order to handle inconsistent knowledge bases in a reasonable way, one needs a logic which allows nontrivial inconsistent theories. Logics of this sort are called paraconsistent. One of the oldest and best known approaches to the problem of designing useful paraconsistent logics is da Costa's approach, which seeks to allow the use of classical logic whenever it is safe to do so, but behaves completely differently when contradictions are involved. Da Costa's approach has led to the family of logics (...)

22. We have avoided here the term "false", since we do not want to commit ourselves to the view that A is false precisely when it is not true. Our formulation of the intuition is therefore obviously circular, but this is unavoidable in intuitive informal characterizations of basic connectives and quantifiers.

23. We introduce a general framework for solving the problem of a computer collecting and combining information from various sources. Unlike previous approaches to this problem, in our framework the sources are allowed to provide information about complex formulae too. This is enabled by the use of a new tool — non-deterministic logical matrices. We also consider several alternative plausible assumptions concerning the framework. These assumptions lead to various logics. We provide strongly sound and complete proof systems for all the basic (...)

24. An (n, k)-ary quantifier is a generalized logical connective, binding k variables and connecting n formulas. Canonical systems with (n, k)-ary quantifiers form a natural class of Gentzen-type systems which in addition to the standard axioms and structural rules have only logical rules in which exactly one occurrence of a quantifier is introduced. The semantics for these systems is provided using two-valued non-deterministic matrices, a generalization of the classical matrix. In this paper we use a constructive syntactic criterion of coherence (...)

25. We provide a general investigation of Logic in which the notion of a simple consequence relation is taken to be fundamental. Our notion is more general than the usual one since we give up monotonicity and use multisets rather than sets. We use our notion for characterizing several known logics (including Linear Logic and non-monotonic logics) and for a general, semantics-independent classification of standard connectives via equations on consequence relations (these include Girard's "multiplicatives" and "additives"). We next investigate the (...)

26. We show that a given dataflow language l has the property that for any program P and any demand for outputs D (which can be satisfied) there exists a least partial computation of P which satisfies D, iff all the operators of l are stable. This minimal computation is the demand-driven evaluation of P. We also argue that in order to actually implement this mode of evaluation, the operators of l should be further restricted to be e (...)

27. In several areas of Mathematical Logic and Computer Science one would ideally like to use the set Form(L) of all formulas of some first-order language L for some goal, but this cannot be done safely. In such a case it is necessary to select a subset of Form(L) that can safely be used. Three main examples of this phenomenon are: the main principle of naive set theory is the comprehension schema: ∃Z(∀x.x ∈ Z ⇔ A). (...)

28. There is a long tradition (see e.g. [9, 10]), starting from [12], according to which the meaning of a connective is determined by the introduction and elimination rules which are associated with it. The supporters of this thesis usually have in mind natural deduction systems of a certain ideal type (explained in Section 3 below). Unfortunately, already the handling of classical negation requires rules which are not of that type. This problem can be solved in the framework of multiple-conclusion Gentzen-type (...)

29. Until not too many years ago, all logics except classical logic (and, perhaps, intuitionistic logic too) were considered to be things esoteric. Today this state of affairs seems to have completely changed. There is a growing interest in many types of nonclassical logics: modal and temporal logics, substructural logics, paraconsistent logics, non-monotonic logics – the list is long. The diversity of systems that have been proposed and studied is so great that a need is felt by many researchers (...)

30. Linear logic is a new logic which was recently developed by Girard in order to provide a logical basis for the study of parallelism. It is described and investigated in [Gi]. Girard's presentation of his logic is not so standard. In this paper we shall provide more standard proof systems and semantics. We shall also extend part of Girard's results by investigating the consequence relations associated with Linear Logic and by proving corresponding strong completeness theorems. Finally, we shall investigate (...)

31. Hypersequents are finite sets of ordinary sequents. We show that multiple-conclusion sequents and single-conclusion hypersequents represent two different natural methods of switching from a single-conclusion calculus to a multiple-conclusion one. The use of multiple-conclusion sequents corresponds to using a multiplicative disjunction, while the use of single-conclusion hypersequents corresponds to using an additive one. Moreover, each of the two methods is usually based on a different natural semantic idea and accordingly leads to a different class of algebraic structures. In the cases we consider here (...)

32. One of the most important paraconsistent logics is the logic mCi, which is one of the two basic logics of formal inconsistency. In this paper we present a 5-valued characteristic nondeterministic matrix for mCi. This provides a quite non-trivial example for the utility and effectiveness of the use of non-deterministic many-valued semantics.

33. Around 1950, B. A. Trakhtenbrot proved an important undecidability result (known, by a pure accident, as "Trakhtenbrot's theorem"): there is no algorithm to decide, given a first-order sentence, whether the sentence is satisfiable in some finite model. The result is in fact true even if we restrict ourselves to languages that have only one binary relation [Tra63]. It is hardly conceivable that at that time Prof. Trakhtenbrot expected his result to influence the development of the theory of relational databases (...)

34. An (n, k)-ary quantifier is a generalized logical connective, binding k variables and connecting n formulas. Canonical systems with (n, k)-ary quantifiers form a natural class of Gentzen-type systems which in addition to the standard axioms and structural rules have only logical rules in which exactly one occurrence of a quantifier is introduced. The semantics for these systems is provided using two-valued non-deterministic matrices, a generalization of the classical matrix. In this paper we use a constructive syntactic criterion of coherence (...)

35. Arnon Avron (2014). Paraconsistency, Paracompleteness, Gentzen Systems, and Trivalent Semantics. Journal of Applied Non-Classical Logics 24 (1-2):12-34. A quasi-canonical Gentzen-type system is a Gentzen-type system in which each logical rule introduces either a formula of the form (...), or of the form (...), and all the active formulas of its premises belong to the set (...). In this paper we investigate quasi-canonical systems in which exactly one of the two classical rules for negation is included, turning the induced logic into either a paraconsistent logic or a paracomplete logic, but not both. We provide a constructive coherence criterion (...)

36. Arnon Avron (2014). The Classical Constraint on Relevance. Logica Universalis 8 (1):1-15. We show that as long as the propositional constants t and f are not included in the language, any language-preserving extension of any important fragment of the relevance logics R and RMI can have only classical tautologies as theorems. This property is not preserved, though, if either t or f is added to the language, or if the contraction axiom is deleted.

37. Arnon Avron (2014). What is Relevance Logic? Annals of Pure and Applied Logic 165 (1):26-48.

38. Anna Zamansky & Arnon Avron (2012). Canonical Signed Calculi with Multi-Ary Quantifiers. Annals of Pure and Applied Logic 163 (7):951-960.

39. Maximality is a desirable property of paraconsistent logics, motivated by the aspiration to tolerate inconsistencies, but at the same time retain from classical logic as much as possible. In this paper we introduce the strongest possible notion of maximal paraconsistency, and investigate it in the context of logics that are based on deterministic or non-deterministic three-valued matrices. We show that all reasonable paraconsistent logics based on three-valued deterministic matrices are maximal in our strong sense. This applies to practically all three-valued (...)

40. Arnon Avron, Oskar Becker, Johan van Benthem, Andreas Blass, Robert Brandom, L. E. J. Brouwer, Donald Davidson, Michael Dummett, Walter Felscher & Kit Fine (2009). Jagadeesan, Radha, 306; Japaridze, Giorgi, xi. In Ondrej Majer, Ahti-Veikko Pietarinen & Tero Tulenheimo (eds.), Games: Unifying Logic, Language, and Philosophy. Springer Verlag. 377.

41. Arnon Avron & Beata Konikowska (2009). Proof Systems for Reasoning About Computation Errors. Studia Logica 91 (2):273-293. In the paper we examine the use of non-classical truth values for dealing with computation errors in program specification and validation. In that context, 3-valued McCarthy logic is suitable for handling lazy sequential computation, while 3-valued Kleene logic can be used for reasoning about parallel computation. If we want to be able to deal with both strategies without distinguishing between them, we combine Kleene and McCarthy logics into a logic based on a non-deterministic, 3-valued matrix, incorporating both options (...)

42. The paper presents a method for transforming a given sound and complete n-sequent proof system into an equivalent sound and complete system of ordinary sequents. The method is applicable to a large, central class of (generalized) finite-valued logics with the language satisfying a certain minimal expressiveness condition. The expressiveness condition decrees that the truth-value of any formula φ must be identifiable by determining whether certain formulas uniformly constructed from φ have designated values or not. The transformation preserves the general (...)

43. Anna Zamansky & Arnon Avron (2006). Cut-Elimination and Quantification in Canonical Systems. Studia Logica 82 (1):157-176. Canonical propositional Gentzen-type systems are systems which in addition to the standard axioms and structural rules have only pure logical rules with the sub-formula property, in which exactly one occurrence of a connective is introduced in the conclusion, and no other occurrence of any connective is mentioned anywhere else. In this paper we considerably generalize the notion of a "canonical system" to first-order languages and beyond. We extend the propositional coherence criterion for the non-triviality of such systems to rules with (...)

44. Arnon Avron (2005). A Non-Deterministic View on Non-Classical Negations. Studia Logica 80 (2-3):159-194. We investigate two large families of logics, differing from each other by the treatment of negation. The logics in one of them are obtained from the positive fragment of classical logic (with or without a propositional constant ff for "the false") by adding various standard Gentzen-type rules for negation. The logics in the other family are similarly obtained from LJ+, the positive fragment of intuitionistic logic (again, with or without ff). For all the systems, we provide simple semantics which is (...)

45. Arnon Avron & Beata Konikowska (2001). Decomposition Proof Systems for Gödel-Dummett Logics. Studia Logica 69 (2):197-219. The main goal of the paper is to suggest some analytic proof systems for LC and its finite-valued counterparts which are suitable for proof-search. This goal is achieved through following the general Rasiowa-Sikorski methodology for constructing analytic proof systems for semantically-defined logics. All the systems presented here are terminating, contraction-free, and based on invertible rules, which have a local character and at most two premises.

46. Arnon Avron (1999). Review: John C. Mitchell, Foundations for Programming Languages. Journal of Symbolic Logic 64 (2):918-922.

47. Arnon Avron, Furio Honsell, Marino Miculan & Cristian Paravano (1998). Encoding Modal Logics in Logical Frameworks. Studia Logica 60 (1):161-208. We present and discuss various formalizations of Modal Logics in Logical Frameworks based on Type Theories. We consider both Hilbert- and Natural Deduction-style proof systems for representing both truth (local) and validity (global) consequence relations for various Modal Logics. We introduce several techniques for encoding the structural peculiarities of necessitation rules in the typed λ-calculus metalanguage of the Logical Frameworks. These formalizations readily yield proof-editors for Modal Logics when implemented in Proof Development Environments, such as Coq or LEGO.

48. Ofer Arieli & Arnon Avron (1996). Reasoning with Logical Bilattices. Journal of Logic, Language and Information 5 (1):25-63. The notion of bilattice was introduced by Ginsberg, and further examined by Fitting, as a general framework for many applications. In the present paper we develop proof systems which correspond to bilattices in an essential way. For this goal we introduce the notion of logical bilattices. We also show how they can be used for efficient inferences from possibly inconsistent data. For this we incorporate certain ideas of Kifer and Lozinskii, which happen to suit well the context of our work. (...)

49. Arnon Avron (1994). What is a Logical System? In Dov M. Gabbay (ed.), What is a Logical System? Oxford University Press.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9140825271606445, "perplexity": 1670.760238833716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768980.24/warc/CC-MAIN-20141217075248-00115-ip-10-231-17-201.ec2.internal.warc.gz"}
https://brilliant.org/problems/i-dont-think-i-can-do-it-3/
I don't think I can do it 3 (Calculus, Level 2)

$\displaystyle \int^{2}_{1} dx \int^{x^2}_{x}(2x-y)\, dy = \dfrac{A}{B}$

Find $$A+B$$ where $$A$$ and $$B$$ are co-prime positive integers.
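A quick symbolic check (not part of the original problem page) with Python's sympy:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Inner integral over y from x to x^2, then the outer integral over x from 1 to 2
inner = sp.integrate(2*x - y, (y, x, x**2))
value = sp.integrate(inner, (x, 1, 2))

print(value)   # 9/10, so A = 9, B = 10 and A + B = 19
```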
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9614730477333069, "perplexity": 468.30263873978896}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00149-ip-10-171-10-70.ec2.internal.warc.gz"}
http://bardiac.blogspot.com/2013/02/much-busyness.html
## Wednesday, February 13, 2013

### Much Busyness

We have these folders for shared projects stored in the department office. For each project, a couple of people need a given folder set. We all have department office keys.

So before the weekend, I chatted with another person on my project, who had our folder set in hir office. I said it was no problem, because I wouldn't get to it over the weekend.

And then when I went to work on it yesterday morning, it wasn't in the department office, and hir office light was on, but the door was closed. (I didn't try the door, because, well, I didn't. Nor did I get an admin assistant to let me in, which I probably should have.) So I sent a short (polite) email asking hir to leave the file in the department office when zie got a chance so I could do my part.

This morning, zie stopped by my office to apologize (which was nice), but zie didn't want to move the file to the department office; instead zie wanted me to store part in my office and let hir keep the rest in hir office. Gah! It took a lot to try to convince hir that we both had keys to the department office, that it would be helpful to leave the folder set there, and to get it only when zie was actually working on the project, so that I, too, could use it at my convenience.

I really, really don't get the need to keep something in my office if I'm not actively working on it at that moment, since it's really easy to go into the department office and pick it up. (The department office is, at most, about 50 steps from hir office or mine, and we all have keys, so we can get in on the weekend or at night.)

(I also don't get why one would leave one's office light on all day when one was out at meetings. My dad would have made some comment about owning the electric company, no doubt.)

On the other hand, I really need to go finish my part of this project. I hope zie has moved the file back.

#### 1 comment:

1. GAAH indeed. We had this problem when I was on the committee that evaluates promotion and tenure files, which at the time were honking big binders stored in the provost's office. Each member of the committee was assigned a specific time and day for reading the files, and then they had to be returned to the office, and of course none of us had keys to the provost's office, so we had to fetch and return files during limited hours. Nightmare. That's why we moved to electronic portfolios: any member of the committee can read the files any time, any place with an internet connection. Much better.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8242986798286438, "perplexity": 1668.6550254611548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400380638.30/warc/CC-MAIN-20141119123300-00098-ip-10-235-23-156.ec2.internal.warc.gz"}
https://www.statisticssolutions.com/regression-analysis-logistic-regression/
# Logistic Regression

Logistic regression is a class of regression in which independent variables are used to predict a categorical dependent variable. When the dependent variable has two categories, it is a binary logistic regression. When the dependent variable has more than two categories, it is a multinomial logistic regression. When the categories of the dependent variable are ranked, it is an ordinal logistic regression. To obtain maximum likelihood estimates, the dependent variable is transformed through the logit function. The logit is the natural log of the odds of the event, and it models whether or not the event will occur. Ordinal logistic regression does not assume a linear relationship between the dependent and independent variables, and it does not assume homoscedasticity. The Wald statistic tests the significance of each individual independent variable.

Assumptions: This test is popular because it can overcome many restrictive assumptions of OLS (ordinary least squares) regression.

1. In OLS regression, a linear relationship between the dependent and independent variables is a must, but logistic regression does not assume this; the relationship between the dependent and independent variables may be linear or non-linear.
2. OLS assumes that the distribution should be normal, but in logistic regression the distribution may be normal, Poisson, or binomial.
3. OLS assumes that there is equal variance across all independent variables, but ordinal logistic regression does not assume equal variance between independent variables.
4. Logistic regression does not assume a normally distributed error term.

Although it relaxes these OLS assumptions, logistic regression makes the following assumptions of its own:

- Data level: The dependent variable should be dichotomous in nature for binary logistic regression.
- Error term: The error terms are assumed to be independent.
- Linearity: A linear relationship between the dependent and independent variables is not assumed, but there should be a linear relationship between the logit (log odds) and the independent variables.
- No outliers: The data are assumed to contain no outliers.
- Large sample: Because estimation uses the maximum likelihood method, a large sample size is required for logistic regression.

Key terms and concepts:

- Dependent variable: Dichotomous in nature for binary logistic regression, where the dependent variable has two categories. Usually we predict the higher category (coded 1) against the lower reference category (coded 0). In multinomial logistic regression, the dependent variable has more than two categories, and we predict the other categories against a reference category. In ordinal logistic regression, we predict the cumulative probability of each ordered category of the dependent variable.
- Factor: An independent variable that is dichotomous in nature is called a factor. Usually we convert factors into dummy variables.
- Covariate: An independent variable that is metric in nature is called a covariate.
- Interaction term: A covariate on its own shows an individual effect on the dependent variable; an interaction effect is the combined effect of two variables on the dependent variable. For example, when we predict the dependent variable based upon age and education category, there are two kinds of impact: the individual impact of each variable on the dependent variable, and their interaction impact.
- Maximum likelihood estimation: This method is used to estimate the odds ratio for the dependent variable. In OLS estimation, we minimize the error sum of squares; in maximum likelihood estimation, we maximize the log likelihood.
- SPSS and SAS: In SPSS, this test is available under the regression option; in SAS, it is available through PROC LOGISTIC or PROC CATMOD.
- Significance test: The Hosmer–Lemeshow chi-square test is used to test the overall goodness of fit of the model. It is a modified chi-square test that performs better than the traditional chi-square test; a non-significant p value indicates acceptable model fit. The omnibus tests table in SPSS output shows the traditional chi-square, and a separate table shows the Hosmer–Lemeshow test value. In multinomial logistic regression, the Pearson chi-square test and the likelihood ratio test are used to assess model goodness of fit.
- Stepwise: The three methods available are enter, backward, and forward. The enter method includes all variables, whether significant or not. The backward method starts with all variables and drops non-significant ones from the list. The forward method adds variables one at a time, retaining only the significant ones.
- Parameter estimate and logit: In SPSS output, the "parameter estimate" is the b coefficient used to predict the log odds (logit) of the dependent variable. Let z be the logit for a dependent variable; then the logistic prediction equation is

z = ln(odds(event)) = ln(prob(event)/prob(nonevent)) = ln(prob(event)/[1 − prob(event)]) = b0 + b1X1 + b2X2 + … + bkXk,

where b0 is the constant and X1, …, Xk are the k independent variables. In ordinal logistic regression, the threshold coefficient is different for each level of the ordered dependent variable, and the coefficients give the cumulative probability of each level.
- Odds ratio: The exponentiated beta gives the odds ratio for the dependent variable, and from this odds ratio we can find the probability of the dependent variable. When the exponentiated beta is greater than one, the probability of the higher category increases; when it is less than one, the probability of the higher category decreases. For a categorical predictor, the exponentiated beta is interpreted against the reference category. For a continuous predictor, it is interpreted as the change in the odds of the dependent variable for a one-unit increase in the independent variable.
- Measures of effect size: The simple R² is no longer accepted, because R² measures the variance explained by the independent variables, and here the outcome is split into categories. Cox and Snell's R², Nagelkerke's R², McFadden's R², and other pseudo-R² measures are now considered more reliable than the simple R².
- Classification table: The classification table shows how correctly the two categories are predicted. For example, it might show that only 85% of cases were predicted correctly.
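As a minimal illustration of the logit equation and odds ratios described above, here is a sketch using Python's statsmodels (not part of the original page; the data are simulated and every parameter value is made up):

```python
import numpy as np
import statsmodels.api as sm

# Simulated data (assumed): predict pass/fail (1/0) from hours studied,
# with a true model of logit(p) = -3 + 0.8 * hours.
rng = np.random.default_rng(0)
hours = rng.uniform(0, 10, 200)
p = 1 / (1 + np.exp(-(-3 + 0.8 * hours)))
passed = rng.binomial(1, p)

X = sm.add_constant(hours)                # adds the b0 (constant) column
model = sm.Logit(passed, X).fit(disp=0)   # maximum likelihood estimation

print(model.params)                       # b0, b1 on the log-odds (logit) scale
print(np.exp(model.params))               # exponentiated betas, i.e. odds ratios
```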
## Logistic Regression Resources

Allison, P. D. (1999). Comparing logit and probit coefficients across groups. Sociological Methods and Research, 28(2), 186-208.

DeMaris, A. (1992). Logit modeling: Practical applications. Newbury Park, CA: Sage Publications.

Greenland, S., Schwartzbaum, J. A., & Finkle, W. D. (2000). Problems due to small samples and sparse data in conditional logistic regression analysis. American Journal of Epidemiology, 151(5), 531-539.

Hosmer, D. W., & Lemeshow, S. (2000). Applied logistic regression (2nd ed.). New York: John Wiley & Sons.

Jaccard, J. (2001). Interaction effects in logistic regression. Thousand Oaks, CA: Sage Publications.

Jennings, D. E. (1986). Outliers and residual distributions in logistic regression. Journal of the American Statistical Association, 81(396), 987-990.

Kleinbaum, D. G., Klein, M., & Pryor, E. R. (2004). Logistic regression: A self-learning text (2nd ed.). New York: Springer.

McFadden, D. (1974). Conditional logit analysis of qualitative choice behavior. In P. Zarembka (Ed.), Frontiers in econometrics (pp. 105-142). New York: Academic Press.

Menard, S. (2002). Applied logistic regression analysis (2nd ed.). Thousand Oaks, CA: Sage Publications.

O'Connell, A. A. (2005). Logistic regression models for ordinal response variables. Thousand Oaks, CA: Sage Publications.

Pampel, F. C. (2000). Logistic regression: A primer. Thousand Oaks, CA: Sage Publications.

Pedhazur, E. J. (1982). Multiple regression in behavioral research. New York: Holt, Rinehart & Winston.

Peduzzi, P., Concato, J., Kemper, E., Holford, T. R., & Feinstein, A. R. (1996). A simulation study of the number of events per variable in logistic regression analysis. Journal of Clinical Epidemiology, 49(12), 1373-1379.

Peng, C.-Y. J., Lee, K. L., & Ingersoll, G. M. (2002). An introduction to logistic regression analysis and reporting. Journal of Educational Research, 96(1), 3-14.

Press, S. J., & Wilson, S. (1978). Choosing between logistic regression and discriminant analysis. Journal of the American Statistical Association, 73(364), 699-705.

Rice, J. C. (1994). Logistic regression: An introduction. In B. Thompson (Ed.), Advances in social science methodology (pp. 191-245). Greenwich, CT: JAI Press.

Wright, R. E. (1994). Logistic regression. In L. G. Grimm & P. R. Yarnold (Eds.), Reading and understanding multivariate statistics (pp. 217-244). Washington, DC: American Psychological Association.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9619578123092651, "perplexity": 1933.6637489117593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826530.72/warc/CC-MAIN-20181214232243-20181215014243-00434.warc.gz"}
https://web2.0calc.com/questions/geometry-question_91
# Geometry Question

If $$AC = 36$$ inches, what is the measure of AB? (Please see attached.)

Jul 8, 2020

#### #1

$$\begin{array}{|rcll|} \hline \mathbf{(2x-6) + (x^2-13x)} &=& \mathbf{36} \\ x^2-11x-6 &=& 36 \\ \mathbf{x^2-11x-42} &=& \mathbf{0} \\\\ x &=& \dfrac{11\pm \sqrt{11^2-4*(-42)} } {2} \\\\ x &=& \dfrac{11\pm \sqrt{121+168} } {2} \\\\ x &=& \dfrac{11\pm \sqrt{289} } {2} \\\\ x &=& \dfrac{11\pm 17 } {2} \\\\ x &=& \dfrac{11 \mathbf{+} 17 } {2} \quad | \quad x > 0 \\\\ \mathbf{x} &=& \mathbf{14} \\\\ \mathbf{\text{AB}} &=& \mathbf{2x-6} \\ \text{AB} &=& 2*14-6 \\ \mathbf{\text{AB}} &=& \mathbf{22\ \text{inches}} \\ \hline \end{array}$$

Jul 9, 2020
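A quick check of the quadratic (not part of the forum thread) with Python's sympy:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# AB + BC = AC, with AB = 2x - 6 and BC = x^2 - 13x taken from the answer above
sol = sp.solve(sp.Eq((2*x - 6) + (x**2 - 13*x), 36), x)
print(sol)            # [14]
print(2*sol[0] - 6)   # AB = 22
```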
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9928130507469177, "perplexity": 3420.730803444798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876500.43/warc/CC-MAIN-20201021122208-20201021152208-00623.warc.gz"}
https://www.mathplanet.com/education/algebra-2/trigonometry/inverse-functions
# Inverse functions

Solving a problem where we know the sine of an unknown angle means that we are to find the angle itself. To solve such equations we use the inverse trigonometric function called the inverse sine, or arcsine.

Example

Find the angle $$\Theta$$ where

$\sin \Theta = 0.7$

We solve this by taking the inverse sine of both sides, which is notated either by $$\sin^{-1}$$ or $$\arcsin$$:

$\Theta = \arcsin 0.7 = 44.4^{\circ}$

The same technique is used for both cosine and tangent, notated by $$\cos^{-1}$$ or $$\arccos$$, and $$\tan^{-1}$$ or $$\arctan$$.

## Video lesson

Solve the following equation: $$\tan \Theta = 0.5$$
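A quick numerical check (not part of the original lesson) using Python's math module:

```python
import math

# Inverse sine: recover the angle from sin(theta) = 0.7
theta = math.degrees(math.asin(0.7))
print(round(theta, 1))   # 44.4 degrees

# The video-lesson exercise tan(theta) = 0.5 works the same way with atan
print(round(math.degrees(math.atan(0.5)), 1))   # 26.6 degrees
```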
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9940277338027954, "perplexity": 1011.011141638497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103984681.57/warc/CC-MAIN-20220702040603-20220702070603-00636.warc.gz"}
http://www.dsplog.com/2008/10/16/alamouti-stbc/print/
# Alamouti STBC

*DSP log (http://www.dsplog.com), posted by Krishna Sankar on October 16, 2008, in MIMO.*

In the recent past, we have discussed three receive diversity schemes: Selection Combining [1], Equal Gain Combining [2] and Maximal Ratio Combining [3]. All three approaches used an antenna array at the receiver to improve demodulation performance, albeit with different levels of complexity. Time to move on to a transmit diversity scheme, where the information is spread across multiple antennas at the transmitter. In this post, let's discuss a popular transmit diversity scheme called Alamouti Space Time Block Coding (STBC). For the discussion, we will assume that the channel is a flat fading Rayleigh multipath channel [4] and the modulation is BPSK.

## Alamouti STBC

A simple Space Time Code, suggested by Siavash M Alamouti in his landmark October 1998 paper, A Simple Transmit Diversity Technique for Wireless Communication [5], offers a simple method for achieving spatial diversity with two transmit antennas. The scheme is as follows:

1. Consider that we have a transmission sequence, for example $\{x_1, x_2, x_3, \ldots, x_n \}$.
2. In normal transmission, we would send $x_1$ in the first time slot, $x_2$ in the second time slot, $x_3$ in the third, and so on.
3. However, Alamouti suggested that we group the symbols into pairs. In the first time slot, send $x_1$ and $x_2$ from the first and second antenna. In the second time slot, send $-x_2^*$ and $x_1^*$ from the first and second antenna. In the third time slot, send $x_3$ and $x_4$ from the first and second antenna. In the fourth time slot, send $-x_4^*$ and $x_3^*$ from the first and second antenna, and so on.
4. Notice that though we are grouping two symbols, we still need two time slots to send two symbols. Hence, there is no change in the data rate.
5. This forms the simple explanation of the transmission scheme with Alamouti Space Time Block coding.

Figure: 2-Transmit, 1-Receive Alamouti STBC coding

## Other Assumptions

1. The channel is flat fading. In simple terms, it means that the multipath channel has only one tap, so the convolution operation reduces to a simple multiplication. For a more rigorous discussion of flat fading and frequency-selective fading, may I urge you to review Chapter 15.3, Signal Time-Spreading, from [DIGITAL COMMUNICATIONS: SKLAR] [6].
2. The channel experienced by each transmit antenna is independent of the channel experienced by the other transmit antennas.
3. For the $i^{th}$ transmit antenna, each transmitted symbol gets multiplied by a randomly varying complex number $h_i$. As the channel under consideration is a Rayleigh channel, the real and imaginary parts of $h_i$ are Gaussian distributed with mean $\mu_{h_i}=0$ and variance $\sigma^2_{h_i}=\frac{1}{2}$.
4. The channel between each transmit antenna and the receive antenna varies randomly in time. However, the channel is assumed to remain constant over two time slots.
5. On the receive antenna, the noise $n$ has the Gaussian probability density function $p(n) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(n-\mu)^2}{2\sigma^2}}$ with $\mu=0$ and $\sigma^2 = \frac{N_0}{2}$.
6. The channel $h_i$ is known at the receiver.

In the first time slot, the received signal is

$y_1 = h_1x_1 + h_2x_2 + n_1 = \begin{bmatrix}h_1 & h_2\end{bmatrix} \begin{bmatrix}x_1 \\ x_2 \end{bmatrix} + n_1.$

In the second time slot, the received signal is

$y_2 = -h_1x_2^* + h_2x_1^* + n_2 = \begin{bmatrix}h_1 & h_2\end{bmatrix} \begin{bmatrix}-x_2^* \\ x_1^*\end{bmatrix} + n_2,$
where $y_1$, $y_2$ are the received symbols in the first and second time slots respectively, $h_1$ is the channel from the first transmit antenna to the receive antenna, $h_2$ is the channel from the second transmit antenna to the receive antenna, $x_1$, $x_2$ are the transmitted symbols, and $n_1$, $n_2$ are the noise samples in the two time slots. Since the two noise terms are independent and identically distributed,

$E\left\{\begin{bmatrix}n_1\\ n_2^*\end{bmatrix}\begin{bmatrix}n_1^* & n_2\end{bmatrix}\right\} = \begin{bmatrix}|n_1|^2 & 0 \\ 0 & |n_2|^2\end{bmatrix}.$

For convenience, the above equations can be represented in matrix notation as follows:

$\begin{bmatrix}y_1 \\ y_2^*\end{bmatrix} = \underbrace{\begin{bmatrix}h_1 & h_2 \\ h_2^* & -h_1^*\end{bmatrix}}_{\mathbf{H}}\begin{bmatrix}x_1 \\ x_2 \end{bmatrix}+\begin{bmatrix}n_1\\ n_2^* \end{bmatrix}.$

Let us define $\mathbf{H} = \begin{bmatrix}h_1 & h_2 \\ h_2^* & -h_1^*\end{bmatrix}$. To solve for $\begin{bmatrix}x_1 \\ x_2 \end{bmatrix}$, we need to find the inverse of $\mathbf{H}$. We know that for a general m x n matrix, the pseudo inverse [7] is defined as

$\mathbf{H}^+=(H^HH)^{-1}H^H.$

The term

$H^HH = \begin{bmatrix}h_1^* & h_2 \\ h_2^* & -h_1\end{bmatrix}\begin{bmatrix}h_1 & h_2 \\ h_2^* & -h_1^*\end{bmatrix} = \begin{bmatrix}|h_1|^2+|h_2|^2 & 0 \\ 0 & |h_1|^2+|h_2|^2\end{bmatrix}.$

Since this is a diagonal matrix, the inverse is just the inverse of the diagonal elements, i.e.

$(H^HH)^{-1} = \begin{bmatrix}\frac{1}{|h_1|^2+|h_2|^2} & 0 \\ 0 & \frac{1}{|h_1|^2+|h_2|^2}\end{bmatrix}.$

The estimate of the transmitted symbols is

$\widehat{\begin{bmatrix}x_1 \\ x_2\end{bmatrix}} = (H^HH)^{-1}H^H\begin{bmatrix}y_1 \\ y_2^* \end{bmatrix} = (H^HH)^{-1}H^H\left(H\begin{bmatrix}x_1 \\ x_2 \end{bmatrix}+\begin{bmatrix}n_1\\ n_2^* \end{bmatrix}\right) = \begin{bmatrix}x_1 \\ x_2 \end{bmatrix} + (H^HH)^{-1}H^H\begin{bmatrix}n_1\\ n_2^* \end{bmatrix}.$

If you compare the above equation with the estimated symbol following equalization in Maximal Ratio Combining [8], you can see that the equations are identical.

## BER with Alamouti STBC

Since the estimate of the transmitted symbol with the Alamouti STBC scheme is identical to that obtained from MRC, the BER with the above described Alamouti scheme should be the same as that for MRC. However, there is a small catch. With Alamouti STBC, we are transmitting from two antennas; hence the total transmit power in the Alamouti scheme is twice that used in MRC. To make the comparison fair, we need to make the total transmit power from the two antennas in the STBC case equal to the power transmitted from the single antenna in the MRC case. With this scaling, we can see that the BER performance of the 2Tx, 1Rx Alamouti STBC case is roughly 3 dB poorer than the 1Tx, 2Rx MRC case. From the post on Maximal Ratio Combining [9], the bit error rate for BPSK modulation in a Rayleigh channel with 1 transmit and 2 receive antennas is

$P_{e,MRC} = p_{MRC}^2\left[1+2(1-p_{MRC})\right], \quad\text{where}\quad p_{MRC}=\frac{1}{2}-\frac{1}{2}\left(1+\frac{1}{E_b/N_0}\right)^{-1/2}.$
## BER with Alamouti STBC

Since the estimate of the transmitted symbol with the Alamouti STBC scheme is identical to that obtained from MRC, the BER with the above described Alamouti scheme should be the same as that for MRC. However, there is a small catch. With Alamouti STBC we are transmitting from two antennas, hence the total transmit power in the Alamouti scheme is twice that used in MRC. To make the comparison fair, we need to make the total transmit power from the two antennas in the STBC case equal to the power transmitted from the single antenna in the MRC case. With this scaling, we can see that the BER performance of the 2Tx, 1Rx Alamouti STBC case is roughly 3dB poorer than the 1Tx, 2Rx MRC case. From the post on Maximal Ratio Combining [9], the bit error rate for BPSK modulation in a Rayleigh channel with 1 transmit and 2 receive antennas is

$P_{e,MRC} = p_{MRC}^2\left[1+2(1-p_{MRC})\right], \quad\text{where}\quad p_{MRC}=\frac{1}{2}-\frac{1}{2}\left(1+\frac{1}{E_b/N_0}\right)^{-1/2}.$

With the Alamouti 2 transmit antenna, 1 receive antenna STBC case,

$p_{STBC}=\frac{1}{2}-\frac{1}{2}\left(1+\frac{2}{E_b/N_0}\right)^{-1/2}$

and the bit error rate is

$P_{e,STBC} = p_{STBC}^2\left[1+2(1-p_{STBC})\right].$

## Key points

The fact that $\mathbf{H}^H\mathbf{H}$ is a diagonal matrix ensured the following:

1. There is no cross talk between $x_1$ and $x_2$ after the equalizer.
2. The noise term is still white:

$E\left\{\mathbf{H}^H\begin{bmatrix}n_1 \\ n_2^*\end{bmatrix}\begin{bmatrix}n_1^* & n_2\end{bmatrix}\mathbf{H}\right\} = \mathbf{H}^H\begin{bmatrix}|n_1|^2 & 0 \\ 0 & |n_2|^2\end{bmatrix}\mathbf{H} = \begin{bmatrix}|n_1|^2 & 0 \\ 0 & |n_2|^2\end{bmatrix}\begin{bmatrix}|h_1|^2+|h_2|^2 & 0 \\ 0 & |h_1|^2+|h_2|^2\end{bmatrix}.$

## Simulation Model

The Matlab/Octave script performs the following:

(a) Generate a random binary sequence of +1's and -1's.
(b) Group them into pairs of two symbols.
(c) Code them per the Alamouti Space Time code, multiply the symbols with the channel and then add white Gaussian noise.
(d) Perform hard decision decoding and count the bit errors.
(e) Repeat for multiple values of $\frac{E_b}{N_0}$ and plot the simulation and theoretical results.

Figure: BER plot for BPSK in Rayleigh channel with 2 Transmit and 1 Receive Alamouti STBC

## Observations

Compared to the BER plot for nTx=1, nRx=2 Maximal Ratio Combining, we can see that Alamouti Space Time Block Coding has around 3dB poorer performance.

## Reference

Siavash M Alamouti, "A Simple Transmit Diversity Technique for Wireless Communication" [5], IEEE Journal on Selected Areas in Communications, Vol. 16, No. 8, October 1998.

URL to article: http://www.dsplog.com/2008/10/16/alamouti-stbc/

URLs in this post:
[2] Equal Gain Combining: http://www.dsplog.com/2008/09/19/equal-gain-combining/
[3] Maximal Ratio Combining: http://www.dsplog.com/2008/09/28/maximal-ratio-combining/
[4] Rayleigh multipath channel: http://www.dsplog.com/2008/07/14/rayleigh-multipath-channel/
[5] Simple Transmit Diversity Technique for Wireless Communication: http://ieeexplore.ieee.org/iel4/49/15739/00730453.pdf
[7] pseudo inverse: http://planetmath.org/encyclopedia/Pseudoinverse.html
[8] Maximal Ratio Combining: http://www.dsplog.com/2008/09/28/maximal-ratio-combining/#MRC
[9] Maximal Ratio Combining: http://www.dsplog.com/2008/09/28/maximal-ratio-combining
[10] Matlab/Octave script for simulating BER for 2 transmit, 1 receive Alamouti STBC coding for BPSK modulation in Rayleigh fading channel: http://www.dsplog.com/db-install/wp-content/uploads/2008/10/script_ber_alamouti_stbc_code_bpsk_rayleigh_channel.m
http://www.ck12.org/geometry/Pythagorean-Theorem-and-Pythagorean-Triples/lesson/Pythagorean-Theorem-and-Pythagorean-Triples-Intermediate/r5/
# Pythagorean Theorem and Pythagorean Triples

Square of the hypotenuse equals the sum of the squares of the legs.
https://www.physicsforums.com/threads/radius-of-convergence.933559/page-2
# Homework Help: Radius of Convergence?

1. Dec 6, 2017

### WWGD

Nice plug! EDIT: You're beating me at this, you should be worried :).

2. Dec 6, 2017

### FactChecker

I like and agree with your edit comment.

3. Dec 6, 2017

### ScreamingIntoTheVoid

Right, that makes sense, thank you for taking the time to provide me with a more extensive response. And r would be found using the ratio test, correct? So just to make sure that I'm doing this right, if I had my original problem (except written correctly): $\sum (x-2)^n/n^n$, using the ratio test I would get $\frac{(x-2)^{n+1}}{(n+1)^{n+1}} \cdot \frac{n^n}{(x-2)^n}$, which I could turn into $(x-2)(x-2)^n \cdot \frac{n^n}{(x-2)^n}$, which by applying a limit and simplifying would turn into $|x-2| \lim_{n\to\infty} \frac{n^n}{(n+1)^{n+1}}$, which would turn into $|x-2| > 0$, leaving the interval of convergence to be $-2 > n > 2$ and the r value to be 2? (note: That's probably a somewhat messy version of the process if I did it right, because I also taught myself intervals of convergence/the radius of convergence last night. Though I feel like I have somewhat of a grasp on those, I've probably done it in a somewhat messy manner) **ALSO thanks for responding to this thread even though it's just about a day old**

Last edited by a moderator: Dec 6, 2017

4. Dec 7, 2017

### Ray Vickson

Sometimes the ratio test can be used to determine the radius of convergence, but sometimes other tests must be used. Sometimes we really do not need any such tests at all, but can just rely on a bounding property.

The radius of convergence of $\sum (x-2)^n/n^n$ is $\infty$, not 2 or 3 or any other finite number. First: simplify writing by putting $x-2 = y$, so the series is $\sum y^n/n^n$. If we set $t_n = y^n/n^n$ we have the ratio
$$\frac{|t_{n+1}|}{|t_n|} = \frac{|y|^{n+1}}{|y|^n} \frac{n^n}{(n+1)^{n+1}} = |y| \left( \frac{n}{n+1}\right)^n \frac{1}{n+1}.$$
Now $n/(n+1) < 1$ so $(n/(n+1))^n < 1$ for all $n > 0$, so we have
$$\frac{|t_{n+1}|}{|t_n|} < |y| \, \,\frac{1}{n+1} \to 0\; \text{as} \; n \to \infty.$$
The limiting ratio is < 1 for any $|y|$---and, in fact, is < 0.1 or < 0.000000000001 or less than any positive number at all. Therefore, the sum will converge no matter what the value of $y = x-2$ is, so it will converge no matter what the value of $x$ is. It converges for $x = -1,000,000$ or for $x = 175,000$ or whatever other value you choose to employ. The radius of convergence cannot be 2, because you would be saying that the series diverges if $|x-2| > 2$, and that is definitely not the case here.

Last edited: Dec 7, 2017
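Ray Vickson's bounding argument is easy to verify numerically: once $n$ exceeds $|y|$ the terms $y^n/n^n$ collapse toward zero, so the partial sums settle for any fixed $y$. A minimal Python sketch, purely illustrative and not part of the original thread:

```python
import math

def partial_sum(y, N):
    """Partial sum of sum_{n>=1} y^n / n^n, using logs to dodge overflow."""
    total = 0.0
    for n in range(1, N + 1):
        log_term = n * (math.log(abs(y)) - math.log(n))  # log |y^n / n^n|
        sign = -1.0 if (y < 0 and n % 2 == 1) else 1.0
        total += sign * math.exp(log_term)
    return total

# Even for y = x - 2 = 1000 the partial sums stop changing once the
# terms die off, consistent with an infinite radius of convergence.
for N in (2000, 3000, 4000):
    print(N, partial_sum(1000.0, N))
```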
http://www.msri.org/seminars/19927
# Mathematical Sciences Research Institute

# Seminar

A vanishing theorem for D-modules, and applications to t-structures for quantized symplectic varieties

February 20, 2013 (03:30 PM PST - 04:30 PM PST)

Location: MSRI: Simons Auditorium

Speaker(s): Thomas Nevins (University of Illinois at Urbana-Champaign)

D-modules: more precisely, one can construct functors (of "quantum Hamiltonian reduction") from categories of equivariant D-modules to representations of the algebras. I'll describe an effective combinatorial criterion for such functors to vanish on certain equivariant D-modules---equivalently, for certain equivariant D-modules to have no nonzero group-invariant elements. I will also explain consequences of this vanishing criterion for natural t-structures on the derived categories of sheaves over quantum analogs of various interesting symplectic algebraic varieties. Most of the talk will be low-tech and will presume no prior familiarity with the terms mentioned above. This is joint work with Kevin McGerty.
https://www.leaxr.com/mod/page/view.php?id=11526
## College of Micronesia-FSM: Dana Lee Ling's Introduction to Statistics Using OpenOffice.org, LibreOffice.org Calc, 4th edition: "Section 3.2: Differences in the Distribution of Data"

### Range

The range is the maximum data value minus the minimum data value.

=MAX(data)−MIN(data)

The range is a useful basic statistic that provides information on the distance between the most extreme values in the data set. The range does not show whether the data is evenly spread out across the range or crowded together in just one part of the range. The way in which the data is either spread out or crowded together in a range is referred to as the distribution of the data. One of the ways to understand the distribution of the data is to calculate the position of the quartiles and make a chart based on the results.

### Percentiles, Quartiles, Box and Whisker charts

The median is the value that is the middle value in a sorted list of values. At the median 50% of the data values are below and 50% are above. This is also called the 50th percentile for being 50% of the way "through" the data. Starting from the minimum, the point 25% of the way "through" the data, at which 25% of the values are smaller, is the 25th percentile. The value that is 25% of the way "through" the data is also called the first quartile. Moving on "through" the data to the median, the median is also called the second quartile. Moving past the median, 75% of the way "through" the data is the 75th percentile, also known as the third quartile. Note that the 0th percentile is the minimum and the 100th percentile is the maximum.

Spreadsheets can calculate the first, second, and third quartile for data using a function, the quartile function.

=QUARTILE(data,type)

Data is a range with data. Type represents the type of quartile (0 = minimum, 1 = 25% or first quartile, 2 = 50% (median), 3 = 75% or third quartile, and 4 = maximum). Thus if data is in the cells A1:A20, the first quartile could be calculated using:

=QUARTILE(A1:A20,1)

#### InterQuartile Range

The InterQuartile Range (IQR) is the range between the first and third quartile:

=QUARTILE(Data,3)-QUARTILE(Data,1)

There are some subtleties to calculating the IQR for sets with even versus odd sample sizes, but this text leaves those details to the spreadsheet software functions.
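The spreadsheet formulas above translate directly into other tools. Here is a minimal Python sketch (my own, using a made-up sample standing in for the range A1:A20; NumPy's default linear interpolation closely matches the spreadsheet QUARTILE function, though implementations can differ on borderline values):

```python
import numpy as np

# A hypothetical sample standing in for the spreadsheet range A1:A20.
data = np.array([12, 15, 11, 19, 23, 14, 18, 21, 16, 13,
                 17, 20, 22, 15, 18, 24, 12, 19, 16, 21])

# Analogues of =QUARTILE(data, 0..4)
q0, q1, q2, q3, q4 = np.percentile(data, [0, 25, 50, 75, 100])

# =QUARTILE(Data,3)-QUARTILE(Data,1)
iqr = q3 - q1
print(f"min={q0}, Q1={q1}, median={q2}, Q3={q3}, max={q4}, IQR={iqr}")
```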
### Quartiles, Box and Whisker plots

The above is very abstract and hard to visualize. A box and whisker plot takes the above quartile information and plots a chart based on the quartiles. A box and whisker plot is built around a box that runs from the value at the 25th percentile (first quartile) to the value at the 75th percentile (third quartile). The length of the box spans the distance from the value at the first quartile to the value at the third quartile; this is called the Inter-Quartile Range (IQR). A line is drawn inside the box at the location of the 50th percentile. The 50th percentile is also known as the second quartile and is the median for the data. Half the scores are above the median, half are below the median. Note that the 50th percentile is the median, not the mean.

| s1  | s2  |
|-----|-----|
| 10  | 11  |
| 20  | 11  |
| 30  | 12  |
| 40  | 13  |
| 50  | 15  |
| 60  | 18  |
| 70  | 23  |
| 80  | 31  |
| 90  | 44  |
| 100 | 65  |
| 110 | 99  |
| 120 | 154 |

The basic box plot described above has lines that extend from the first quartile down to the minimum value and from the third quartile to the maximum value. These lines are called "whiskers" and end with a cross-line called a "fence".

If, however, the minimum is more than 1.5 × IQR below the first quartile, then the lower fence is put at 1.5 × IQR below the first quartile and the values below the fence are marked with a round circle. These values are referred to as potential outliers - the data is unusually far from the median in relation to the other data in the set. Likewise, if the maximum is more than 1.5 × IQR beyond the third quartile, then the upper fence is located at 1.5 × IQR above the third quartile. The maximum is then plotted as a potential outlier along with any other data values beyond 1.5 × IQR above the third quartile.

There are actually two types of outliers. Potential outliers lie between 1.5 × IQR and 3.0 × IQR beyond the quartile. Extreme outliers are beyond 3.0 × IQR. In the program Gnome Gnumeric, potential outliers are marked with a circle colored in with the color of the box. Extreme outliers are marked with an open circle - a circle with no color inside.

An example with hypothetical data sets is given to illustrate box plots. The data consists of two samples. Sample one (s1) is a uniform distribution and sample two (s2) is a highly skewed distribution. Box and whisker plots can be generated by the Gnome Gnumeric program or by using online box plot generators.

The box and whisker plot is a useful tool for exploring data and determining whether the data is symmetrically distributed, skewed, and whether the data has potential outliers - values far from the rest of the data as measured by the InterQuartile Range. The distribution of the data often impacts what types of analysis can be done on the data. The distribution is also important to determining whether a measurement that was done is performing as intended. For example, in education a "good" test is usually one that generates a symmetric distribution of scores with few outliers. A highly skewed distribution of scores would suggest that the test was either too easy or too difficult. Outliers would suggest unusual performances on the test.

Figure: Two data sets, one uniform, the other with one potential outlier and one extreme outlier.
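The fence logic just described can be sketched directly (my own illustration; note that different quartile conventions can shift borderline classifications), here applied to the skewed s2 sample from the table above:

```python
import numpy as np

s2 = np.array([11, 11, 12, 13, 15, 18, 23, 31, 44, 65, 99, 154])
q1, q3 = np.percentile(s2, [25, 75])
iqr = q3 - q1

inner_low, inner_high = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # fences
outer_low, outer_high = q1 - 3.0 * iqr, q3 + 3.0 * iqr

for x in s2:
    if x < outer_low or x > outer_high:
        kind = "extreme outlier"       # beyond 3.0 x IQR
    elif x < inner_low or x > inner_high:
        kind = "potential outlier"     # between 1.5 x IQR and 3.0 x IQR
    else:
        kind = "ordinary value"
    print(x, kind)
```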
### Standard Deviation

Consider the following data:

| Data | mode | median | mean μ | min | max | range | midrange |
|------|------|--------|--------|-----|-----|-------|----------|
| Data set 1: 5, 5, 5, 5 | 5    | 5 | 5 | 5 | 5 | 0 | 5 |
| Data set 2: 2, 4, 6, 8 | none | 5 | 5 | 2 | 8 | 6 | 5 |
| Data set 3: 2, 2, 8, 8 | none | 5 | 5 | 2 | 8 | 6 | 5 |

Neither the mode, median, nor the mean reveal clearly the differences in the distribution of the data above. The mean and the median are the same for each data set. The mode is the same as the mean and the median for the first data set and is unavailable for the last data set (spreadsheets will report a mode of 2 for the last data set). A single number that would characterize how much the data is spread out would be useful. As noted earlier, the range is one way to capture the spread of the data. The range is calculated by subtracting the smallest value from the largest value. In a spreadsheet:

=MAX(data)−MIN(data)

The range still does not characterize the difference between set 2 and 3: the last set has more data further away from the center of the data distribution. The range misses this difference. To capture the spread of the data we use a measure related to the average distance of the data from the mean. We call this the standard deviation. If we have a population, we report this average distance as the population standard deviation. If we have a sample, then our average distance value may underestimate the actual population standard deviation. As a result the formula for sample standard deviation adjusts the result mathematically to be slightly larger. For our purposes these numbers are calculated using spreadsheet functions.

One way to distinguish the difference in the distribution of the numbers in data set 2 and data set 3 above is to use the standard deviation:

| Data | mean μ | stdev |
|------|--------|-------|
| Data set 1: 5, 5, 5, 5 | 5 | 0.00 |
| Data set 2: 2, 4, 6, 8 | 5 | 2.58 |
| Data set 3: 2, 2, 8, 8 | 5 | 3.46 |

The function that calculates the sample standard deviation is:

=STDEV(data)

In this text the symbol for the sample standard deviation is usually sx. In this text the symbol for the population standard deviation is usually σ. The symbol sx usually refers to the standard deviation of single variable x data. If there is y data, the standard deviation of the y data is sy. Other symbols that are used for standard deviation include s and σx. Some calculators use the unusual and confusing notations σxn−1 and σxn for sample and population standard deviations.

In this class we always use the sample standard deviation in our calculations. The sample standard deviation is calculated in a way such that the sample standard deviation is slightly larger than the result of the formula for the population standard deviation. This adjustment is needed because a population tends to have a slightly larger spread than a sample. There is a greater probability of outliers in the population data.

### Coefficient of variation CV

The Coefficient of Variation is calculated by dividing the standard deviation (usually the sample standard deviation) by the mean.

=STDEV(data)/AVERAGE(data)

Note that the CV can be expressed as a percentage: Group 2 has a CV of 52% while group 3 has a CV of 69%. A deviation of 3.46 is large for a mean of 5 (3.46/5 = 69%) but would be small if the mean were 50 (3.46/50 = 7%). So the CV can tell us how important the standard deviation is relative to the mean.
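A quick Python check of the table above (my own sketch; `statistics.stdev` computes the sample standard deviation, like =STDEV, and the CV is the stdev divided by the mean):

```python
from statistics import mean, stdev

data_sets = {
    "Data set 1": [5, 5, 5, 5],
    "Data set 2": [2, 4, 6, 8],
    "Data set 3": [2, 2, 8, 8],
}

for name, data in data_sets.items():
    m, s = mean(data), stdev(data)   # sample standard deviation, as in =STDEV
    cv = s / m                       # coefficient of variation
    print(f"{name}: mean={m}, stdev={s:.2f}, CV={cv:.0%}")
```

Running it reproduces the stdev column (0.00, 2.58, 3.46) and the CV percentages (0%, 52%, 69%) quoted below the table.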
### Rules of thumb regarding spread

As an approximation, the standard deviation for data that has a symmetrical, heap-like distribution is roughly one-quarter of the range. If given only minimum and maximum values for data, this rule of thumb can be used to estimate the standard deviation.

At least 75% of the data will be within two standard deviations of the mean, regardless of the shape of the distribution of the data. At least 89% of the data will be within three standard deviations of the mean, regardless of the shape of the distribution of the data. If the shape of the distribution of the data is a symmetrical heap, then as much as 95% of the data will be within two standard deviations of the mean. Data beyond two standard deviations away from the mean is considered "unusual" data.

### Basic statistics and their interaction with the levels of measurement

Levels of measurement and appropriate measures:

| Level of measurement | Appropriate measure of middle | Appropriate measure of spread |
|----------------------|-------------------------------|-------------------------------|
| nominal  | mode           | none or number of categories |
| ordinal  | median         | range                        |
| interval | median or mean | range or standard deviation  |
| ratio    | mean           | standard deviation           |

At the interval level of measurement either the median or mean may be more appropriate depending on the specific system being studied. If the median is more appropriate, then the range should be quoted as a measure of the spread of the data. If the mean is more appropriate, then the standard deviation should be used as a measure of the spread of the data.

Another way to understand the levels at which a particular type of measurement can be made is shown in the following table.

Levels at which a particular statistic or parameter has meaning:

| Statistic/Parameter | Nominal | Ordinal | Interval | Ratio |
|---------------------|---------|---------|----------|-------|
| sample size | ✓ | ✓ | ✓ | ✓ |
| mode | ✓ | ✓ | ✓ | ✓ |
| minimum | | ✓ | ✓ | ✓ |
| maximum | | ✓ | ✓ | ✓ |
| range | | ✓ | ✓ | ✓ |
| median | | ✓ | ✓ | ✓ |
| mean | | | ✓ | ✓ |
| standard deviation | | | ✓ | ✓ |
| coefficient of variation | | | | ✓ |

For example, a mode, median, and mean can be calculated for ratio level measures. Of those, the mean is usually considered the best measure of the middle for a random sample of ratio level data.
http://www.unige.ch/math/folks/velenik/papers/abs_ISV14.html
Abstract

Invariance Principle to Ferrari-Spohn Diffusions

D. Ioffe, S. Shlosman and Y. Velenik

Commun. Math. Phys. 336, 905-932 (2015).

We prove an invariance principle for a class of tilted $1+1$-dimensional SOS models or, equivalently, for a class of tilted random walk bridges in $\mathbb{Z}_+$. The limiting objects are stationary reversible ergodic diffusions with drifts given by the logarithmic derivatives of the ground states of associated singular Sturm-Liouville operators. In the case of a linear area tilt, we recover the Ferrari-Spohn diffusion with log-Airy drift, which was derived by Ferrari and Spohn in the context of Brownian motions conditioned to stay above circular and parabolic barriers.

Key words: Invariance principle, critical prewetting, entropic repulsion, random walk, Ferrari-Spohn diffusions.
http://physics.stackexchange.com/questions/40983/is-the-total-energy-of-the-universe-constant
# Is the total energy of the universe constant?

If total energy is conserved, just transformed and never newly created, is there a sum of all energies that is constant? Why is it probably not that easy?

-

Here is a related question that might be helpful physics.stackexchange.com/q/2838 – user11547 Oct 17 '12 at 1:32

The total energy of the universe is not well defined, so we can't even discuss whether it's constant. physicsforums.com/showthread.php?t=506985 – Ben Crowell Oct 17 '12 at 5:36

How are people even trying to answer this "yes" or "no" when the definition of energy in GR is a subject of ongoing research? – DanielSank Dec 17 '14 at 6:36

## 5 Answers

No. The universe is dominated by dark energy, which is consistent with a cosmological constant $\Lambda$. In other words, as the universe expands, the energy density stays roughly the same. So the (energy density)*volume is growing exponentially at late times. Although the total energy is not well defined (as the volume of the universe may be infinite), the fractional rate of growth is certainly nonzero.

You might wonder how the total energy can grow without violating energy conservation. The answer is that in general relativity, we just need $\boldsymbol{\nabla} \cdot \boldsymbol{T} = 0$, so a cosmological constant is perfectly consistent as $\boldsymbol{\nabla} \cdot \Lambda \boldsymbol{g} = 0$.

For a nice explanation by Sean Carroll, see http://blogs.discovermagazine.com/cosmicvariance/2010/02/22/energy-is-not-conserved/

-

The total energy isn't just undefined because of the possibility that the universe is infinite. It's undefined for the reasons given in juanrga's answer. – Ben Crowell May 6 '13 at 21:06

What about Noether's theorem? If the laws of physics don't depend on time, we should be able to build a conserved quantity, and call it "energy" – agemO Dec 5 '14 at 13:03

@agemO Noether's theorem leads to a conserved current. Getting a conserved quantity involves performing a spatial three dimensional integral. This is very subtle in GR. – jwimberley Dec 17 '14 at 14:10

And does it eventually lead to a conserved quantity? What happens when this is done on a closed universe? – agemO Dec 17 '14 at 16:09

@agemO yes you can use Noether's theorem in this way and get a conserved current even with dark energy. The current can be integrated. There are no special subtleties in doing the integration in GR. The energy in the gravitational field is negative and cancels the increasing dark energy. The total energy in a closed universe is zero, but not in a trivial way. The sum of energies from different fields only adds to zero when the field equations apply. So the correct answer is "yes, energy is conserved." – Philip Gibbs - inactive Dec 19 '14 at 19:30
Wald states (Amazon link, emphasis are his) in Chapter 4 The issue of energy in general relativity is a rather delicate one. In general relativity there is no known meaningful notion of local energy density of the gravitational field. The basic reason for this is closely related to the fact that the spacetime metric, $g_{\mu\nu}$, describes both the background spacetime structure and the dynamical aspects of the gravitational field, but no natural way is known to decompose it into its "background" and "dynamical" parts. Since one would expect to attribute energy to the dynamical aspect of gravity but not to the background spacetime structure, it seems unlikely that a notion of local energy density could be obtained without a corresponding decomposition of the spacetime metric. However, for an isolated system, the total energy can be defined by examining the gravitational field at large distances from the system. In addition, for an isolated system the flux of energy carried away from the system by gravitational radiation also is well defined. Later, in Chapter 11, ...the most likely candidate for the energy density of the gravitational field in general relativity would be an expression quadratic in the first derivatives of the metric. However, since no tensor other than $g_{\mu\nu}$ itself can be constructed locally from only the coordinate basis components of $g_{\mu\nu}$ and their first derivatives, a meaningful expression quadratic in first derivatives of the metric can be obtained only if one has additional structure on spacetime, such as a preferred coordinate system or a decomposition of the spacetime metric into a "background part" and a "dynamical part" (so that, say one could take derivatives of the "dynamical part" of the metric with respect to the derivative operator associated with the background part). Such additional structure would be completely counter to the spirit of general relativity, which views the spacetime metric as fully describing all aspects of spacetime structure and the gravitational field. - This is wrong because it treats the gravitational field as a given background field when in fact its evolution is given by dynamical equations which are time invariant and derived from the Einstein-Hilbert action. Noether's theorem therefore does apply. See e.g. Dirac's short book on GR which derived energy conservation in GR this way. – Philip Gibbs - inactive Dec 18 '14 at 22:46 The theory of the energy content of the gravitational field is good enough to predict the deceleration of binary pulsars due to gravitational wave radiation. A Nobel prize has been given. MTW, Wald and Peebles are wrong about energy in GR. Einstein, Landau, Lifshitz, Dirac and Weinberg are right. Energy density is just reference frame dependent as you would expect in relativity. That does not make it meaningless. – Philip Gibbs - inactive Dec 19 '14 at 1:16 Kyle, What you said is also true of special relativity but nobody is saying there are any problems with energy conervation in SR. Reference frame dependence is not an issue. – Philip Gibbs - inactive Dec 19 '14 at 1:39 Kyle, thank you for your advice. My advice to you is that when someone refutes what you say with simple clear logic it does not help to cite vague and irrelevant points from textbooks. In your first quote Wald talks vaguely about the metric being both background and dynamic. There is no sense in that. The metric is dynamic and that is all it is. It is no more a background that any other field is a background. 
He just ignores the fact that formulations for gravitational energy have been known for decades and tries to argue that they cannot exist. – Philip Gibbs - inactive Dec 19 '14 at 11:35

In the second quote he wants the formulation of energy to depend only on first derivatives of the metric. This can be done with pseudotensors but a covariant formulation requires second derivatives because the action has third derivatives. These requirements he wants to impose are artificial and unjustified. Note that pseudotensors do not require a "preferred" reference frame, they just require someone to choose a reference frame for the purpose of measurement as you do for any other measurement. A good covariant formulation uses the Komar superpotential. – Philip Gibbs - inactive Dec 19 '14 at 11:42

Your question is tagged as general-relativity and cosmology, and as textbooks remark (e.g. Peebles [1]) "there is not a general global energy conservation law in general relativity theory." Therefore: "The conclusion, whether we like it or not, is obvious: energy in the universe is not conserved" [2].

[1] Peebles P. J. E., 1993, Principles of Physical Cosmology (Princeton Univ. Press).
[2] Harrison E., 1981, Cosmology (Cambridge University Press)

-

What we like to call the energy, i.e., the total matter/energy content of space-time, might not be conserved. However, there is a lot of reason to suspect that fundamentally the universe is some big quantum system, and that space-time and particles and fields are emergent from this underlying idea. In that case, we expect there to be a Hamiltonian $H$ and some time evolution rule $i\hbar \partial_t \left|\psi\right\rangle = H \left|\psi\right\rangle$, and unitarity requires that energy be conserved. Papers by Page and Wootters have interesting things to say on the subject.

-

So after criticizing me for saying that energy is conserved you say the same thing but give a more speculative justification based on quantum gravity rather than GR. Don't you think that if energy is conserved in a quantum theory there will be a corresponding formulation in the classical limit? – Philip Gibbs - inactive Dec 18 '14 at 11:33

I did not mean to give the impression that I was criticizing your answer because I believe that energy was not conserved. I took issue with your stating a controversial viewpoint within GR as a fact, and not letting the reader know that the site you link to is by no means a good place for a beginner to start. Also, my answer is speculative because it is an open problem. Finally, GR is not expected to be the classical limit to a quantum theory of everything, precisely because of things like singularities, information paradoxes, etc. It is likely an approximation. – lionelbrits Dec 18 '14 at 13:31

My answer is only controversial in the sense that there are people here who do not understand how energy works in GR and dispute it despite it having been understood for nearly a hundred years. It sounds like you downvoted me mainly because I linked to viXra without actually finding anything wrong with the paper. GR is expected to be a classical limit of quantum gravity. What else could it be? All classical limits are approximations and are incomplete. – Philip Gibbs - inactive Dec 18 '14 at 22:54

"Physics Stack Exchange is a question and answer site for active researchers, academics and students of physics and astronomy" What made you think that answers had to be tailored to suit beginners?
– Philip Gibbs - inactive Dec 18 '14 at 23:01

The only thing that prevents us defining a total conserved energy for the entire universe is that if the universe is infinite then the total energy could be infinite or indeterminate. The statements that say energy is not conserved in general relativity are wrong, irrespective of who says them. You can define energy over any finite volume of space and you can define the flux of energy over the boundary surrounding the volume. The rate at which energy decreases in the volume is equal to the flux of energy across the boundary. This is the most general way to express energy conservation globally. All statements to the contrary can be refuted, and to avoid arguing around in circles I have done that at length in my write-up at http://vixra.org/abs/1305.0034

-

No, the alternative is open peer review, but it is not often an available option. – Philip Gibbs - inactive Dec 17 '14 at 21:53

So what was your physics objection to my answer? Do you prefer the appeal to authority citing a text book that is not peer reviewed? – Philip Gibbs - inactive Dec 17 '14 at 22:02

It is not true that my point of view is widely considered incorrect. Conservation of energy in GR was first formulated by Einstein with good alternative but equivalent formulations being given by Landau-Lifshitz, Dirac, Weinberg and others. There are now better methods that don't use pseudotensors. There are of course others who do not understand it, especially people here, but you may notice that I still get more upvotes than downvotes. Tell me a specific fault in my answers and papers instead of appealing to selected authorities or complaining about lack of peer review. – Philip Gibbs - inactive Dec 18 '14 at 11:25

Kyle, what you describe is one of the many frequently repeated fallacies that is refuted in my paper that I link to in my answer (point 3). Basically your fault is that you are treating the gravitational field as a given background in which matter and radiation move when in fact it is itself a dynamical field affected by them through equations which are time invariant. When you apply Noether's theorem to the full Lagrangian you get the conserved currents which can be integrated to give global conservation laws for energy including the energy in the gravitational field. – Philip Gibbs - inactive Dec 18 '14 at 22:40

Kyle, it is not a paper of original research so there is no point submitting it to a journal. It seems you cannot defend your answer so you just appeal to authority or criticise on the basis that my work is not peer reviewed. Tell me this, do you really think that there is an explicit dependence on time in Einstein's theory of gravity? Do you think that the expansion of the universe is not governed by time-independent equations? That is what you are claiming in your answer and your criticism of my answer. Do you really think that? – Philip Gibbs - inactive Dec 19 '14 at 14:29
http://link.springer.com/article/10.1007%2Fs10670-011-9292-0
Erkenntnis, Volume 75, Issue 2, pp 223–236

# Basic and Refined Nomic Truth Approximation by Evidence-Guided Belief Revision in AGM-Terms

## Authors

• Theo A. F. Kuipers, Department of Theoretical Philosophy, University of Groningen

DOI: 10.1007/s10670-011-9292-0

## Abstract

Straightforward theory revision, taking into account as effectively as possible the established nomic possibilities and, on their basis, induced empirical laws, is conducive for (unstratified) nomic truth approximation. The question this paper asks is: is it possible to reconstruct the relevant theory revision steps, on the basis of incoming evidence, in AGM-terms? A positive answer will be given in two rounds, first for the case in which the initial theory is compatible with the established empirical laws, then for the case in which it is incompatible with at least one such law.

## 1 Introduction

AGM-style belief revision (AGM-BR; for an overview, see Hansson 1999) typically aims at coherence optimization between a given set of beliefs and new information in as conservative a way as possible, implicitly taking that new information as true, whatever distance its adherents take to matters of truth. However, as far as it aims at truth approximation at all, AGM-BR seems to be primarily aiming at the truth about the actual world, actual truth approximation, in short. Following Grove (1988), Niiniluoto (1999) and Cevolani and Calandra (2009) have been studying the prospects of belief revision for approximation of the actual truth. New and extended attempts focusing on this aim have been made at conferences in Trieste (2009) and Amsterdam (2009) by Cevolani, Crupi and Festa, Schurz, Niiniluoto, Smets, and Zwart and Renardel.

However, theorizing, and hence theory revision, in the natural sciences typically aims at nomic truth approximation, that is, an approximation of the truth about what is nomically (e.g. physically) possible and what is not (Kuipers 2000). Nomic truth approximation by theory revision is guided by evidence, where evidence consists of case descriptions and induced empirical laws based on them. In addition to a basic ("content") kind of nomic truth approximation, there is a refined ("likeness") kind, as a concretization of the basic one (see Zwart (2001) for the distinction between content and likeness approaches). Moreover, there exist (observationally-theoretically) stratified variants of both, probabilistic variants and, in principle, all kinds of combinations.

An instructive stratified toy example is the following: let there be a complex, but finite, electric network of switches and bulbs, and a battery. Let the network of (serial and parallel) connections be hidden; the task is to find out the precise structure of this network. The observational nomic truth about the network amounts to a characterization of the physically possible states of the network, as far as positions of the switches (on/off) and the bulbs (lighting or not) are concerned. This nomic truth can be expressed by a propositional formula, of which the disjunctive normal form has as disjuncts the constituents that represent these states. It may be possible to reconstruct from this formula the theoretical nomic truth, that is, the full network, including the hidden connections. However, it may also be that there are empirically equivalent networks, that is, networks generating the same observational nomic truth.
The question asked in this paper is: can something like AGM-BR be helpful for evidence-guided theory revision aiming at (some kind of) nomic truth approximation? In other words, is it possible to reconstruct plausible theory revision steps, on the basis of characteristic evidence, aiming at nomic truth approximation in AGM-terms? In Sect. 2 it will first be argued that straightforward basic theory revision, taking into account as effectively as possible the established nomic possibilities and the, on their basis, induced empirical laws, guarantees (unstratified) basic nomic truth approximation. Then it will be shown that this revision can be reconstructed into two AGM-steps, in arbitrary order. One of these is straightforward expansion; the other is an extreme form of contraction, viz. so-called full meet contraction. This revision needs, however, refinement for the difficult but likely case that at least one of the induced laws is incompatible with the original theory. In Sect. 3 it will first be shown that the spheres approach of theory revision developed by Adam Grove (1988) can be used to refine the above indicated two-step theory revision such that it can be used for the hard case, and reduces to the basic case when theory and all induced laws are compatible. Assuming the proper order, that is, first a refined kind of revision in the face of the induced laws, viz. a kind of partial meet revision, and then full meet contraction in the face of the remaining counterexamples, the resulting refinement is potentially conducive for basic truth approximation. In terms of the likeness foundation of the spheres approach by Wlodek Rabinowicz (1995), based on a four-place similarity relation, it will be shown that even this refinement is potentially conducive for refined truth approximation. In the concluding Sect. 4 the main (positive) conclusions will be followed by a number of debunking remarks about the presented AGM-style theory revision from a realist point of view. ## 2 The Basic Account ### 2.1 Basic Definitions and Basic Theory Revision According to the structuralist theory of truth approximation (Kuipers 2000), nomic truth approximation more specifically aims at the strongest true theory T about the set of nomic possibilities within the set of conceptual possibilities Mp generated by a chosen vocabulary for a chosen domain. Nomic truth approximation by evidence-guided theory revision requires definitions of ‘being closer to the truth’ and ‘being more successful’, or rather primarily their ‘at least as’-versions. A theory X amounts to a specified subset of Mp with the weak claim that it is a superset of T (T ⊆ X) and the strong claim that it is equal to it (T = X), resulting from adding the claim that X is a subset of T (X ⊆ T). The weak claim may also be called the necessity claim and the extra one the sufficiency claim, corresponding to whether the claim states that belonging to X is necessary or sufficient for being nomically possible. Informally we can summarize the point of departure as follows: we have a domain Mp of possibilities and every theory ‘amounts to’ a subset of this domain. This applies also to the strongest true theory, T. The elements of T are the ‘real’ possibilities, so to speak. All the possibilities outside T are not real. The weak claim concerning a theory X is that this theory does not leave out any real possibilities. The strong claim is that it in addition does not allow for any unreal possibilities. 
The (qualitative) basic definition of ‘Y is at least as close to T as X’ amounts to: Y∆T ⊆ X∆T (where ∆ stands for symmetrical difference, i.e. Y∆T = (Y − T) ∪ (T − Y)), and hence to:

• (ib) T − Y is a subset of T − X
• (iib) Y − T is a subset of X − T

and ‘closer to’ iff, in addition, in at least one case it is a proper subset. This is the model version; there is also a consequence version and a mixed version (see Kuipers 2000, Chap. 8).

Not knowing T, we have to try to improve our guesses (theories) of what T is on the basis of, or guided by, (new) evidence. Evidence typically comes in by experimentally realizing conceptual possibilities, say R(t) up to time t. They are, of course, nomic possibilities, hence, if we have not made mistakes, R(t) is a subset of T (R(t) ⊆ T), whatever T is. Neglecting mistakes and forgetfulness, R(t) is an increasing set of established nomic possibilities. R(t) will grow in particular due to testing general hypotheses, each one claiming that all nomic possibilities satisfy it. They may have been derived from the weak claim of theory X or may have been put to the test in order to test some other theory or for still other reasons. At each point of time we may assume that one or more of them are considered to have been sufficiently established as empirical laws by inductive generalization. Let subset S(t) of Mp represent at time t the resulting strongest induced empirical law, which amounts to the claim that S(t) is the smallest induced superset of T, whatever T is (T ⊆ S(t)). Neglecting mistakes and forgetfulness, S(t) is a decreasing set. In sum: R(t) ⊆ T ⊆ S(t), assuming no mistakes. From now on t will be omitted.

The following definition is now plausible: The (qualitative) basic definition of ‘Y is at least as successful as X relative to R/S’ amounts to:

• (ib-sf) R − Y is a subset of R − X
• (iib-sf) Y − S is a subset of X − S

and ‘more successful’ by requiring in addition that in at least one case it is a proper subset. The first clause can be rephrased as: all established counterexamples to Y are counterexamples to X; and the second as: all established laws (represented by supersets of S!) explained by X are explained by Y. Note that the above definition implies that a theory Y is maximally successful relative to R/S iff R ⊆ Y ⊆ S. For then, and only then, both R − Y and Y − S are empty sets, which means that Y is at least as successful as any theory X.

In general, it is crucial for the proper explication of qualitative notions of more truthlikeness and (corresponding) more successfulness or greater success to be able to prove the following theorem, with or, as in the present basic unstratified case, without further conditions:

Success Theorem: If Y is closer to T than X then Y will always be at least as successful as X and become more successful in the long run.

Proof of "Y will always be at least as successful as X". First clauses: assuming R ⊆ T, R − Y ⊆ T − Y, and by (ib), R − Y ⊆ T − X. But we also have that R − Y ⊆ R, hence R − Y is a subset of the intersection of R and T − X, which equals R − X. Second clauses: assuming T ⊆ S, Y − S ⊆ Y − T, and by (iib), Y − S ⊆ X − T. But we also have Y − S ⊆ Mp − S, hence Y − S is a subset of the intersection of Mp − S and X − T, which equals X − S. Q.e.d.

Proof sketch of "Y will … become more successful [than X] in the long run".
When (ib) or (iib) can be strengthened to proper subsets, in the long run, in which R approaches T by steadily growing and S approaches T by steadily shrinking, there will be realized (hence, nomic) possibilities belonging (to T) and to Y, but not to X, or there will be laws induced that assign the status of nomic impossibilities (in Mp − T) to conceptual possibilities that are excluded by Y, but not by X. Of course, a straightforward proof requires precise assumptions about the way in which the experiments are going through T and how and when laws are induced. Q.e.d.

This theorem gives good reasons to abduce, under certain conditions and for the time being, that theory Y is closer to the truth than theory X when Y is persistently more successful than X, i.e., when we typically speak of empirical progress. Or, conversely: ‘truth approximation’ provides the default-explanation of ‘empirical progress’. For the basic case the good reasons are threefold (Kuipers 2000, 162), in brief: (1) it is still possible that Y is closer to the truth than X, which would explain the persistent greater success, (2) it is impossible that X is closer to the truth than Y, (3) if neither holds, the persistent greater success so far requires a (test history) specific explanation. We can also paraphrase the overall conclusion by saying that persistent greater success is conducive for truth approximation and hence that greater success is potentially conducive for truth approximation.

From the above definitions and the theorem we may also draw the preliminary conclusion that, assuming the data are correct, theory revision which not only realizes empirical progress but also nomic truth approximation is at least formally possible. Moreover, as is easy to check, both are even realistic in the case of finitely many conceptual possibilities, as in the electric network, and, in general, when a finite propositional language can be used.

It is now easy to show that there is a unique way to revise a theory X in the face of evidence R/S such that the revision is, as a rule, not only more successful but even closer to the truth than X. We call this basic theory revision of X by R/S. The revised theory is (X ∩ S) ∪ R or, equivalently, (X ∪ R) ∩ S, and will be indicated by $X_{R/S}^b$. Note that $X_{R/S}^b$ equals X when R ⊆ X ⊆ S, i.e., when X is maximally successful. Moreover, it is easy to see that R ⊆ $X_{R/S}^b$ ⊆ S, i.e., that $X_{R/S}^b$ is maximally successful.

Basic Revision Theorem: Assuming correct data (R ⊆ T ⊆ S), ‘basic theory revision of X by R/S’, resulting in $X_{R/S}^b$, guarantees that $X_{R/S}^b$ is (basically) at least as close to T, and hence at least as successful as X relative to R/S. Moreover, it is even closer to T, and more successful, than X when X is not maximally successful.

Note that the condition that X is not maximally successful amounts to the claim that R is not a subset of X or X is not a subset of S, i.e., R includes counterexamples of X or X cannot explain all laws derivable from S; in sum, X is not ‘between’ R and S (while the revision is!).
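Since everything in this section is finite set algebra, the definitions and the Basic Revision Theorem can be probed computationally. The following Python sketch is my own illustration (the paper contains no code); Mp, T, X, R and S are arbitrary toy choices satisfying the correct data assumption R ⊆ T ⊆ S:

```python
Mp = set(range(20))                     # conceptual possibilities
T = set(range(8))                       # the (unknown) nomic truth
X = {0, 1, 2, 9, 10, 11}                # a theory, neither subset nor superset of T
R = {0, 3, 5}                           # realized nomic possibilities, R <= T
S = T | {8, 9}                          # strongest induced law, T <= S

def at_least_as_close(Y, X, T):
    """(ib) T - Y <= T - X and (iib) Y - T <= X - T."""
    return (T - Y) <= (T - X) and (Y - T) <= (X - T)

def at_least_as_successful(Y, X, R, S):
    """(ib-sf) R - Y <= R - X and (iib-sf) Y - S <= X - S."""
    return (R - Y) <= (R - X) and (Y - S) <= (X - S)

def basic_revision(X, R, S):
    """X^b_{R/S} = (X & S) | R; equals (X | R) & S since R <= S."""
    return (X & S) | R

Xb = basic_revision(X, R, S)
print(at_least_as_close(Xb, X, T))          # True, per the Basic Revision Theorem
print(at_least_as_successful(Xb, X, R, S))  # True: R <= Xb <= S, maximally successful
```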
Figure: the shaded areas together indicate the revised theory $X_{R/S}^b$, the horizontal one the expansion step and the vertical one the contraction step. The validity of the theorem can easily be checked on the basis of this picture.

Note that there are two extreme cases in which the role of X essentially vanishes and which are for that reason of special interest:

If X ∩ S = ∅, then $X_{R/S}^b$ = R, hence further roles of X and S vanish.

If X ∪ R = Mp, then $X_{R/S}^b$ = S, hence further roles of X and R vanish.

In particular the first case is of great interest, for though extreme in some sense, it is certainly not exceptional. It simply amounts to the case in which a theory X is incompatible with at least one established law, and hence with the strongest established law. But first we will deal with the question put in this paper as far as non-extreme cases are concerned.

### 2.2 Basic Theory Revision in Light of AGM-Belief Revision

Now we turn to the main question of the paper: is it possible to reproduce the theory revision from X to $X_{R/S}^b$ by AGM-style belief revision? As is well known, AGM-belief revision centers around three (partially related) operations (Alchourrón et al. 1985); see also e.g. Cevolani and Calandra (2009). A belief set, that is, a deductively closed set of sentences of a given language, is confronted with some ‘input sentence’ that, by minimal further changes of the original belief set, either should become a consequence or no longer be a consequence of the revised belief set. For the first case, it makes an important difference whether or not the input sentence is compatible with the belief set. In the first subcase we get so-called expansion, viz., the belief set is strengthened to the set of consequences of the union of the belief set and the input sentence. Regarding the input sentence it leads from suspension of judgment about that sentence to its acceptance. In the second subcase the belief set has to be adapted in a more complicated way, satisfying certain axioms. It is called revision (in the narrow sense). Regarding the input sentence, revision leads from its rejection to its acceptance, except when the input sentence is inconsistent. Finally, in the second main case, the input sentence is supposed to belong to the belief set, but should no longer belong to the revised set. Hence, now the belief set has to be weakened in a minimal way, again in line with some axioms. It is called contraction. Regarding the input sentence it leads from its acceptance to suspension of judgment, except when the input sentence is logically true, in which case it remains accepted after contraction.

The focus in the belief revision program has been the axiomatic characterization of the three indicated operations. Whereas this kind of explication of expansion is relatively simple, it is rather complicated for revision and contraction. To be sure, we did not present the previous subsection in terms of sentences of a language but in terms of (sets of) conceptual possibilities or structures generated by a language.
But we could translate, for example, theory X in terms of Th(X), i.e., the (deductively closed) set of sentences that are true of all structures in X. In this way the set of structures X becomes the set of models of Th(X). However, it is characteristic of the structuralist approach to identify a sentence or theory X with its set of models and to consider the set of (subsets of Mp that are) supersets of X as representing the set of consequences of X. In the present context of nomic theories, this essentially model-theoretical notion of consequence can directly be transmitted to the weak claim of a theory. To be precise, if Y is a superset of X, X ⊆ Y, the weak claim of theory Y, "T ⊆ Y", is a consequence of the weak claim of theory X, "T ⊆ X". Note that the strong claims of theories X and Y are incompatible as soon as Y is a proper superset of X. In this way we not only get 'model versions' of (sets of) sentences and consequences, but we can also form model versions of the three operations (Hansson 1999, 220–225). For expansion this is almost trivial; for revision and contraction some extreme forms are also rather easy, precisely the ones we need in this section. Expansion of theory X by input 'sentence' A amounts to $X \cap A$. The so-called full meet (fm-)revision of X by A amounts to $X \cap A$ when X is compatible with A ($X \cap A$ is non-empty) and to A when X is incompatible with A. Finally, the so-called full meet (fm-)contraction of X by A amounts to $X \cup cA$ when X entails A and to X when it does not. Note that fm-revision of X by A not only entails A, as informally required of revision, but also coincides with expansion of X by A when X and A are compatible, and fully jumps to A when they are incompatible. In the latter case it is an extreme form of (AGM-)revision. Note also that fm-contraction of X by A no longer entails A when X does entail A, as informally required of contraction, but that it then fully allows all possibilities in cA. In this sense it is an extreme form of (AGM-)contraction. Note, moreover, that it leaves X unchanged when X does not entail A. Since the AGM-operations typically deal with the consequences of the relevant belief set, it is plausible to focus our leading question first on the way in which the weak or necessity claim of a theory X has to be adapted. To obtain a nomic theory in our sense we finally have to add the sufficiency claim to the adapted version of X. Recall that we exclude in this subsection two extreme cases, of which the most important one is that S and X are incompatible. From the indicated perspective it is immediately clear that the first step in the basic revision, from X to $X \cap S$, is a clear case of (the model version of) expansion of X by S. Similarly for the transition from $X \cup R$ to $(X \cup R) \cap S$. Recall, for later purposes, that expansion and fm-revision of X by S coincide when they are compatible, which we are assuming. Regarding the transition from X to $X \cup R$, or from $X \cap S$ to $(X \cap S) \cup R$, the situation is a bit more complicated; a sketch of the model versions just listed follows below.
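Continuing the toy-set style of the earlier sketch (again Python with illustrative data; the complement cA is taken relative to Mp):

```python
def expansion(X, A):
    """Model version of expansion: strengthen X with input 'sentence' A."""
    return X & A

def fm_revision(X, A):
    """Full meet revision: expansion when X and A are compatible,
    a full jump to A when they are incompatible."""
    return X & A if X & A else set(A)

def fm_contraction(X, A, Mp):
    """Full meet contraction: when X entails A (X <= A), weaken X by
    allowing all possibilities in cA = Mp - A; otherwise leave X as is."""
    return X | (Mp - A) if X <= A else set(X)

# With the toy sets from the earlier sketch (Mp = set(range(10)),
# X = {0, 1, 2, 3}, S = {1, 2, 3, 4, 5, 6}):
# expansion(X, S) == fm_revision(X, S) == {1, 2, 3}, as the text says.
```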
Focusing on the transition from X to $X \cup R$, that is, from X to $X \cup (R - X)$, where R − X amounts to the set of realized counterexamples of X, we see that whereas X is a subset of, and hence entails, c(R − X), $X \cup R$ no longer entails this consequence. It even allows all possibilities in R − X. This amounts to fm-contraction of X by c(R − X). Similarly, the transition from $X \cap S$ to $(X \cap S) \cup R$ amounts to fm-contraction of $X \cap S$ by c(R − ($X \cap S$)). In sum, we may now conclude that basic theory revision of X by R/S, leading to $(X \cap S) \cup R$ or, equivalently, $(X \cup R) \cap S$, can be seen as the successive application of expansion and fm-contraction (or vice versa), followed by adding the sufficiency claim to the resulting theory. To be sure, we need to add the sufficiency claim, and it is not easy to see how it can be represented in AGM-terms, starting from the present perspective on 'nomic theories'. It seems that the totally different approach by Cevolani, Crupi, and Festa (this volume) opens a new perspective avoiding this closure operation of sorts. However, that approach seems restricted to finite propositional languages and does not seem to have a clear alternative for the 'refined account' that we will soon start to motivate and develop. From our perspective the above analysis completes our task of an AGM-presentation of basic theory revision for the non-extreme cases in which X and S are compatible ($X \cap S \neq \emptyset$) and in which X and R do not exhaust Mp ($X \cup R \neq M_p$), respectively. In both extreme cases the role of X essentially vanishes. Whereas the second extreme case ($X \cup R = M_p$) seems rather rare, the first extreme case ($X \cap S = \emptyset$) certainly is not: it merely assumes that X is incompatible with at least one induced empirical law. Hence, the main remaining task is to refine the expansion of X by S in some way for the case in which X and S are incompatible. One might suggest that another aspect of the transition from X to $X \cup R$ (or from $X \cap S$ to $(X \cap S) \cup R$) may require refinement. Instead of fm-contraction of X by c(R − X) one might think of so-called partial meet contraction of X by c(R − X), in which case not all possibilities in R − X are allowed. This would require some kind of degree of trustworthiness of the various experimentally realized conceptual possibilities. A similar kind of refinement of the transition from X to $X \cap S$ arises when we assume a degree of trustworthiness of the induced laws, in which case S is no longer taken for granted. However, these kinds of refinement, which amount to a weakening of the correct-data assumption, go beyond the scope of the present paper. Finally, here, and for later purposes, it is interesting to see what would have resulted had we defined basic theory revision of X by R/S in terms of fm-revision of X by S followed by the relevant fm-contraction, or vice versa (in both cases followed by adding the sufficiency claim).
By the indicated alternative definition we would obtain $(X \cap S) \cup R$ when $X \cap S$ is non-empty, and $S \cup R$, that is, S, when $X \cap S$ is empty, hence deviating from our primary definition in, and only in, the second case. By the 'vice versa' definition, fm-contraction by c(R − X) followed by fm-revision by S, we would obtain $(X \cup R) \cap S$, which reduces to R when $X \cap S$ is empty. Hence, in this case the result would not differ from our primary definition. Let us call the deviating alternative definition the fm-definition of basic theory revision of X by R/S.

## 3 The Refined Account

The main problem of basic revision of X by R/S is that it reduces to R when X and S do not overlap, i.e., contradict each other. Expansion of X by S then gives the empty set, and the subsequent weakening with R just amounts to R, hence a result that in no way reminds us of X. It is easy to check that the other order leads to the same result. For this route it is crucial to note that R is a subset of S. The plausible direction for refinement is to try to concretize basic revision of X by R/S in terms of a likeness approach that reduces to the basic (content) approach under the appropriate idealization conditions (IC-test), i.c. when X and S are compatible. From the AGM-BR perspective and our structuralist view of theories, the spheres approach of Adam Grove (1988) is highly plausible. The spheres may seem to come rather out of thin air, but later we will see that they can be given a plausible 'similarity foundation' which, moreover, enables us to connect the spheres approach more specifically to the (structuralist) likeness approach of qualitative truth approximation. The basic idea of Grove is to postulate nested spheres around X, satisfying a number of conditions, notably, and plausibly, that X is the smallest and Mp the largest sphere. Consider the smallest sphere σX(S) around X overlapping with S. It is now plausible to define the (refined) theory revision of X by S, $X^r_S$, as the intersection of S and σX(S), i.e., as $\sigma_X(S) \cap S$. It is easy to check (IC-test!) that, when X and S are compatible, σX(S) = X and hence $X^r_S = X \cap S\ (= X^b_S)$. Grove proved that $X^r_S$ satisfies the original AGM-axioms of belief revision presented in Alchourrón et al. (1985). This corresponds to what has later been called 'transitively relational partial meet revision' (Hansson 1999, p. 223), which we will simply abbreviate as 'pm-revision'. In sum, $X^r_S = \sigma_X(S) \cap S$ is the most straightforward AGM-way to deal with the revision of X by S, but how do we now take R into account? Recall that R is a non-empty subset of S. Recall also that the transition from X to $X \cup R$ amounted to fm-contraction of X by c(R − X), and the transition from $X \cap S$ to $(X \cap S) \cup R$ to fm-contraction of $X \cap S$ by c(R − ($X \cap S$)). As Hansson (1999, 224–225) describes, contraction can also be given a spheres interpretation, giving rise to partial meet (pm-)contraction. However, this would mean that we have to make a selection of members of R − X or of R − (X ∩ S), respectively.
In the present context there is not much reason for this kind of refinement. Therefore, the only question that remains is the order in which the refined revision by S and the basic revision by R should take place. According to a first alternative one can first apply fm-contraction of X by c(R − X), followed by pm-revision of the result ($X \cup R$) by S, leading to $X^r_{S(R)} =_{def} (X \cup R)^r_S = (X \cup R) \cap S = X^b_{R/S}$, since $X \cup R$ itself is the smallest sphere around $X \cup R$ overlapping with S. Hence, $X^r_{S(R)} = X^b_{R/S}$, i.e., basic revision of X by R/S, even if $X \cap S = \emptyset$. Hence, the first alternative is no solution to the main problem. The second alternative fares better: first pm-revision of X by S, followed by fm-contraction of the result ($\sigma_X(S) \cap S$) by c(R − ($\sigma_X(S) \cap S$)). In this way we get: $X^r_{R(S)} =_{def} X^r_S \cup R = (\sigma_X(S) \cap S) \cup R$ (see the sketch after this paragraph). Note that (IC-test) $X^r_{R(S)}$ reduces to $(X \cap S) \cup R = X^b_{R/S}$ when X and S overlap. However, $X^r_{R(S)} \neq X^b_{R/S}$ when X and S do not overlap. Hence, the order matters a lot, and we will opt for this second alternative. Of course, refined revision is only rounded off by adding the sufficiency claim. Of the suggested overlaps in the figure below, the only required overlap is that of σX(S) with S, not that with T, let alone that with R. However this may be, the horizontally shaded area indicates the revision step and the vertically shaded area the contraction step. Note that pm-revision of X by S reduces to fm-revision when there are just two spheres, viz. X and Mp. Hence, when there are just two spheres, the result of pm-revision of X by S followed by the relevant fm-contraction, and closed by adding the sufficiency claim, reduces to (the result of) the fm-definition of basic theory revision of X by R/S. Let us now evaluate refined revision, first in terms of basic truthlikeness and basic successfulness, beginning with the latter. It is not difficult to check that $X^r_{R(S)}$ is basically at least as successful as X. It is even maximally successful, for it holds that $R \subseteq X^r_{R(S)} \subseteq S$; hence $X^r_{R(S)}$ has no established counterexamples and it explains the strongest established law, hence all established laws. However, already in view of being basically at least as successful, the proposed revision is, due to the (basic) success theorem, potentially conducive for basic truth approximation, even if X is incompatible with S. But in this extreme case, the proposed revision is not basically at least as close to the truth, except in a very extreme, lucky case. The reason is that, as a rule, the revision introduces new mistakes, viz. it includes models of S outside T that did not belong to X, i.e., $\sigma_X(S) \cap (S - T)$ will be non-empty.
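Given a finite, explicitly listed system of nested spheres, the second alternative can also be sketched directly; the sphere list below is hypothetical, ordered from the smallest sphere (X itself) up to Mp. Note the IC-test: if X and S overlap, the smallest sphere meeting S is X itself and the result reduces to $(X \cap S) \cup R$.

```python
def refined_revision(spheres, R, S):
    """X^r_{R(S)} = (sigma_X(S) & S) | R, where spheres[0] is X and
    spheres[-1] is Mp, ordered by inclusion."""
    sigma = next(sph for sph in spheres if sph & S)  # smallest sphere meeting S
    return (sigma & S) | R

# X and S incompatible, so basic revision would collapse to bare R:
X = {0, 1}
spheres = [X, {0, 1, 2, 3}, {0, 1, 2, 3, 4, 5}, set(range(10))]
R, S = {4}, {3, 4, 5}
print(refined_revision(spheres, R, S))  # {3, 4}: more than bare R
```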
Such new mistakes are typically grist to the mill of refined truth approximation, for in that approach new mistakes are allowed as long as they are less bad than old ones. Hence, the question is how the revision fares in terms of refined truthlikeness and corresponding refined successfulness. Refined truth approximation, as presented in Kuipers (2000), is a qualitative likeness approach to truth approximation. It is based on a three-place 'structure-likeness' relation on the set of structures: s(x,y,z): y is at least as similar (close) to z as x. When s(x,y,z) holds, y is also said to be, qua kind of structure, between x and z. It is supposed to satisfy some plausible minimal (s-)conditions.3 Moreover, we need not assume that all pairs of structures are comparable in the sense of being related by some intermediate structure. Hence we define: x and z are related, r(x,z), iff $\exists y\, s(x,y,z)$. Finally, we say that s is trivial if for all x, y, and z: s(x,y,z) iff x = y = z. Before we introduce further definitions, let us introduce the likeness foundation of spheres and indicate the connection with the likeness approach to truth approximation. Not all of Grove's sphere axioms are very plausible. Wlodek Rabinowicz (1995) provided plausible foundations in terms of a four-place similarity relation: sim(x,y;u,v): x is at least as close (similar) to y as u is to v, satisfying four plausible conditions and one Limit Assumption (see below). Given a set of structures X, Rabinowicz now defines a binary relation between structures: $x \le_X y$ iff $\forall y' \in X\ \exists x' \in X\ sim(x',x;y',y)$. This relation might be paraphrased as: X has at least as similar representatives of x as of y. The relation enables the definition of a sphere (Rabinowicz 1995, p. 92): Y is a sphere around X iff (i) if X ≠ ∅ then Y ≠ ∅, and (ii) for all x and all y ∈ Y, if $x \le_X y$ then x ∈ Y. It is not difficult to check that this definition satisfies Grove's four axioms, among them that X and Mp are the smallest and the largest sphere, respectively. Recall that σX(S) was the 'smallest' sphere around X that overlaps with S and that $X^r_S = \sigma_X(S) \cap S$ was defined as the refined revision of X by S. Rabinowicz proved that $X^r_S = \{x' \in S \mid \exists x \in X\ \forall y \in X\ \forall y' \in S\ sim(x',x;y',y)\}$, where the latter set corresponds to Rabinowicz's version of $X^r_S$.
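For finite sets of structures, these definitions can be spelled out directly; in the sketch below the four-place predicate sim is a stand-in that a concrete application would have to supply:

```python
def leq(X, sim, x, y):
    """x <=_X y: X has at least as similar representatives of x as of y,
    i.e., for every y' in X there is an x' in X with sim(x', x; y', y)."""
    return all(any(sim(xp, x, yp, y) for xp in X) for yp in X)

def is_sphere_around(Y, X, Mp, sim):
    """Rabinowicz's sphere test: (i) Y is non-empty if X is, and
    (ii) Y is closed downward under <=_X within Mp."""
    if X and not Y:
        return False
    return all(x in Y for y in Y for x in Mp if leq(X, sim, x, y))
```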
The idea behind Rabinowicz's version is that it forms "the set of S-worlds that are as similar to some worlds in X as possible, as compared with other worlds in X" (Rabinowicz 1995, p. 82; S substituted for Y). The Limit Assumption that is now needed, instead of a rather arbitrary assumption of Grove not presented here, is not at all that arbitrary: if X and S are non-empty then $X^r_S$ is non-empty. Now we can turn to the connection between s and sim. Assuming that z in s(x,y,z) is a kind of target, the most plausible one certainly is: s(x,y,z) iff sim(y,z;x,z), i.e., y is at least as similar to z as x (is to z). With this connection in mind we now arrive at the crucial definition of refined truth approximation. Definition: Y is refined at least as truthlike as X iff

• (ir) $\forall x \in X\ \forall z \in T$: r(x,z) → $\exists y \in Y\ s(x,y,z)$

• (iir) $\forall y \in Y - (X \cup T)\ \exists x \in X - T\ \exists z \in T - X\ s(x,y,z)$

It is easy to check that (ir) is a strengthening of (ib) of the basic definition and that (iir) is a weakening of (iib). (ir) roughly says that every comparable pair of structures, one of X and one of T, has an 'intermediate' in Y. (iir) states that if Y − (X ∪ T) is at all non-empty, which is excluded in the basic case, these structures are 'useful'. The definition reduces to the basic one when s is trivial. Whereas the basic revision $X^b_{R/S}$ was easily seen to be basically at least as truthlike as X, the refined revision $X^r_{R(S)}$ is not necessarily at least as truthlike as X in the refined sense. Hence, there is now even more reason to turn to successfulness. Definition: Y is refined at least as successful as X, relative to R/S, iff

• (ir-sf) $\forall x \in X\ \forall z \in R$: r(x,z) → $\exists y \in Y\ s(x,y,z)$

• (iir-sf) $\forall y \in Y - (X \cup S)\ \exists x \in X - S\ \exists z \in S - X\ s(x,y,z)$

The Refined Success Theorem now tells us that, assuming correct data, 'refined at least as truthlike' entails 'refined at least as successful'. Again the proof is not difficult. However, for the general proof that (iir) entails (iir-sf) we need to assume that, if Y − (X ∪ S) is non-empty, S is convex (i.e., if x, z ∈ S and s(x,y,z), that is, when y is qua kind of structure between x and z, then y ∈ S). Similar to the basic case, the consequence of the theorem is that being persistently more successful in the refined sense is conducive for refined truth approximation (provided S is convex, if relevant). The final crucial question now is whether the (AGM-interpretable) refined revision $X^r_{R(S)}$ of X by R/S is at least as successful as X in the refined sense. In that case it would be potentially conducive for truth approximation, for it may become persistently more successful in the refined sense and hence conducive for refined truth approximation. This happens to be the case according to the following: Main Theorem: $X^r_{R(S)}$ is refined at least as successful as X, relative to R/S.
Let us look at the specific claims:

(i^r-sf wrt $X^r_{R(S)}$): $\forall x \in X\ \forall z \in R$: r(x,z) → $\exists y \in X^r_{R(S)}\ s(x,y,z)$

This is trivial, for R is a subset of $X^r_{R(S)}$ and r(x,z) → s(x,z,z) is a (plausible) minimal s-condition.

(ii^r-sf wrt $X^r_{R(S)}$): $\forall y \in X^r_{R(S)} - (X \cup S)\ \exists x \in X - S\ \exists z \in S - X\ s(x,y,z)$

This is also trivial, for $X^r_{R(S)}$ is a subset of S, hence $X^r_{R(S)} - (X \cup S)$ is empty. The latter fact even has the consequence that the convexity of S is not required for the applicability of the Refined Success Theorem.

## 4 Conclusions

The main conclusions of this paper are: First, basic revision of theory X in light of evidence R/S, assuming X and S compatible, based on expansion by S, leading to $X \cap S$, followed by fm-contraction by c(R − ($X \cap S$)), leading to $(X \cap S) \cup R$, and closed by adding the sufficiency claim, is basically at least as successful as X and even basically at least as close to the nomic truth T as X. Second, refined theory revision in light of evidence R/S, assuming X and S incompatible, based on pm-revision by S along Grove-Rabinowicz lines, leading to $\sigma_X(S) \cap S$, followed by fm-contraction by c(R − ($\sigma_X(S) \cap S$)), leading to $(\sigma_X(S) \cap S) \cup R$, and closed by adding the sufficiency claim, is at least as successful as X in the refined sense, and hence potentially conducive for refined nomic truth approximation. At this point a number of debunking remarks are in order:

(a) Having to focus in both cases first on the necessity claim and to add the sufficiency claim only at the end is not very elegant.

(b) Both revisions are rather ad hoc. However, as for ad hoc changes in a theory in general, the crucial question is whether they can be put to new (HD-)tests, and this is evidently the case. After all, it could even be the case that all further tests indicate that no new ad hoc maneuvers have to be made.

(c) Both revisions are rather diehard empiricist or instrumentalist. The 'instrument' X is adapted precisely so that it just saves the phenomena, not only with respect to R but also with respect to S. Note that this character will not change by weakening the correct-data assumption, as suggested at the end of Sect. 2.

(d) If there is something like well-formed theories, there do not seem to be good reasons to expect that the two revisions will satisfy the criteria, even if R and S satisfy some derived criteria.

(e) Last, but not least, what remains of the idea behind X? A proper theory, even if it is without theoretical terms in some sophisticated sense, is usually based on one or two ideas. It is difficult to imagine that such ideas do not become 'mutilated' by the revision.
Be this as it may, the two results may stimulate the interaction between truth approximation and belief revision approaches, for they fundamentally show that AGM belief revision provides means for nomic truth approximation.

Footnotes

1. Zwart, however, disagrees about calling the second a concretization of the first.

2. The formal definition also leaves room for the case in which the input sentence does not belong to the original belief set. Then the outcome of contraction is simply the original belief set, i.e., judgment about the input sentence was and remains suspended.

3. They are: centered, centering, and conditionally left and right reflexive. Here s is centered iff s(x,x,x), and centering iff s(x,y,x) implies x = y. s is conditionally left/right reflexive if s(x,y,z) implies all kinds of left and right reflexivity, i.e., s(x,x,y), s(x,x,z), s(y,y,z) and s(x,y,y), s(x,z,z), s(y,z,z), respectively. Note that this conditional form leaves room for incomparable structures (see text), which otherwise would not be the case.

## Acknowledgments

The author wishes to thank Roberto Festa for the occasion to present this paper in 2009 at a symposium in Trieste; Gustavo Cevolani, Gerhard Schurz and Wlodek Rabinowicz for very useful comments; and the Netherlands Institute for Advanced Study (NIAS), Wassenaar, for providing again paradisiacal conditions for research and writing, including linguistic correction by Anne Simpson.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9080246686935425, "perplexity": 928.1820686222686}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274985.2/warc/CC-MAIN-20160524002114-00053-ip-10-185-217-139.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/mixture-formula.121469/
# Mixture formula

1. May 20, 2006

What is the mixture formula when combining two or more fluids? Thanks, Pavdarin

2. May 20, 2006

### Hootenanny Staff Emeritus

Do you mean the rate of diffusion? ~H

3. May 20, 2006

Sorry for not being specific beforehand; I meant the mixture formula for the heat difference between substances of the same phase.

4. May 20, 2006

### Hootenanny Staff Emeritus

Sorry, I don't quite understand. Do you have a specific example? ~H

5. May 20, 2006

Umm... if I have 1 kg of water at 100 degrees and 10 kg of water at zero, what formula or series of formulas would I use to calculate the final temperature?

6. May 20, 2006

### Hootenanny Staff Emeritus

Ahh, you would simply use the equation for specific heat capacity, $\Delta Q = mc\Delta\theta$. You would have to use simultaneous equations. HINT: The energy lost by the water at 100 degrees must be equal to the energy gained by the water at zero degrees. ~H

7. May 20, 2006

OK, thanks Hootenanny; sorry for the confusion.

8. May 20, 2006

### Hootenanny Staff Emeritus

No problem, I was worried for a moment then because I'd never heard of such a formula ~H
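For what it's worth, here is a quick numerical check of Hootenanny's hint, as a sketch assuming no heat losses and the same specific heat c for both masses (so c cancels): setting heat lost equal to heat gained, m1·c·(T1 − Tf) = m2·c·(Tf − T2), gives Tf = (m1·T1 + m2·T2)/(m1 + m2).

```python
def final_temperature(m1, T1, m2, T2):
    """Equilibrium temperature when two samples of the same fluid are mixed,
    assuming no heat exchange with the surroundings."""
    return (m1 * T1 + m2 * T2) / (m1 + m2)

# 1 kg of water at 100 C mixed with 10 kg at 0 C:
print(final_temperature(1.0, 100.0, 10.0, 0.0))  # about 9.09 C
```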
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9155388474464417, "perplexity": 1799.0088331059148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541426.52/warc/CC-MAIN-20161202170901-00371-ip-10-31-129-80.ec2.internal.warc.gz"}
http://www.physicsforums.com/showthread.php?p=4191354
# A helium balloon in a bus

by rbwang1225
Tags: balloon, helium

P: 116
1. The problem statement, all variables and given/known data
Consider a helium balloon with negligible mass in a bus with all windows closed. When the bus is accelerating with ##\mathbf a=a \mathbf i##, where ##\mathbf i## is the unit vector in the positive x direction, describe the status of the balloon and explain the reason. If we consider a massive balloon, does your answer change? Explain the reason. (Status: Does it tilt or not? What are the tilting direction and angle?)

2. Relevant equations
Buoyant force ##\rho gV##

3. The attempt at a solution
I know the answer might be ##\tan\theta=\frac{g}{a}## or something like this, but I don't really understand the reason behind it. I think it is related to physics in noninertial frames. Any help would be appreciated. Sincerely.

Admin P: 23,577
Forget about the bus for a moment. What usually happens to the helium filled balloon and why?

P: 116
The balloon will float in the air because the density of the helium is smaller than that of air. I just got a new idea, but I don't know if it's right or not. Because the total force is ##\mathbf B+m\mathbf g##, which equals ##m\mathbf a##, and I think the direction of the gravity is the same, it is the change of the buoyant force that causes the change of the status of the balloon.

Admin P: 23,577
The balloon will float, or ascend?

P: 116
Sorry, bad English; the balloon will ascend.

Admin P: 23,577
OK, why does it ascend?

P: 116
Because the pressure at the lower part of the balloon is greater than at the upper part, and the buoyant force is larger than the gravitational force, the balloon ascends.

Admin P: 23,577
OK, but why do these differences exist? Do they exist in a zero gravity environment?

Sci Advisor HW Helper Thanks PF Gold P: 5,234
If the balloon is not on a string held by someone, where do you think the balloon is located vertically before the bus starts to accelerate? (a) in mid air (b) at the roof of the bus. If your answer is (b), what do you think the magnitude of the force is that the roof exerts on the balloon?

P: 116
Quote by Borek: "OK, but why do these differences exist? Do they exist in the zero gravity environment?"
No, they always exist in a gravitational environment. As for the reason for these differences, I don't really understand your question, sorry...

P: 116
Quote by Chestermiller: "If the balloon is not on a string held by someone, where do you think the balloon is located vertically before the bus starts to accelerate? (a) in mid air (b) at the roof of the bus. If your answer is (b), what do you think the magnitude of the force is that the roof exerts on the balloon?"
My answer is (b), but I don't really know how to figure out the direction and magnitude of the normal force exerted by the roof, or the buoyant force, which, I think, has something to do with the condition of the windows.

P: 358
Because of this part, "(Status: Does it tilt or not? What are the tilting direction and angle?)", I think we should consider it to be floating in the air, held on a string by a student. So the balloon actually acts as an accelerometer in this case: http://scienceblogs.com/dotphysics/2...-one-yourself/

P: 3,145
Quote: "I think we should consider it to be floating up in the air and it is on a string held by a student."
Or at least assume that the balloon is weighted so it has a tendency to float one way up, e.g. with the knot pointing downwards.
Perhaps it would help the OP to remember that when the bus is stationary, gravity (an acceleration) is pulling the air vertically downwards. What happens when the bus and the air in it are also accelerating in another direction? It helps if you have ridden on a bus standing up!

Admin P: 23,577
Let me reword the original problem. Imagine you have a helium filled balloon in the standing bus. Obviously, the balloon goes up till it stops at the roof and it stays there. Now, what will happen when the bus starts to move?
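If one adopts the tethered-balloon (accelerometer) reading suggested above, the quantitative picture can be sketched as follows: the string aligns with the effective gravity g_eff = g − a, so the balloon leans toward the direction of acceleration (opposite to a hanging pendulum) by an angle θ with tan θ = a/g. This is only an illustration of that reading, not a worked solution to the homework.

```python
import math

def balloon_tilt_deg(a, g=9.81):
    """Tilt of a tethered helium balloon from the vertical, measured toward
    the direction of the bus's acceleration a (in m/s^2)."""
    return math.degrees(math.atan2(a, g))

print(balloon_tilt_deg(2.0))  # about 11.5 degrees for a = 2 m/s^2
```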
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8935864567756653, "perplexity": 630.2828378670349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500832052.6/warc/CC-MAIN-20140820021352-00117-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/find-intersection-of-3x2-matrices-using-qr-factorization.431305/
# Find Intersection of 3x2 Matrices Using QR Factorization

1. Sep 22, 2010

### blabbate

Figured it out.

Last edited: Sep 22, 2010

2. Sep 29, 2010

### qiaoshiya

By any chance, did you do the following: Do the full QR factorization of the two matrices. Using the third column from each 'Q' matrix, build a new matrix, call it Q'. It is composed of the normals to the planes described by the original two matrices. Do a full QR of Q'. The new QR has a third column of Q that is orthogonal to both of those normal vectors. So it is therefore in the intersection. Is this correct? Or did I make a bad assumption?

3. Sep 29, 2010

### dannybrowne86

It seems that we are all in CSE 6643, and working together seems fine according to the syllabus. So, qiaoshiya, this seems like a very reasonable assumption. From your thoughts, and after talking with Prof Alben, I went back through Chapter 7. On page 50 the text states "Notice that in the full QR factorization, the columns q_j for j>n are orthogonal to range(A)." That means that the third column of Q should basically be equivalent to cross(x1, y1), which is one way of identifying a plane (use the plane's normal vector). With this, then using the two third columns of the Qs, the third QR factorization would result in a vector that is perpendicular to both of the first two plane-identifying vectors. That is exactly what we're looking for. When I first saw this problem I went through (mostly) the exercise of finding the final vector, since I knew how to do that. Now, reading that line from page 50, the two processes seem to be identical. Other thoughts from anyone?

Last edited: Sep 29, 2010
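qiaoshiya's recipe is easy to test numerically; here is a sketch with made-up 3x2 matrices (NumPy's qr with mode="complete" returns the full factorization). It assumes the two planes are genuinely distinct; otherwise Q' is rank deficient and the intersection direction is not unique.

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # columns span plane 1
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # columns span plane 2

QA, _ = np.linalg.qr(A, mode="complete")  # full 3x3 orthogonal factor
QB, _ = np.linalg.qr(B, mode="complete")

nA, nB = QA[:, 2], QB[:, 2]               # normals to the two planes
Qp, _ = np.linalg.qr(np.column_stack([nA, nB]), mode="complete")
d = Qp[:, 2]                              # orthogonal to both normals

print(d)                                  # direction of the intersection line
print(nA @ d, nB @ d)                     # both ~0, so d lies in both planes
```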
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8266054391860962, "perplexity": 671.8767215773813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584331733.89/warc/CC-MAIN-20190123105843-20190123131843-00074.warc.gz"}
https://electrical.codidact.com/posts/279585
Q&A # What is the difference between differential amplifier and differentiator? +1 −0 Since am interested in how a delta sigma modulator works, I need to know what is the difference between differential amplifier and differentiator if there is a difference of course. Why does this post require moderator attention? Why should this post be closed? +2 −0 A differential amplifier and a differentiator are two completely different circuit blocks. ### Differential Amplifier A differential amplifier has two inputs and one output. It takes the difference between the two inputs, multiplies that by the gain, and makes it the output. Out = (V1 - V2) ⋅ Gain In this example, the gain is A/B. ### Differentiator A differentiator takes the derivative of a signal. In other words, its output is proportional to how fast the input is changing. Note that the gain is not dimensionless, as it is for a normal amplifier. For example, the gain can be the output Volts divided by the input Volts/second, which comes out to units of seconds. In this example, the gain is proportional to -R1⋅C1. Why do you think the gain could not expressed as V/V? I'll assume this is referring to the differentiator, since the gain of the differential amplifier is a voltage divided by a voltage, resulting in a dimensionless value. For a differentiator, the output is the change in the input. Just dividing the output voltage by the input voltage doesn't yield anything meaningful. For example, you get 0 V out for any steady input voltage. Saying you get 0 V out for 10 V in, but also 0 V out for 3.97 V in (or any other voltage), isn't very useful. Since you didn't define any of your terms, nor the context, it's just meaningless characters. Why does this post require moderator attention? #### 1 comment OK - I was of the opinion that in a short comment it would be appropriate to use the well-known abbreviations for the open-loop gain Aol and the closed-loop gain Acl. The quantity beta was defined using the symbols shown in the drawing. Again, I like to point out that for sinusoidal signals it is, of course, possible to define a dimensionless gain (V/V). For control systems (control loops) It is common practice to define the gain in the frequency domain (PD or PID or PD-T1 blocks). LvW‭ about 2 months ago +0 −0 Both differential amplifier and differentiator react to a voltage difference. But in the differential amplifier, the difference is between two voltages applied to the amp inputs at the same time while in the differentiator, the difference is between two voltage values at adjacent moments of time. I have met a similar question about the difference between a differential amplifier and differential resistance. And in this case, what they have in common, is the voltage difference. But while in the differential amplifier the difference is between two input voltages, in the differential resistance, the difference is between two voltage values at adjacent values of the current. BTW there is a differential integrator - a 2-input op-amp circuit with two RC circuits. Maybe, it is possible to construct in a similar way a 2-input differential differentiator... Why does this post require moderator attention? Quote:..."in the differential resistance, the difference is between two voltage values at adjacent values of the current." Did the questioner (Pacifist) spoke about resistances? I think, he has mentioned instead a differentiating circuit. In this case, we could speak about two adjacent voltages at two different time slots...? 
LvW about 2 months ago

@LvW, Exactly... I just quoted an excerpt from another similar question that was asked to me some time ago... Circuit fantasist about 2 months ago

+0 −0

I need to know what is the difference between differential amplifier and differentiator

A differential amplifier amplifies the difference voltage between two signal voltages. A differentiator performs a type of mathematical calculus on a signal. The two processes are wholly unrelated.
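A small numerical sketch of the contrast drawn in the first answer (all component values and signals here are made up; np.gradient is used as a crude numerical derivative):

```python
import numpy as np

t = np.linspace(0.0, 1e-3, 1001)            # 1 ms of time, 1 us steps
v1 = 0.5 * np.sin(2 * np.pi * 1e3 * t)      # two 1 kHz input signals
v2 = 0.1 * np.sin(2 * np.pi * 1e3 * t)

# Differential amplifier: instantaneous difference times a dimensionless gain
gain = 10.0
out_diff_amp = gain * (v1 - v2)             # Out = (V1 - V2) * Gain

# Ideal inverting op-amp differentiator: out = -R*C * d(v1)/dt,
# so the "gain" -R*C carries units of seconds
R, C = 10e3, 10e-9                          # 10 kOhm, 10 nF -> RC = 100 us
out_differentiator = -R * C * np.gradient(v1, t)

print(out_diff_amp[:3], out_differentiator[:3])
```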
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8864744901657104, "perplexity": 928.2044654738928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703499999.6/warc/CC-MAIN-20210116014637-20210116044637-00673.warc.gz"}
http://mathhelpforum.com/advanced-algebra/177505-finding-hermitian-matrix-h-2-x-3-matrix.html
# Math Help - Finding the Hermitian Transpose A^H of a 2 x 3 Matrix A

1. ## Finding A^H for a 2 x 3 Matrix A

Hello, I'm having a really hard time with Hermitian matrices. In preparation for my exam next week, I've been trying to figure out this problem: Given a matrix $A = \begin{bmatrix} i & 1 & i \\ 1 & i & i \end{bmatrix}$, compute $A^HA$ and $AA^H$. How do I find $A^H$ given the 2x3 matrix $A$? Please include all steps. Thank you!

2. $A^H=(\bar{A})^t=\begin{bmatrix} -i & 1 & -i \\ 1 & -i & -i \end{bmatrix}^t=\begin{bmatrix} -i & 1 \\ 1 & -i \\ -i & -i \end{bmatrix}$

3. Thank you! That really helped. I've moved on to another similar problem, and want to make sure I have the correct $A^H$. If given matrix $A = \begin{bmatrix} 1 & 2i & i \\ 1 & i & 1+i \end{bmatrix}$, would $A^H = \begin{bmatrix} 1 & 1 \\ -2i & -i \\ -i & 1-i \end{bmatrix}$?

4. Right.
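For checking exercises like these, NumPy computes the conjugate (Hermitian) transpose and the two products directly; a quick sketch using the first matrix from the thread:

```python
import numpy as np

A = np.array([[1j, 1, 1j],
              [1, 1j, 1j]])

AH = A.conj().T      # A^H: complex conjugate, then transpose
print(AH)            # matches the 3x2 answer given above
print(AH @ A)        # 3x3 product A^H A, Hermitian by construction
print(A @ AH)        # 2x2 product A A^H, Hermitian by construction
```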
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8021191358566284, "perplexity": 434.2852519198575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195034286.17/warc/CC-MAIN-20150601214354-00098-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www2.cms.math.ca/Events/winter14/abs/sma
2014 CMS Winter Meeting
McMaster University, December 5 - 8, 2014

Stochastic Models and Applications
Org: Shui Feng (McMaster) and Bruno Remillard (HEC Montreal)

LOUIGI ADDARIO-BERRY, McGill University
Random maps and their cores

Let $Q$ be a large random quadrangulation, let $R$ be its largest simple subgraph and $P$ its second-largest simple subgraph. Then $|R|/|Q|$ is concentrated near a fixed constant $\alpha \in (0,1)$, and $|P|/|Q|$ is very likely close to zero; in other words, large quadrangulations with high probability have a unique simple "core" of linear size, decorated with small (sub-linear size) attachments. We use this picture to show that the pair $(Q,R)$, after suitable rescaling, converges in the Gromov-Hausdorff-Prokhorov sense to a limit $(M,M)$, where $M$ is a random variable with the law of the Brownian map. This requires showing that the distribution of mass in $Q$ and $R$ is asymptotically equal, which we establish through an "invariance principle for exchangeable, asymptotically negligible attachments" for measured metric spaces.

LOUIS-PIERRE ARGUIN, Université de Montréal
Probabilistic approach for the maxima of the Riemann Zeta function on the critical line

A recent conjecture of Fyodorov, Hiary & Keating states that the maxima of the Riemann Zeta function on a bounded interval of the critical line behave similarly to the maxima of a specific class of Gaussian fields, the so-called log-correlated Gaussian fields. These include important examples such as branching Brownian motion and the 2D Gaussian free field. In this talk, we will highlight the connections between the number theory problem and the probabilistic models. We will outline the proof of the conjecture in the case of a randomized model of the Zeta function. We will discuss possible approaches to the problem for the function itself. This is joint work with D. Belius (NYU) and A. Harper (Cambridge).

RALUCA BALAN, University of Ottawa
Intermittency for the stochastic wave and heat equations with fractional noise in time

Stochastic partial differential equations (SPDEs) are mathematical objects that are used for modeling the behaviour of physical phenomena which evolve simultaneously in space and time, and are subject to random perturbations. A key component of an SPDE which determines the properties of the solution is the underlying noise process. An important problem is to study the impact of the noise on the behavior of the solution. In the study of SPDEs using the random field approach, the noise is typically given by a generalization of the Brownian motion, called the space-time white noise. In this talk, we consider the stochastic heat and wave equations driven by a Gaussian noise which is homogeneous in space and behaves in time like a fractional Brownian motion with index $H > 1/2$. We study a property of the solution $u(t,x)$ called intermittency. This property was introduced by physicists as a measure for describing the asymptotic behaviour of the moments of $u(t,x)$ as $t \rightarrow \infty$. Roughly speaking, $u$ is "weakly intermittent" if the moments of $u(t,x)$ grow as $\exp(ct)$ for some $c>0$. It is known that the solution of the heat (or wave) equation driven by space-time white noise is weakly intermittent.
We show that when the noise is fractional in time and homogeneous in space, the solution $u$ is "weakly $\rho$-intermittent", in the sense that the moments of $u(t,x)$ grow as $\exp(ct^{\rho})$, where $\rho>0$ depends on the parameters of the noise. This talk is based on joint work with Daniel Conus (Lehigh University).

PHELIM BOYLE, Wilfrid Laurier University
Beyond Perron Frobenius

The classical Perron-Frobenius theorem provides a sufficient condition for the dominant eigenvector of an n by n matrix to be positive. The condition is that all the matrix elements are positive. An extension of this result has a direct application in finance. The dominant eigenvector of the correlation matrix of stock returns can proxy the market portfolio. As the market portfolio must have positive weights, we are interested in the conditions under which the elements of this eigenvector are positive. It turns out that one can have some negative elements in the correlation matrix and the matrix can still have a positive dominant eigenvector. We analyze these conditions and this leads to extensions of the Perron-Frobenius theorem.

DONALD A. DAWSON, Carleton University
Random walk, percolation and branching systems on the hierarchical group

Spatial population models have been intensively studied for many years. Classical branching systems are well understood in homogeneous spaces such as $\mathbb{R}^2$ or $\mathbb{Z}^d$. Much less is known about more complex systems such as catalytic branching systems, in particular mutually catalytic systems, even in homogeneous spaces, and much less is known in random media. The purpose of this lecture is to explain some recent work in this direction and some conjectures and open problems. As a starting point we introduce the hierarchical group, giving some motivation and comparison to the Euclidean group. We then consider branching systems in which the spatial movement is given by a random walk in these spaces and the role of the potential theoretic properties, in particular the degree of transience-recurrence of the random walk. We then consider the question of percolation for a related class of random graphs embedded in these spaces and end with an open problem concerning the properties of branching systems on these percolation clusters serving as a random medium. This talk is based on joint projects with Luis Gorostiza and Andreas Greven.

STEFANO FAVARO, University of Torino
A new tool for nonparametric estimation of species variety with Gibbs-type priors

Bayesian nonparametric inference for species sampling problems concerns the estimation, conditional on an initial observed sample, of the species variety featured by an additional unobserved sample. Within the framework of Gibbs-type priors, we introduce a new tool for estimating species variety when the additional sample is required to be very large and the implementation of exact Bayesian nonparametric procedures is prevented by cumbersome computation. Our result is illustrated through a simulation study and the analysis of a real dataset in linguistics.

RAFAL KULIK, University of Ottawa
Heavy tailed time series with extremal independence

We consider heavy tailed time series whose finite-dimensional distributions are extremally independent in the sense that extremely large values cannot be observed consecutively. This calls for methods beyond the classical multivariate extreme value theory, which is convenient only for extremally dependent multivariate distributions.
We use the Conditional Extreme Value approach to study the effect of an extreme value at time zero on the future of the time series. In formal terms, we study the limiting conditional distribution of future observations given an extreme value at time zero. To this purpose, we introduce conditional scaling functions and conditional scaling exponents. We compute these quantities for a variety of models, including Markov chains, exponential autoregressive models, and stochastic volatility models with heavy tailed innovations or volatilities.

DELI LI, Lakehead University
A Characterization of a New Type of Strong Law of Large Numbers

Let $0 < p < 2$ and $1 \leq q < \infty$. Let $\{X_{n};~n \geq 1 \}$ be a sequence of independent copies of a real-valued random variable $X$ and set $S_{n} = X_{1} + \cdots + X_{n}$, $n \geq 1$. We say $X$ satisfies the $(p, q)$-type *strong law of large numbers* (and write $X \in SLLN(p, q)$) if $\sum_{n = 1}^{\infty} \frac{1}{n}\left(\frac{\left|S_{n}\right|}{n^{1/p}} \right)^{q} < \infty$ almost surely. This talk is devoted to a characterization of $X \in SLLN(p, q)$. By applying results obtained from the new versions of the classical Lévy, Ottaviani, and Hoffmann-Jørgensen (1974) inequalities proved by Li and Rosalsky (2013), and by using techniques developed by Hechner (2009) and Hechner and Heinkel (2010), we obtain sets of necessary and sufficient conditions for $X \in SLLN(p, q)$ for the six cases: $1 \leq q < p < 2$, $1 < p = q < 2$, $1 < p < 2$ and $q > p$, $q = p = 1$, $p = 1 < q$, and $0 < p < 1 \leq q$. The necessary and sufficient conditions for $X \in SLLN(p, 1)$ were discovered by Li, Qi, and Rosalsky (2011). Versions of the above results in a Banach space setting are also given. Illustrative examples are presented.

NEAL MADRAS, York University
Random 312-Avoiding Permutations

A *pattern* of length $k$ is simply a permutation of $\{1,..,k\}$. A permutation of $\{1,...,N\}$ (for $N>k$) is said to avoid a specific pattern $P$ if the (long) permutation has no subsequence of $k$ elements that appears in the same relative order as $P$. (E.g., the permutation (2463175) does not avoid the pattern (312) because it contains the subsequence (615).) Pattern avoidance has been extensively studied by combinatorialists. Simulations suggest intriguing structural properties of permutations generated uniformly at random from $S_N[312]$, the subset of permutations of $\{1,..,N\}$ that avoid $312$. To elucidate these properties, we obtain exact and asymptotic probabilities that the $i^{th}$ entry of such a permutation equals $j$, as well as joint probabilities of such events. We also find that for large $N$, a cluster of points "below the diagonal" in a graph of such a permutation looks like the trajectory of a directed random walk with infinite mean. This is joint work with Lerna Pehlivan.

DON L. MCLEISH, University of Waterloo
Convergence of the Discrete Variance Swap in Time-Homogeneous Diffusion Models

Discretely sampled variance swaps are financial instruments whose price depends on the observed volatility or variance of an underlying. They are traded in the market, and usually the fair strikes of continuously sampled variance swaps are used to approximate their discrete counterparts.
There has been work (Jarrow, Kchia, Larsson and Protter (2013)) discussing conditions under which this approximation is valid for semimartingales, and also several papers proposing to study explicit formulae for discretely sampled variance swaps in specific stochastic volatility models, such as the Heston stochastic volatility model (Broadie and Jain (2008)), and the Hull-White and Schobel-Zhu stochastic volatility models (Bernard and Cui (2014)). For stochastic volatility models based on time-homogeneous diffusions, we provide a simple necessary and sufficient condition for the discretely sampled fair strike of a variance swap to converge to the continuously sampled fair strike, extending Theorem 3.8 of Jarrow, Kchia, Larsson and Protter (2013). We also give conditions (not based on asymptotics) under which the fair strike of the discrete variance swap is higher than the continuous one, and discuss the convex order conjecture proposed by Griessler and Keller-Ressel (2014) in this context. This is joint work with Carole Bernard, University of Waterloo, and Zhenyu Cui, Brooklyn College of the City University of New York.

CLARENCE SIMARD, UQAM
General model for limit order book and market orders

We introduce a general model for the structure and the dynamics of the limit order book in continuous time which includes the properties of depth, tightness and resilience. Our starting point is to use random processes with values in the space of continuous functions to model the cost of transactions, instead of modeling the behaviour of the asset price. The portfolio value takes into account the opposing forces between market orders, which deplete the limit order book, and the arrival of new limit orders. We prove that the existence of some equivalent probability measure is sufficient to rule out arbitrage and that the converse cannot hold in general. This result generalizes similar non-arbitrage theorems found in the literature on limit order books, as well as the sufficiency part of the first fundamental theorem of asset pricing.

WEI SUN, Concordia University
New criteria for Hunt's hypothesis (H) of Levy processes

A Markov process X is said to satisfy Hunt's hypothesis (H) if every semi-polar set is polar. Roughly speaking, this means that if a set A cannot be immediately hit by X for any starting point, then A will never be hit by X. About fifty years ago, Professor R.K. Getoor conjectured that essentially all Levy processes satisfy (H). In this talk, we present novel necessary and sufficient conditions for the validity of (H) for Levy processes. As applications, we obtain new examples of Levy processes satisfying (H). Moreover, we show that a general class of pure jump subordinators can be decomposed into the sum of two independent subordinators satisfying (H).

XIAOWEN ZHOU, Concordia University
Some Support Properties of $\Lambda$-Fleming-Viot Processes with Brownian Spatial Motion

A Fleming-Viot process is a probability-measure-valued stochastic process for mathematical population genetics. It describes the evolution of relative frequencies for different types of alleles in a large population that undergoes reproduction and mutation. In this talk I first briefly review the $\Lambda$-coalescent of multiple collisions and the lookdown representation of Donnelly and Kurtz for the $\Lambda$-Fleming-Viot process with Brownian spatial motion. I then present several support properties obtained in [1,2,3] for the $\Lambda$-Fleming-Viot random measure.
These properties include the compact support property, the modulus of continuity, Hausdorff dimensions and disconnectedness. The lookdown representation is crucial in showing all these results. If time allows I will also introduce some recent work in progress. \medskip \noindent{\large\bf References} \medskip [1] H. Liu and X. Zhou (2012). Compact support property of the $\Lambda$-Fleming-Viot process with underlying Brownian motion. {\it Electronic Journal of Probability}, {\it 17}, No. 73, 1-20. [2] H. Liu and X. Zhou (2013). Some support properties for a class of $\Lambda$-Fleming-Viot processes. To appear in {\it Annales de L'Institut Henri Poincar\'e (B) Probabilit\'es et Statistiques}. Available at http://arxiv.org/abs/1307.3990. [3] X. Zhou (2014). On criteria of disconnectedness for $\Lambda$-Fleming-Viot support. {\it Electronic Communications in Probability}, {\it 19}, No. 53, 1-16.

YOUZHOU ZHOU, Zhongnan University of Economics and Law Some Large Deviation Principles and Law of Large Numbers for Random Energy Model  [PDF] The Random Energy Model (REM for short) is a toy model for spin glasses, a special state of magnetic materials below a critical temperature $T_{c}$. The Poisson-Dirichlet distribution $P(\alpha,0)$, where $\alpha=\frac{T}{T_{c}}$, gives the probability weights of the infinitely many pure states in REM. In this talk, large deviations for $P(\alpha,0)$ as $T\to T_{c}$ (i.e.\ $\alpha\to1$) are considered. Moreover, we will also consider large deviations for $$\pi_{\alpha,\lambda}(dp)=C_{\alpha,\lambda}\exp\left\{\lambda(\alpha)\sum_{i=1}^{\infty}p_{i}^{2}\right\}PD(\alpha,0)(dp),$$ where $C_{\alpha,\lambda}$ is a normalizing constant and $\alpha\to1$. Here $\pi_{\alpha,\lambda}$ resembles the Poisson-Dirichlet distribution with selection in population genetics. Interestingly, the large deviations for $\pi_{\alpha,\lambda}$ reveal a phase transition. The weak law of large numbers in the critical case is also covered in this talk.
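As a concrete illustration of the Poisson-Dirichlet distribution appearing in the last abstract, here is a minimal sketch (an addition, not part of the abstracts) that samples approximate PD$(\alpha,0)$ weights via the standard stick-breaking (GEM) representation, in which $W_k \sim \mathrm{Beta}(1-\alpha, k\alpha)$; the parameter value and truncation level are arbitrary choices for illustration.

```python
import numpy as np

def sample_pd_alpha_zero(alpha, n_sticks=5000, seed=None):
    """Approximate draw from PD(alpha, 0) via truncated stick-breaking (GEM)."""
    rng = np.random.default_rng(seed)
    k = np.arange(1, n_sticks + 1)
    w = rng.beta(1.0 - alpha, k * alpha)          # W_k ~ Beta(1 - alpha, k*alpha)
    stick_left = np.cumprod(np.concatenate(([1.0], 1.0 - w[:-1])))
    p = w * stick_left                            # size-biased (GEM) weights
    return np.sort(p)[::-1]                       # PD = decreasing rearrangement

p = sample_pd_alpha_zero(alpha=0.5, seed=0)       # truncation error grows as alpha -> 1
print(p[:5])                                      # a few of the largest weights
print(p.sum(), (p**2).sum())  # mass ~ 1; sum_i p_i^2 is the functional in the tilt above
```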
https://export.arxiv.org/abs/2107.12410
hep-ph

# Title: JUNO's prospects for determining the neutrino mass ordering

Abstract: The flagship measurement of the JUNO experiment is the determination of the neutrino mass ordering. Here we revisit its prospects to make this determination by 2030, using the current global knowledge of the relevant neutrino parameters as well as current information on the reactor configuration and the critical parameters of the JUNO detector. We pay particular attention to the non-linear detector energy response. Using the measurement of $\theta_{13}$ from Daya Bay, but without information from other experiments, we estimate the probability of JUNO determining the neutrino mass ordering at $\ge$ 3$\sigma$ to be 31% by 2030. As this probability is particularly sensitive to the true values of the oscillation parameters, especially $\Delta m^2_{21}$, JUNO's improved measurements of $\sin^2 \theta_{12}$, $\Delta m^2_{21}$ and $|\Delta m^2_{ee}|$, obtained after a couple of years of operation, will allow an updated estimate of the probability that JUNO alone can determine the neutrino mass ordering by the end of the decade. Combining JUNO's measurement of $|\Delta m^2_{ee}|$ with other experiments in a global fit will most likely lead to an earlier determination of the mass ordering. Comments: 31 pages, 15 figures, many with multiple panels Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Experiment (hep-ex) Report number: FERMILAB-PUB-21-201-T Cite as: arXiv:2107.12410 [hep-ph] (or arXiv:2107.12410v1 [hep-ph] for this version)

## Submission history

From: Stephen Parke [v1] Mon, 26 Jul 2021 18:04:52 GMT (724kb,D)
http://mathoverflow.net/questions/165/does-the-continuous-locus-of-a-function-have-any-nice-properties?sort=votes
# Does the “continuous locus” of a function have any nice properties?

Suppose $f:\mathbb{R}\to\mathbb{R}$ is a function. Let $S=\{x\in\mathbb{R} \mid f \text{ is continuous at } x\}$. Does S have any nice properties? Here are some observations about what S could be:

• S can be any closed set. For a closed set S, let g be a continuous function whose vanishing locus is S (for example, you could take g(x) to be the distance of x from S if S is non-empty). Then define f(x)=g(x) if x∈ℚ and f(x)=0 otherwise. Then the continuous locus of f is exactly S.
• S can be an open interval. For an open interval S, define f(x)=0 if x∈S or x∈ℚ and f(x)=1 otherwise. Then the continuous locus of f is exactly S.
• S can be the complement of any countable set. Let $T=\{t_1,t_2,t_3,\dots\}$ be a countable set, and let $\sum a_i$ be some absolutely convergent series all of whose terms are non-zero (like $a_i=1/2^i$). Define $f(x) = \sum_{i\,:\,t_i < x} a_i$. Then the continuous locus of f is exactly the complement of T.

Here are some questions I'd like to know the answers to:

• Can S be any open set?
• Can S be non-measurable? (If f(x)=0 for x∈S and f(x)=1 otherwise, what will the continuous locus be?)

Yes, here's a quick proof that any given $G_\delta$ (in $\mathbb{R}$) can be realized as the set of continuity points of some real-valued function. Let $G$ be a given $G_\delta$ set in $\mathbb{R}$, meaning $G = \cap_{i=1}^\infty G_i$, each $G_i$ an open set. Define a function $f:\mathbb{R} \to \mathbb{R}$ as follows: $f(x)=0$ if $x$ is in $G$. If $x$ is not in $G$, there is some $k$ such that $x$ is not in $G_k$; let $k$ be minimal with that property. Define $f(x)=1/k$ if $x$ is rational and $f(x)=-1/k$ if $x$ is irrational. If I'm not very much mistaken, $G$ is precisely the set of continuity points of this $f$. I'm happy to leave this as an exercise for now :-) Let me know if you're not sure how to do it, or - worse - if I'm just wrong about the construction.

Awesome. I think this works. Do you have a reference (or proof) that the continuous locus is G-delta? –  Anton Geraschenko Oct 7 '09 at 19:11
I proved it in my answer below. –  Eric Wofsey Oct 7 '09 at 19:41
@Eric: you're absolutely right. I somehow hadn't realized that you gave a complete proof. Sorry about that. –  Anton Geraschenko Oct 8 '09 at 3:10
This corresponds to a (starred) exercise in Munkres' Topology: A First Course. Unfortunately, I do not have the book at hand, but unless I am very much mistaken, looking up "G_delta set" in the index should take you to the place where this is given as an exercise. I apologize if this is not correct. –  Amitesh Datta Aug 28 '10 at 9:13

It's a standard result that the continuous locus is always G-delta. For each r>0, let U(r) be the set of points x such that some neighborhood of x maps into some ball of radius r. Then each U(r) is open, and the continuous locus is their intersection; taking r=1/n for n=1,2,... exhibits it as a countable intersection of open sets, hence $G_\delta$. Conversely, given a G-delta set, I'm pretty sure it's not hard to construct a function with that continuous locus, though I don't remember how off the top of my head.

I hope nobody would mind if I try to do the exercise. Clearly f is continuous on G. Now suppose $f(x)=1/k$ (so $x \notin G$); we show f is not continuous at x. Take $\epsilon=1/(2k)$, and let U be any neighborhood of x. Since $x \in G_1\cap \cdots \cap G_{k-1}$ (by minimality of k) and this set is open, $U\cap G_1\cap \cdots \cap G_{k-1}$ contains an irrational number y, for which $f(y)\le 0$. Hence $|f(x)-f(y)| \ge 1/k > \epsilon$. (If $f(x)=-1/k$, take y to be a rational number instead.)
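As a concrete companion to the third construction above (an addition, not part of the original page), the following sketch evaluates the jump function for the hypothetical choices T = {1/n : n ≥ 1} (truncated) and $a_i = 2^{-i}$, using exact rational arithmetic so that the jumps are visible:

```python
from fractions import Fraction as F

ts = [F(1, n) for n in range(1, 50)]       # T = {1, 1/2, 1/3, ...} (truncated)
a = [F(1, 2)**n for n in range(1, 50)]     # absolutely summable, nonzero terms

def f(x):
    # f(x) = sum of a_i over those i with t_i < x
    return sum(ai for ti, ai in zip(ts, a) if ti < x)

eps = F(1, 10**6)
t3 = F(1, 3)
print(f(t3 + eps) - f(t3 - eps))   # 1/8 = a_3: a genuine jump across t_3,
                                   # so f is discontinuous at every point of T
s = F(2, 5)                        # a point not in T (and away from 0)
print(f(s + eps) - f(s - eps))     # 0: no t_i falls in this window, so f is
                                   # locally constant, hence continuous, at s
```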
http://slideplayer.com/slide/4742210/
## Presentation on theme: "Typography Usability & Readability"— Presentation transcript:

Obj. 1.01 What’s the personality? Font choice should convey the meaning or personality that matches the purpose of the design. Examples: Sympathy Card – Script; Flyer Heading – Decorative. The top typeface is more effective because it conveys a more serious personality that matches the purpose of the design. Which typeface is more effective?

Where do I start? Font choice should give visual clues about the order in which text should be read. Visual Hierarchy – an arrangement of text in a graduated series to help readers scan and know where to enter and exit the text. Create hierarchy through repetition, contrast, and changes in weight, scale, positioning, color, tone, spacing, or font. Examples: headline larger than subheadings; using bold, italics, and color for emphasis.

Example of Visual Hierarchy: YOU WILL READ THIS FIRST. You will read this when skimming. You will probably not read this on a skim. You will probably not read this unless a phrase is bolded. Your eye will be drawn to this before leaving the page because of contrast in font category and color. Another example of visual hierarchy: headings formatted differently than body text.

Too many fonts spoil the design. Font choice should be limited to 2 or 3 fonts; too many font choices can be distracting. Do not mix 2 fonts from the same category. Example: Times New Roman for a heading and Palatino for a subheading – 2 serif fonts. Good use of font pairing: a Sans Serif paired with a Script typeface (“Attitude is everything!”).

Who is my reader? Font choice should consider the target audience. Young readers need fonts that accurately display letters. Example: the lowercase “a” in Arial is not displayed the way young readers learn to write the letter “a”, making the font difficult to read. Teen readers enjoy fonts with a modern or edgy feel. The Clearview typeface is used for highway signs.

Is the font for digital or print display? Consider the medium – test the font to see if it is legible on the intended output. Test the Size – the vertical height of a character. Test the Style – bold, italic, fill color, stroke color, shadow, small caps. Test the Spacing: Leading – vertical spacing between lines of text; Kerning – horizontal spacing between pairs of letters; Tracking – horizontal spacing between all the characters in a large block of text. Just because you fall in love with a font does not mean it is the best choice. Always test readability.

Leading: vertical spacing between lines of text. Pronounced “led-ding”; also referred to as line spacing (single space, double space). Used to slightly increase or decrease the length of a column so that it is even with an adjacent column, or to force a block of text to fit in a space that is larger or smaller than the text block. Example: Look in the nook to find the book that you borrowed to read.

Kerning: horizontal spacing between pairs of letters. Used to create more visually appealing and readable text. BOOK – before kerning; – after kerning the O’s. Kerning is most often used with text which has been enlarged, since enlarging tends to create too much space between individual letters.

Tracking: horizontal spacing between all characters in a large block of text. Makes a block of text more open and airy, or more dense. Used to expand or contract a block of text for the purpose of aligning two columns.
Examples of Tracking: LOOK in the nook to find the book that you borrowed to read. Kerning (horizontal spacing between pairs of letters). Leading (vertical spacing between lines of text). Tracking (horizontal spacing between all characters in a large block of text).
https://asmedigitalcollection.asme.org/GTINDIA/proceedings-abstract/GTINDIA2017/58516/V002T05A008/243763
The presence of a crack introduces local flexibilities and changes the physical characteristics of a structure, which in turn alter its dynamic behavior. Crack depth, location, orientation and the number of cracks are the main parameters that greatly influence the dynamics. Therefore, it is necessary to understand the dynamics of cracked structures. In practice, every material may be treated as viscoelastic, and material damping often helps to suppress vibration. Thus the present study concentrates on exploring the dynamic behavior of a damped cantilever beam with a single open crack. An operator-based constitutive relationship is used to develop the general time-domain, linear viscoelastic model. A higher-order equation of motion is obtained based on Euler-Bernoulli and Timoshenko beam theory. The finite element method is utilized to discretize the continuum. The higher-order equation is further converted to state-space form for eigenanalysis. From the numerical results, it is observed that the appearance of a crack decreases the natural frequency of vibration when compared to an uncracked viscoelastic beam. Under cracked conditions, the viscoelastic Timoshenko beam tends to give lower frequency values than the viscoelastic Euler-Bernoulli beam due to the shear effect.
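To make the abstract's setting more concrete, here is a minimal sketch (not from the paper itself) of an undamped Euler-Bernoulli cantilever eigenanalysis by the finite element method, with the crack crudely modeled as a local stiffness reduction in one element. All material and geometric values and the damage factor are illustrative assumptions, and the viscoelastic damping that is central to the paper is omitted here.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative steel strip: Young's modulus E (Pa), second moment I (m^4),
# density rho (kg/m^3), cross-section area A (m^2), length L (m).
E, I, rho, A, L = 210e9, 8.33e-10, 7850.0, 1.0e-4, 1.0
n_el = 20
le = L / n_el

def element_matrices(EI):
    """Standard 2-node Euler-Bernoulli element (deflection + slope per node)."""
    k = EI / le**3 * np.array([[ 12,    6*le,   -12,    6*le  ],
                               [ 6*le,  4*le**2, -6*le,  2*le**2],
                               [-12,   -6*le,    12,   -6*le  ],
                               [ 6*le,  2*le**2, -6*le,  4*le**2]])
    m = rho*A*le/420.0 * np.array([[156,    22*le,   54,    -13*le ],
                                   [ 22*le,  4*le**2, 13*le, -3*le**2],
                                   [ 54,    13*le,  156,   -22*le ],
                                   [-13*le, -3*le**2, -22*le, 4*le**2]])
    return k, m

def first_frequencies(crack_elem=None, damage=0.5, n_modes=3):
    ndof = 2 * (n_el + 1)
    K = np.zeros((ndof, ndof)); M = np.zeros((ndof, ndof))
    for e in range(n_el):
        EI = E * I * ((1.0 - damage) if e == crack_elem else 1.0)  # "crack"
        k, m = element_matrices(EI)
        s = slice(2*e, 2*e + 4)
        K[s, s] += k; M[s, s] += m
    K, M = K[2:, 2:], M[2:, 2:]          # clamp node 0: cantilever condition
    w2 = eigh(K, M, eigvals_only=True)   # generalized eigenproblem K v = w^2 M v
    return np.sqrt(w2[:n_modes]) / (2*np.pi)

print(first_frequencies())               # intact beam, first frequencies in Hz
print(first_frequencies(crack_elem=1))   # crack near the root: frequencies drop
```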
http://taggedwiki.zubiaga.org/new_content/445cf2ee5e88b1001ff8c6556aca197c
# Modified Newtonian dynamics

In physics, Modified Newtonian dynamics (MOND) is a theory that proposes a modification of Newton's Second Law of Dynamics (F = ma) to explain the galaxy rotation problem. When the uniform velocity of rotation of galaxies was first observed, it was unexpected, because the Newtonian theory of gravity predicts that objects farther out will have lower velocities. For example, planets in the Solar System orbit with velocities that decrease as their distance from the Sun increases. MOND theory posits that acceleration is not linearly proportional to force at low values. The galaxy rotation problem may be understood without MOND if a halo of dark matter provides an overall mass distribution different from the observed distribution of normal matter. MOND was proposed by Mordehai Milgrom in 1981 to model the observed uniform velocity data without the dark matter assumption. He noted that Newton's second law for gravitational force has only been verified when gravitational acceleration is large.

## Overview: Galaxy dynamics

Observations of the rotation rates of spiral galaxies began in 1978. By the early 1980s it was clear that galaxies did not exhibit the same pattern of decreasing orbital velocity with increasing distance from the center of mass observed in the Solar System. A spiral galaxy consists of a bulge of stars at the centre with a vast disc of stars orbiting around the central group. If the orbits of the stars were governed solely by gravitational force and the observed distribution of normal matter, it was expected that stars at the outer edge of the disc would have a much lower orbital velocity than those near the middle. In the observed galaxies this pattern is not apparent. Stars near the outer edge orbit at the same speed as stars closer to the middle.

Figure 1 - Expected (A) and observed (B) star velocities as a function of distance from the galactic center.

Figure 2 - Postulated dark-matter halo around a spiral galaxy.

The dotted curve A in Figure 1 shows the predicted orbital velocity as a function of distance from the galactic center assuming neither MOND nor dark matter. The solid curve B shows the observed distribution. Instead of decreasing asymptotically to zero as the effect of gravity wanes, this curve remains flat, showing the same velocity at increasing distances from the bulge. Astronomers call this phenomenon the "flattening of galaxies' rotation curves". Scientists hypothesized that the flatness of the rotation of galaxies is caused by matter outside the galaxy's visible disc. Since all large galaxies show the same characteristic, large galaxies must, according to this line of reasoning, be embedded in a halo of invisible "dark" matter, as shown in Figure 2.

## The MOND Theory

In 1983, Mordehai Milgrom, a physicist at the Weizmann Institute in Israel, published two papers in the Astrophysical Journal proposing a modification of Newton's second law of motion. This law states that an object of mass m, subject to a force F, undergoes an acceleration a satisfying the simple equation F=ma. This law is well known to students, and has been verified in a variety of situations. However, it has never been verified in the case where the acceleration a is extremely small. And that is exactly what is happening at the scale of galaxies, where the distances between stars are so large that the gravitational acceleration is extremely small.
### The change

The modification proposed by Milgrom is the following: instead of F=ma, the equation should be $F=m\,\mu(a/a_0)\,a$, where $\mu(x)$ is a function that equals 1 when x is much larger than 1 ($x\gg1$) and equals x when x is much smaller than 1 ($0<x\ll1$). The term $a_0$ is a proposed new constant, in the same sense that c (the speed of light) is a constant, except that $a_0$ is an acceleration whereas c is a speed. Here is the simple set of equations for Modified Newtonian Dynamics:

$\vec{F} = m \cdot \mu\!\left( { a \over a_0 } \right) \ \vec{a}$

$\mu (x) = 1 \mbox{ if } |x|\gg 1$

$\mu (x) = x \mbox{ if } |x|\ll 1$

The exact form of $\mu$ is unspecified; only its behavior when the argument x is small or large is fixed. As Milgrom proved in his original paper, the form of $\mu$ does not change most of the consequences of the theory, such as the flattening of the rotation curve. In the everyday world, a is much greater than $a_0$ for all physical effects; therefore $\mu(a/a_0)=1$ and F=ma as usual. Consequently, the change in Newton's second law is negligible and Newton could not have seen it. Since MOND was inspired by the desire to solve the flat rotation curve problem, it is no surprise that applying the theory to the observations resolves this problem. This can be shown by a calculation of the new rotation curve.

### Predicted rotation curve

Far away from the center of a galaxy, the gravitational force a star undergoes is, to good approximation: $F = \frac{GMm}{r^2}$ with G the gravitational constant, M the mass of the galaxy, m the mass of the star and r the distance between the center and the star. Using the new law of dynamics gives: $F = \frac{GMm}{r^2} = m \mu{ \left( \frac{a}{a_0}\right)} a$ Eliminating m gives: $\frac{GM}{r^2} = \mu{ \left( \frac{a}{a_0}\right)} a$ Assuming that, at this large distance r, a is smaller than $a_0$ and thus $\mu{ \left( \frac{a}{a_0}\right)} = \frac{a}{a_0}$, which gives: $\frac{GM}{r^2} = \frac{a^2}{a_0}$ Therefore: $a = \frac{\sqrt{ G M a_0 }}{r}$ Since the equation that relates the velocity to the acceleration for a circular orbit is $a = \frac{v^2}{r}$ one has: $a = \frac{v^2}{r} = \frac{\sqrt{ G M a_0 }}{r}$ and therefore: $v = \sqrt[4]{ G M a_0 }$ Consequently, the velocity of stars on a circular orbit far from the center is a constant and does not depend on the distance r: the rotation curve is flat.

The relation derived here between the "flat" rotation velocity and the observed mass matches the observed relation between flat velocity and luminosity known as the Tully-Fisher relation. At the same time, there is a clear relationship between the velocity and the constant $a_0$. The equation $v=(GMa_0)^{1/4}$ allows one to calculate $a_0$ from the observed v and M. Milgrom found $a_0=1.2\times10^{-10}\ \mathrm{m\,s^{-2}}$. Milgrom has noted that this value is also "... the acceleration you get by dividing the speed of light by the lifetime of the universe. If you start from zero velocity, with this acceleration you will reach the speed of light roughly in the lifetime of the universe."[1] In retrospect, the assumption that $a\gg a_0$ for physical effects on Earth remains consistent: had $a_0$ been larger, its consequences would have been visible on Earth and, since that is not the case, the new theory would have been inconsistent.
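The derivation above can be checked numerically. The following sketch (an addition, not from the original article) compares the Newtonian and MOND circular velocities around a point mass, using the so-called "simple" interpolating function $\mu(x)=x/(1+x)$ as one concrete choice, since the article leaves $\mu$ unspecified apart from its asymptotics; the galaxy mass is an arbitrary illustrative value.

```python
import numpy as np

G, a0 = 6.674e-11, 1.2e-10           # SI units; a0 is Milgrom's constant
M = 2.0e41                           # kg, ~1e11 solar masses (illustrative)

r = np.logspace(19.5, 22, 100)       # roughly 1 kpc to 300 kpc, in meters
gN = G * M / r**2                    # Newtonian field of a point mass

# With mu(x) = x/(1+x), the equation mu(a/a0) * a = gN reduces to
# a^2 - gN*a - a0*gN = 0, whose positive root is:
a = 0.5 * (gN + np.sqrt(gN**2 + 4.0 * a0 * gN))

v_newton = np.sqrt(gN * r)           # falls off like 1/sqrt(r)
v_mond = np.sqrt(a * r)              # flattens at large r
print(v_mond[-1])                    # ~2.0e5 m/s at the outermost radius
print((G * M * a0) ** 0.25)          # the asymptotic value (G M a0)^(1/4)
```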
Therefore, astronomers need to look for all such processes and verify that MOND remains compatible with observations, that is, within the limit of the uncertainties on the data. There is, however, a complication overlooked up to this point that strongly affects the compatibility between MOND and the observed world: in a system considered as isolated, for example a single satellite orbiting a planet, the effect of MOND results in an increased velocity beyond a given range (actually, below a given acceleration, but for circular orbits it is the same thing) that depends on the mass of both the planet and the satellite. However, if the same system is actually orbiting a star, the planet and the satellite will be accelerated in the star's gravitational field. For the satellite, the sum of the two fields could yield an acceleration greater than $a_0$, and the orbit would not be the same as that in an isolated system. For this reason, the typical acceleration of any physical process is not the only parameter astronomers must consider; also critical is the process's environment, that is, all the external forces that are usually neglected. In his paper, Milgrom arranged the typical accelerations of various physical processes in a two-dimensional diagram: one parameter is the acceleration of the process itself, the other is the acceleration induced by the environment. This affects MOND's application to experimental observation and empirical data because all experiments done on Earth or in its neighborhood are subject to the Sun's gravitational field, and this field is so strong that all objects in the Solar System undergo an acceleration greater than $a_0$. This explains why the flattening of galaxies' rotation curves, the MOND effect, had not been detected until the early 1980s, when astronomers first gathered empirical data on the rotation of galaxies. Therefore, only galaxies and other large systems are expected to exhibit the dynamics that allow astronomers to verify that MOND agrees with observation. Since Milgrom's theory first appeared in 1983, the most accurate data has come from observations of distant galaxies and neighbors of the Milky Way. The Milky Way itself is scattered with clouds of gas and interstellar dust, and until now it has not been possible to draw a rotation curve for the galaxy. Finally, the uncertainties on the velocities of galaxies within clusters and larger systems have been too large to conclude in favor of or against MOND. Indeed, an experiment that could confirm or disprove MOND can only be performed outside the Solar System, farther even than the positions that the Pioneer and Voyager space probes have reached. In search of observations that would validate his theory, Milgrom noticed that a special class of objects, the low surface brightness galaxies (LSB), is of particular interest: the radius of an LSB is large compared to its mass, and thus almost all stars are within the flat part of the rotation curve. Also, other theories predict that the velocity at the edge depends on the average surface brightness in addition to the LSB mass. Finally, no data on the rotation curves of these galaxies were available at the time. Milgrom thus could make the prediction that LSBs would have a rotation curve which is essentially flat, and with a relation between the flat velocity and the mass of the LSB identical to that of brighter galaxies.
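To make the environment argument above concrete, here is a quick numeric comparison (an added illustration, not part of the original article). Everyday and Solar-System accelerations sit many orders of magnitude above $a_0$, while the Sun's own centripetal acceleration around the Galaxy is of order $a_0$; the galactic speed and radius are rough textbook values.

```python
G, a0 = 6.674e-11, 1.2e-10
M_earth, R_earth = 5.972e24, 6.371e6
M_sun, AU = 1.989e30, 1.496e11

g_surface = G * M_earth / R_earth**2   # ~9.8 m/s^2, ~1e11 times a0
g_sun_at_earth = G * M_sun / AU**2     # ~5.9e-3 m/s^2, still ~5e7 times a0
v, R = 2.2e5, 2.6e20                   # ~220 km/s at ~8.5 kpc (rough values)
g_galactic = v**2 / R                  # ~1.9e-10 m/s^2: of order a0 itself

for g in (g_surface, g_sun_at_earth, g_galactic):
    print(g, g / a0)
```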
Since then, many such LSBs have been observed, and some astronomers have claimed their data invalidated MOND. There is evidence that a contradiction exists.[2] Another test of MOND, beyond the LSBs, is the prediction of the speeds of galaxies that orbit the center of a galaxy cluster. Our galaxy is part of the Virgo supercluster. MOND predicts a rate of rotation of these galaxies about their center, and temperature distributions, that are contrary to observation.[3][4] One experiment that might test MOND would be to observe the particles proposed to contribute the majority of the Universe's mass; several experiments are endeavoring to do this under the assumption that the particles have weak interactions. Another approach to testing MOND is to apply it to the evolution of cosmic structure or to the dynamics and evolution of observed galaxies. Lee Smolin and co-workers have tried unsuccessfully to obtain a theoretical basis for MOND from quantum gravity. Smolin's conclusion is "MOND is a tantalizing mystery, but not one that can be resolved now."[5] Another attempt to provide a basis for MOND is Allen Rothwarf's aether model.[6]

## The mathematics of MOND

In non-relativistic Modified Newtonian Dynamics, Poisson's equation, $\nabla^2 \Phi_N = 4 \pi G \rho$ (where $\Phi_N$ is the gravitational potential and $\rho$ is the density distribution) is modified as $\nabla\cdot\left[ \mu \left( \frac{\left\| \nabla\Phi \right\|}{a_0} \right) \nabla\Phi\right] = 4\pi G \rho$ where $\Phi$ is the MOND potential. The equation is to be solved with boundary condition $\left\| \nabla\Phi \right\| \rightarrow 0$ for $\left\| \mathbf{r} \right\| \rightarrow \infty$. The exact form of $\mu(\xi)$ is not constrained by observations, but must have the behaviour $\mu(\xi) \sim 1$ for $\xi \gg 1$ (Newtonian regime), $\mu(\xi) \sim \xi$ for $\xi \ll 1$ (deep-MOND regime). In the deep-MOND regime, the modified Poisson equation may be rewritten as $\nabla \cdot \left[ \frac{\left\| \nabla\Phi \right\|}{a_0} \nabla\Phi - \nabla\Phi_N \right] = 0$ and that simplifies to $\frac{\left\| \nabla\Phi \right\|}{a_0} \nabla\Phi - \nabla\Phi_N = \nabla \times \mathbf{h}.$ The vector field $\mathbf{h}$ is unknown, but is null whenever the density distribution is spherical, cylindrical or planar. In that case, the MOND acceleration field is given by the simple formula $\mathbf{g}_M = \mathbf{g}_N \sqrt{\frac{a_0}{\left\| \mathbf{g}_N \right \|}}$ where $\mathbf{g}_N$ is the normal Newtonian field.

## Discussion and criticisms

An empirical criticism of MOND, released in August 2006, involves the Bullet cluster (Milgrom's comments[1]), a system of two colliding galaxy clusters. In most instances where phenomena associated with either MOND or dark matter are present, they appear to flow from physical locations with similar centers of gravity. But the dark matter-like effects in this colliding galactic cluster system appear to emanate from different points in space than the center of mass of the visible matter in the system, which is unusually easy to discern due to the high-energy collisions of the gas in the vicinity of the colliding galactic clusters.[2] MOND proponents admit that a purely baryonic MOND is not able to explain this observation. Therefore a “marriage” of MOND with ordinary hot neutrinos of 2 eV has been proposed to save the hypothesis.[3]
Besides MOND, three other notable theories try to explain the mystery of the rotation curves and/or the apparent missing dark matter: the Nonsymmetric Gravitational Theory proposed by John Moffat, Weyl's conformal gravity by Philip Mannheim, and the more recently published Dynamic Newtonian Advanced gravitation (DNAg).[7]

## Tensor-vector-scalar gravity

Tensor-Vector-Scalar gravity (TeVeS) is a proposed relativistic theory that is equivalent to Modified Newtonian dynamics (MOND) in the non-relativistic limit and purports to explain the galaxy rotation problem without invoking dark matter. Originated by Jacob Bekenstein in 2004, it incorporates various dynamical and non-dynamical tensor fields, vector fields and scalar fields.[8] The breakthrough of TeVeS over MOND is that it can explain the phenomenon of gravitational lensing, a cosmic phenomenon in which nearby matter bends light, which has been confirmed many times. A recent preliminary finding is that it can explain structure formation without cold dark matter (CDM), but requires massive neutrinos of ~2 eV [4] and [5]. However, other authors (see Slosar, Melchiorri and Silk [6]) claim that TeVeS cannot explain cosmic microwave background anisotropies and structure formation at the same time, i.e., they rule out those models at high significance.

## In-line references

1. ^ The actual result is within an order of magnitude of the lifetime of the universe. It would take 79.2 billion years, about 5.8 times the current age of the universe, to reach the speed of light with an acceleration of $a_0$. Conversely, starting from zero velocity with an acceleration of $a_0$, one would reach about 17.3% of the speed of light at the current age of the universe.
2. ^ R.H. Sanders (2001). "Modified Newtonian dynamics and its implications". In Mario Livio, The Dark Universe: Matter, Energy and Gravity, Proceedings of the Space Telescope Science Institute Symposium. Cambridge University Press. p. 62. ISBN 0521822270.
3. ^ Charles Seife (2004). Alpha and Omega. Penguin Books. pp. 100-101. ISBN 0142004464.
4. ^ Anthony Aguirre, Joop Schaye & Eliot Quataert (2001). "Problems for Modified Newtonian Dynamics in Clusters and the Lyα Forest?". The Astrophysical Journal 561: 550–558. doi:10.1086/323376.
5. ^ Lee Smolin (2007). The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Mariner Books. p. 215. ISBN 061891868X.
6. ^ F. Rothwarf, S. Roy (2007). "Quantum Vacuum and a Matter-Antimatter Cosmology". arXiv preprint.
7. ^ A. Worsley (2008). An advanced dynamic adaptation of Newtonian equations of gravity. Physics Essays 21: 3, 222-228.
8. ^ Jacob D. Bekenstein (2004). "Relativistic gravitation theory for the MOND paradigm". Phys. Rev. D70.
http://mathhelpforum.com/calculus/47453-partial-derivative-proof-thermodynamics-notation-print.html
# Partial Derivative Proof (thermodynamics notation)

• September 2nd 2008, 09:15 AM
Jacobpm64
Partial Derivative Proof (thermodynamics notation)
Show that: $\left(\frac{\partial z}{\partial y}\right)_{u} = \left(\frac{\partial z}{\partial x}\right)_{y} \left[ \left(\frac{\partial x}{\partial y}\right)_{u} - \left(\frac{\partial x}{\partial y}\right)_{z} \right]$ I have Euler's chain rule and "the splitter." Also the property, called the "inverter," where you can reciprocate a partial derivative. If I write Euler's chain rule, I only know how to write it when there are 3 variables; I usually write it in the form: $\left(\frac{\partial x}{\partial y}\right)_{z} \left(\frac{\partial y}{\partial z}\right)_{x} \left(\frac{\partial z}{\partial x}\right)_{y} = -1$ where I can write x, y, z in any order as long as each variable is used in every spot. However, I do not know how to work this chain rule if I have an extra variable (u in this case). I also tried using the "splitter" to write something like: $\left(\frac{\partial z}{\partial y} \right)_{u} = \left(\frac{\partial z}{\partial x} \right)_{u} \left(\frac{\partial x}{\partial y}\right)_{u}$ However, I do not know what to do with this because I have the term $\left(\frac{\partial z}{\partial x} \right)_{u}$, which doesn't appear in the original problem. Any help would be appreciated. (This is for a thermodynamics course, but we are still in the mathematics introduction.)
• September 2nd 2008, 04:30 PM
I too am an engineer, but I only took this to the undergraduate level and am not familiar with this notation... can you clarify?
• September 2nd 2008, 04:33 PM
Jacobpm64
Sure thing. If I say something like $\left(\frac{\partial z}{\partial x}\right)_{y}$, this means the partial derivative of z with respect to x, holding y constant.
• September 2nd 2008, 04:35 PM
ThePerfectHacker
Quote: Originally Posted by Jacobpm64 Sure thing. If I say something like $\left(\frac{\partial z}{\partial x}\right)_{y}$, this means the partial derivative of z with respect to x, holding y constant.
There is no need to write $\left(\frac{\partial z}{\partial x}\right)_{y}$, because the notation $\frac{\partial z}{\partial x}$ means exactly that, i.e. "you hold all variables constant except x".
• September 2nd 2008, 04:38 PM
Jacobpm64
Oh, I thought it was different in thermodynamics. Please excuse me.
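For anyone following along, the identity in the opening post can be checked symbolically. The sketch below (an addition, not a post from the thread) picks arbitrary hypothetical smooth functions z(x, y) and u(x, y), computes each constrained derivative by implicit differentiation, and confirms the two sides agree. The underlying proof is the "splitter" step $(\partial z/\partial y)_u = (\partial z/\partial y)_x + (\partial z/\partial x)_y (\partial x/\partial y)_u$ combined with the Euler chain rule in the form $(\partial z/\partial y)_x = -(\partial z/\partial x)_y (\partial x/\partial y)_z$.

```python
import sympy as sp

x, y = sp.symbols('x y')
z = x**2 * y        # arbitrary smooth z(x, y); any choice works
u = x + y**2        # arbitrary smooth u(x, y)

zx = sp.diff(z, x)                       # (dz/dx) holding y
zy = sp.diff(z, y)                       # (dz/dy) holding x
dxdy_z = -sp.diff(z, y) / sp.diff(z, x)  # (dx/dy) holding z, by implicit diff.
dxdy_u = -sp.diff(u, y) / sp.diff(u, x)  # (dx/dy) holding u, likewise

lhs = zy + zx * dxdy_u                   # (dz/dy) holding u, via the splitter
rhs = zx * (dxdy_u - dxdy_z)
print(sp.simplify(lhs - rhs))            # 0: the identity holds
```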
http://link.springer.com/article/10.1007%2Fs10701-011-9558-z
Foundations of Physics, Volume 42, Issue 1, pp 192–208

# Clifford Algebras and the Dirac-Bohm Quantum Hamilton-Jacobi Equation

Article. Hiley, B.J. & Callaghan, R.E. Found Phys (2012) 42: 192. doi:10.1007/s10701-011-9558-z

## Abstract

In this paper we show how the dynamics of the Schrödinger, Pauli and Dirac particles can be described in a hierarchy of Clifford algebras, ${\mathcal{C}}_{1,3}$, ${\mathcal{C}}_{3,0}$, and ${\mathcal{C}}_{0,1}$. Information normally carried by the wave function is encoded in elements of a minimal left ideal, so that all the physical information appears within the algebra itself. The state of the quantum process can be completely characterised by algebraic invariants of the first and second kind. The latter enables us to show that the Bohm energy and momentum emerge from the energy-momentum tensor of standard quantum field theory. Our approach provides a new mathematical setting for quantum mechanics that enables us to obtain a complete relativistic version of the Bohm model for the Dirac particle, deriving expressions for the Bohm energy-momentum, the quantum potential and the relativistic time evolution of its spin for the first time.

### Keywords

Clifford algebras; Schrödinger, Pauli and relativistic Dirac-Bohm model; Relativistic quantum potential; Spin evolution
https://www.jiskha.com/questions/1817768/find-an-equation-of-a-rational-function-that-satisfies-the-following-conditions
# algebra

Find an equation of a rational function that satisfies the following conditions:
• Vertical asymptote: x = −3
• Horizontal asymptote: y = 3/2
• x-intercept: 5
• Hole at x = 2

1. Build the function one condition at a time:
• Vertical asymptote x = −3: y = 1/(x+3)
• Hole at x = 2: y = (x−2)/((x+3)(x−2))
• x-intercept 5: y = ((x−2)(x−5))/((x+3)(x−2))
• Horizontal asymptote y = 3/2: y = (3(x−2)(x−5))/(2(x+3)(x−2))
Graph with your favorite utility to confirm.
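As a quick check of the answer above (an added sketch, not part of the original thread), sympy confirms each required feature of y = 3(x−2)(x−5)/(2(x+3)(x−2)):

```python
import sympy as sp

x = sp.symbols('x')
f = 3*(x - 2)*(x - 5) / (2*(x + 3)*(x - 2))

print(sp.limit(f, x, sp.oo))        # 3/2  -> horizontal asymptote y = 3/2
print(sp.limit(f, x, 5))            # 0    -> x-intercept at x = 5
print(sp.limit(f, x, 2))            # -9/10: finite limit, so x = 2 is a hole
print(sp.limit(f, x, -3, dir='+'))  # -oo  -> vertical asymptote at x = -3
```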
http://www.global-sci.com/intro/article_detail/cicp/12953.html
Volume 25, Issue 5

Locating Multiple Multipolar Acoustic Sources Using the Direct Sampling Method

Commun. Comput. Phys., 25 (2019), pp. 1328-1356. Published online: 2019-01

• Abstract

This work is concerned with the inverse source problem of locating multiple multipolar sources from boundary measurements for the Helmholtz equation. We develop simple and effective sampling schemes for location acquisition of the sources with a single wavenumber. Our algorithms are based on some novel indicator functions whose indicating behaviors could be used to locate multiple multipolar sources. The inversion schemes are totally "direct" in the sense that only simple integral calculations are involved in evaluating the indicator functions. Rigorous mathematical justifications are provided and extensive numerical examples are presented to demonstrate the effectiveness, robustness and efficiency of the proposed methods.

• Keywords

Inverse source problem, direct sampling method, Helmholtz equation, multipolar sources, acoustic wave.

• AMS Subject Headings

49N45, 65M32, 74G75

Deyue Zhang, Yukun Guo, Jingzhi Li & Hongyu Liu. (2020). Locating Multiple Multipolar Acoustic Sources Using the Direct Sampling Method. Communications in Computational Physics. 25 (5). 1328-1356. doi:10.4208/cicp.OA-2018-0020
https://tug.org/pipermail/texhax/2008-December/011440.html
# [texhax] inline form of an equation

Uwe Lück uwe.lueck at web.de
Tue Dec 16 18:51:28 CET 2008

[2008/12/12 repeated without HTML, thanks to Reinhard Kotucha, U. L.]

> On Fri, 12 Dec 2008, Lars Madsen wrote:
>>> Okay, I have:
>>> $\sqrt{\frac{\sum d^2}{2n}}$
>>> Would it be okay to do:
>>> $\sqrt{{\sum d^2}/{2n}}$
>> I don't see any problems with the second one, though some might prefer
>> $\sqrt{\sum d^2/(2n)}$
>
> I typeset $\sqrt{\frac{\sum d^2}{2n}}$ using package nath and the output
> is equivalent to $\sqrt{(\sum d^2)/2n}$ which seems to avoid ambiguity
> about what is being summed.

Indeed, when I thought of the matter while shopping, I had forgotten the first example and thought \sum(d^2/2n) was meant, \sum binding the variable n. I haven't done math for a while and only now see that this series diverges, but it may show that the notation is significantly ambiguous in the way Aditya says (not with respect to 2n). And even if the reader knows what is meant from the context, such ambiguities may distract attention.
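As a typeset illustration of the alternatives discussed in the thread (an addition, not from the original message), the following LaTeX fragment contrasts the three candidate inline forms; note that \dfrac assumes the amsmath package is loaded.

```latex
% Inline forms of the same radical, from tallest to flattest:
$\sqrt{\dfrac{\sum d^2}{2n}}$          % built-up fraction: clear but too tall inline
$\sqrt{\sum d^2/2n}$                   % flat, but ambiguous: does n divide or multiply?
$\sqrt{\left(\sum d^2\right)/(2n)}$    % flat and unambiguous
```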
https://www.physicsforums.com/threads/gravitation-and-orbiting-satellites.203620/
# Gravitation and Orbiting Satellites

1. Dec 9, 2007

### ThenewKid

1. Two satellites are orbiting around Earth. One satellite has a period of 1.4 hours and is 200 km above Earth's surface. The other satellite has a period of 6.0 h. Use Kepler's laws and the fact that the radius of Earth is 6.37 x 10^6 meters to determine the height of the second satellite above Earth's surface.

2. Kepler's Third Law: (Ta/Tb)^2 = (Ra/Rb)^3
Fg = Gm1m2/r^2
G = 6.67 x 10^-11 Nm^2/kg^2
Msv^2/r = GMeMs/r^2
g = Fg/m
g = 9.80 N/kg
g = GMe/Re^2

3. Ok, first I know that Fg is equal to Fc, but I don't know the equation to find Fc. So I just decided to attempt to find the period by dividing 6 hours by 1.4 hours and, using that number (4.29 rounded), multiplied it by 200 km and got 858 km. I then used 858 km as the height of the 2nd satellite above the Earth's surface. I also calculated the velocity of satellite one, which is 7.79 x 10^3 m/s (or just 7.8 km/s). Now, since I am doing a long-distance course and am not good at all at algebra, and with the lack of good example questions within the textbook and coursebook to go on, I'm stumped. The answer I have cannot be worth 5 marks from the work I've shown, but I just can't find any way to prove how my answer is correct with so little to go on.

Last edited: Dec 9, 2007

2. Dec 9, 2007

### tyco05

You will only need to use Kepler's Third Law.

3. Dec 9, 2007

### D H Staff Emeritus

You were given the orbital periods for both satellites, the altitude of one of the satellites, and the radius of the Earth. That information plus Kepler's third law are all you need to solve the problem.

4. Dec 9, 2007

### ThenewKid

So Ta = 1.4 hours and Tb = 6 hours. What I got after doing the first part of the equation is 0.054. Now Ra and Rb are pretty much the radius of the Earth, and when I finished that equation all I got on my calculator was 1E36. ...These numbers don't really seem like the height of the second satellite no matter how I look at it and add/subtract/multiply/divide them together.

5. Dec 9, 2007

### Roger Wilco

Are you using r in meters and T in seconds?

6. Dec 9, 2007

### D H Staff Emeritus

Good.

Not so good. Solve for Ra (you don't use Kepler's laws here) and then solve for Rb (here you do use Kepler's law).

That's not necessary here. The left-hand and right-hand sides of the expression

$$(T_a/T_b)^2 = (R_a/R_b)^3$$

are unitless. If you so wished, you could even express length in furlongs and time in fortnights with this question, so long as all lengths are expressed in furlongs and all times are expressed in fortnights. What is important in this question is that all similar items (i.e., all lengths) be expressed in the same units. Here we have the radius of the Earth in meters and height in kilometers. Those units must be made commensurate with one another.
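Putting D H's hints together, here is a worked sketch (added for this write-up, not a post from the original thread): measure both radii from Earth's center, solve Kepler's third law for Rb, then subtract Earth's radius.

```python
R_earth = 6.37e6              # m
Ra = R_earth + 200e3          # m: orbital radius of satellite 1, from Earth's center
Ta, Tb = 1.4, 6.0             # hours: only the ratio enters Kepler's third law

# (Ta/Tb)^2 = (Ra/Rb)^3  =>  Rb = Ra * (Tb/Ta)**(2/3)
Rb = Ra * (Tb / Ta) ** (2.0 / 3.0)
print(Rb)                     # ~1.73e7 m
print(Rb - R_earth)           # ~1.10e7 m: about 11,000 km above the surface
```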
https://www.physicsforums.com/threads/waves-string-physics-correct-my-units.160469/
# Waves: String Physics? Correct my units

1. Mar 12, 2007

### pugfug90

Waves: String Physics?? Correct my units..

1. The problem statement, all variables and given/known data
The velocity of a wave on a string depends on how hard the string is stretched and on the mass per unit length of the string. If T is the force exerted on the string and Mu is the mass per unit length, then the velocity v is $v = \sqrt{T/\mu}$ (I think). A piece of string 5.3 m long has a mass of 15 g. What must the force on the string be to make the wavelength of a 125 Hz wave 120 cm?

2. Relevant equations

3. The attempt at a solution
I converted the wavelength lambda to 0.12 m and 15 g to 0.015 kg. wavelength = velocity/frequency, so wavelength*frequency = velocity: 0.12 m * 125 Hz = 15 m/s = velocity. 15 m/s = [square root of T (force) / Mu (mass/length ratio)]. He also gave us another equation: instead of [square root of T/Mu], there is also [square root of T*length/mass]. So 15 m/s = [square root of T*5.3m/0.015kg], and [(15^2 m^2)(0.015 kg)]/[(s^2)(5.3 m)] = T, so T ≈ 0.636 N, which would be right if it were multiplied by 100. Anyone see where I went wrong?

Last edited by a moderator: May 2, 2017

2. Mar 13, 2007

### Kurdt

Staff Emeritus

120 cm is 1.2 meters, not 0.12 meters.

3. Mar 13, 2007

### pugfug90

Har harrr. Oops, and thanks!
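Kurdt's correction is the whole story: redoing the arithmetic with $\lambda = 1.2$ m gives a tension 100 times larger, which is exactly the factor pugfug90 noticed was missing. A quick sketch:

```python
mu = 0.015 / 5.3           # mass per unit length, kg/m
v = 1.2 * 125              # v = wavelength * frequency, with 120 cm = 1.2 m (not 0.12 m)
T = mu * v**2              # from v = sqrt(T/mu)
print(f"v = {v} m/s, T = {T:.1f} N")   # v = 150 m/s, T ~ 63.7 N (100x the 0.636 N above)
```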
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8190286159515381, "perplexity": 4096.38342200302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806066.5/warc/CC-MAIN-20171120130647-20171120150647-00517.warc.gz"}
https://www.physicsforums.com/threads/coulombs-epsilon-zero-and-its-name.86829/
# Coulomb's epsilon zero and its name

1. Aug 30, 2005

### DaTario

In Coulomb's law the term epsilon zero appears in the denominator and receives the name of permittivity constant. As it comes from the word permit (allow), it would seem reasonable, to me at least, to expect that as epsilon zero increases, the vacuum would be allowing one charge to better "see" the other, and the force would then be greater. But it is the opposite. Is this name justifiable in some sense of the word?

2. Aug 30, 2005

### Crosson

Epsilon zero is the ratio between the charge enclosed by a surface and the electric flux through that surface. The name "permittivity of vacuum" is archaic and pointless.

3. Aug 30, 2005

### DaTario

But consider a space full of smoke. If the force had to do with one charge seeing the other, the space in between would play some part in permitting the force to cross it. I would like to see epsilon zero in the numerator, so as to justify calling it permittivity.

4. Aug 31, 2005

### lightgrav

A large value of epsilon "permits" lots of charge to build up, with fairly small voltage. It's archaic, and inverse to the physicists' viewpoint, which usually emphasizes the fields, but it is reasonable for engineering purposes (as in capacitors).

5. Sep 2, 2005

### DaTario

I am satisfied with this answer. The capacitor viewpoint is quite understandable. Best Regards, DaTario
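lightgrav's capacitor reading can be made concrete with a toy calculation (the geometry and charge values below are invented for illustration): at a fixed voltage, a larger permittivity "permits" more charge on the plates, even though the Coulomb force between two fixed charges goes down.

```python
import math

A, d, V = 1e-2, 1e-3, 10.0    # plate area (m^2), plate gap (m), voltage (V) -- made-up values
q, r = 1e-9, 0.01             # two 1 nC point charges, 1 cm apart -- made-up values

for eps in (8.854e-12, 2 * 8.854e-12):          # vacuum permittivity, then double it
    Q = (eps * A / d) * V                       # charge stored on the capacitor plates
    F = q * q / (4 * math.pi * eps * r**2)      # Coulomb force between the point charges
    print(f"eps = {eps:.3e} F/m: Q = {Q:.3e} C (up), F = {F:.3e} N (down)")
```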
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9685078263282776, "perplexity": 2652.459141119505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693363.77/warc/CC-MAIN-20170925201601-20170925221601-00607.warc.gz"}
https://webwork.libretexts.org/webwork2/html2xml?answersSubmitted=0&sourceFilePath=Library/UCSB/Stewart5_3_5/Stewart5_3_5_59.pg&problemSeed=1234567&courseID=anonymous&userID=anonymous&course_password=anonymous&showSummary=1&displayMode=MathJax&problemIdentifierPrefix=102&language=en&outputformat=libretexts
Use the table below to estimate, to the nearest tenth, the value of $h'(.5)$, where $h(x)=f(g(x))$. To estimate the appropriate derivatives, use the average of the two secant slopes near the point in question (if approximating a derivative at $x=.1$, take the secant slopes on $[0,.1]$ and $[.1,.2]$, and then average them). $h'(.5) =$
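The data table itself did not survive, so the values below are purely hypothetical; the sketch only illustrates the averaged-secant-slope recipe combined with the chain rule $h'(x) = f'(g(x))\,g'(x)$.

```python
x = [0.3, 0.4, 0.5, 0.6, 0.7]        # grid with step h = 0.1
f = [0.10, 0.30, 0.60, 1.00, 1.50]   # hypothetical f values
g = [0.70, 0.60, 0.50, 0.40, 0.30]   # hypothetical g values; conveniently g(0.5) = 0.5
h = 0.1

def avg_secant(y, i):
    """Average of the secant slopes just left and just right of x[i]."""
    return ((y[i] - y[i - 1]) / h + (y[i + 1] - y[i]) / h) / 2

i = x.index(0.5)
estimate = avg_secant(f, x.index(g[i])) * avg_secant(g, i)   # f'(g(.5)) * g'(.5)
print(round(estimate, 1))            # -> -3.5 for these made-up numbers
```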
{"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951829314231873, "perplexity": 199.30443885329808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662584398.89/warc/CC-MAIN-20220525085552-20220525115552-00401.warc.gz"}
https://www.physicsforums.com/threads/problems-understanding-photons.80808/
Problems understanding photons

1. Jun 30, 2005

lazarus1907

It's just too abstract for me: a superposition of magnetic and electric fields, right; but... fields "expand" radially at the speed of light, so how come photons don't "dissolve" in all directions rather than remain as particles moving in one direction? If you move your arm from left to right and then stop it, you've accelerated and decelerated lots of charges in your arm. This theoretically produces photons (electromagnetic radiation), but how many photons? What is the frequency of these photons? Does this make sense?

2. Jun 30, 2005

James R

When you wave your arm, you're waving equal numbers of positive and negative charges, so I don't think you'll get much EM radiation.

3. Jun 30, 2005

dextercioby

We've got photons "running" in all possible directions. Remember that a spherical light wave is produced by a pointlike source, which doesn't exist (it's one of the models physicists work with). Daniel.

4. Jun 30, 2005

Staff: Mentor

In the classical picture, the magnitude of the field radiated from a source decreases inversely with distance ($1 / r$). The energy density associated with the field is proportional to the square of the magnitude of the field. Therefore, the energy density falls off as the square of the distance ($1 / r^2$), and the energy falling on a target of a given size (say 1 m^2 for simplicity) per second also decreases according to the square of the distance between the source and the target. In the photon picture, the source emits some number of photons per second. Assume they're distributed uniformly in all directions for simplicity. Now imagine a sphere centered on the source. No matter how big the sphere is, all the photons hit its surface eventually. The total number of photons hitting the sphere per second is the same regardless of the radius. But the surface area of the sphere is proportional to the square of the radius. Therefore the number of photons per second per square meter decreases according to the square of the distance, and so does the energy per second per square meter. None of the photons "dissolve"; they just spread apart as they get further from the source.

5. Jul 1, 2005

lazarus1907

Assume a different scenario: you have a single travelling photon, which is modelled as a point-like particle. In representing an electromagnetic wave, books often give the naive picture of two perpendicular transverse waves that resemble oscillating ropes rather than EM fields. A rope clearly won't dissolve, but these EM fields won't be constrained in the same way the rope is; the effect of their fields travels at the speed of light and is quite far-reaching (of course, inversely proportional to the square of the distance). Trying to visualize this, I can't help seeing these fields "expanding" in all directions, making it almost impossible to imagine how a photon can be point-like; I'd rather expect a photon to become larger and larger... and tend to dissolve. I know this is wrong, but I just can't see it. By the way, another scenario: a single electron is travelling at, say, 10,000 m/s, and it is brought to a halt in 0.001 seconds. How many photons do I get? What frequency do they have? Is it possible to predict their direction? Thanks

6. Jul 1, 2005

Staff: Mentor

I think the root of your conceptual problem is that you're trying to apply the classical picture of electromagnetic waves in a domain where it is not valid. If you're in a situation where you can deal with individual photons, I don't think the classical wave picture has much meaning. The electromagnetic field only gives a probabilistic description of where a photon *might* go, similar to the relationship between an electron and the quantum-mechanical wave function that describes its behavior. To put it another way, the electromagnetic radiation field has its classical meaning only when you have lots and lots of photons, so that you can describe their effects to a very good approximation as a classical field.

7. Jul 1, 2005

masudr

I was gonna say all this, but it seems jtbell has gotten there first. The photon is part of the description when the classical electromagnetic field is quantised. Electromagnetic waves are a solution of Maxwell's equations. Quantised EM fields (and their quanta, photons) and classical electromagnetic field lines are related very indirectly at best, and your visual interpretation is certainly wrong, as you say. In a situation like this, the mathematics is all we can rely on, since the maths of field theory is overwhelming and a visual picture is not very useful.

8. Mar 7, 2011

cesarsvs

I still have the same doubt as lazarus1907. Imagine an electron moved from point A to point B. The information needed to change the electromagnetic field vector so that it points toward the new electron position travels through space at speed c. My first question is: is this travelling information associated with photons? If yes, then wouldn't the number of photons per area decrease, and the spacing between photons get bigger, as the (approximate) sphere representing the changing field increased? If so, wouldn't there be a place far enough from the moved electron that no photon would hit it, so that the field vector there would never change? Or does the photon get bigger as the sphere increases? Or would such a place have to be so far away that the field magnitude there is too small to matter?
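jtbell's counting argument can be put in numbers: with $N$ photons per second from an isotropic source, the flux through a sphere of radius $r$ falls as $1/r^2$ while every photon still lands somewhere on the sphere. A sketch (the emission rate is made up):

```python
import math

N = 1e20                          # photons per second, isotropic source (made-up rate)
for r in (1.0, 2.0, 10.0):        # sphere radii in meters
    flux = N / (4 * math.pi * r**2)      # photons per second per square meter
    print(f"r = {r:5.1f} m: flux = {flux:.3e} photons/(s m^2)")
# doubling r cuts the flux by 4, but the total count over the sphere stays N
```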
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.897466778755188, "perplexity": 365.04457033225145}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864364.38/warc/CC-MAIN-20180622065204-20180622085204-00048.warc.gz"}
https://www.physicsoverflow.org/428/on-shell-symmetry-from-a-path-integral-point-of-view?show=2158
# On-shell symmetry from a path integral point of view

+ 5 like - 0 dislike
193 views

Normally, supersymmetric quantum field theories have Lagrangians which are supersymmetric only on-shell, i.e. with the field equations imposed. In many cases this can be solved by introducing auxiliary fields (fields which don't carry dynamical degrees of freedom, i.e. which on-shell become functions of the other fields). However, there are cases where no such formulation is known, e.g. N=4 super-Yang-Mills in 4D.

Since the path integral is an integral over all field configurations, most of them off-shell, naively there is no reason for it to preserve the on-shell symmetry. Nevertheless the symmetry is preserved in the quantum theory.

Of course, it is possible to avoid the problem by resorting to a "Hamiltonian" approach. That is, the space of on-shell field configurations is the phase space of the theory and it is (at least formally) possible to quantize it. However, one would like to have an understanding of the symmetry's survival in a path integral approach. So: How can we understand the presence of on-shell symmetry after quantization from a path integral point of view?

Dear @Squark, surely you may write $N=4$ in the $N=1$ superspace, making the $N=1$ subalgebra manifest even off-shell and even in the path integral, can't you? The path integral in the $N=1$ language is trivially equivalent to the $N=0$ "in components" formulation – the only slightly nontrivial statement behind this assertion is that the measure flip including the auxiliary fields doesn't spoil SUSY. So in this sense, I think that SUSY is manifest even in the non-SUSY $N=0$ "in components" formalism of the path integral, off-shell. If you see some problems with this conclusion, tell me the details.

The equivalence between N=1 and N=0 is by integrating over the auxiliary fields, as far as I see. Hence it is not quite manifest in the N=0 language. For N=4 you can surely use the N=1 superspace; moreover, the GIKOS approach apparently allows making an N=3 sub-supergroup manifest. However, this doesn't prove that the whole symmetry is preserved.

OK, I think I see the answer. Once you can prove the equivalence between N=0 and N=1, you can get N=4 by choosing different N=1 sub-supergroups.

Well, I need to chew a bit more on the N=4 SYM case, but, in the meanwhile, consider N=1 SYM in 10D. Already we have no off-shell formulation, and the N is minimal.

+ 5 like - 0 dislike

How can we understand the presence of on-shell symmetry after quantization from a path integral point of view? One can derive a Schwinger-Dyson equation associated with the current conservation, also known as a Ward identity; see e.g. Peskin and Schroeder, An Introduction to Quantum Field Theory, Section 9.6; or Srednicki, Quantum Field Theory, Chapter 22.

answered Nov 5, 2011 by (2,860 points)

I don't have access to the book. How do you derive it using the path integral if the symmetry only exists on shell?

@Squark: Srednicki's book (up to some small changes) is available on [his webpage](http://web.physics.ucsb.edu/~mark/qft.html).

OK, and where is the answer to my question in Srednicki's book? In fact, it seems to me he doesn't go beyond N=1 4D supersymmetry.

I interpreted the question (v1) as asking about on-shell symmetry on general grounds, cf. the title (v1). The references do not have any specific mention of $N=4$ 4D supersymmetry.

OK. So Peskin and Schroeder have a treatment of on-shell symmetry on general grounds? Which example do they consider, if not supersymmetry? Also, I still have no access to the book.
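For reference, here is the shape of the identity being pointed to, for a global symmetry with Noether current $j^\mu$ under which the fields shift as $\phi \to \phi + \epsilon\,\Delta\phi$; this is a sketch of the standard result (sign and normalization conventions vary between textbooks):

$$\partial_\mu \left\langle j^\mu(x)\, \phi(x_1) \cdots \phi(x_n) \right\rangle = -i \sum_{k=1}^{n} \delta^{(4)}(x - x_k)\, \left\langle \phi(x_1) \cdots \Delta\phi(x_k) \cdots \phi(x_n) \right\rangle$$

Away from the insertion points the current is conserved inside correlation functions; this is the path-integral statement that the classical symmetry survives quantization, and it follows from a change of variables in the path integral rather than from the field equations.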
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8651500344276428, "perplexity": 691.8079795058421}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198868.29/warc/CC-MAIN-20200920223634-20200921013634-00107.warc.gz"}
https://math.stackexchange.com/questions/1634488/how-could-we-define-the-factorial-of-a-matrix/1634551
# How could we define the factorial of a matrix?

Suppose I have a square matrix $\mathsf{A}$ with $\det \mathsf{A}\neq 0$. How could we define the following operation? $$\mathsf{A}!$$ Maybe we could work out a simple example, provided it makes any sense, with $$\mathsf{A} = \left(\begin{matrix} 1 & 3 \\ 2 & 1 \end{matrix} \right)$$

For any holomorphic function $$G$$, we can define a corresponding matrix function $$\tilde{G}$$ via (a formal version of) the Cauchy Integral Formula: We set $$\tilde{G}(B) := \frac{1}{2 \pi i} \oint_C G(z) (z I - B)^{-1} dz ,$$ where $$C$$ is an (arbitrary) anticlockwise curve that encloses (once each) the eigenvalues of the (square) matrix $$B$$. Note that the condition on $$C$$ means that restrictions on the domain of $$G$$ determine restrictions on the domain of $$\tilde{G}$$.

So, we could make sense of the factorial of a matrix if we had a holomorphic function that restricted to the factorial function $$n \mapsto n!$$ on nonnegative integers. Fortunately, there is such a function: The function $$F: z \mapsto \Gamma(z + 1),$$ where $$\Gamma$$ denotes the Gamma function, satisfies $$F(n) = n!$$ for nonnegative integers $$n$$. (There is a sense in which $$F$$ is the best possible function extending the factorial function, but notice the target of that link really just discusses the real Gamma function, which our $$\Gamma$$ preferentially extends.) Thus, we may define the factorial of a (square) matrix $$B$$ by substituting the second display equation above into the first: $$\color{#df0000}{\boxed{B! := \tilde{F}(B) = \frac{1}{2 \pi i} \oint_C \Gamma(z + 1) (z I - B)^{-1} dz}} .$$ The (scalar) Cauchy Integral Formula shows that this formulation has the obviously desirable property that for scalar matrices it recovers the usual factorial, or more precisely, that $$\pmatrix{n}! = \pmatrix{n!}$$ (for nonnegative integers $$n$$).

Alternatively, one could define a matrix function $$\tilde G$$ (and in particular define $$B!$$) by evaluating formally the power series $$\sum_{k = 0}^{\infty} a_k (z - z_0)^k$$ for $$G$$ about some point $$z_0$$, that is, declaring $$\tilde G(B) := \sum_{k = 0}^{\infty} a_k (B - z_0 I)^k$$, but in general this definition is more restrictive than the Cauchy Integral Formula definition, simply because the power series need not converge everywhere (where it does converge, it converges to the value given by the integral formula). Indeed, we cannot use a power series for $$F$$ to evaluate $$A!$$ directly for our particular $$A$$: The function $$F$$ has a pole on the line segment in $$\Bbb C$$ with endpoints the eigenvalues of $$A$$, so there is no open disk in the domain of $$F$$ containing all of the eigenvalues of $$A$$, and hence there is no basepoint $$z_0$$ for which the series for $$\tilde F$$ converges at $$A$$.

We can define $$\tilde G$$ in yet another way, which coincides appropriately with the above definitions but which is more amenable to explicit computation: If $$B$$ is diagonalizable, so that we can decompose $$B = P \pmatrix{\lambda_1 & & \\ & \ddots & \\ & & \lambda_n} P^{-1} ,$$ for eigenvalues $$\lambda_a$$ of $$B$$ and some matrix $$P$$, we define $$\tilde{G}(B) := P \pmatrix{G(\lambda_1) & & \\ & \ddots & \\ & & G(\lambda_n)} P^{-1} .$$ Indeed, by substituting and rearranging, we can see that this coincides, at least formally, with the power series characterization. There is a similar but more complicated formula for nondiagonalizable $$B$$ that I won't write out here but which is given in the Wikipedia article Matrix function.

Example

The given matrix $$A$$ has distinct eigenvalues $$\lambda_{\pm} = 1 \pm \sqrt{6}$$, and so can be diagonalized as $$P \pmatrix{1 - \sqrt{6} & 0 \\ 0 & 1 + \sqrt{6}} P^{-1} ;$$ indeed, we can take $$P = \pmatrix{\tfrac{1}{2} & \tfrac{1}{2} \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}}}.$$ Now, $$F(\lambda_{\pm}) = \Gamma(\lambda_{\pm} + 1) = \Gamma (2 {\pm} \sqrt{6}),$$ and putting this all together gives that \begin{align*}\pmatrix{1 & 3 \\ 2 & 1} ! = \bar{F}(A) &= P \pmatrix{F(\lambda_-) & 0 \\ 0 & F(\lambda_+)} P^{-1} \\ &= \pmatrix{\tfrac{1}{2} & \tfrac{1}{2} \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}}} \pmatrix{\Gamma (2 - \sqrt{6}) & 0 \\ 0 & \Gamma (2 + \sqrt{6})} \pmatrix{1 & -\frac{\sqrt{3}}{\sqrt{2}} \\ 1 & \frac{\sqrt{3}}{\sqrt{2}}} .\end{align*} Multiplying this out gives $$\color{#df0000}{\boxed{\pmatrix{1 & 3 \\ 2 & 1} ! = \pmatrix{\frac{1}{2} \alpha_+ & \frac{\sqrt{3}}{2 \sqrt{2}} \alpha_- \\ \frac{1}{\sqrt{6}} \alpha_- & \frac{1}{2} \alpha_+}}} ,$$ where $$\color{#df0000}{\alpha_{\pm} = \Gamma(2 + \sqrt{6}) \pm \Gamma(2 - \sqrt{6})}.$$ It's perhaps not very illuminating, but $$A!$$ has numerical value $$\pmatrix{1 & 3 \\ 2 & 1}! \approx \pmatrix{3.62744 & 8.84231 \\ 5.89488 & 3.62744} .$$

To carry out these computations, one can use Maple's built-in MatrixFunction routine (it requires the LinearAlgebra package) to write a function that computes the factorial of any matrix:

MatrixFactorial := X -> LinearAlgebra:-MatrixFunction(X, GAMMA(z + 1), z);

To evaluate, for example, $$A!$$, we then need only run the following:

A := Matrix([[1, 3], [2, 1]]); MatrixFactorial(A);

(NB executing this code returns an expression for $$A!$$ different from the one above: Their values can be seen to coincide using the reflection formula $$-z \Gamma(z) \Gamma(-z) = \frac{\pi}{\sin \pi z} .$$ We can further simplify the expression using the identity $$\Gamma(z + 1) = z \Gamma(z)$$ extending the factorial identity $$(n + 1)! = (n + 1) \cdot n!$$ to write $$\Gamma(2 \pm \sqrt{6}) = (6 \pm \sqrt{6}) \Gamma(\pm \sqrt{6})$$ and so write the entries as expressions algebraic in $$\pi$$, $$\sin(\pi \sqrt{6})$$, and $$\Gamma(\sqrt{6})$$ alone. One can compel Maple to carry out these substitutions by executing simplify(map(expand, %)); immediately after executing the previous code.) To compute the numerical value, we need only execute evalf(%); immediately after the previous code.

By the way, we need not have $$\det B \neq 0$$ in order to define $$B!$$. In fact, proceeding as above we find that the factorial of the (already diagonal) zero matrix is the identity matrix: $$0! = \pmatrix{\Gamma(1) & 0 \\ 0 & \Gamma(1)} = I .$$ Likewise, using the formula for nondiagonalizable matrices referenced above together with a special identity gives that the factorial of the $$2 \times 2$$ Jordan block of eigenvalue $$0$$ is, somewhat amusingly, $$\pmatrix{0 & 1\\0 & 0} ! = \pmatrix{1 & -\gamma \\ 0 & 1} ,$$ where $$\gamma$$ is the Euler-Mascheroni constant.

• A truly excellent answer. – goblin Jan 31 '16 at 15:57
• Thank you, @Mehrdad, I'm glad you found it interesting! The C.I.F. definition is useful because it takes advantage of the behavior of holomorphic functions but avoids issues of convergence entailed in power series expansions. By construction, $\overline{\exp}$ so defined is just the usual matrix exponential.
– Travis Willse Feb 1 '16 at 11:02
• @YoTengoUnLCD One can use, e.g., {\Large !} to increase the size of the factorial symbol, but the way MathJax aligns elements vertically makes this look strange for font sizes as large as you might like them. A kludge for forcing the vertical alignment is embedding the factorial symbol in a (bracketless) matrix, with something like \pmatrix{a&b\\c&d}\!\!\matrix{\Huge !}, which produces $$\pmatrix{a&b\\c&d}\!\!\matrix{\Huge !}$$ The commands \! are used to improve the kerning. – Travis Willse Feb 1 '16 at 11:06
• @KimPeek *thou took'st, if I'm not mistaken :-D – The Vee Feb 1 '16 at 15:44
• @TobiasKienzler The function $\Gamma$ is holomorphic on its domain, and indeed, this much is necessary to guarantee path-independence of the integral in the definition of $\bar{F}$. This is also why we need to enclose all of the eigenvalues: The integrand $F(z) (z I - B)^{-1}$ has poles at the eigenvalues of $B$, and in general these poles contribute to the resulting integral, so an integral over some loop not enclosing all the eigenvalues of $B$ will simply give a value other than $\bar{F}(B)$. – Travis Willse Feb 2 '16 at 11:59

The gamma function is analytic. Use its power series. EDIT: already done: Some properties of Gamma and Beta matrix functions (maybe paywalled).

• (+1) for mentioning this technique, but I don't believe it's possible to use power series to compute $A!$ for the particular example matrix $A$ given: The line segment connecting the eigenvalues of $A$ contains a pole of $z \mapsto \Gamma(z + 1)$, so no power series for that function (i.e., for any basepoint) converges at $A$. – Travis Willse Jan 31 '16 at 14:23
• This would not have occurred to me! But the issue of convergence is a complicated one, I think. The gamma function has an infinite number of poles, after all, so it doesn't have a power series valid everywhere. – TonyK Jan 31 '16 at 14:23
• @Travis, obviously the convergence will depend on the matrix. – Martín-Blas Pérez Pinilla Jan 31 '16 at 14:29
• @VašekPotoček I don't think that's true; since the function $x \mapsto \Gamma(x + 1)$ is well-behaved at the eigenvalues of the given matrix $A$, I believe we can use the Jordan Canonical Form to make sense of $A!$. See my answer for more---comments and corrections are most welcome! – Travis Willse Jan 31 '16 at 14:51
• @Martín-BlasPérezPinilla Yes, my above comment was restricted to the example in the question. But the same reasoning shows that the relevant power series will not converge (again, for any base point) for a large open set of matrices: I think this is the case, for example, if a matrix has an eigenvalue $\lambda$ with $\Re \lambda < -1$ and $|\Im \lambda| > \frac{1}{2}$. – Travis Willse Jan 31 '16 at 15:56

I don't have enough reputation points to comment on Travis' answer, but his numerical result was incorrect. Using Julia I get

A = [1 3; 2 1]
EVD = eigfact(A)                  # eigendecomposition A = V * diagm(w) * inv(V)
V = EVD[:vectors]
g = gamma(EVD[:values])           # Gamma evaluated at the eigenvalues
gammaA = V * diagm(g) * inv(V)    # Gamma(A)
factA = A * gammaA                # A! = A * Gamma(A) = Gamma(A + I)

3.62744 8.84231
5.89488 3.62744

As long as cond(V) isn't too terrible, I've found the above procedure to be a practical way to evaluate arbitrary matrix functions.

• Thanks for pointing this out. There was a bug in my Maple code, and it affected the exact value, too; I've since corrected both values in my post. – Travis Willse Feb 1 '16 at 10:37

I would start from the logical definition of the matrix factorial, without assuming that we want to cover all the properties we know from the factorial on the reals. We define the standard factorial as $1 \cdot (1+1) \cdot (1+1+1) \cdot ... \cdot (1+1+...+1+1)$

So first let us define $[n]!$ using the same logic, replacing 1 with the identity matrix. The obvious way to define it is $$[n]!=\prod\limits_{k=1}^{n}\begin{bmatrix} k & 0\\ 0 & k \end{bmatrix}=\begin{bmatrix} n! & 0\\ 0 & n! \end{bmatrix}$$ All properties of the standard factorial are there.

Now, we defined the Gamma function by the simple extension $\Gamma (x+1)=x\Gamma (x)$, where $n!=\Gamma (n+1)$. That is all that is required. So we want to find a matrix Gamma with $\Gamma ([x]+I)=[x]\Gamma ([x])$. If we define $$\Gamma (\begin{bmatrix} x & 0\\ 0 & x \end{bmatrix})=\begin{bmatrix} \Gamma (x) & 0\\ 0 & \Gamma (x) \end{bmatrix}$$ we are totally fine, because $$\begin{bmatrix} x & 0\\ 0 & x \end{bmatrix}\begin{bmatrix} \Gamma (x) & 0\\ 0 & \Gamma (x) \end{bmatrix}=\begin{bmatrix} x\Gamma (x) & 0\\ 0 & x\Gamma (x) \end{bmatrix}=\begin{bmatrix} \Gamma (x+1) & 0\\ 0 & \Gamma (x+1) \end{bmatrix}$$ There is nothing amiss if we start from $\begin{bmatrix} x & 0\\ 0 & y \end{bmatrix}$, because $$\begin{bmatrix} x & 0\\ 0 & y \end{bmatrix}\begin{bmatrix} \Gamma (x) & 0\\ 0 & \Gamma (y) \end{bmatrix}=\begin{bmatrix} x\Gamma (x) & 0\\ 0 & y\Gamma (y) \end{bmatrix}=\begin{bmatrix} \Gamma (x+1) & 0\\ 0 & \Gamma (y+1) \end{bmatrix}$$

The remaining part is a matrix with off-diagonal entries. What do we do with $A=\begin{bmatrix} x_{0} & x_{1}\\ x_{2} & x_{3} \end{bmatrix}$? We start from what we would like to have: $\Gamma([A]+I)=[A]\Gamma([A])$. If we are able to diagonalize $A=P^{-1}\overline{A}P$ and to express, in the same manner, $\Gamma([A]) = P^{-1}\Gamma(\overline{A})P$, then we have $$\Gamma([A]+I) = P^{-1} \overline{A} P P^{-1} \Gamma(\overline{A}) P = P^{-1} \overline{A} \Gamma(\overline{A}) P = P^{-1} \Gamma(\overline{A+I}) P=\Gamma(A+I)$$ so all should be fine. Since $\overline{A}$ is diagonal with the eigenvalues $\lambda_{1},\lambda_{2}$ on the main diagonal, and we know how to deal with that type of matrix, we have the full definition of $\Gamma(A)$ even for matrices: $$\Gamma(A)=P^{-1}\begin{bmatrix} \Gamma (\lambda_{1}) & 0\\ 0 & \Gamma (\lambda_{2}) \end{bmatrix}P$$ and now $A!=\Gamma(A+I)$, making it all $$A!=P^{-1}\begin{bmatrix} \Gamma (\lambda_{1}+1) & 0\\ 0 & \Gamma (\lambda_{2}+1) \end{bmatrix}P$$

Instead of giving the solution just for the example, I will give a general form for a 2x2 matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$. Take the discriminant $D=\sqrt{(a-d)^2+4bc} \neq 0, c \neq 0$. Then $$\begin{bmatrix} a & b \\ c & d \end{bmatrix} ! = \begin{bmatrix} \frac{a-d-D}{2c} & \frac{a-d+D}{2c} \\ 1 & 1 \end{bmatrix} \begin{bmatrix} \Gamma (\frac{a+d-D}{2}+1 ) & 0 \\ 0 & \Gamma ( \frac{a+d+D}{2} +1)\end{bmatrix} \begin{bmatrix} -\frac{c}{D} & \frac{a-d+D}{2D} \\ \frac{c}{D} & -\frac{a-d-D}{2D} \end{bmatrix}$$ From here you can nicely conclude that the matrix factorial can be expressed using the classical integer factorial if $a+d \pm D$ are even positive integers (including $0$). For other values we use the extension of $\Gamma(x)$ itself.

• Very nice! Could you fix the two little "bugs" in the LaTeX code? ^^ Then I'll read it with pleasure! – Von Neumann Feb 3 '16 at 19:54
• @KimPeek: still editing, I have to look for myself how it looks. I think it looks fine now – user195934 Feb 3 '16 at 19:55
• @AlexPeter: If you can add the factorial of the matrix which is given in the question, that would be awesome. Anyway, very nice answer. ++1 – Bumblebee Feb 9 '16 at 7:09

I use the well-known (and simple) definitions $$n!=\Gamma (n+1)$$ and $$\Gamma (A+1)=\int_0^{\infty } \exp (-t) \exp (A \log (t)) \, dt$$ Now, if A is a (square) matrix, all we need is to define the exponential function of a matrix. This can always be done (in principle) via the power series, which only requires calculating powers of the matrix and adding matrices. We skip here a possible diagonalization procedure (which was shown by others before) and use the function MatrixExp[] of Mathematica. For the matrix given in the OP, $$A=\left( \begin{array}{cc} 1 & 3 \\ 2 & 1 \\ \end{array} \right);$$ we have

Ax = MatrixExp[A Log[t]]

which gives $$\left( \begin{array}{cc} \frac{t^{1-\sqrt{6}}}{2}+\frac{t^{1+\sqrt{6}}}{2} & \frac{1}{2} \sqrt{\frac{3}{2}} t^{1+\sqrt{6}}-\frac{1}{2} \sqrt{\frac{3}{2}} t^{1-\sqrt{6}} \\ \frac{t^{1+\sqrt{6}}}{\sqrt{6}}-\frac{t^{1-\sqrt{6}}}{\sqrt{6}} & \frac{t^{1-\sqrt{6}}}{2}+\frac{t^{1+\sqrt{6}}}{2} \\ \end{array} \right)$$ We observe that in some cases the exponent of t is less than -1 ($1-\sqrt{6}=-1.44949$). This leads to a divergent integral, which will then be understood via analytic continuation. This is accomplished simply by replacing each $t^q$ by $\Gamma (q+1)$:

fA = Ax /. t^q_ -> Gamma[q + 1]

giving $$\left( \begin{array}{cc} \frac{\Gamma \left(2-\sqrt{6}\right)}{2}+\frac{\Gamma \left(2+\sqrt{6}\right)}{2} & \frac{1}{2} \sqrt{\frac{3}{2}} \Gamma \left(2+\sqrt{6}\right)-\frac{1}{2} \sqrt{\frac{3}{2}} \Gamma \left(2-\sqrt{6}\right) \\ \frac{\Gamma \left(2+\sqrt{6}\right)}{\sqrt{6}}-\frac{\Gamma \left(2-\sqrt{6}\right)}{\sqrt{6}} & \frac{\Gamma \left(2-\sqrt{6}\right)}{2}+\frac{\Gamma \left(2+\sqrt{6}\right)}{2} \\ \end{array} \right)$$ Numerically this is $$\left( \begin{array}{cc} 3.62744 & 8.84231 \\ 5.89488 & 3.62744 \\ \end{array} \right)$$ This is in agreement with the results of hans and Travis.

Discussion

(1) Let me point out that the definitions presented here do not need any specific property of the matrix. For example, it does not matter whether the matrix is diagonalizable; it might well be defective.

(2) I have used Mathematica here just to facilitate things. In the end we all use some tools at a certain stage; the main ideas are independent of the tool.

(3) The procedure described here obviously generalizes to other, more or less complicated, analytic functions. As a more exotic example, let us take the harmonic number H(A) of a matrix A. This function can be defined using the integral representation (see e.g. Relation between binomial coefficients and harmonic numbers) $$H(\text{A})\text{=}\int_0^1 \frac{1-(1-x)^A}{x} \, dx$$ This definition also needs only the exponential function of the matrix. The result for our matrix A is (after some analytic continuation) $$\left( \begin{array}{cc} \frac{1}{2} \left(H_{-\sqrt{6}}+H_{\sqrt{6}}+\frac{1}{1-\sqrt{6}}+\frac{1}{1+\sqrt{6}}\right) & \frac{1}{2} \sqrt{\frac{3}{2}} \left(-H_{-\sqrt{6}}+H_{\sqrt{6}}-\frac{1}{1-\sqrt{6}}+\frac{1}{1+\sqrt{6}}\right) \\ \frac{-H_{-\sqrt{6}}+H_{\sqrt{6}}-\frac{1}{1-\sqrt{6}}+\frac{1}{1+\sqrt{6}}}{\sqrt{6}} & \frac{1}{2} \left(H_{-\sqrt{6}}+H_{\sqrt{6}}+\frac{1}{1-\sqrt{6}}+\frac{1}{1+\sqrt{6}}\right) \\ \end{array} \right)$$ Numerically, $$\left( \begin{array}{cc} 1.51079 & 0.542134 \\ 0.361423 & 1.51079 \\ \end{array} \right)$$

It would be good to mention that (almost) any matrix function can be made into a power-series expansion, which eventually involves the values of the function on the eigenvalues of the matrix multiplied by the eigenvectors. In other words, the matrix function is completely characterised by the values it takes on the eigenvalues of the matrix (even if a power-series expansion may be needed). The above holds for matrices which are diagonalisable (i.e. the number of linearly independent eigenvectors is equal to the matrix dimension). There are ways to expand an arbitrary matrix into what are referred to as generalised eigenvectors, but this will not be pursued further here. Furthermore, since any square, finite-dimensional matrix satisfies its characteristic polynomial equation (if seen as a matrix function), a.k.a. the Cayley-Hamilton theorem, the powers $A^k$ for $k \ge n$ ($n$ is the dimension) can be expressed as functions of the powers of $A$ up to $n$. So eventually the matrix function power-series expansion collapses to a polynomial expansion (for square matrices). Finally, this polynomial expansion, for a given function, can be found more easily by methods such as variation of parameters or polynomial modeling.

• "any matrix function can be made into a power-series expansion, which eventually involves the values of the function on the eigenvalues of the matrix multiplied by the eigenvectors." "every matrix function is completely characterised by the values it takes on the eigenvalues of the matrix" Both statements are wrong for nondiagonalizable matrices. – Did Feb 4 '16 at 6:42
• Your remark about full-rank or non-full-rank matrices is completely off-topic. Please refer to some simple examples of nondiagonalizable matrices to check whether the statements in your answer are valid for such matrices (they are not). – Did Feb 4 '16 at 12:30
• Full-rank or not full-rank $\ne$ diagonalisable or not. Please refer to some textbook on matrices. (How come "eigenvectors" in your first comment mutated to "generalised eigenvectors"? Is this some kind of rhetorical trick? Are "full-rank" and "diagonalisable" supposed to become "generalized full-rank" and "generalized diagonalizable"?) – Did Feb 4 '16 at 18:39
• @Did, I'm not sure who is trying to use rhetorical tricks here. Anyway, "The diagonalization theorem states that an n×n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors, i.e., if the matrix rank of the matrix formed by the eigenvectors is n". I hope you do like Wolfram; as for the rest, they are not touched upon. If you think the phrasing can be made better, no problem; otherwise it is beating around the bush and a poor use of my time – Nikos M. Feb 4 '16 at 21:18
• @NikosM. "Full-rank" does not mean that the eigenvectors have full span. Please revise what the rank of a matrix is. – Did Feb 5 '16 at 6:46
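Both recipes above can be cross-checked numerically. The sketch below (assuming numpy and scipy are available) first reproduces $A!$ via the eigendecomposition recipe, then verifies the integral definition on a tame matrix $B$; the direct integral cannot be used on $A$ itself, since its eigenvalue $1-\sqrt{6} < -1$ makes the integral divergent, which is exactly where the analytic continuation above comes in.

```python
import numpy as np
from scipy.special import gamma
from scipy.linalg import expm
from scipy.integrate import quad

def matrix_factorial(M):
    """A! = Gamma(A + I) via eigendecomposition (diagonalizable M only)."""
    w, V = np.linalg.eig(M)
    return V @ np.diag(gamma(w + 1)) @ np.linalg.inv(V)

A = np.array([[1.0, 3.0], [2.0, 1.0]])
print(matrix_factorial(A))      # ~[[3.62744, 8.84231], [5.89488, 3.62744]]

# Integral definition, checked on a matrix whose eigenvalues all exceed -1:
B = np.array([[1.0, 0.5], [0.2, 1.0]])   # eigenvalues 1 +/- sqrt(0.1)
entry = lambda t, i, j: np.exp(-t) * expm(B * np.log(t))[i, j]
B_int = np.array([[quad(entry, 0, np.inf, args=(i, j))[0] for j in range(2)]
                  for i in range(2)])
print(np.allclose(B_int, matrix_factorial(B)))   # should print True
```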
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 70, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9796746969223022, "perplexity": 335.7272100264502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540482038.36/warc/CC-MAIN-20191205190939-20191205214939-00485.warc.gz"}
https://phys.libretexts.org/Courses/Joliet_Junior_College/PHYS202_-_JJC_-_Testing/02%3A_Conceptual_Objective_2/2.1%3A_Coulomb%E2%80%99s_Law
# 2.1: Coulomb’s Law

learning objectives

• Apply the superposition principle to determine the net response caused by two or more stimuli

The superposition principle (also known as the superposition property) states that, for all linear systems, the net response at a given place and time caused by two or more stimuli is the sum of the responses which would have been caused by each stimulus individually. For Coulomb’s law, the stimuli are forces. Therefore, the principle says that the total force is a vector sum of the individual forces.

### Coulomb Force

The scalar form of Coulomb’s Law relates the magnitude and sign of the electrostatic force F acting simultaneously on two point charges q1 and q2:

$| \mathrm{F} | = \dfrac{1}{4 \pi \epsilon_0} \dfrac{\left| q_1 q_2 \right|}{r^2}$

Lorentz Force on a Moving Particle: Lorentz force f on a charged particle (of charge q) in motion (instantaneous velocity v). The E field and B field vary in space and time.

where r is the separation distance and ε0 is the electric permittivity of vacuum. If the product q1q2 is positive, the force between the charges is repulsive; if q1q2 is negative, the force between them is attractive. The principle of linear superposition allows the extension of Coulomb’s law to include any number of point charges, in order to derive the force on any one point charge by a vector addition of these individual forces acting alone on that point charge. The resulting force vector happens to be parallel to the electric field vector at that point, with that point charge removed.

To calculate the force on a small test charge q at position r, due to a system of N discrete charges:

$\mathbf{F}(\mathbf{r}) = \dfrac{q}{4 \pi \epsilon_0} \sum_{i=1}^{N} q_i \dfrac{\mathbf{r} - \mathbf{r}_i}{\left| \mathbf{r} - \mathbf{r}_i \right|^3} = \dfrac{q}{4 \pi \epsilon_0} \sum_{i=1}^{N} q_i \dfrac{\hat{\mathbf{R}}_i}{\left| \mathbf{R}_i \right|^2}$

where qi and ri are the magnitude and position vector of the i-th charge, respectively, and $$\hat{\mathbf{R}}_i$$ is a unit vector in the direction of $$\mathbf{R}_i = \mathbf{r} - \mathbf{r}_i$$ (a vector pointing from charge qi to charge q).

Of course, our discussion of superposition of forces applies to any types (or combinations) of forces. For example, when a charge is moving in the presence of a magnetic field as well as an electric field, the charge will feel both electrostatic and magnetic forces. The total force, affecting the motion of the charge, will be the vector sum of the two forces. (In this particular example of the moving charge, the force due to the presence of the electromagnetic field is collectively called the Lorentz force; see the figure above.)

## Spherical Distribution of Charge

The charge distribution around a molecule is spherical in nature, and creates a sort of electrostatic “cloud” around the molecule.

learning objectives

• Describe the shape of a Coulomb force from a spherical distribution of charge

Through the work of scientists in the late 18th century, the main features of the electrostatic force (the existence of two types of charge, the observation that like charges repel, unlike charges attract, and the decrease of force with distance) were eventually refined and expressed as a mathematical formula. The mathematical formula for the electrostatic force is called Coulomb’s law after the French physicist Charles Coulomb (1736–1806), who performed experiments and first proposed a formula to calculate it.

Charge distribution in a water molecule: Schematic representation of the outer electron cloud of a neutral water molecule. The electrons spend more time near the oxygen than the hydrogens, giving a permanent charge separation as shown. Water is thus a polar molecule. It is more easily affected by electrostatic forces than molecules with uniform charge distributions.

Modern experiments have verified Coulomb’s law to great precision. For example, it has been shown that the force is inversely proportional to the square of the distance between two objects (F ∝ 1/r^2) to an accuracy of 1 part in 10^16. No exceptions have ever been found, even at the small distances within the atom. Coulomb’s law holds even within atoms, correctly describing the force between the positively charged nucleus and each of the negatively charged electrons. This simple law also correctly accounts for the forces that bind atoms together to form molecules and for the forces that bind atoms and molecules together to form solids and liquids. Generally, as the distance between ions increases, the energy of attraction approaches zero and ionic bonding is less favorable. As the magnitude of opposing charges increases, the energy increases and ionic bonding is more favorable.

An electric field is a vector field which associates to each point of space the Coulomb force that a unit test charge would experience there. Given the electric field, the strength and direction of a force F on a charge q is determined by the electric field E. For a positive charge, the electric field points along lines directed radially away from the location of the point charge, while for a negative charge it points radially toward it.

This distribution of charge around a charged molecule is spherical in nature, and creates a sort of electrostatic “cloud” around the molecule. The attraction or repulsion forces within the spherical distribution of charge are stronger closer to the molecule, and become weaker as the distance from the molecule increases.

The image shows the outer electron cloud of a neutral water molecule. The charge distribution around the oxygen atom is negative, and attracts the two positive hydrogen atoms. The attraction between the opposing charges forms a neutral water molecule. It is a polar molecule because there is still a permanent charge separation: the electrons spend more time near the oxygen than the hydrogens.

## Solving Problems with Vectors and Coulomb’s Law

Coulomb’s Law, which calculates the electric force between charged particles, can be written in vector notation as $$\mathbf{F}_E = \frac{k q_1 q_2}{r^2} \hat{\mathbf{r}}$$.

learning objectives

• Explain when the vector notation of Coulomb’s Law can be used

### Electric Force Between Two Point Charges

To address the electrostatic forces among electrically charged particles, first consider two particles with electric charges q and Q, separated in empty space by a distance r. Suppose that we want to find the electric force vector on charge q. (The electric force vector has both a magnitude and a direction.) We can express the location of charge q as rq, and the location of charge Q as rQ. In this way we can know both how strong the electric force on a charge is, and also in what direction that force is directed. Coulomb’s Law using vectors can be written as:

$\mathbf{F}_E = \dfrac{k q Q \left( \mathbf{r}_q - \mathbf{r}_Q \right)}{\left| \mathbf{r}_q - \mathbf{r}_Q \right|^3}$

In this equation, k is equal to $$\frac{1}{4 \pi \varepsilon_0 \varepsilon}$$, where $$\varepsilon_0$$ is the permittivity of free space and $$\varepsilon$$ is the relative permittivity of the material in which the charges are immersed. The variables $$\mathbf{F}_E$$, $$\mathbf{r}_q$$, and $$\mathbf{r}_Q$$ are in bold because they are vectors. Thus, we need to find $$\mathbf{r}_q - \mathbf{r}_Q$$ by performing standard vector subtraction. This means that we need to subtract the corresponding components of vector $$\mathbf{r}_Q$$ from vector $$\mathbf{r}_q$$.

This vector notation can be used in the simple example of two point charges, only one of which is a source of charge.

Application of Coulomb’s Law: In a simple example, the vector notation of Coulomb’s Law can be used when there are two point charges, only one of which is a source charge.

### Electric Force on a Field Charge Due to Fixed Source Charges

Suppose there is more than one point source charge providing forces on a field charge. The figure diagrams a fairly simple example with three source charges (shown in green and indexed by subscripts) and one field charge (in red, designated q). We assume that the source charges are fixed in space, and the field charge q is subject to forces from the source charges.

Multiple point charges: Coulomb’s Law applied to more than one point source charge providing forces on a field charge.

Note the coordinate system that has been chosen. All of the charges lie on the corners of a square, and the origin is chosen to collocate with the lower right source charge and aligned with the square. Since we can have only one origin of coordinates, no more than one of the source points can lie at the origin, and the displacements from different source points to the field point differ.

The total force on the field charge q is due to applications of the force described in the vector notation of Coulomb’s Law from each of the source charges. The total force is therefore the sum of these individual forces.

Displacements of field charge: The displacements of the field charge from each source charge are shown as light blue arrows.

Applying Coulomb’s Law three times and summing the results gives us:

$\mathbf{F}_{E_q} = \dfrac{k q\, q_1 \left( \mathbf{r}_q - \mathbf{r}_{q1} \right)}{\left| \mathbf{r}_q - \mathbf{r}_{q1} \right|^3} + \dfrac{k q\, q_2 \left( \mathbf{r}_q - \mathbf{r}_{q2} \right)}{\left| \mathbf{r}_q - \mathbf{r}_{q2} \right|^3} + \dfrac{k q\, q_3 \left( \mathbf{r}_q - \mathbf{r}_{q3} \right)}{\left| \mathbf{r}_q - \mathbf{r}_{q3} \right|^3}$

This equation can further be simplified and applied to a fixed number of charge points:

$\mathbf{F}_n = \sum_{i \neq n} \dfrac{q_n q_i \left( \mathbf{r}_n - \mathbf{r}_i \right)}{4 \pi \epsilon_0 \left| \mathbf{r}_n - \mathbf{r}_i \right|^3}$

## Key Points

• The superposition principle suggests that the net response at a given place and time caused by two or more stimuli is the sum of the responses which would have been caused by each stimulus individually.
• The total Coulomb force on a test charge due to a group of charges is equal to the vector sum of all the Coulomb forces between the test charge and the other individual charges.
• The superposition of forces is not limited to Coulomb forces. It applies to any types (or combinations) of forces.
• The force between two objects is inversely proportional to the square of the distance between them.
• The attraction or repulsion forces within the spherical distribution of charge are stronger closer to the molecule and become weaker as the distance from the molecule increases.
• This law also accounts for the forces that bind atoms together to form molecules and for the forces that bind atoms and molecules together to form solids and liquids.
• The vector notation of Coulomb’s Law can be used in the simple example of two point charges, only one of which is a source of charge.
• The total force on the field charge for multiple point source charges is the sum of these individual forces.
• Coulomb’s Law can be further simplified and applied to a fixed number of charge points.

## Key Items

• Lorentz force: The force exerted on a charged particle in an electromagnetic field.
• unit vector: A vector with length 1.
• electrostatic force: The electrostatic interaction between electrically charged particles; the amount and direction of attraction or repulsion between two charged bodies.
• coulomb’s law: the mathematical equation calculating the electrostatic force vector between two charged particles
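The superposition sum $\mathbf{F}_n$ translates directly into code. A sketch in Python/numpy, using the square geometry described above with made-up charge values:

```python
import numpy as np

k = 8.9875e9                                   # 1/(4 pi eps0) in N m^2 / C^2
sources = [(1e-9, np.array([0.0, 0.0])),       # (charge in C, position in m)
           (2e-9, np.array([1.0, 0.0])),
           (-1e-9, np.array([1.0, 1.0]))]
q, r_q = 1e-9, np.array([0.0, 1.0])            # field charge q at the fourth corner

F = np.zeros(2)
for q_i, r_i in sources:
    d = r_q - r_i                              # displacement from source i to q
    F += k * q * q_i * d / np.linalg.norm(d)**3
print(F)                                       # net force vector on q, in newtons
```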
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9571790099143982, "perplexity": 320.92289813374384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657907.79/warc/CC-MAIN-20190116215800-20190117001800-00550.warc.gz"}
http://tex.stackexchange.com/tags/tikz-pgf/new
# Tag Info

2

A short code with pstricks: \documentclass{article} \usepackage[utf8]{inputenc} \usepackage{lmodern} \usepackage{mathtools} \usepackage{pstricks-add, multido} \usepackage{auto-pst-pdf} \begin{document} \sffamily \begin{pspicture} \psset{dimen=middle, linewidth=0.6pt, braceWidthOuter=4pt, braceWidthInner=4pt, braceWidth=0.8pt, labelsep =-2ex} ...

1

Ultimately, there's a minimum width of the trees that arises from the text inside the leaf (end) nodes. For both trees, you can see that placing all the x_n <- 1 end to end already takes over half the text width. If you want to force those trees to be side by side, you'll either have overlapping trees (as you currently do) or overlapping nodes. ...

4

As a starting point in your learning (in case you select the TikZ package as your basic tool), the following MWE can serve: \documentclass[tikz, border=3mm]{standalone} \usetikzlibrary{chains,decorations.pathreplacing} \begin{document} \begin{tikzpicture}[ node distance=0pt, start chain = A going right, X/.style = {rectangle, draw, ...

1

So far I've been able to draw such a thing using pgfplots: \documentclass{standalone} \usepackage{pgfplots} \pgfplotsset{compat=1.12} \begin{document} \begin{tikzpicture} \begin{axis}[ xmin=0, xmax=1, ymin=0, ymax=1, zmin=0, zmax=1, axis equal, ticks=none, hide axis, ] %lower face, drawn ...

0

As is explained in How do I draw shapes inside a tikz node?, pics can be used for defining new objects. My main problem using pics is how to place them where you want, because they aren't nodes and positioning them is not so easy. The following code shows how to define an EDFA block. EDFA/.pic={ \begin{scope}[scale=.5] \draw (-1,0) coordinate (in) -- ...

1

Regarding the distance between edge labels and edges: see if the following addition to Alenanno's code gives what you are looking for: \tikzset{el/.style = {% edge label midway, outer sep=1.5mm, #1} % <--- #1: for position (left, right) } Put this before \begin{forest}, and then, instead of edge label={node[midway,left]{...}, use `edge ...

4

That's because you're setting the caption as a node, which is also not the standard way of doing this. Captions are added to figures in a LaTeX document externally to the picture, i.e. they are not part of it. Also, you're manually assigning a number to your figure, and this makes the use of LaTeX a bit pointless, because one of the great advantages of ...

3

You could also use a matrix to simplify the code. \documentclass{exam} \usepackage{tikz} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{latexsym} \usepackage{mathabx} \usepackage{MnSymbol} \usetikzlibrary{shapes.geometric} \begin{document} \begin{center} \begin{tikzpicture}[scale=0.8] ...

4

I have to admit that my "cylinder" doesn't look very realistic, but in any case, the result can be achieved with much shorter code. If you do not understand something, feel free to ask, but I think that typing a lot of \node definitions gets tedious. I left your package list as it was because I don't know if you use the packages somewhere else in your document, ...

1

\documentclass{amsart} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{tikz} \usetikzlibrary{calc,angles,positioning,intersections,quotes,decorations.markings,decorations.pathreplacing} \begin{document} \begin{tikzpicture} % axes + grid \draw[step=5mm,gray,dashed, line width=0.2pt] (-0.75,-1.25) grid (4.25,2.25); \draw[latex-latex] (-1,0) -- (5,0) ...

2

The coordinate (yaxis |- X) is determined by the center of the yaxis node, not by its right (east) border, as you would like. So you need to change all similarly constructed coordinates to use yaxis.east, and likewise xaxis.north. \documentclass{article} \usepackage{tikz} \usetikzlibrary{intersections} \begin{document} \begin{tikzpicture}[font=\footnotesize] ...

3

You can print their values into a node, but if you just want to see them without doing any node trick, then you can print them in the log file: \documentclass[tikz]{standalone} \usetikzlibrary{calc} \def\shoutmyn#1{\expandafter\show\csname tikz@cc@n@#1\endcsname} \begin{document} \begin{tikzpicture}[] \node (D1) {D1}; \node (D) at (3,2) {D}; \node (D2) at ...

3

A quick hack which is slightly better: \documentclass[tikz]{standalone} \usetikzlibrary{backgrounds} \begin{document} \begin{tikzpicture} \node[circle, draw=black, inner sep=0.5mm, font=\tiny] at (3, 10) (v0) {0}; \node[circle, draw=black, inner sep=0.5mm, font=\tiny] at (2.3, 9.65) (v1) {1}; \node[circle, draw=black, inner sep=0.5mm, font=\tiny] at ...

4

Another interpretation of "half dashed": \documentclass[tikz,border=5]{standalone} \usetikzlibrary{decorations.pathreplacing,calc} \tikzset{draw half paths/.style 2 args={% decoration={show path construction, lineto code={ \draw [#1] (\tikzinputsegmentfirst) -- ($(\tikzinputsegmentfirst)!0.5!(\tikzinputsegmentlast)$); \draw [#2] ...

1

I just realised you supplied images for the icons. Oh, well. Here is a pure TikZ solution. At least, it uses forest, which is based on TikZ. In addition, it uses two pics for the icons, which are then used within the tree. This makes use of the new edges library for forest, which includes a folder style for directory trees. It can draw the folders, too, but ...

1

The problem is that pgfplots does some juggling with \label, under the assumption that it means the same as in the LaTeX kernel, which unfortunately is false with tufte-book. \documentclass{tufte-book} \usepackage{lipsum} \usepackage{pgfplots} \usepackage{etoolbox} % patch pgfplots so that \label does the original job % tufte-book saves the original ...

1

I think you based your code on the question here. I tried to make a cleaner MWE using code from the pgfplots manual. As you can see, I don't have a solution, but I got a little further: 2 out of 3 \ref commands worked as expected. In this case it didn't work if there was a line and marks. Maybe this helps others to look deeper into the issue. ...

2

First: you can ignore the warning, or you can set xmin and xmax symmetric about 0, for example xmin=-1 and xmax=1. Second: the bounding box of your picture is enlarged to the left by the long plot title. So with \raggedright the plot title is left-aligned, and you have to change the position of the plot title. Code: \documentclass[paper=a4, parskip=half-, ...

2

Update: Here is another suggestion, without the package titlesec. Now there are no rules below the headings. \documentclass[a4paper,12pt,oneside]{scrbook}[2015/10/03] \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage[bitstream-charter]{mathdesign} \usepackage[scaled]{berasans} \usepackage[scaled]{beramono} \usepackage[english]{babel} ...

0

It's easier if you replace the command with a node, so you can use that to draw everything else. The line that goes over the text width is kind of manual for now, meaning that it spans the text width, and then the length of the node, plus the inner sep, is removed from that value. Output Code \documentclass[11pt]{book} ...

3

I can imagine why you might want to stick with TikZ syntax for self-confidence, familiarity and so on, but I would still recommend pgfplots for this, or at least TikZ's own graphdrawing library. Anyway, for the inner sep / outer sep stuff, maybe a visualization might help. The node contents are put in a placeholder (an \hbox or minipage environment and then ...

3

As Peter said, you should add at to specify the coordinates. However, you don't need these extra nodes. You can add the nodes directly to the "bars" above. You can add nodes to any path, and the rectangle is still a path. Output Code \documentclass[margin=10pt]{standalone} \usepackage{tikz} \begin{document} \begin{tikzpicture} \shade[top ...

4

The lines do end at the nodes; you have to consider that nodes have some padding (inner sep, outer sep), and lines are drawn to the edge of the node, not the center. Add draw to the node options and you'll see this: \documentclass{standalone} \usepackage{tikz} \begin{document} \begin{tikzpicture} \draw (4,2) node(p1)[draw,label={[label ...

5

In the second part of this answer a custom coordinate system was given. This can be used to plot the grids (albeit a bit slowly). The other requirements (not done here) involve re-orienting the x, y, and z vectors, and changing the content and positioning of the labels. \documentclass[tikz,border=5]{standalone} \usetikzlibrary{arrows} \tikzset{declare ...

3

Just about does it (although without the axes): \documentclass[tikz,border=5]{standalone} \usetikzlibrary{decorations.pathreplacing} \tikzset{arrow path/.style={decoration={show path construction, lineto code={ \path [->, every lineto/.try] (\tikzinputsegmentfirst) -- (\tikzinputsegmentlast); }}, decorate}, every lineto/.style={draw, ...

5

As you have named the coordinates, just loop over their names: \documentclass[margin=10pt]{standalone} \usepackage{tikz} \usetikzlibrary{arrows,calc} \begin{document} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(-0.5,-1) rectangle (5.5,3.5); % defining coordinates \coordinate (1) at (0,0); \coordinate (2) at ...

3

Here's how I would probably actually do this. I don't suggest this is an obvious solution, but the code is succinct and can be easily tweaked for the entire diagram. If anybody wishes to try this at home, let me know and I will give you a copy of the experimental package it uses. (I hope to get its sister onto CTAN shortly, and maybe this one as well, but ...

2

I'm not quite sure what you mean by "half dashed line"... if it is composed of two lines, one solid and one dashed (at a small distance from the solid one), see if this solves your problem: \begin{scope}[on background layer] % half dashed line \draw (n1) -- (n13); \draw[dashed] ($(n1)!2pt!90:(n13)$) edge ($(n13)!2pt!-90:(n1)$); \end{scope} This addition ...

3

You might use \bbordermatrix from \bordermatrix with brackets [ ] instead of parentheses ( ): \documentclass{article} \usepackage{xcolor} \usepackage{tikz} \usepackage{etoolbox} \usetikzlibrary{arrows,matrix,positioning,fit,arrows.meta,} \definecolor{ocre}{RGB}{0,173,239} \tikzset{% highlight/.style={rectangle,rounded corners,fill=ocre!50,draw, fill ...

3

This is rather close to just re-asking the same question, but if you want to highlight a column in the first array rather than the second, just put the marks there. Or egreg wants me to do this: \documentclass[11pt]{book} \usepackage{amsmath,amsfonts,amssymb,amsthm} \usepackage{xcolor} \definecolor{ocre}{RGB}{0,173,239} ...
7 So the problem with the code is that \subnode is never defined. You can get it defined by loading the tikzmark library. However, \newcommand\tikzmark... will then fail as the library defines the standard \tikzmark command. This problem can be avoided by simply choosing a different macro name, such as \mytikzmark. You cannot, however, use \mytikzmark or ... 2 An alternative possible solution with TikZ: It is generated by the following (to my opinion very concise) code: \documentclass{amsart} \usepackage{amsmath,amssymb} \usepackage{tikz} \usetikzlibrary{arrows,calc,positioning} % for show only a picture \usepackage[active,tightpage]{preview} \PreviewEnvironment{tikzpicture} ... 2 Another option is to use a LaTeX box --- never had problems with this (although you probably can't connect to "internal" objects). For simple cases it's quite easy: \documentclass{standalone} \usepackage{tikz} \usetikzlibrary{arrows,positioning,calc} \begin{document} \newsavebox{\genericfilt} \savebox{\genericfilt}{% \begin{tikzpicture}[font=\small, ... 2 You must move the "free variable" node ; arrow will automatically follow. \documentclass[11pt]{book} \usepackage{amsmath,amsfonts,amssymb,amsthm} \usepackage{xcolor} \definecolor{ocre}{RGB}{0,173,239} \usepackage{blkarray} \makeatletter \renewcommand*\env@matrix[1][*\c@MaxMatrixCols c]{% \hskip -\arraycolsep \let\@ifnextchar\new@ifnextchar ... 4 This answer will focus on the funnel object, at least for the moment. Changes: Fixed funnel shapes. Removed one \foreach statement and included it in the previous one. Better node positioning. Output Code \documentclass{article} \usepackage[margin=2cm]{geometry} \usepackage{pgfplots} \definecolor{myellow}{RGB}{228,212,0} ... 1 Some very simple cross-sections will be possible in PGFplots and PSTricks (though I am not very familiar with the latter); however, arbitrary cross-sections is perhaps out of the league of these packages. In the case of PGFplots, it can handle 3D plots of the form z = f(x,y) quite well, but more complicate surfaces (such as parametric plots) will often lead ... 2 Instead of TiKZ-matrix you prefer to use tcolorboxes. Text is easily compound inside a tcolorbox than inside a node. With a tcbraster your boxes can be distributed like a matrix. And, of course, tikzmark is compatible with them. \documentclass{article} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{lmodern} ... 1 I've changed below to right and changed inner sep= to 3pt in your code: \documentclass[12pt, a4paper]{article} \usepackage[a4paper,top=1 in,bottom=1 in,left=0.7 in,right=0.7 in]{geometry} \usepackage[utf8]{inputenc} %\usepackage[misc]{ifsym} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{graphicx} \usepackage{tikz} ... 3 A shading will do it. Here a custom horizontal shading is used to (try to) avoid sharp lines at the edge of the of shaded region. How successful this is may be viewer dependent. Also, the shading is put on a background layer so it doesn't cover the lines: \documentclass[tikz,border=5]{standalone} \usepackage{tikz-3dplot} \usetikzlibrary{backgrounds} ... 2 This is a bit modified solution from the follow-up question's solution. I dedicded to draw the "steps" on top of the graphs, because this space is unused. Then also the vertical lines don't cross the xticklabels of the first axis environment. And I think it looks odd to have a full fill but an interrupted drawing. Please have a look at the comments of the ... 
3 The graph drawing library is probably one of the more complex parts and on top of this, this particular diagram is a particularly complex diagram too. Looking through your code, there are a few issues I can identify: Firstly, there are quite a few superfluous libraries. This is not detrimental, but not exactly recommended either; The graph drawing ... 5 The simplest way is write matrix as TikZ matrix and add desired column frame and note to it: \documentclass[11pt]{book} \usepackage{amsmath,amsfonts,amssymb,amsthm} \usepackage{tikz} \usetikzlibrary{arrows.meta,bending,matrix,positioning} \begin{document} \begin{tikzpicture}[ node distance=1mm and 0mm, baseline] ... 1 Here is a more "automated" way of Zarkos anwer, where you don't have to draw all the "step lines" by hand. For details have a look at the comments in the code. \documentclass[border=2mm,many]{standalone} \usepackage{pgfplots} \usepackage{pgfplotstable} \pgfplotsset{compat=1.11} \usepackage{filecontents} \begin{filecontents}{data.txt} Time ... 0 For the shown example I am not convinced that you need to draw the "existing figure" outside the PGFPlots axis environment. But even if it should really be the case, you can match the PGFPlots coordinate system to the one of tikz and then use all of the possibilities PGFPlots offers to draw the heatmap. I also want to mention, that I would draw it the other ... 5 Fill the empty cell: \documentclass{article} \usepackage{tikz} \usepackage{booktabs} \usetikzlibrary{calc} \newcommand{\tikzmark}[1]{\tikz[overlay,remember picture] \node (#1) {};} \newcommand{\DrawBox}[3][]{% \tikz[overlay,remember picture]{ \draw[black,#1] ($(#2)+(-0.5em,2.0ex)$) rectangle ($(#3)+(0.75em,-0.75ex)$);} } ... 2 Your MWE compile without error but result seems to be different from your sketch ... Since you already got answer on your similar questions, which answer also solve problems you emphasized in question, I made (on basis of mine previous answers) the following flowchart: with following code: \documentclass{article} \usepackage{tikz} ... 2 This solution uses tikzmark and involves turning the itemize environment into an enumerate using the label option of enumitem. The list looks just the same, but the item number is used to turn the bullets into sub-nodes which can be referenced later in the picture. \documentclass[tikz, border=10pt, multi]{standalone} \usepackage{calc,enumitem} ... 19 I couldn't resist, so here's a solution using pgfplots (and some tikz), plus arara for creating the .gif animation. Output Click for bigger size Code % arara: animate: {density: 160, delay: 8} \documentclass[tikz]{standalone} \usepackage{amsmath,amssymb} \usepackage{pgfplots} \pgfplotsset{compat=1.13} \usepgfplotslibrary{fillbetween} \begin{document} ... 22 Like this? \documentclass[tikz]{standalone} \usepackage{tikz} \begin{document} \foreach \angle in {0,10,...,360} { \begin{tikzpicture} % fill circle and plot \fill[blue!50] (-1,0) arc (0:\angle:1) -- (-2,0) -- cycle; \fill[blue!50] plot[smooth,domain=0:\angle] (pi/180*\x,{sin(\x)}) |- (0,0); % draw connection \draw (-2,0) +(\angle:1) ... 1 This is an area where the tkz-euclide package excels: \documentclass[border=5mm]{standalone} \usepackage[dvipsnames]{xcolor} \usepackage{tkz-euclide} \usetkzobj{all} \begin{document} \begin{tikzpicture} % Set up the canvas \tkzInit[xmin=0, xmax=7, ymin=0, ymax=4.5] % Clip things outside the canvas \tkzClip[space=0.5] % Define two starting points on a ... Top 50 recent answers are included
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9267276525497437, "perplexity": 3965.89923595927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701160950.71/warc/CC-MAIN-20160205193920-00093-ip-10-236-182-209.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-algebra/163286-singular.html
# Singular

1. ## Singular

If $V$ is finite-dimensional over $F$ and $T$ belongs to $A(V)$, then $T$ is singular iff there exists $v \neq 0$ such that $vT = 0$.

Please prove the reverse part, that is: if $vT = 0$ for some $v \neq 0$, then $T$ is singular.

Thanks

2. Originally Posted by prashantgolu (quoted above)

Tonio

3. By singular I mean that it is either left invertible or right invertible, but not both-sided.

4. Originally Posted by prashantgolu
By singular I mean that it is either left invertible or right invertible, but not both-sided.

Weird definition...but never mind: if $T$ were invertible then there'd exist $S\in A(V)$ s.t. $ST =TS=I$, with $I$ the identity (operator or matrix, it doesn't matter), and then $vT=0\Longrightarrow 0=(vT)S=v(TS)=vI=v$, contradicting $v\neq 0$. Hence $T$ cannot be invertible, i.e. it is singular.

Tonio
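For a concrete illustration (my addition, not from the thread), take $V = F^2$ with row vectors acted on by matrices on the right:

$v = \begin{pmatrix} 0 & 1 \end{pmatrix}, \qquad T = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad vT = \begin{pmatrix} 0 & 0 \end{pmatrix}.$

Here $v \neq 0$ and $vT = 0$, and $T$ is indeed singular: $\det T = 0$, so no two-sided inverse exists.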
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.971866250038147, "perplexity": 3424.6106409626937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663739.33/warc/CC-MAIN-20140930004103-00011-ip-10-234-18-248.ec2.internal.warc.gz"}
https://electronics.stackexchange.com/questions/34122/how-to-shift-carrier-frequency-in-qam-signal
# How to Shift Carrier Frequency in QAM Signal?

If I wanted to apply a 100 MHz frequency shift to a QAM signal with a 400 MHz carrier frequency, I would

1. Demodulate it at the carrier frequency
2. Apply a frequency shift to the original signal
3. Modulate it again at the new carrier frequency

Is there a way to apply a frequency shift to the IQ values of a QAM signal without demodulating it?

• Is there any reason why plain heterodyning (multiply by sinusoid at difference of carrier frequencies and filter to remove unwanted image signal) does not work? – Dilip Sarwate Jun 18 '12 at 17:36
• @DilipSarwate It looks like this'll work. Could you write up a little bit more about heterodyning in an answer and I'll accept it. Since it creates extra harmonics, will it degrade the signal after filtering? – Atav32 Jun 18 '12 at 18:56

I am not sure what the phrase "to correct frequency offset" in the title of this question means. Does it mean that the carrier frequency is supposed to be $10$ MHz but actually is $10.001$ MHz, that is, off by $1$ kHz, and what is wanted is a method to fix this problem? If so, the method described below will not work.

Frequency translation by substantial amounts, e.g. changing a $10$ MHz carrier to, say, $455$ kHz, is generally accomplished by heterodyning or mixing the signal with another carrier signal at a different frequency and bandpass filtering the mixer output. Suppose that the QAM signal at carrier frequency $f_c$ Hz is $$x(t) = I(t)\cos(2\pi f_c t) - Q(t)\sin(2\pi f_c t)$$ where $I(t)$ and $Q(t)$ are the in-phase and quadrature baseband data signals. The spectrum of the QAM signal occupies a relatively narrow band of frequencies, say, $\left[f_c-\frac{B}{2}, f_c+ \frac{B}{2}\right]$, centered at $f_c$ Hz. Multiplying this signal by $2\cos(2\pi\hat{f}_ct)$ and applying the trigonometric identities \begin{align*}2\cos(C)\cos(D) &= \cos(C+D) + \cos(C-D)\\ 2\sin(C)\cos(D) &= \sin(C+D) + \sin(C-D) \end{align*} gives us \begin{align*} 2x(t)\cos(2\pi \hat{f}_ct) &= \quad \left(I(t)\cos(2\pi (f_c +\hat{f}_c) t) - Q(t)\sin(2\pi (f_c+\hat{f}_c)t)\right)\\ &\quad +\ \left(I(t)\cos(2\pi (f_c-\hat{f}_c)t) - Q(t)\sin(2\pi(f_c- \hat{f}_c)t)\right) \end{align*} which is the sum of two QAM signals with identical data streams but different carrier frequencies, shifted up and down by $\hat{f}_c$ Hz from the input carrier frequency $f_c$. The frequency spectra of these two QAM signals occupy bands of width $B$ Hz centered at $f_c+\hat{f}_c$ and $f_c-\hat{f}_c$ respectively, and if $$f_c-\hat{f}_c + \frac{B}{2} < f_c+\hat{f}_c - \frac{B}{2} \Rightarrow \hat{f}_c > \frac{B}{2},$$ then bandpass filtering can be used to eliminate one of the two QAM signals while retaining the other. Needless to say, if the frequency shift is much larger than the QAM signal bandwidth, that is, if $\hat{f}_c \gg B/2$, then the task of designing and implementing the bandpass filter is easier. Note also that this method cannot be used to correct small frequency offsets, because the two QAM signals produced at the mixer output will have overlapping spectra and cannot be separated by filtering.

• You have a good point that I wasn't clear enough in my question. Although I'm actually looking to shift the frequency by a tiny amount (300 Hz out of 100 MHz), I'll edit the question because this answer is great. Thanks! – Atav32 Jun 22 '12 at 15:24
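The mixing-and-filtering recipe above is easy to prototype numerically. Below is a minimal sketch (my addition, not part of either answer); it assumes Python with numpy/scipy, and the sample rate, placeholder data streams, and filter band are illustrative choices rather than values from the thread:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 4e9                          # sample rate, Hz (assumed; must exceed 2*(fc + f_lo))
fc, f_lo = 400e6, 100e6           # carrier and mixing frequencies from the question
t = np.arange(0, 2e-6, 1/fs)

# Toy QAM passband signal x(t) = I(t) cos(2*pi*fc*t) - Q(t) sin(2*pi*fc*t)
I = np.sign(np.sin(2*np.pi*1e6*t))        # placeholder +/-1 data streams
Q = np.sign(np.cos(2*np.pi*1e6*t))
x = I*np.cos(2*np.pi*fc*t) - Q*np.sin(2*np.pi*fc*t)

# Heterodyne: 2*x(t)*cos(2*pi*f_lo*t) produces images at fc + f_lo and fc - f_lo
mixed = 2*x*np.cos(2*np.pi*f_lo*t)

# Bandpass filter keeps the upper image at fc + f_lo = 500 MHz
sos = butter(6, [450e6, 550e6], btype='bandpass', fs=fs, output='sos')
y = sosfiltfilt(sos, mixed)       # QAM signal, same data, new 500 MHz carrier
```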
Yes, it is possible for a Doppler blue-shift only. If you have a red shift then you cannot predict the future, and for that correction the system would need infinite memory capacity.

Imagine an infinite queue fed with the blue-shifted signal on one end, and a consumer of the queue retransmitting the signal on a corrected carrier. The requirement for a queue comes from the phase component: say the amplitude component is left intact by the Doppler shift; the frequency shift is then simply an ever-running phase lag/boost. Since the system needs an infinite resource, it is impractical. It is more practical to build a queue that stores demodulated information, which is what your system would do in your description of steps 1-2-3.

There is a subtle problem with the Doppler shift of the data rate: the data-rate shift remains even after you have corrected the carrier shift. That is what the queue would also be needed for, if you want to reconstruct the data rate as well. In all practical systems the queue has a capacity as large as the packet. If your source emits an infinite packet, then perfect correction is impossible, both for capacity reasons and because the future is unpredictable.

There is a funny paradox related to modulation: say someone sends a single AM CW packet of fixed frequency. According to Fourier analysis it must be possible to detect the carrier and sidebands of the signal at ANY given time, including $-T$ (predicting the future), because the signal is exactly a series of sinusoids that are infinite in time. Infinite means that the sinusoids existed at all times before, during, and after the signal was sent.
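For the small-offset case raised in the comments (300 Hz out of 100 MHz), where the mixing approach fails because the two images overlap, the shift can instead be applied directly to the complex baseband IQ samples by a progressive phase rotation. A minimal sketch (my addition; the sample rate and toy QPSK symbols are assumptions):

```python
import numpy as np

fs = 1e6                        # IQ sample rate, Hz (assumed)
df = 300.0                      # desired frequency shift, Hz (from the comments)
n = np.arange(100_000)

# Toy QPSK symbol stream standing in for the I/Q samples
iq = np.random.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=n.size)

# Multiplying by exp(j*2*pi*df*n/fs) rotates I/Q at df Hz: an exact
# frequency shift applied without ever demodulating the data
shifted = iq * np.exp(2j*np.pi*df*n/fs)
```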
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8140876293182373, "perplexity": 1123.4044010902069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670597.74/warc/CC-MAIN-20191120162215-20191120190215-00435.warc.gz"}
https://www.physicsforums.com/threads/classical-circular-polarization-vs-photon-spin-eigenstates.782884/
Classical Circular Polarization vs. Photon Spin Eigenstates

1. Nov 18, 2014 – referframe
Hello. Given an electromagnetic wave that is, from a classical point of view, not circularly polarized: does that correspond in QM to photons in the ZERO spin eigenstate?

2. Nov 18, 2014 – Orodruin (Staff Emeritus)
Photons do not have a spin 0 eigenstate, since they are massless spin 1 particles. Also be very careful in trying to treat classical fields as photons. Classical fields are generally described by coherent quantum states and not states carrying a particular number of photons.

3. Nov 18, 2014 – Avodyne
No, it corresponds to a superposition of states with spin +1 and -1, just like the classical wave can be written as a superposition (sum) of circularly polarized waves.

4. Nov 18, 2014 – referframe
So, by "spin +1" and "-1", are you referring to 2 of the 3 eigenvalues of the spin 1 operator/matrix for the Z direction?

5. Nov 19, 2014 – Orodruin (Staff Emeritus)
This is the thing: a massless spin 1 particle has only 2 eigenstates.

6. Nov 19, 2014 – referframe
I just got it. Thank you both.
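To make Avodyne's point concrete, here is a small addition (not from the thread, and sign/phase conventions vary between texts): a wave linearly polarized along $\hat{x}$ decomposes into equal parts of the two circular (helicity) polarizations,

$\hat{x} = \frac{1}{\sqrt{2}}\left(\hat{\epsilon}_+ + \hat{\epsilon}_-\right), \qquad \hat{\epsilon}_\pm = \frac{1}{\sqrt{2}}\left(\hat{x} \pm i\hat{y}\right),$

so the corresponding one-photon polarization state is $\frac{1}{\sqrt{2}}\left(|{+}1\rangle + |{-}1\rangle\right)$: an equal superposition of the two helicity eigenstates rather than a spin-0 state.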
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9357464909553528, "perplexity": 1457.3621180325933}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948581033.57/warc/CC-MAIN-20171216010725-20171216032725-00155.warc.gz"}
http://www.scholarpedia.org/article/Hamiltonian_dynamics
# Hamiltonian Systems

Curator: James Meiss

A dynamical system of $$2n$$ first order, ordinary differential equations $\tag{1} \dot z=J\nabla H(z,t),\quad J= \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix} \;,$ is an $$n$$ degree-of-freedom (d.o.f.) Hamiltonian system (when it is nonautonomous it has $$n + 1/2$$ d.o.f.). Here $$H$$ is the "Hamiltonian", a smooth scalar function of the extended phase space variables $$z$$ and time $$t$$, the $$2n \times 2n$$ matrix $$J$$ is the Poisson matrix and $$I$$ is the $$n\times n$$ identity matrix. The equations naturally split into two sets of $$n$$ equations for canonically conjugate variables, $$z = (q,p)$$, i.e. $\tag{2} \dot q=\partial H/\partial p,\quad \dot p=-\partial H/\partial q \;.$ Here the $$n$$ coordinates $$q$$ represent the configuration variables of the system (e.g. positions of the component parts) and their canonically conjugate momenta $$p$$ represent the impetus gained by movement. Hamiltonian systems are universally used as models for virtually all of physics.

## Formulation

In 1834 William Rowan Hamilton showed that Newton's equations $$F = ma$$ for a set of particles in a conservative force field $$F = -\nabla V$$ with "potential energy" $$V$$ could be derived from a single function that he called the "characteristic function", $\tag{3} H(q,p) = \sum_{i=1}^n \frac{|p_i|^2}{2m_i} + V(q_1,q_2,\ldots,q_n) \;.$ Here $$q_i$$ is the position of the $$i^{th}$$ particle whose mass is $$m_i$$, and $$p_i$$ is its canonical momentum $$p_i = m_i \dot{q}_i$$. The equations of motion are obtained by (2), which can in turn be converted to Newton's second order form by differentiating the equation $$\dot{q}_i = {p_i}/{m_i}$$.

At first it seems that Hamilton's formulation gives only a convenient restatement of Newton's system; the convenience is perhaps most evident in that the scalar function $$H(q,p)$$ encodes all of the information of the $$2n$$ first order dynamical equations. However, a Hamiltonian formulation gives much more than just this simplification. Indeed, if we allow more general functions $$H(q,p,t)$$ and a more general relationship between the canonical momenta and the velocities $$\dot{q}$$, then virtually all of the models of classical physics have a Hamiltonian formulation, including electromagnetic forces, which are not derivable from a (scalar) potential. Moreover, waves in inviscid fluids such as surface water waves or magnetohydrodynamic waves also have a Hamiltonian (PDE) formulation. Quantum mechanics is formally obtained from classical mechanics by replacing the canonical momentum in the Hamiltonian by a differential operator.

Hamiltonian structure provides strong constraints on the flow. Most simply, when $$H$$ does not depend upon time (autonomous) then its value is constant along trajectories: the energy $$E = H(q,p)$$ is constant, see Energy Conservation. Similarly, if the Hamiltonian is independent of one of the configuration variables (the variable is ignorable), then (2) implies that the corresponding canonical momentum is an invariant. This gives a simple explanation for the relation between symmetries (for example rotational symmetry) and invariants (for example angular momentum); see Noether's Theorem. One of the stronger constraints imposed by Hamiltonian structure relates to stability: it is impossible for a trajectory to be asymptotically stable in a Hamiltonian system.
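As a quick numerical illustration of the energy constraint (my addition, not part of the original article): integrating Hamilton's equations (2) for a one-particle Hamiltonian of the form (3) and evaluating $$H$$ along the trajectory shows the energy staying constant to integration tolerance. The quartic potential, initial condition, and use of scipy are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-particle instance of (3): H = p^2/(2m) + V(q), with an illustrative
# double-well potential V(q) = q^4/4 - q^2/2 (an arbitrary choice)
m = 1.0
V = lambda q: 0.25 * q**4 - 0.5 * q**2
dV = lambda q: q**3 - q

def hamilton(t, z):              # Hamilton's equations (2): qdot = dH/dp, pdot = -dH/dq
    q, p = z
    return [p / m, -dV(q)]

sol = solve_ivp(hamilton, (0.0, 100.0), [1.3, 0.0], rtol=1e-10, atol=1e-10)
q, p = sol.y
H = p**2 / (2 * m) + V(q)
print(np.ptp(H))                 # spread of H along the orbit: tiny, i.e. E is conserved
```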
Even more structure applies: for each eigenvalue $$\lambda$$ of an equilibrium there is a corresponding opposite eigenvalue $$-\lambda$$. For example, an equilibrium of a one degree-of-freedom system must either be a center (two imaginary eigenvalues, $$\pm i\omega$$) or a saddle (two real eigenvalues, $$\pm\lambda$$), or have a double zero eigenvalue. Another geometric implication is that knowledge of $$n$$ invariants is enough to fully characterize a solution of the $$2n$$ equations for an $$n$$ degree-of-freedom system, i.e., the Hamiltonian is integrable. This follows from Liouville's Integrability Theorem. Moreover, if the orbits of such a system are bounded, then almost all of them must lie on $$n$$-dimensional tori. Kolmogorov, Arnold and Moser proved that a sufficiently smooth, nearly-integrable Hamiltonian system still has many such invariant tori (see KAM theory). This strong structural stability of Hamiltonian dynamics was unexpected even in the middle of the $$20^{th}$$ century, when physicists began the first computer simulations of dynamical systems (see Fermi Pasta Ulam problem).

## Examples

For many mechanical systems, the Hamiltonian takes the form $$H(q,p) = T(q,p) + V(q)$$, where $$T(q,p)$$ is the kinetic energy and $$V(q)$$ is the potential energy of the system. Such systems are called natural Hamiltonian systems. The simplest case is when the kinetic energy is of the form in (3) for a set of particles with kinetic momenta $$p_i \in \mathbb{R}^3$$ and masses $$m_i$$. More generally, when the extent of the bodies is taken into account, the kinetic energy can depend upon the configuration of the system, but it is typically a quadratic function of the momenta, so that $$T(q,p) = \frac12 p^T M(q)^{-1} p$$, where the $$n \times n$$ mass matrix $$M(q)$$ represents the shape as well as the inertia of the system, and the vector $$p \in \mathbb{R}^n$$ includes both linear momenta and angular momenta.

### Springs

Figure 1: Coupled Springs

A harmonic spring has potential energy of the form $$\frac{k}{2}x^2$$, where $$k$$ is the spring's force coefficient (the force per unit length of extension) or the spring constant, and $$x$$ is the length of the spring relative to its unstressed, natural length. Thus a point particle of mass $$m$$ connected to a harmonic spring with natural length $$L$$ that is attached to a fixed support at the origin and allowed to move in one dimension has a Hamiltonian of the form $$H(q,p) = \frac{1}{2m} p^2 + \frac{k}{2}(q-L)^2$$ and thus its equations of motion are $\dot{q} = p/m \;, \quad \dot{p} = -k(q-L) \;.$ If the spring is hanging vertically in a constant gravitational field, then the new equations are obtained by simply adding the gravitational potential energy $$m g q$$ to $$H$$. A set of point masses that are coupled by springs has potential energy given by the sum of the potential energies of each spring in the system. For example, suppose that there are two masses connected to three springs as shown in (Figure 1).
The Hamiltonian is $H(q,p) = \frac{1}{2m_1} p_1^2 + \frac{1}{2m_2} p_2^2 + \frac{k_1}{2}q_1^2 + \frac{k_2}{2} (q_2-q_1)^2 + \frac{k_3}{2}(L-q_2)^2 \;.$ One advantage of the Hamiltonian formulation of mechanics is that the equations for arbitrarily complicated arrays of springs and masses can be obtained by simply finding the expression for the total energy of the system. (However, it is often easier to do this using the Lagrangian formulation of mechanics, which does not require knowing the form of the canonical momenta in advance.)

### Pendulum

Figure 2: Planar Pendulum

The ideal, planar pendulum is a particle of mass $$m$$ in a constant gravitational field, attached to a rigid, massless rod of length $$L$$, as shown in (Figure 2). The canonical momentum of this system is the angular momentum $$p = mL^2 \dot{\theta}$$ and the potential energy is the gravitational energy $$-mgL \cos \theta$$, where $$\theta$$ is the angle from the vertical. The Hamiltonian is $\tag{4} H(\theta,p) = \frac{p^2}{2mL^2} - mgL \cos\theta \;.$ This gives the equations of motion $\dot{\theta} = \frac{p}{mL^2} \;,\quad \dot{p} = -mgL \sin \theta \;.$ While these equations are simple, their explicit solution requires elliptic functions. However, the trajectories of the pendulum are easy to visualize since the energy is conserved, see (Figure 3). When the energy is below $$mgL$$, the angle cannot exceed $$\pi$$ and the pendulum oscillates. Since the energy is conserved, the orbit must be periodic. For energies larger than $$mgL$$, the pendulum rotates, and the angle either monotonically grows with time (if the angular momentum is positive) or decreases (negative $$p$$). The critical level set is the separatrix; the two orbits on this level set asymptotically approach the equilibrium $$(\pm\pi,0)$$ as $$t \to \pm \infty$$. These are called homoclinic orbits.

Figure 3: Phase Space of the Pendulum

### N-body problem

A set of point masses interacting by Newton's gravitational force is also a Hamiltonian system of the natural form (3) with potential energy $V(q_1,\ldots q_n) = - \sum_{i<j} \frac{Gm_im_j}{||q_i-q_j||}$ where $$q_i \in \mathbb{R}^3$$ is the position of the $$i^{th}$$ body. In addition to the conserved energy $$H = E$$, this system has additional conserved quantities. Since $$H$$ is a function only of the difference between particle positions, the total momentum $\tag{5} P = \sum_{i=1}^n p_i$ is conserved. Since $$H$$ is a function only of the distance between the bodies, the total angular momentum is also conserved: $L = \sum_{i=1}^n q_i \times p_i$ For the case of two bodies, the Hamiltonian has six degrees of freedom (the three components of the position and momentum for each body); however, the conservation of total momentum means that if we choose coordinates moving with the center of mass $Q = \frac{1}{M} \sum_{i=1}^n {m_i q_i}$ where $$M = \sum m_i$$ is the total mass, then the Hamiltonian is independent of $$Q$$, so that its conjugate momentum (5) is constant. Thus the system is reduced to three degrees of freedom, depending only upon the inter-particle vector $$q = q_1-q_2$$ and its conjugate momentum $$p = \mu \dot{q}$$, where $$\mu = \frac{m_1m_2}{M}$$ is the reduced mass.
In these coordinates the Hamiltonian becomes $\tag{6} H(q,Q,p,P) = \frac{P^2}{2M} + \frac{p^2}{2\mu} - \frac{Gm_1m_2}{||q||}$ The total angular momentum splits as well: $$L = Q\times P + q \times p$$. Since $$P$$ is constant, and $$\dot{Q} = P/M$$, the first term is itself individually conserved, so $$l = q \times p$$ is also constant, a fact that can also be seen from (6) directly. The dynamics of three or more bodies can be extremely complex.

### Electromagnetic Forces

A nonrelativistic charged particle in an electromagnetic field has the equations of motion $m \ddot{q} = e E(q,t) + \frac{e}{c}\dot{q} \times B(q,t)$ where $$E$$ is the electric field, $$B$$ is the magnetic field, and we use Gaussian (cgs) units. This system is Hamiltonian, with $\tag{7} H(q,p,t) = \frac{1}{2m} \left (p - \frac{e}{c} A \right)^2 + e \phi ,$ where the scalar and vector potentials $$\phi$$ and $$A$$ are defined through $E = -\nabla \phi - \frac{1}{c}\frac{\partial A}{\partial t} , \quad B = \nabla \times A .$ The momentum occurring in (7) is not the kinetic momentum $$m \dot{q}$$, but rather a canonical momentum defined by $$p = m \dot{q} + \frac{e}{c}A$$. For systems that also have a Lagrangian formulation, the canonical momentum is defined by $p = \frac{\partial L(q,\dot{q})}{\partial \dot{q}} \;.$ Note that the first term in the Hamiltonian (7) is simply the kinetic energy as usual, and the last term is the electrical potential energy.

## Geometric Structure

Much of the elegance of the Hamiltonian formulation stems from its geometric structure. Hamiltonian phase space is an even dimensional space with a natural splitting into two sets of coordinates, the configuration variables $$q$$ and the momenta $$p$$. For most physical systems the momenta are similar to velocities, which are tangent vectors to trajectories, but the difference (emphasized in the electromagnetic example) is that they are cotangent vectors, as we will explain further below. In this case the Hamiltonian phase space is the cotangent bundle of the configuration space.

More abstractly, the phase space of a Hamiltonian system is an even dimensional manifold $$M$$ that is endowed with a nondegenerate two-form, $$\omega$$. This two-form allows us to define a pairing between vectors and covectors. Given a Hamiltonian function $$H: M \to \mathbb{R}$$, the Hamiltonian vector field $$\dot{z} = X(z)$$ is defined by $\tag{8} i_X \omega \equiv \omega(X,\cdot) = dH .$ This is just a coordinate-free version of (1). Indeed, a famous theorem of Darboux implies that near each point in $$M$$ there exists a set of canonical variables $$z = (q,p)$$, such that $\omega = dq \wedge dp \;,$ where $$\wedge$$ is the "wedge product". In terms of these coordinates, $$\omega(v,w) = v^T J w$$, where $$J$$ is the Poisson matrix (1), and the equations (8) become $J^{T} X = \nabla H ,$ which is a restatement of (1).

### Conservation of Energy

If a Hamiltonian does not depend explicitly on time, then its value, the energy, is constant. Indeed, differentiating along a trajectory gives $\frac{dH}{dt} = \frac{\partial H}{\partial q} \frac{dq}{dt} + \frac{\partial H}{\partial p} \frac{dp}{dt} = 0 ,$ by (2). Thus $$H(q(t),p(t)) = H(q(0),p(0)) = E$$. While Hamiltonian systems are often referred to as conservative systems, these two types of dynamical systems should not be confounded. In the autonomous case, a Hamiltonian system conserves energy; however, it is easy to construct nonHamiltonian systems that also conserve an energy-like quantity.
Moreover, in the nonautonomous case, the Hamiltonian depends explicitly on time $$H(q,p,t)$$ and there is no conserved energy.

### Liouville's Theorem

One direct consequence of the form (2) is that the divergence of a Hamiltonian vector field is zero: $\nabla \cdot X = \nabla \cdot J \nabla H = \sum_{i,j} J_{ij} \frac{\partial^2 H}{\partial z_i \partial z_j} = 0 ,$ since $$J$$ is antisymmetric and the Hessian matrix $$D^2H$$ is symmetric. This immediately implies that the volume of any bundle of trajectories is preserved. That is, suppose $$A$$ is a set of initial conditions with volume $V(A) = \int_A dz .$ If $$z$$ evolves to $$\varphi_t(z)$$, the flow of the vector field, then the new volume $V(\varphi_t(A)) = \int_{\varphi_t(A)} dz = \int_A |\det D\varphi_t(z)|\, dz$ is the same as the original volume $$V(A)$$, since the flow of a divergence-free vector field has unit Jacobian determinant. This is known as Liouville's theorem. It is valid for any divergence free vector field, $$\nabla \cdot X = 0$$. Note that Hamiltonian flow is volume preserving even when it is nonautonomous.

### Poincaré's Invariant

In addition to preserving volume, Hamiltonian systems also preserve a loop action, or Poincaré invariant. Given any loop $$L$$ in the extended phase space $$(q,p,t)$$, let $\tag{9} A(L) = \oint_L p dq - H(q,p,t)dt .$ Then under a Hamiltonian flow the loop action is preserved: $A(\varphi_t(L)) = A(L) .$ Even more generally, suppose $$T$$ is the two dimensional tube obtained from the flow of $$L$$: $$T = \{ \varphi_t(L): t \in \mathbb{R}\}$$, and $$L'$$ is any loop on $$T$$ that is homotopic to $$L$$. Then $$A(L') = A(L)$$. This fact is used, for example, in the construction of a Poincaré section for Hamiltonian systems.

### Symplectic Maps

A map $$f: M \to M$$ is symplectic if it preserves the symplectic form $$\omega$$. Geometrically, we say that $$f^*\omega = \omega$$, which becomes in components $\tag{10} Df^T J Df = J$ where $$Df$$ is the $$2n \times 2n$$ Jacobian matrix $Df(q,p) = \begin{pmatrix} \frac{\partial f_q}{\partial q} & \frac{\partial f_q}{\partial p} \\ \frac{\partial f_p}{\partial q} & \frac{\partial f_p}{\partial p} \end{pmatrix} \;.$ The preservation of the loop action (9) implies that the time-$$T$$ map of any Hamiltonian flow is symplectic. This follows from Stokes's theorem and the fact that for a loop at a fixed value of time, the loop action reduces to $$\oint_L p dq$$. Note that this holds even if the Hamiltonian depends explicitly on time $$H(q,p,t)$$.

Another way in which symplectic maps arise is for Poincaré sections of autonomous Hamiltonian flows on an energy surface. For example, if the surface $$Q = \{(q,p): q_n = 0, \dot{q}_n > 0, H(q,p) = E\}$$ is selected, then the resulting return map to $$Q$$ is symplectic with the form $$\omega|_Q = \sum_{i=1}^{n-1} dq_i \wedge dp_i$$. This is especially useful for the visualization of the motion of a two-degree-of-freedom system, since the resulting map is two-dimensional.

The set of linear mappings that obey (10) is called the symplectic group; it is a Lie group. Any quadratic Hamiltonian $H(z) = \frac12 z^T K z \;,$ where $$K$$ is a (constant) symmetric matrix, has a linear flow that is generated by the exponential $$\Phi(t) = e^{tJK}$$. Each of the matrices in the curve $$\Phi(t)$$ is symplectic. Indeed, the collection $$\{JK: K^T = K\}$$ forms the Lie algebra of the symplectic group.

## Integrable Systems

A dynamical system is integrable when it can be solved in some way.
One (rather restrictive) way in which this can happen is if the flow of the vector field can be constructed analytically. However, since this can almost never be done (in terms of elementary functions), this is not an especially useful class of systems. There is, however, a class of Hamiltonian systems, action-angle systems, whose solutions can be obtained analytically, and there is a well-accepted definition of integrability for Hamiltonian dynamics, due to Liouville, in which each integrable Hamiltonian is (locally) equivalent to these action-angle systems.

### Action-Angle Variables

A Hamiltonian system is written in action-angle form if there is a set of canonical variables $$(\theta, I)$$, where $$\theta \in \mathbb{T}^n$$ and $$I \in \mathbb{R}^n$$, such that $$H$$ depends only upon the actions: $$H = H(I)$$. In this case the equations of motion (1) become simple indeed: $\tag{11} \dot{\theta} = \nabla H(I) = \Omega(I) \;, \quad \dot{I} = 0$ These equations can be easily solved, giving $(\theta(t), I(t)) = (\theta_o + \Omega(I_o) t , I_o) .$ Thus the angles move along the invariant torus $$I = I_o$$ with a fixed frequency vector $$\Omega$$.

For example, the simple harmonic oscillator Hamiltonian $H(q,p) = \frac12 (p^2 + q^2)$ can be written in action-angle form by setting $$(q,p) = (\sqrt{2I} \sin \theta, \sqrt{2I} \cos \theta)$$. The new variables are canonical since $$dq \wedge dp = d\theta \wedge dI$$ (i.e., the transformation is canonical). In the new coordinates the Hamiltonian becomes $$H(\theta, I) = I$$. Thus it is in action-angle form with $$\Omega = 1$$. A more general, anharmonic oscillator, with a natural Hamiltonian of the form (3) and a potential energy $$V(q)$$ with a unique minimum at $$q = 0$$, has a Hamiltonian that depends in a nonlinear way upon the action, but which nevertheless can be reduced to action-angle form. Hamiltonian systems with two or more degrees of freedom cannot always be reduced to action-angle form, giving rise to chaotic motion.

### Liouville Integrability

Liouville and Arnold showed that the motion in a larger class of Hamiltonian systems is as simple as that of (11). Suppose that an $$n$$ degree-of-freedom Hamiltonian system (2) has a set of $$n$$ invariants $$F_i$$ that are almost everywhere independent (their gradients span an $$n$$-dimensional space except on sets of zero measure) and that are in involution, that is, their Poisson brackets vanish: $\{F_i, F_j\} \equiv \omega(\nabla F_i, \nabla F_j ) = 0 \;.$ Then if a regular level set of the invariants $$L_c = \{ F_i(q,p) = c_i: i = 1,\ldots n \}$$ is compact, it must be a torus. Moreover, there is a neighborhood of $$L_c$$ in which there exist action-angle coordinates such that the equations of motion reduce to (11). See (Arnold, 1978). For example, every one degree-of-freedom, autonomous Hamiltonian system is Liouville integrable. However, the action-angle coordinates may not be globally defined. In the case of the pendulum (4), there are action variables away from the separatrix. Generically, the dynamics on an invariant torus are quasiperiodic.

## KAM Theory

Andrey Kolmogorov discovered a general method for the study of perturbed, integrable Hamiltonian systems. The method led to theorems by Vladimir Arnold for analytic Hamiltonian systems (Arnold, 1963) and by Jurgen Moser for smooth enough area-preserving mappings (Moser 1962), and the ideas have become known as KAM theory.
Roughly speaking, KAM theory implies that a Hamiltonian system of the form $H(\theta,I) = H_0(I) + \epsilon H_1(\theta,I) \;,$ which is integrable at $$\epsilon = 0$$, still has a large set of invariant tori if $$\epsilon$$ is small enough (a set whose measure approaches the total measure as $$\epsilon \to 0$$). In order that KAM theory apply, the Hamiltonian must be sufficiently smooth, and (for the simplest version of the theorem) the unperturbed Hamiltonian must satisfy a nondegeneracy or twist condition, that $$D^2H_0(I)$$ is nonsingular. For more details see Kolmogorov-Arnold-Moser Theory.

## Hamiltonian Chaos

Figure 4: Poincaré section at $$t = 2\pi k$$ of the two-wave Hamiltonian (12) for $$a = 4$$, $$b = 6$$ and $$\epsilon = 0.1$$. Resonant island chains with rotation numbers $$0/1, 1/2, 2/3, 4/5$$ and $$1/1$$ are shown.

Though many invariant tori of an integrable system persist upon a perturbation, tori that are commensurate or nearly commensurate are typically destroyed. Chaotic dynamics often occurs in the neighborhood of these destroyed tori. An invariant torus is characterized by its frequency vector $$\Omega$$. It is commensurate if there exists a nonzero integer vector $$m \in \mathbb{Z}^n$$ such that $m \cdot \Omega = 0$ Commensurate tori of an integrable system are generically destroyed by any perturbation. For example, consider the 1.5 degree-of-freedom system $\tag{12} H(q,p) = \frac12 p^2 + \epsilon( a \cos(2 \pi q) + b \cos(2\pi (q-t)))$ that represents the motion of (for example) a charged particle in the field of two electrostatic waves. Here the phase space can be taken to be $$\mathbb{T}^2 \times \mathbb{R}$$ since $$H$$ is a periodic function of $$q$$ and $$t$$. For $$\epsilon = 0$$, the momentum is constant and the orbits lie on two-dimensional tori with the frequency vector $$\Omega = (p,1)^T$$. Consequently, every torus with a rational value of $$p$$ is commensurate; indeed such orbits are periodic in this case. KAM theory implies that if $$p$$ is "sufficiently" irrational, then the torus is preserved for $$|\epsilon| \ll 1$$. However, commensurate tori and nearby irrational tori are destroyed.

For small $$\epsilon$$ the destroyed tori are replaced by chains of islands formed from a pair of periodic orbits, one a saddle and the other elliptic (see Stability of Hamiltonian Flows). Surrounding the elliptic orbit is a family of two-dimensional tori with a new topology (not homotopic to $$p = constant$$; see (Figure 4)). Moreover, the stable and unstable manifolds of the saddle typically intersect transversely, giving rise to a Smale horseshoe and chaotic motion (albeit chaos that is limited to a narrow layer about the separatrix). As $$\epsilon$$ grows these chaotic layers also grow, and they can envelop larger regions of phase space, see (Figure 5).

Figure 5: Poincaré section of the two-wave Hamiltonian (12) for $$\epsilon = 0.2$$.
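The sections in Figures 4 and 5 can be reproduced numerically by integrating (12) and recording $$(q \bmod 1, p)$$ once per period of the time dependence. The sketch below is my own illustration, not code from the article: it assumes Python with scipy/matplotlib, takes the drive period to be 1 (the period of $$\cos(2\pi(q-t))$$ in $$t$$), and uses an arbitrary grid of initial conditions.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

a, b, eps = 4.0, 6.0, 0.1               # parameters from Figure 4

def rhs(t, z):
    # Hamilton's equations for (12): qdot = dH/dp = p, pdot = -dH/dq
    q, p = z
    return [p, 2*np.pi*eps*(a*np.sin(2*np.pi*q) + b*np.sin(2*np.pi*(q - t)))]

for p0 in np.linspace(-0.5, 1.5, 20):   # arbitrary initial conditions
    z, pts = [0.5, p0], []
    for k in range(300):                # strobe once per drive period
        sol = solve_ivp(rhs, (k, k + 1), z, rtol=1e-9, atol=1e-9)
        z = sol.y[:, -1]
        pts.append((z[0] % 1.0, z[1]))
    pts = np.array(pts)
    plt.plot(pts[:, 0], pts[:, 1], '.', markersize=0.5)

plt.xlabel('q mod 1'); plt.ylabel('p'); plt.show()
```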
## References

Abraham, R. and J. E. Marsden (1978). Foundations of Mechanics. Reading, Benjamin.

Arnold, V. I. (1963). "Proof of a Theorem of A.N. Kolmogorov on the Invariance of Quasiperiodic Motions Under Small Perturbations of the Hamiltonian." Russ. Math. Surveys 18:5: 9-36.

Arnold, V. I. (1978). Mathematical Methods of Classical Mechanics. New York, Springer.

MacKay, R. S. and J. D. Meiss, Eds. (1987). Hamiltonian Dynamical Systems: a reprint selection. London, Adam-Hilgar Press.

McDuff, D. and D. Salamon (1995). Introduction to Symplectic Topology. Oxford, Clarendon Press.

Meyer, K. R. and G. R. Hall (1992). Introduction to the Theory of Hamiltonian Systems. New York, Springer-Verlag.

Moser, J. K. (1962). "On Invariant Curves of Area-Preserving Mappings of an Annulus." Nachr. Akad. Wiss. Göttingen, II Math. Phys. 1: 1-20.

Siegel, C. L. and J. K. Moser (1971). Lectures on Celestial Mechanics. New York, Springer-Verlag.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9743437170982361, "perplexity": 249.04636820806158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808935.79/warc/CC-MAIN-20171124195442-20171124215442-00450.warc.gz"}
http://cstheory.stackexchange.com/questions/152/dfa-intersection-in-subquadratic-space
# DFA intersection in subquadratic space?

The intersection of two (minimal) DFAs with $n$ states can be computed using $O(n^2)$ time and space. This is optimal in general, since the resulting (minimal) DFA may have $n^2$ states. However, if the resulting minimal DFA has $z$ states, where $z = O(n)$, can it be computed in space $n^{2-\epsilon}$, for some constant $\epsilon > 0$? I would be interested in such a result even for the special case where the input DFAs are acyclic.

- Um...if two n-state DFAs are acyclic, then each merely accepts a finite set of words of length at most n, in which case their intersection is just the intersection of the two labelled transition graphs, which will have n states and can be computed in linear time and space. Or am I missing something? – Joshua Grochow Aug 17 '10 at 14:50
- Yes, acyclic DFAs accept only a finite set of words. But there are examples of acyclic DFAs whose intersection has size $n^2$. E.g., think about one DFA that accepts strings of the form AABC (where A, B, C are strings of length k), and one that accepts strings of the form ABCC. – Rasmus Pagh Aug 17 '10 at 18:00
- retagging: cs.cc is an arxiv designation, so the given tags don't need the cs.cc prefix. – Suresh Venkat Aug 18 '10 at 1:59

The answer is yes, without any requirement on the size of the automaton. It can be computed in $O(\log^2 n)$ space even for $k$ DFAs where $k$ is a constant. Let $A_i = (Q_i, \Sigma_i, \delta_i, z_i, F_i)$ ($i \in [k]$) be $k$ DFAs. We show that, given $\langle A_1, \ldots, A_k \rangle$, computing the minimal DFA recognizing $\text{L}(A_1) \cap \cdots \cap\text{L}(A_k)$ can be done in $O(\log^2 n)$ space. We first prove some technical results.

Definition 1: Let $q, r$ be two states; then $q \equiv r$ iff $\forall w \in \Sigma^*$, $q . w \in F \Leftrightarrow r . w \in F$.

We now consider the automaton $A$ given by the classical Cartesian product construction. Let $q = (q_1, \ldots, q_k)$ and $r = (r_1, \ldots, r_k)$ be states of $A$.

Lemma 1: Deciding whether $q \equiv r$ is in NL.

Proof (sketch): We show that testing inequivalence is in NL and use NL = coNL. Guess a word $w \in \Sigma^*$ (one letter at a time) such that $q . w$ is a final state and $r . w$ isn't. This can be achieved by computing $q_i . w, r_i . w$ in log-space for $i \in [k]$ and using the fact that $q$ is final iff $q_i \in F_i \, \forall i \in [k]$. It can be shown that $q \not\equiv r$ implies the existence of a $w$ of poly-size.

Lemma 2: Deciding whether $q$ is (in)accessible is in NL.

Proof (sketch): Guess (poly-size) paths from $z_i$ to $q_i$ ($i \in [k]$).

Definition 2: Consider the states of $A$ in lexicographical order. Define $s(1)$ as the first accessible state and $s(i)$ as the first accessible state following $s(i-1)$ which isn't equivalent to any previous state. We define $c(q)$ as the unique $i$ such that $q \equiv s(i)$.

Lemma 3: $s(i)$ can be computed in $O(\log^2 n)$ space.

Proof (sketch): Definition 2 yields an algorithm. We use $k$ counters to iterate over the states. Let $j \leftarrow 0$ and let $q$ be the current state. At each state, we use Lemma 2 to verify whether $q$ is accessible. If it is, we loop over all previous states and verify whether any of them is equivalent to $q$. If there isn't any, we increment $j$ and output $q$ if $j = i$. Otherwise, we store $q$ as being $s(j)$ and we continue. Since we only store a constant number of counters and our tests can be carried out in $\text{NL} \subseteq \text{DSPACE}(\log^2 n)$, this completes the proof.
Corollary 1: $c(q)$ can be computed in $O(\log^2 n)$ space.

Theorem: Minimizing $A$ can be done in $O(\log^2 n)$ space.

Proof (sketch): Let $1 \leq m \leq |Q_1| \cdots |Q_k|$ be the largest $i$ such that $s(i)$ is defined (i.e., the number of classes of $\equiv$). We give an algorithm outputting an automaton $A' = (Q', \Sigma, \delta', z', F')$ where

• $Q' = \lbrace s(i) : i \in [m] \rbrace$;
• $F' = \lbrace q \in Q' : q_i \in F_i \, \forall i \in [k] \rbrace$;
• $z' = s(c(q))$ where $q = (z_1, \ldots, z_k)$.

We now show how to compute $\delta'$. For every $i \in [m], a \in \Sigma$, compute $q \leftarrow s(i) . a$ and output the transition $\left(s(i), a, s(c(q))\right)$. By Lemma 3 and Corollary 1, this algorithm runs in $O(\log^2 n)$ space. It can be checked that $A'$ is minimal and $\text{L}(A') = \text{L}(A)$.

- Nice algorithm! Here is a slightly different way to look at this algorithm. Its core is that the state minimization of any given DFA can be done in polynomial time and $O(\log^2 n)$ space. After that, it is easy to construct some DFA representing the intersection in logarithmic space (hence in polynomial time and $O(\log^2 n)$ space), and we can compose two functions computable in polynomial time and $O(\log^2 n)$ space (in a similar way to composing two logarithmic-space reductions), yielding the whole algorithm in polynomial time and $O(\log^2 n)$ space. – Tsuyoshi Ito Oct 2 '10 at 17:17
- I just saw this answer... I don't see why the algorithm runs in polytime and $O(\log^2 n)$ space simultaneously. Yes, $NL \subseteq P \cap DSPACE[\log^2 n]$, but it is not known if $NL \subseteq TISP[n^{O(1)}, \log^2 n]$ -- that is, we can get an algorithm running in polytime, and we can get another algorithm running in $O(\log^2 n)$ space, but I do not know how to solve $NL$ problems in polytime and $O(\log^2 n)$ space with a single algorithm. – Ryan Williams Jan 10 '13 at 16:57
- You are right, I don't know how either. I posted this a long time ago, so I'm not sure why I wrote it this way, but perhaps I meant "polynomial time or $O(\log^2 n)$ space". I will edit it because it is misleading. Thank you! – Michael Blondin Jan 14 '13 at 18:53

Dick Lipton and colleagues recently worked on this problem, and Lipton blogged about it here: http://rjlipton.wordpress.com/2009/08/17/on-the-intersection-of-finite-automata/

It appears that doing better than $O(n^2)$ is open even for the very special case of determining if the DFA intersection defines the empty language. The paper gives complexity consequences that would result from a much-improved algorithm handling not just 2 DFAs in the intersection, but larger numbers as well.

- and what about lower bounds? – Marcos Villagra Aug 17 '10 at 23:34
- Just to clarify the question: I'm happy to spend $O(n^2)$ time (or maybe even $n^{O(1)}$ time) to improve the space bound. – Rasmus Pagh Aug 20 '10 at 7:58

If you're given $k$ DFAs ($k$ is part of the input) and wish to know if their intersection is empty, this problem is PSPACE-complete in general:

Dexter Kozen: Lower Bounds for Natural Proof Systems. FOCS 1977: 254-266.

Perhaps if you carefully study this proof (and similar constructions by Lipton and his co-authors), you might find some sort of space lower bound even for fixed $k$.

- Thanks for this pointer. I'm guessing that this could possibly lead to an $n^{\Omega(1)}$ space lower bound on the additional space needed, apart from the input. But could it possibly lead to a super-linear space lower bound? – Rasmus Pagh Aug 20 '10 at 8:01
- Hmmm...
Hmmm... intuition says that the $k$ DFAs together can encode all possible configurations of a machine using space $O(k \log n)$. I am not sure what time/space lower bounds you can infer from this, but it seems that if you could faithfully represent the intersection of the $k$ DFAs in $n^{o(k)}$ space then something unlikely should happen, for example, possibly subexponential-size circuits for QBF. –  Ryan Williams Aug 22 '10 at 19:18

He doesn't want to test for emptiness; he only wants to build the minimal automaton for the intersection, which appears to be feasible in $O(\log^2 n)$ space according to my last post. –  Michael Blondin Oct 2 '10 at 16:46

Cool! Although it might be slightly off topic, yes, you can show a space complexity lower bound for solving the fixed-$k$ intersection non-emptiness problem. :) –  Michael Wehar Nov 9 '14 at 0:23

Given two automata $A$, $B$ accepting finite languages (acyclic automata), the state complexity of $L(A) \cap L(B)$ is in $\Theta(|A| \cdot |B|)$ (1). This result also holds for unary DFAs (not necessarily acyclic) (2). However, you seem to be talking about the space required to compute the intersection of two automata. I don't see how the classic construction using the Cartesian product uses $O(n^2)$ space. All you need is a constant number of counters of logarithmic size. When you compute the transition function for the new state $(q,r)$ you only have to scan the input, without looking at any previously generated data.

Perhaps you want to output the minimal automaton? If this is the case, then I have no clue whether it can be achieved. The state complexity of the intersection for finite languages doesn't seem encouraging. However, unary DFAs have the same state complexity, and I think it can be achieved with such automata. By using results from (2), you can get the exact size of the automaton recognizing the intersection. This size is described by the lengths of the tail and the cycle, so the transition function can easily be computed with very little space, since the structure is entirely described by those two sizes. Then, all you have to do is generate the set of final states. Let $n$ be the number of states in the resulting automaton; then for all $1 \leq i \leq n$, state $i$ is a final state iff $a^i$ is accepted by both $A$ and $B$. This test can be carried out with little space.

- Yes, I am interested in the minimal automaton, or at least an automaton of similar size. Thanks for the pointers to unary DFAs. However, this does not seem to help much for the general case. –  Rasmus Pagh Aug 25 '10 at 11:30
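For reference, the classical Cartesian product construction that the question and the last answer refer to is short enough to sketch. Below is a minimal Python illustration (my own sketch; the dict-based DFA encoding is an assumption, not from the thread) that materializes the reachable part of the product automaton, which in the worst case is exactly the $\Theta(n^2)$ space cost the question asks to beat.

```python
from collections import deque

def intersect_dfas(d1, d2):
    """Cartesian product construction for two DFAs.

    Each DFA is a dict with keys 'start', 'finals' (a set) and 'delta'
    (a dict mapping (state, symbol) -> state). Only reachable product
    states are generated, but in the worst case there are Theta(n^2)
    of them, so this uses quadratic time and space.
    """
    symbols = {a for (_, a) in d1['delta']} & {a for (_, a) in d2['delta']}
    start = (d1['start'], d2['start'])
    delta, finals = {}, set()
    queue, seen = deque([start]), {start}
    while queue:
        q1, q2 = queue.popleft()
        if q1 in d1['finals'] and q2 in d2['finals']:
            finals.add((q1, q2))  # final iff final in both DFAs
        for a in symbols:
            if (q1, a) in d1['delta'] and (q2, a) in d2['delta']:
                nxt = (d1['delta'][(q1, a)], d2['delta'][(q2, a)])
                delta[((q1, q2), a)] = nxt
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return {'start': start, 'finals': finals, 'delta': delta}
```

The $O(\log^2 n)$ algorithm in the accepted answer avoids ever storing `seen` or `delta`: it re-derives accessibility (Lemma 2) and state equivalence (Lemma 1) on demand, paying with time instead of space.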
http://www.emathematics.net/g8_percents.php?def=compare_per_frac
Compare percentages to fractions

When comparing percents to fractions, write the fractions as percentages and compare.

Which is greater, $\frac{3}{20}$ or 16%?

Write $\frac{3}{20}$ as a percent: $\frac{3}{20}=\frac{15}{100}=15\%$

15% is less than 16%. So, 16% is the greater number.

Which is greater, $\frac{65}{56}$ or 43%?

• $\frac{65}{56}$ is greater
• 43% is greater
• neither; they are equal
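For the exercise above, the same conversion settles it (the working below is added here for reference and is not part of the original page):

$$\frac{65}{56} \approx 1.1607 = 116.07\% > 43\%,$$

so $\frac{65}{56}$ is the greater number.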
https://koreascience.or.kr/search.page?keywords=MR16+halogen+lamp&pageSize=10&pageNo=1
Title, Summary, Keyword: MR16 halogen lamp

### A Study on the Heat Radiation of LED Luminaires and the Indoor Temperature Increase

• Kim, Dong-Geon; Kil, Gyung-Suk
• Journal of the Korean Institute of Electrical and Electronic Material Engineers, v.25 no.9, pp.738-742, 2012

This paper presents a study of how the heat radiation of light-emitting diode (LED) luminaires affects the rise in indoor temperature. The effect was compared with that of a 20 W compact fluorescent lamp (CFL) and a 50 W MR16 halogen lamp, which are the lamps most widely used inside cruise ships, and with a LED downlight and a 4 W MR16 LED replacing each of them. We installed a luminaire inside a thermally shielded chamber, measured the temperature change in the same volume every 5 minutes, and compared the results with the theoretically calculated heat radiation. The temperature change in the chamber was measured four times over seven-hour periods, allowing sufficient time for the temperature to reach thermal equilibrium. The results showed that the temperature with the 20 W E26 CFL and the 10 W LED downlight increased by $21.1^{\circ}C$ and $10.4^{\circ}C$ respectively, while with the 50 W halogen MR16 and the 4 W LED MR16 it increased by $33.9^{\circ}C$ and $4.8^{\circ}C$ respectively. The experimental heat radiation calculated from these results was 171.5 cal and 86.5 cal for the CFL and the LED downlight, and 275.3 cal and 36.5 cal for the halogen MR16 and the LED MR16. Therefore, the heat radiation was reduced by 49.5% and 86.7%, respectively, by replacing the conventional light sources with LEDs. In conclusion, we can expect a reduction of the power consumed by air-conditioning systems, and a smaller effect on indoor temperature rise, from the application of LED luminaires.

### Fabrication and Performance Evaluation of MR-16 Lamp Series with Narrow Angular Distribution of Luminous Intensity Using an Aspherical Planar-convex 2×2 Fly-eye Lens Type

• Chu, Kyung-duk; Ryu, Jae Myung; Hong, Chun-Gang; Jeong, Youn Hong; Jo, Jae Heung
• Journal of the Korea Academia-Industrial cooperation Society, v.18 no.8, pp.25-33, 2017

This paper reports the optical design of an MR-16 lamp series with an LED secondary lens, an aspherical plano-convex lens suitable for a simple and rapid injection-molding fabrication method. The fabrication and performance evaluation of the MR-16 lamp series, which was designed with a narrow angular distribution of luminous intensity, were conducted with the aim of replacing halogen lamps with LED lamps. Four types of LED lamps were fabricated, with angular distributions of luminous intensity of $22.4^{\circ}$, $31.1^{\circ}$, $37.3^{\circ}$, and $59.9^{\circ}$ and luminous efficiencies of 76.5 lm/W, 75.2 lm/W, 72.0 lm/W, and 77.8 lm/W, respectively, while their spreading angles at an illuminance uniformity of 81% were $3^{\circ}$, $15^{\circ}$, $22^{\circ}$, and $49^{\circ}$, respectively. After eliminating the yellow tail of the LED lamps using a diffusion sheet, the angular distributions of luminous intensity were measured to be $20.8^{\circ}$, $31.5^{\circ}$, $37.8^{\circ}$, and $68.7^{\circ}$.
### Design of Optical System for LED Lamp using MR16

• Kim, Jun-Hyun; Moon, Byung-Kwon; Ryu, In-Ho
• Journal of the Korea Academia-Industrial cooperation Society, v.13 no.10, pp.4725-4732, 2012

This paper studies an MR16 lamp that keeps the strengths and makes up for the weaknesses of the conventional MR16 by replacing the halogen light source of the multifaceted reflector (MR16) with an LED light source. To achieve this, an LED MR16 was developed using an optical system in which four aspheric lenses are combined on a single sheet. The optical system was designed with optics software, and the lighting performance of the design was predicted with a lighting simulation program. A heat-radiation analysis program was also used to predict the thermal performance of the heat sink. Finally, a prototype was manufactured based on the simulation data, and the results of comparing the performance of the developed system with the design data are as follows. The radiation angle was around $50^{\circ}{\sim}60^{\circ}$ in both the simulation analysis and the test of the prototype system. Temperature measurements indicate that thermal equilibrium is reached after one minute and thirty seconds, with temperatures up to $60^{\circ}C$, in both the simulation analysis and the prototype test. Finally, the simulated light distribution curve of the MR16 is similar to that measured for the prototype system.
https://physics.stackexchange.com/questions/119337/deceleration-rate-of-objects-of-different-mass-but-the-same-otherwise
Deceleration rate of objects of different mass but otherwise the same

Using a tennis ball as an example object: if one ball weighs 1 ounce and the other 2 ounces, and both are struck at 100 mph on the same trajectory, would there be any difference in the deceleration rate between the two balls (all other things about the two balls being equal)? For instance, would the elapsed time for the ball to travel, say, 100 feet be different or the same?

FOLLOW-UP QUESTION (based on answer #1): If it's true that the lighter ball decelerates faster (and consequently takes longer to travel a given distance), then what would be the difference in the initial velocity of the two balls (one 1 ounce, the other 2 ounces) if both are struck with the same implement (let's say a tennis racket travelling at 100 mph)? I'm assuming the lighter ball would have a higher initial speed. If so, would the higher initial speed of the lighter ball offset its increased deceleration rate? In practical terms, with the initial speeds being different, which ball would arrive first in the above example of 100 feet, the lighter ball or the heavier ball? If not too difficult, could you explain how this relationship (first the initial speed difference and then the total travel time difference for 100 feet) would be calculated?

• – Qmechanic Jun 15 '14 at 4:41

They have the same drag force, so the lighter one will decelerate faster.

• So the elapsed time to travel 100 feet would be longer for the lighter ball. - just to finish answering the question.. – Floris Jun 15 '14 at 3:12

When an object (ball) of mass $m$ at rest is struck elastically by another object of mass $M$ traveling with initial velocity $v$, then the velocity after impact is given by $$v_{ball}=\frac{2M}{m+M}v$$

Two limiting cases: for $m=M$ there is maximum transfer of energy (the racket stops and the ball travels with the velocity of the racket); when $m \ll M$, the final velocity is twice the initial velocity (but the racket keeps most of its energy). With the values given you are in an intermediate regime - the lighter ball will travel faster immediately after impact but it will slow down faster.

The math for this is complicated in 2D - but we can make some progress in 1D. Drag force on a sphere is given roughly by $$F= \frac12 \rho v^2 A C_D$$

where $\rho = 1.2\ \mathrm{kg/m^3}$, $C_D = 0.47$, $A = 0.0035\ \mathrm{m^2}$, so $F \approx 0.001\, v^2$. The acceleration $a=F/m$, so as you can see the lighter ball will decelerate more quickly. The equations of motion become $$\frac{dv}{dt}=-kv^2\\ \frac{dv}{v^2}=-k\cdot dt\\ \frac{1}{v}=kt + \frac{1}{v_0}\\ v(t)=\frac{1}{kt+\frac{1}{v_0}}\\ x(t)=\frac{1}{k}\log(v_0 k t +1)$$

where $k \approx 0.001/m$ - it depends on the mass of the ball.

The typical mass of a tennis ball is about 58 grams and a typical velocity is about 30 m/s. A racket has a mass of around 250 - 300 grams, so the extra speed you get for the lighter ball is small - but the deceleration is real. Putting in round numbers: $$v_{racket} = 20\ \mathrm{m/s}$$ $$v_1= 20\cdot\frac{600}{360} = 33\ \mathrm{m/s}\\ v_2 = 20\cdot\frac{600}{330} = 36\ \mathrm{m/s}$$

Plotting velocity and position as functions of time for a 1 oz and a 2 oz (nominal) ball [position and ball-speed plots omitted], the graphs confirm that the lighter ball will initially go a little bit faster - but that drag will quickly eliminate its advantage over any but the shortest distances. I will add graphs of the above function of x(t) when I get near a computer (hard to do on a phone...)
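To make the comparison concrete, here is a minimal Python sketch (my own illustration, not part of the original answer) using the rounded constants above. Inverting $x(t)=\frac{1}{k}\log(v_0 k t +1)$ gives the time to cover a distance $x$: $t = (e^{kx}-1)/(v_0 k)$.

```python
import math

def time_to_distance(x, v0, mass, drag_coeff=0.001):
    """Time (s) to cover distance x (m), from x(t) = (1/k) ln(v0*k*t + 1)
    with k = drag_coeff / mass; inverting gives t = (exp(k*x) - 1) / (v0*k)."""
    k = drag_coeff / mass
    return (math.exp(k * x) - 1.0) / (v0 * k)

x = 100 * 0.3048  # 100 feet in metres

# post-impact speeds from the elastic-collision estimate above
t_heavy = time_to_distance(x, v0=33.0, mass=0.057)  # ~2 oz ball
t_light = time_to_distance(x, v0=36.0, mass=0.028)  # ~1 oz ball
print(f"2 oz ball: {t_heavy:.2f} s, 1 oz ball: {t_light:.2f} s")
# -> roughly 1.2 s vs 1.5 s
```

Despite its head start in speed, the lighter ball arrives about a third of a second later over 100 feet, consistent with the conclusion that drag quickly wipes out its advantage.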
If the force applied to each ball is the same then it will provide the same impulse to both; then, using $J = M(v-u)$, it can be seen that the lighter ball will reach a higher maximum speed. Following this, the balls are now experiencing a drag force; depending on the complexity of the model we are using, we can either assume that the drag force is constant or that it is a function of the ball's velocity, $F = f(v)$. Either way, using Newton's second law, $F = MA$, we can determine the acceleration of each of the balls. From here it is a case of using calculus, integrating twice, to determine the time taken to travel a specific distance, and then simply substituting in numbers. I hope that this helped.

• Same force would have to be applied for the same time for your first sentence to be true... – Floris Mar 6 '15 at 15:03
https://fullfrontalnerdity.wordpress.com/category/probability/
Let us define a random variable (RV) $Z = X + Y$ where $X$ and $Y$ are two independent RVs. If $f_X(x)$ and $f_Y(y)$ are the probability density functions of $X$ and $Y$, then what can we say about $f_Z(z)$, the pdf of $Z$? A rigorous double-"E" graduate course in stochastic processes is usually sufficient to answer this question. It turns out $f_Z$ is the convolution of the densities $f_X$ and $f_Y$. See this (p. 291) for more.

It is tempting to ask the converse question: if $f_Z = f_X * f_Y$, does it imply that $Z = X + Y$ where $X$ and $Y$ are independent RVs? I found the answer in this amazing text: Counterexamples in Probability by Jordan Stoyanov. The book is an adorable compilation of some 300 counterexamples to the probability questions which might be bothering you during a good night's sleep. So, the answer to my question is no. The counterexample uses Cauchy distributions: the sum of two Cauchy RVs can have density $f_X * f_Y$ whether $X$ and $Y$ are independent or not.

Coming to think of the convolution of pdfs, our favorite website has a list of convolutions of common pdfs.
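A quick numerical illustration of the Cauchy counterexample (my own sketch, not from the post or from Stoyanov's book): take $Y = X$ with $X$ standard Cauchy, so $X$ and $Y$ are completely dependent; then $Z = X + Y = 2X$ is Cauchy with scale 2, which is exactly the distribution whose density is the convolution of two standard Cauchy densities.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = stats.cauchy.rvs(size=500_000, random_state=rng)
z = 2 * x  # Z = X + Y with Y = X (fully dependent)

# Compare sample quantiles of Z against Cauchy(loc=0, scale=2).
# Quantiles, not moments: a Cauchy RV has no mean or variance.
probs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(z, probs))             # empirical quantiles of Z
print(stats.cauchy.ppf(probs, scale=2))  # quantiles of scale-2 Cauchy
```

The two rows of quantiles agree, so $f_Z = f_X * f_Y$ holds here even though $X$ and $Y$ are as far from independent as possible.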
https://www.physicsforums.com/threads/planes-stress-in-a-truss.897182/
# Plane stress in a truss

1. Dec 14, 2016

### fonseh

1. The problem statement, all variables and given/known data

For part b, I think one of the angles, either $$\theta_s$$ or $$\theta_p$$, is wrong. For the second question: what is plane stress?

2. Relevant equations

3. The attempt at a solution

1.) Because in Mohr's circle the maximum shear stress lies on the vertical axis, while the maximum normal stress lies on the horizontal axis, right?

Attached: DSC_0038.JPG

Last edited: Dec 14, 2016

2. Dec 14, 2016

### fonseh

I am sorry that the title of the topic is confusing. I think part a is not related to part b.

3. Dec 15, 2016

### fonseh

Bump

4. Dec 15, 2016

### fonseh

Or is my sketch of Mohr's circle wrong?
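Since the thread never settles which angle in part (b) is off, a quick numerical check against the standard plane-stress relations can help. The sketch below is my own illustration (the stress values are made up); it uses the textbook formulas $\tan 2\theta_p = 2\tau_{xy}/(\sigma_x - \sigma_y)$ and $\theta_s = \theta_p - 45^{\circ}$, so the principal planes and the maximum-shear planes always differ by $45^{\circ}$.

```python
import math

def mohr_angles(sx, sy, txy):
    """Principal-plane and max-shear-plane angles (degrees) for a 2D
    (plane) stress state sigma_x, sigma_y, tau_xy:
      tan(2*theta_p) = 2*tau_xy / (sigma_x - sigma_y)
      theta_s = theta_p - 45 degrees
    """
    theta_p = 0.5 * math.degrees(math.atan2(2 * txy, sx - sy))
    theta_s = theta_p - 45.0
    return theta_p, theta_s

def principal_stresses(sx, sy, txy):
    """Center and radius of Mohr's circle give sigma_1, sigma_2 and tau_max."""
    c = (sx + sy) / 2.0                   # circle center (average normal stress)
    r = math.hypot((sx - sy) / 2.0, txy)  # circle radius (= tau_max)
    return c + r, c - r, r

tp, ts = mohr_angles(sx=80.0, sy=-20.0, txy=40.0)  # MPa, made-up numbers
s1, s2, tmax = principal_stresses(80.0, -20.0, 40.0)
print(f"theta_p = {tp:.1f} deg, theta_s = {ts:.1f} deg (differ by 45)")
print(f"sigma_1 = {s1:.1f}, sigma_2 = {s2:.1f}, tau_max = {tmax:.1f} MPa")
```

On Mohr's circle this is exactly the statement in post #1: the principal stresses sit where the circle crosses the horizontal (normal-stress) axis, and the maximum shear stress sits at the top of the circle on the vertical axis, a quarter-circle away (i.e. $90^{\circ}$ on the circle, $45^{\circ}$ physically).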
http://gatehelpline.com/post-gate/gateScoreCard.html
GATE Score Card: After the declaration of the results, candidates can download their GATE 2019 Score Card for the paper for which they have taken the examination. Score cards will be issued to all qualified candidates.

To download the score card:

1. Go to the official GATE website.
2. Click on the score card link; a new web page will open.
3. Enter all requested details, such as registration number and password.
4. Press the Submit button.
5. The results will appear on the screen.
6. Download the score card and take a printout for future use.

### Calculation of GATE 2019 Score

In 2019, the examination for the CE, CS, EC, EE and ME papers is being held in multiple sessions. Hence, for these papers, a suitable normalization is applied to take into account any variation in the difficulty levels of the question papers across different sessions. The normalization is based on the fundamental assumption that "in all multi-session GATE papers, the distribution of abilities of candidates is the same across all the sessions". This assumption is justified since the number of candidates appearing in multi-session papers in GATE 2019 is large and sessions are allocated to candidates at random. Further, it is also ensured that for the same multi-session paper, the number of candidates allotted to each session is of the same order of magnitude.

### How to Calculate Normalized Marks for CE, CS, EC, EE and ME papers

Based on the above, and after considering various normalization methods, the committee arrived at a formula for calculating the normalized marks for the CE, CS, EC, EE and ME papers (reproduced below). After the evaluation of the answers, normalized marks corresponding to the raw marks obtained by a candidate are calculated with this formula for the CE, CS, EC, EE and ME papers, and the GATE 2019 score is calculated from the normalized marks. For all other papers, the actual marks obtained are used for calculating the GATE 2019 score.

#### Calculation of GATE Score for all papers

The GATE 2019 score is calculated using the score formula (also reproduced below). In the score formula, $M_q$ is usually 25 marks (out of 100) or $\mu + \sigma$, whichever is larger; here $\mu$ is the mean and $\sigma$ is the standard deviation of the marks of all candidates who appeared in the paper.

After the declaration of the results, GATE Score Cards can be downloaded by (a) all SC/ST/PwD candidates whose marks are greater than or equal to the qualifying mark for SC/ST/PwD candidates in their respective papers, and (b) all other candidates whose marks are greater than or equal to the qualifying mark for OBC (NCL) candidates in their respective papers. There is no provision for the issue of additional GATE Score Cards.

The GATE 2019 Committee has the authority to decide the qualifying mark/score for each GATE paper. In case any claim or dispute arises in respect of GATE 2019, it is hereby made absolutely clear that the Courts and Tribunals in Bangalore, and Bangalore alone, shall have the exclusive jurisdiction to entertain and settle any such dispute or claim.
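The original page displayed the two formulas as images, which did not survive extraction. For reference, the formulas as published in the official GATE information brochure (reproduced here on that basis, with notation matching the definitions of $\mu$, $\sigma$ and $M_q$ above) are as follows. The normalized marks of candidate $j$ in session $i$ are

$$\hat{M}_{ij} = \frac{\bar{M}_t^{\,g} - M_q^{\,g}}{\bar{M}_{ti} - M_{iq}}\left(M_{ij} - M_{iq}\right) + M_q^{\,g},$$

where $M_{ij}$ are the actual marks, $\bar{M}_t^{\,g}$ is the average of the marks of the top 0.1% of candidates over all sessions, $M_q^{\,g}$ is $\mu + \sigma$ computed over all sessions, and $\bar{M}_{ti}$ and $M_{iq}$ are the corresponding quantities for session $i$ alone. The GATE score is then

$$S = S_q + \left(S_t - S_q\right)\frac{M - M_q}{\bar{M}_t - M_q},$$

where $M$ is the marks of the candidate (normalized marks for the multi-session papers), $M_q$ is the qualifying marks as defined above, $\bar{M}_t$ is the mean of the marks of the top 0.1% (or top 10) of candidates in the paper, and $S_q = 350$ and $S_t = 900$ are the scores assigned at $M_q$ and $\bar{M}_t$ respectively.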
http://www.researchgate.net/researcher/14411998_Randolph_Q_Hood
# Randolph Q. Hood

Lawrence Livermore National Laboratory, Livermore, CA, United States

## Publications (51), 133.59 total impact

##### Article: Polymorphism and Melt in High-Pressure Tantalum

Justin B. Haskins · John A. Moriarty · Randolph Q. Hood

ABSTRACT: Recent small-cell (< 150-atom) quantum molecular dynamics (QMD) simulations for Ta based on density functional theory (DFT) have predicted a hexagonal omega (hex-omega) phase more stable than the normal bcc phase at high temperature (T) and pressure (P) above 70 GPa [Burakovsky et al., Phys. Rev. Lett. 104, 255702 (2010)]. Here we examine possible high-(T,P) polymorphism in Ta with complementary DFT-based model generalized pseudopotential theory (MGPT) multi-ion interatomic potentials, which allow accurate treatment of much larger system sizes (up to ~ 80000 atoms). We focus on candidate bcc, A15, fcc, hcp, and hex-omega phases for the high-(T,P) phase diagram to 420 GPa, studying the mechanical and relative thermodynamic stability of these phases for both small and large computational cells. Our MGPT potentials fully capture the DFT energetics of these phases, while MGPT-MD simulations demonstrate that the higher-energy fcc, hcp and hex-omega structures are only mechanically stabilized at high temperature by large, size-dependent, anharmonic vibrational effects, with the stability of the hex-omega phase also being found to be a sensitive function of its c/a ratio. Both two-phase and Z-method melting techniques have been used in MGPT-MD simulations to determine relative phase stability and its size dependence. In the large-cell limit, the two-phase method yields accurate equilibrium melt curves for all five phases, with bcc producing the highest melt temperatures at all pressures and hence being the most stable phase of those considered. The two-phase bcc melt curve is also in good agreement with dynamic experimental data as well as with the MGPT melt curve calculated from bcc and liquid free energies. In contrast, we find that the Z method produces only an upper bound to the equilibrium melt curve in the large-cell limit. For the bcc and hex-omega structures, however, this is a close upper bound within 5% of the two-phase results, although for the A15, fcc, and hcp structures, the Z-melt curves are 25-35% higher in temperature than the two-phase results. Nonetheless, the Z method has allowed us to study melt size effects in detail. We find these effects to be either small or modest for the cubic bcc, A15, and fcc structures, but to have a large impact on the hexagonal hcp and hex-omega melt curves, which are dramatically pushed above that of bcc for simulation cells of fewer than 150 atoms. The melt size effects are driven by and closely correlated with similar size effects on the mechanical stability and the vibrational anharmonicity. We further show that for the same simulation cell sizes and choice of c/a ratio, the MGPT-MD bcc and hex-omega melt curves are in good agreement with the QMD results, so the QMD prediction is confirmed in the small-cell limit. But in the large-cell limit, the MGPT-MD hex-omega melt curve is always lowered below that of bcc for any choice of c/a, so bcc is the most stable phase. We conclude that for the non-bcc Ta phases studied, one requires simulation cells of at least 250-500 atoms to be free of size effects impacting mechanical and thermodynamic phase stability. Physical Review B 12/2012; 86(22):224104.
DOI: 10.1103/PhysRevB.86.224104

##### Article: Diffusion quantum Monte Carlo study of the equation of state and point defects in aluminum

Randolph Q. Hood · P. R. C. Kent · Fernando A. Reboredo

ABSTRACT: The many-body diffusion quantum Monte Carlo (DMC) method with twist-averaged boundary conditions is used to calculate the ground-state equation of state and the energetics of point defects in fcc aluminum using supercells of up to 1331 atoms. The DMC equilibrium lattice constant differs from experiment by 0.008 Å, or 0.2%, while the cohesive energy using DMC with backflow wave functions with improved nodal surfaces differs by 27 meV. DMC-calculated defect formation and migration energies agree with available experimental data, except for the nearest-neighbor divacancy, which is found to be energetically unstable, in agreement with previous density functional theory (DFT) calculations. DMC and DFT calculations of vacancy defects are in reasonably close agreement. Self-interstitial formation energies have larger differences between DMC and DFT, of up to 0.33 eV, at the tetrahedral site. We also computed formation energies of helium interstitial defects, where energies differed by up to 0.34 eV, also at the tetrahedral site. The close agreement with available experiments demonstrates that DMC can be used as a predictive method to obtain benchmark energetics of defects in metals. Physical Review B 10/2012; 85(13). DOI: 10.1103/PhysRevB.85.134109

##### Article: A diffusion Monte Carlo study of sign problems from non-local pseudopotentials

ABSTRACT: Difficulties can arise in simulating various Hamiltonian operators efficiently in diffusion Monte Carlo (DMC), such as those associated with non-local pseudopotentials, which require the introduction of an approximate form. The locality approximation and T-moves are two widely used techniques in fixed-node diffusion Monte Carlo (FN-DMC) that provide a tractable approach for treating non-local pseudopotentials; however, their use introduces an uncontrolled approximation. Exact treatment of the non-local pseudopotentials in FN-DMC introduces a sign problem, with the associated Green's function matrix elements taking on both positive and negative values. Here we present an analysis of the nature of the sign problem that non-local operators introduce into the Green's function. We then consider the feasibility of running DMC simulations in which the non-local pseudopotentials are treated exactly, and demonstrate the algorithm on a few molecular systems.

##### Article: Tests on novel pseudo-potentials generated from diffusion Monte Carlo data

Fernando Reboredo · Randolph Hood · Michal Bajdich

ABSTRACT: Since Dmitri Mendeleev developed a table in 1869 to illustrate recurring ("periodic") trends of the elements, it has been understood that most chemical and physical properties can be described by taking into account the outermost electrons of the atoms. These valence electrons are mainly responsible for the chemical bond. In many ab-initio approaches only valence electrons are taken into account, and a pseudopotential is used to mimic the response of the core electrons. Typically an all-electron calculation is used to generate a pseudopotential that is used either within density functional theory or quantum chemistry approaches.
In this talk we explain and demonstrate a new method to generate pseudopotentials directly from all-electron many-body diffusion Monte Carlo (DMC) calculations, and discuss the transferability of these pseudopotentials. The advantages of incorporating the exchange and correlation directly from DMC into the pseudopotential are also discussed.

##### Article: Quantum-Mechanical Interatomic Potentials with Electron Temperature for Strong-Coupling Transition Metals

John A Moriarty · Randolph Q Hood · Lin H Yang

ABSTRACT: In narrow d-band transition metals, electron temperature T(el) can impact the underlying electronic structure for temperatures near and above melt, strongly coupling the ion- and electron-thermal degrees of freedom and producing T(el)-dependent interatomic forces. Starting from the Mermin formulation of density functional theory, we have extended first-principles generalized pseudopotential theory to finite electron temperature and then developed efficient T(el)-dependent model generalized pseudopotential theory interatomic potentials for a Mo prototype. Unlike potentials based on the T(el)=0 electronic structure, the T(el)-dependent model generalized pseudopotential theory potentials yield a high-pressure Mo melt curve consistent with density functional theory quantum simulations, as well as with dynamic experiments, and also support a rich polymorphism in the high-(T,P) phase diagram. Physical Review Letters 01/2012; 108(3):036401. DOI: 10.1103/PhysRevLett.108.036401

##### Article: Prospects for release-node quantum Monte Carlo

Norm M Tubman · Jonathan L DuBois · Randolph Q Hood · Berni J Alder

ABSTRACT: We perform release-node quantum Monte Carlo simulations on the first-row diatomic molecules in order to assess how accurately their ground-state energies can be obtained. An analysis of the fermion-boson energy difference is shown to be strongly dependent on the nuclear charge, Z, which in turn determines the growth of the variance of the release-node energy. It is possible to use maximum entropy analysis to extrapolate to ground-state energies only for the low-Z elements. For the higher-Z dimers beyond boron, the error growth is too large to allow accurate data for long enough imaginary times. Within the limit of our statistics we were able to estimate, in atomic units, the ground-state energy of Li(2) (-14.9947(1)), Be(2) (-29.3367(7)), and B(2) (-49.410(2)). The Journal of Chemical Physics 11/2011; 135(18):184109. DOI: 10.1063/1.3659143

##### Article: Release-Node quantum Monte Carlo studies for molecules

Norm Tubman · Jonathan Dubois · Randolph Hood · Berni Alder

ABSTRACT: Release-node quantum Monte Carlo (RN-QMC) is a method that calculates unbiased ground-state energies of fermionic systems. However, while RN-QMC has been successfully applied to the homogeneous electron gas with more than one hundred electrons, obtaining converged results for molecular systems has proven to be problematic for all but the smallest systems. A promising route to extending the method's success to a wider class of physically interesting Hamiltonians lies in the application of projection techniques such as Maximum Entropy (MaxEnt), which, in principle, allows for extrapolation to the converged ground-state energy. Direct application of MaxEnt to higher-Z elements is, however, not entirely straightforward.
We propose strategies for optimizing MaxEnt analysis of short-time RN-QMC data and demonstrate their effectiveness in obtaining ground-state energies for the first-row dimers. Attention is given to the determination of statistical errors in the resulting extrapolations, as well as an attempt to characterize the minimum decay time required for unbiased results.

##### Article: Quantum Monte Carlo calculations of defects in aluminum

Randolph Q. Hood · Paul R. C. Kent · Fernando A. Reboredo

ABSTRACT: We use first-principles fixed-node diffusion quantum Monte Carlo to calculate the energetics of point defects in bulk fcc aluminum, demonstrating very high accuracy when compared to experiment. Aluminum has been well studied experimentally as a "simple" metal prototype for investigating the effects of radiation damage such as void formation and helium embrittlement. Often accuracies at the level of milli-electronvolts are required, which is not achieved even for the simple case of pairs of vacancies in aluminum using common density functionals. Perhaps surprisingly, even single-vacancy energies are not reliable in many simple structural materials. Also presented are results for the bulk properties of aluminum - the equilibrium lattice constant, the cohesive energy, and the bulk modulus. These calculations bring a new level of rigor to the study of defects in metals.

##### Article: Systematic Reduction of Sign Errors in Many-Body Calculations of Atoms and Molecules

ABSTRACT: The self-healing diffusion Monte Carlo algorithm (SHDMC) is shown to be an accurate and robust method for calculating the ground state of atoms and molecules. By direct comparison with accurate configuration interaction results for the oxygen atom, we show that SHDMC converges systematically towards the ground-state wave function. We present results for the challenging N2 molecule, where the binding energies obtained via both energy minimization and SHDMC are near chemical accuracy (1 kcal/mol). Moreover, we demonstrate that SHDMC is robust enough to find the nodal surface for systems at least as large as C20 starting from random coefficients. SHDMC is a linear-scaling method, in the degrees of freedom of the nodes, that systematically reduces the fermion sign problem. Physical Review Letters 05/2010; 104(19):193001. DOI: 10.1103/PhysRevLett.104.193001

##### Article: Transient quantum Monte Carlo investigations of few-electron systems

Norm Tubman · Jonathan Dubois · Randolph Hood · Berni Alder

ABSTRACT: Diffusion Monte Carlo (DMC) is one of the most accurate methods for calculating electronic structure and can be applied to systems containing thousands of electrons. Typical applications of DMC utilize the fixed-node approximation, in which the nodes are specified using an input trial wave function. Errors in the locations of the nodes lead to systematic errors in DMC energy estimators. Removing this nodal bias can be done using transient quantum Monte Carlo methods, which have previously been applied to the free-electron gas and a handful of other few-electron systems. The drawback in using transient methods is the significant increase in computational cost. We have studied several quantum systems of varying sizes in order to better understand the scaling properties of various transient methods. We have explored techniques for reducing the computational cost, such as cancellation and correlated walkers.
We have analyzed our data using Bayesian inference. Prepared by LLNL under Contract DE-AC52-07NA27344.

##### Article: Self-healing diffusion quantum Monte Carlo algorithms: Direct reduction of the fermion sign error in electronic structure calculations

F. A. Reboredo · R. Q. Hood · P. R. C. Kent

ABSTRACT: We develop a formalism and present an algorithm for optimization of the trial wave function used in fixed-node diffusion quantum Monte Carlo (DMC) methods. The formalism is based on the DMC mixed estimator of the ground-state probability density. We take advantage of a basic property of the walker configuration distribution generated in a DMC calculation to (i) project out a multideterminant expansion of the fixed-node ground-state wave function and (ii) define a cost function that relates the fixed-node ground-state and the noninteracting trial wave functions. We show that (a) locally smoothing out the kink of the fixed-node ground-state wave function at the node generates a new trial wave function with better nodal structure, and (b) we argue that the noise in the fixed-node wave function resulting from finite sampling plays a beneficial role, allowing the nodes to adjust toward the ones of the exact many-body ground state in a simulated annealing-like process. Based on these principles, we propose a method to improve both single-determinant and multideterminant expansions of the trial wave function. The method can be generalized to other wave-function forms such as pfaffians. We test the method in a model system where benchmark configuration-interaction calculations can be performed and most components of the Hamiltonian are evaluated analytically. Comparing the DMC calculations with the exact solutions, we find that the trial wave function is systematically improved. The overlap of the optimized trial wave function and the exact ground state converges to 100% even starting from wave functions orthogonal to the exact ground state. Similarly, the DMC total energy and density converge to the exact solutions for the model. In the optimization process we find an optimal noninteracting nodal potential of density-functional-like form whose existence was predicted in a previous publication [Phys. Rev. B 77, 245110 (2008)]. Tests of the method are extended to a model system with a conventional Coulomb interaction, where we show we can obtain the exact Kohn-Sham effective potential from the DMC data. Physical Review B 05/2009; 79(19). DOI: 10.1103/PhysRevB.79.195117

##### Article: Noncovalent hydrogen bonding in metal-organic structures

ABSTRACT: Transition metal sites in metal-organic frameworks and in doped carbon structures are actively being studied for their binding properties of molecular hydrogen. We present a study of prototypical metal-organic structures that can be used to bind molecular hydrogen non-covalently. Due to the well-known limitations of current density functional theory based descriptions of non-covalent hydrogen bonding, we have focused our efforts on a consistent many-body approach based on the fixed-node diffusion Monte Carlo method. Accurate studies of binding energies and the effects of multiple hydrogens in these structures are presented.
Prepared by LLNL under Contract DE-AC52-07NA27344.

##### Article: Self-healing diffusion quantum Monte Carlo algorithms: Theory and Applications

ABSTRACT: We present a method to obtain the fixed-node ground-state wave function from an importance-sampling diffusion Monte Carlo (DMC) run. The fixed-node ground-state wave function is altered to obtain an improved trial wave function for the next DMC run. The theory behind this approach will be discussed. Two iterative algorithms are presented and validated in a model system by direct comparison with full configuration interaction (CI) wave functions and energies. We find that the trial wave function is systematically improved. The scalar product of the trial wave function with the CI result converges to 1 even starting from wave functions orthogonal to the CI ground state. Similarly, the DMC total energy and density converge to the CI result. In the optimization process we find an optimal non-interacting nodal potential of density-functional-like form. An extension to a model system with full Coulomb interactions demonstrates that we can obtain the exact Kohn-Sham effective potential from the DMC data. Subsequently we apply our method to real molecules such as benzene and find that we can improve the ground-state energy as compared with the single-determinant result, even starting from random wave functions. Results for other molecular systems and comparison with alternative methods will be presented.

##### Article: Self-healing diffusion quantum Monte Carlo algorithms: methods for direct reduction of the fermion sign error in electronic structure calculations

Fernando A. Reboredo · Randolph Q. Hood · Paul R. C. Kent

ABSTRACT: We develop a formalism and present an algorithm for optimization of the trial wave function used in fixed-node diffusion quantum Monte Carlo (DMC) methods. We take advantage of a basic property of the walker configuration distribution generated in a DMC calculation to (i) project out a multi-determinant expansion of the fixed-node ground-state wave function and (ii) define a cost function that relates the fixed-node ground-state and the non-interacting trial wave functions. We show that (a) locally smoothing out the kink of the fixed-node ground-state wave function at the node generates a new trial wave function with better nodal structure, and (b) we argue that the noise in the fixed-node wave function resulting from finite sampling plays a beneficial role, allowing the nodes to adjust towards the ones of the exact many-body ground state in a simulated annealing-like process. We propose a method to improve both single-determinant and multi-determinant expansions of the trial wave function. We test the method in a model system where benchmark configuration interaction calculations can be performed. Comparing the DMC calculations with the exact solutions, we find that the trial wave function is systematically improved. The overlap of the optimized trial wave function and the exact ground state converges to 100% even starting from wave functions orthogonal to the exact ground state. In the optimization process we find an optimal non-interacting nodal potential of density-functional-like form whose existence was predicted earlier [Phys. Rev. B 77, 245110 (2008)]. We obtain the exact Kohn-Sham effective potential from the DMC data.
##### Article: Neutral and charged excitations in carbon fullerenes from first-principles many-body theories

Murilo L Tiago · P R C Kent · Randolph Q Hood · Fernando A Reboredo

ABSTRACT: We investigate the accuracy of first-principles many-body theories at the nanoscale by comparing the low-energy excitations of the carbon fullerenes C(20), C(24), C(50), C(60), C(70), and C(80) with experiment. Properties are calculated via the GW-Bethe-Salpeter equation and diffusion quantum Monte Carlo methods. We critically compare these theories and assess their accuracy against available photoabsorption and photoelectron spectroscopy data. The first ionization potentials are consistently well reproduced and are similar for all the fullerenes and methods studied. The electron affinities and first triplet excitation energies show substantial method and geometry dependence. These results establish the validity of many-body theories as a viable alternative to density-functional theory in describing electronic properties of confined carbon nanostructures. We find a correlation between energy gap and stability of fullerenes. We also find that the electron affinity of fullerenes is very high and size-independent, which explains their tendency to form compounds with electron-donor cations. The Journal of Chemical Physics 09/2008; 129(8):084311. DOI: 10.1063/1.2973627

##### Article: Quantum molecular dynamics simulations of uranium at high pressure and temperature

Randolph Q. Hood · L. H. Yang · John A. Moriarty

ABSTRACT: Constant-volume quantum molecular dynamics (QMD) simulations of uranium (U) have been carried out over a range of pressures and temperatures that span the experimentally observed solid orthorhombic α-U, body-centered-cubic (bcc), and liquid phases, using an ab initio plane-wave pseudopotential method within the generalized gradient approximation of density-functional theory. A robust U pseudopotential has been constructed for these simulations that treats the 14 valence and outer-core electrons per atom necessary to calculate accurate structural and thermodynamic properties up to 100 GPa. Its validity has been checked by comparing low-temperature results with experimental data and all-electron full-potential linear-muffin-tin-orbital calculations of several different uranium solid structures. Calculated QMD energies and pressures for the equation of state of uranium in the solid and liquid phases are given, along with results for the Grüneisen parameter and the specific heat. We also present results for the radial distribution function, bond-angle distribution function, electronic density of states, and liquid diffusion coefficient, as well as evidence for short-range order in the liquid. Physical Review B 06/2008; 78(2):024116. DOI: 10.1103/PhysRevB.78.024116

##### Article: Towards QMC benchmarks for large scale dispersive interactions

ABSTRACT: Fixed-node quantum Monte Carlo (QMC) methods are becoming an increasingly attractive approach for the study of large-scale problems in electronic structure. Current challenges lie in efficient application of QMC to large (thousands of electrons) systems and removal or amelioration of the uncontrolled approximations inherent in most practical applications of the method.
I will present recent progress and address some of the particular challenges associated with the development of exact potential energy surfaces for weakly interacting closed-shell carbon complexes within the fixed-node QMC ansatz. In particular, the efficacy and necessity of backflow corrections and multi-determinant expansions as a method for optimizing the nodal surface in these systems will be discussed.

##### Article: Bethe-Salpeter and Quantum Monte Carlo Calculations of the Optical Properties of Carbon Fullerenes

P. R. C. Kent · M. L. Tiago · F. A. Reboredo · Randolph Q. Hood

ABSTRACT: We have calculated the low-energy optical excitations of the carbon fullerenes C20, C24, C50, C60, C70, and C80. Properties are calculated via the GW-Bethe-Salpeter equation (GW-BSE) and diffusion quantum Monte Carlo (QMC) methods. We compare these approaches with time-dependent density-functional results and with experiment. GW-BSE and QMC have previously shown good agreement for small molecules, but this is the first study of these methods for these larger yet prototypical nanostructures. The first ionization potentials are consistently well reproduced and are similar for all the fullerenes and methods studied. However, electron affinities and the first triplet exciton energies show substantial method and geometry dependence. GW-BSE yields triplet energies around 1 eV below the QMC results. We discuss the possible reasons for these differences. Research at Oak Ridge National Laboratory performed at the Materials Science and Technology Division, sponsored by the Division of Materials Sciences, and at the Center for Nanophase Materials Sciences, sponsored by the Division of Scientific User Facilities, U.S. Department of Energy. Research at Lawrence Livermore National Laboratory was performed under Contract DE-AC52-07NA27344.

##### Article: Large-scale quantum mechanical simulations of high-Z metals

L. H. Yang · Randolph Q. Hood · J. E. Pask · J. E. Klepeis

ABSTRACT: High-Z metals constitute a particular challenge for large-scale ab initio electronic-structure calculations, as they require high resolution due to the presence of strongly localized states, and require many eigenstates to be computed due to the large number of electrons and the need to accurately resolve the Fermi surface. Here, we report recent findings on high-Z metals, using an efficient massively parallel planewave implementation on some of the largest computational architectures currently available. We discuss the particular architectures employed and the methodological advances required to harness them effectively. We present a pair-correlation function for U, calculated using quantum molecular dynamics, and discuss relaxations of Pu atoms in the vicinity of defects in aged and alloyed Pu. We find that the self-irradiation associated with aging has a negligible effect on the compressibility of Pu relative to other factors such as alloying. Journal of Computer-Aided Materials Design 09/2007; 14(3):337-347. DOI: 10.1007/s10820-007-9053-1

##### Article: Robust quantum-based interatomic potentials for multiscale modeling in transition metals

ABSTRACT: First-principles generalized pseudopotential theory (GPT) provides a fundamental basis for transferable multi-ion interatomic potentials in transition metals and alloys within density-functional quantum mechanics.
In the central body-centered cubic (bcc) metals, where multi-ion angular forces are important to materials properties, simplified model GPT (MGPT) potentials have been developed based on canonical d bands to allow analytic forms and large-scale atomistic simulations. Robust, advanced-generation MGPT potentials have now been obtained for Ta and Mo and successfully applied to a wide range of structural, thermodynamic, defect, and mechanical properties at both ambient and extreme conditions. Selected applications to multiscale modeling discussed here include dislocation core structure and mobility, atomistically informed dislocation dynamics simulations of plasticity, and thermoelasticity and high-pressure strength modeling. Recent algorithm improvements have provided a more general matrix representation of MGPT beyond canonical bands, allowing improved accuracy and extension to f-electron actinide metals, an order of magnitude increase in computational speed for dynamic simulations, and the development of temperature-dependent potentials. 02/2006; 21(03):563-573. DOI: 10.1557/jmr.2006.0070

#### Publication Stats

985 citations; 133.59 total impact points

#### Top co-authors

• Andrew Williamson (9), True North Venture Partners

#### Institutions

• Lawrence Livermore National Laboratory (Condensed Matter and Materials Division; Physics Division), Livermore, CA, United States
• University of Cambridge (Department of Physics: Cavendish Laboratory), Cambridge, England, United Kingdom
• Georgia Institute of Technology (School of Physics), Atlanta, GA, United States
https://mathoverflow.net/questions/310291/serres-remark-on-group-algebras-and-related-questions/310302
# Serre's remark on group algebras and related questions

I've recently heard about an idea of Serre that for each finite group $G$ there exists a group scheme $X$ such that for each field $K$ the group $X(K)$ is naturally isomorphic to the unit group of $K[G]$. Unfortunately, the article where this fact was mentioned gave no reference, so I ask you if you know how to construct such a scheme.

Of course, an interesting question would be: what about the set of $R$-points of $X$, where $R$ is a ring; how is it related to $R[G]$? And can this be generalized somehow to arbitrary groups? In the form given above it sounds not really possible, as for $K[\mathbb Z]$ the group of units is isomorphic to $K^*\times \mathbb Z$, and one can hardly imagine a group scheme whose group of $K$-points is isomorphic to $\mathbb Z$. By the way, why is there no group scheme whose group of points is isomorphic to $\mathbb Z$? Or does it exist?

It's fairly easy to do this for finite groups. In fact, the functor $R \mapsto R[G]$ is naturally representable by a ring scheme: the underlying set functor is represented by $\mathbb A^n$ where $n = |G|$, and the ring structure comes from the functor of points $R \mapsto R[G]$. Write $Y$ for this ring scheme (say over $\operatorname{Spec} \mathbb Z$). Now the unit group can be constructed as the closed subset $V \subseteq Y \times Y$ of pairs $(x,y)$ such that $xy = 1$. It is closed because it is the pullback of the diagram $$\begin{array}{ccc}V & \to & Y \times Y\\\downarrow & & \downarrow \\ 1 & \hookrightarrow & Y\end{array},$$ where the right vertical map is the multiplication morphism on $Y$. This shows that $R \mapsto R[G]^\times$ is representable. It naturally becomes a group scheme, again from the functor-of-points point of view. $\square$

In the infinite case, this construction doesn't work, because the functor $R \mapsto R[G]$ is not represented by $\mathbb A^G$ (the latter represents the infinite direct product $R \mapsto R^G$, not the direct sum $R \mapsto R^{(G)}$). I have no idea whether the functor $R \mapsto R^{(G)}$ (equivalently, the sheaf $\mathcal O^{(G)}$) is representable, but I think it might not be.

On the other hand, in the example you give of $G = \mathbb Z$, the functor on fields $$K \mapsto K[x,x^{-1}]^\times = K^\times \times \mathbb Z$$ is representable by $\coprod_{i \in \mathbb Z} \mathbb G_m$, but this does not represent the functor $R \mapsto R[x,x^{-1}]^\times$ on rings, for multiple reasons. Indeed, it is no longer true that $R[x,x^{-1}]^\times = R^\times \times \mathbb Z$ if $R$ is non-reduced, nor does $\coprod \mathbb G_m$ represent $R \mapsto R^\times \times \mathbb Z$ if $\operatorname{Spec} R$ is disconnected. These problems do not cancel out, as can already be seen by taking $R = k[\varepsilon]/(\varepsilon^2)$.
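To spell out the ring-scheme structure of the answer in coordinates (an elaboration in the answer's notation, not part of the original thread): identify a point of $\mathbb A^n$ with the coefficient tuple $(x_g)_{g \in G}$ of the element $\sum_{g \in G} x_g\, g$ of $R[G]$. Multiplication in $R[G]$ is then the polynomial map

$$\mathbb A^n \times \mathbb A^n \to \mathbb A^n, \qquad \left((x_g)_g, (y_g)_g\right) \mapsto \left(\sum_{hk = g} x_h y_k\right)_{g \in G},$$

and the closed subscheme $V \subseteq Y \times Y$ above is cut out by the $n$ polynomial equations $\sum_{hk = g} x_h y_k = \delta_{g,e}$ for $g \in G$, where $e$ is the identity element of $G$.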
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9861264824867249, "perplexity": 90.18004484946405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878662.15/warc/CC-MAIN-20201021235030-20201022025030-00498.warc.gz"}
http://www.physicsforums.com/showpost.php?p=2766962&postcount=9
Related to W in nuclei, I had some intriguing plots here in this thread: http://www.physicsforums.com/showthread.php?t=227263

About Z, look again at the two-dimensional histogram (actually, a contour plot) in the attachment: http://www.physicsforums.com/attachm...7&d=1207614889

The Z line is painted parallel to the W line, a bit hidden because it seems less relevant. Still, the nuclei with a mass slightly greater than the mass of the Z seem to be more stable than usual, or at least not so many beta decays are known for them.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8317849040031433, "perplexity": 1572.6719061907752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500823598.56/warc/CC-MAIN-20140820021343-00075-ip-10-180-136-8.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/80313/are-conical-symplectic-resolutions-mori-dream-spaces
# Are conical symplectic resolutions Mori dream spaces?

This is one of these questions where it's tempting to just leave it at the title, but let me try to define the objects in question. A conical symplectic resolution is a projective resolution of singularities $X \to Y$ such that

• $X$ is algebraically symplectic,
• $Y$ is affine, and
• there are compatible $\mathbb{G}_m$-actions on the two varieties which make $Y$ into a cone and act on the symplectic form with positive weight $n$.

Examples include the Springer resolution, a minimal resolution of a rational double point, the Hilbert scheme of points in that space (via the Hilbert-Chow resolution), a hypertoric variety or a Nakajima quiver variety. All of these spaces have something in common: they are (relative) Mori dream spaces. (For a definition of "relative Mori dream space," see this paper.) Thus, I am inclined to wonder: are all conical symplectic resolutions relative Mori dream spaces? Or am I just not original enough to come up with counterexamples?

Comments:

Hi Ben, maybe a silly question: in my understanding (and the link you give) a Mori dream space is projective. But your examples are not all projective. What am I missing? – user5117 Nov 7 '11 at 20:48

Artie- I probably should have said "relative Mori dream space." Rather than a projective variety, I'm thinking about the projective map $X\to Y$. – Ben Webster Nov 7 '11 at 21:01

I suspected the answer was something along those lines. Thanks for the clarification. – user5117 Nov 7 '11 at 21:49

In case anyone still cares: I came across this question again because I was reading the paper of Andreatta–Wisniewski referred to by Ben. There they have a theorem (3.2) that all 4-dimensional symplectic contractions X -> Y are relative MDS, but I must admit I find their proof hard to follow. It uses the fact that X is symplectic, but not that much, it seems to me. So it seems as though one might be able to get somewhere with the original question using recent progress in MMP. More specifically, if one can find an effective Q-Cartier divisor on X which is anti-ample over Y, then... – user5117 Feb 7 '13 at 23:22

X/Y is a relative MDS, by BCHM. This is possible if Y is Q-factorial, as explained by Sandor here: mathoverflow.net/questions/86123/…. I don't know how much that helps with the original question, but I thought I'd mention it. – user5117 Feb 7 '13 at 23:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9070004820823669, "perplexity": 495.4702390362308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701165070.40/warc/CC-MAIN-20160205193925-00232-ip-10-236-182-209.ec2.internal.warc.gz"}
http://matthematics.com/abstract/exercises-normal.html
## Section 9.4 Exercises

###### 1

For each of the following groups $G\text{,}$ determine whether $H$ is a normal subgroup of $G\text{.}$ If $H$ is a normal subgroup, write out a Cayley table for the factor group $G/H\text{.}$

1. $G = S_4$ and $H = A_4$
2. $G = A_5$ and $H = \{ (1), (123), (132) \}$
3. $G = S_4$ and $H = D_4$
4. $G = Q_8$ and $H = \{ 1, -1, I, -I \}$
5. $G = {\mathbb Z}$ and $H = 5 {\mathbb Z}$

###### 2

Find all the subgroups of $D_4\text{.}$ Which subgroups are normal? What are all the factor groups of $D_4$ up to isomorphism?

###### 3

Find all the subgroups of the quaternion group, $Q_8\text{.}$ Which subgroups are normal? What are all the factor groups of $Q_8$ up to isomorphism?

###### 4

Let $T$ be the group of nonsingular upper triangular $2 \times 2$ matrices with entries in ${\mathbb R}\text{;}$ that is, matrices of the form
\begin{equation*} \begin{pmatrix} a & b \\ 0 & c \end{pmatrix}, \end{equation*}
where $a\text{,}$ $b\text{,}$ $c \in {\mathbb R}$ and $ac \neq 0\text{.}$ Let $U$ consist of matrices of the form
\begin{equation*} \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix}, \end{equation*}
where $x \in {\mathbb R}\text{.}$

1. Show that $U$ is a subgroup of $T\text{.}$
2. Prove that $U$ is abelian.
3. Prove that $U$ is normal in $T\text{.}$
4. Show that $T/U$ is abelian.
5. Is $T$ normal in $GL_2( {\mathbb R})\text{?}$

###### 5

Show that the intersection of two normal subgroups is a normal subgroup.

###### 6

If $G$ is abelian, prove that $G/H$ must also be abelian.

###### 7

Prove or disprove: If $H$ is a normal subgroup of $G$ such that $H$ and $G/H$ are abelian, then $G$ is abelian.

###### 8

If $G$ is cyclic, prove that $G/H$ must also be cyclic.

###### 9

Prove or disprove: If $H$ and $G/H$ are cyclic, then $G$ is cyclic.

###### 10

Let $H$ be a subgroup of index $2$ of a group $G\text{.}$ Prove that $H$ must be a normal subgroup of $G\text{.}$ Conclude that $S_n$ is not simple for $n \geq 3\text{.}$

###### 11

If a group $G$ has exactly one subgroup $H$ of order $k\text{,}$ prove that $H$ is normal in $G\text{.}$

###### 12

Define the centralizer of an element $g$ in a group $G$ to be the set
\begin{equation*} C(g) = \{ x \in G : xg = gx \}. \end{equation*}
Show that $C(g)$ is a subgroup of $G\text{.}$ If $g$ generates a normal subgroup of $G\text{,}$ prove that $C(g)$ is normal in $G\text{.}$

###### 13

Recall that the center of a group $G$ is the set
\begin{equation*} Z(G) = \{ x \in G : xg = gx \text{ for all } g \in G \}. \end{equation*}

1. Calculate the center of $S_3\text{.}$
2. Calculate the center of $GL_2 ( {\mathbb R} )\text{.}$
3. Show that the center of any group $G$ is a normal subgroup of $G\text{.}$
4. If $G / Z(G)$ is cyclic, show that $G$ is abelian.

###### 14

Let $G$ be a group and let $G' = \langle aba^{- 1} b^{-1} \rangle\text{;}$ that is, $G'$ is the subgroup of all finite products of elements in $G$ of the form $aba^{-1}b^{-1}\text{.}$ The subgroup $G'$ is called the commutator subgroup of $G\text{.}$

1.
Show that $G'$ is a normal subgroup of $G\text{.}$
2. Let $N$ be a normal subgroup of $G\text{.}$ Prove that $G/N$ is abelian if and only if $N$ contains the commutator subgroup of $G\text{.}$

###### 15 Sage Exercise 1

Build every subgroup of the alternating group on 5 symbols, $A_5\text{,}$ and check that each is not a normal subgroup (except for the two trivial cases). This command might take a couple of seconds to run. Compare this with the time needed to run the .is_simple() method and realize that there is a significant amount of theory and cleverness brought to bear in speeding up commands like this. (It is possible that your Sage installation lacks GAP's "Table of Marks" library and you will be unable to compute the list of subgroups.)

###### 16 Sage Exercise 2

Consider the quotient group of the group of symmetries of an $8$-gon, formed with the cyclic subgroup of order $4$ generated by a quarter-turn. Use the coset_product function to determine the Cayley table for this quotient group. Use the number of each coset, as produced by the .cosets() method, as names for the elements of the quotient group. You will need to build the table "by hand" as there is no easy way to have Sage's Cayley table command do this one for you. You can build a table in the Sage Notebook pop-up editor (shift-click on a blue line) or you might read the documentation of the html.table() method.

###### 17 Sage Exercise 3

Consider the cyclic subgroup of order $4$ in the symmetries of an $8$-gon. Verify that the subgroup is normal by first building the raw left and right cosets (without using the .cosets() method) and then checking their equality in Sage, all with a single command that employs sorting with the sorted() command.

###### 18 Sage Exercise 4

Again, use the same cyclic subgroup of order $4$ in the group of symmetries of an $8$-gon. Check that the subgroup is normal by using part (2) of Theorem 9.3. Construct a one-line command that does the complete check and returns True. Maybe sort the elements of the subgroup S first, then slowly build up the necessary lists, commands, and conditions in steps. Notice that this check does not require ever building the cosets.

###### 19 Sage Exercise 5

Repeat the demonstration from the previous subsection that for the symmetries of a tetrahedron, a cyclic subgroup of order $3$ results in an undefined coset multiplication. Above, the default setting for the .cosets() method builds right cosets — but in this problem, work instead with left cosets. You need to choose two cosets to multiply, and then demonstrate two choices for representatives that lead to different results for the product of the cosets.

###### 20 Sage Exercise 6

Construct some dihedral groups of order $2n$ (i.e. symmetries of an $n$-gon, $D_{n}$ in the text, DihedralGroup(n) in Sage). Maybe all of them for $3\leq n \leq 100\text{.}$ For each dihedral group, construct a list of the orders of each of the normal subgroups (so use .normal_subgroups()). You may need to wait ten or twenty seconds for this to finish - be patient. Observe enough examples to hypothesize a pattern to your observations, check your hypothesis against each of your examples and then state your hypothesis clearly. Can you predict how many normal subgroups there are in the dihedral group $D_{470448}$ without using Sage to build all the normal subgroups? Can you describe all of the normal subgroups of a dihedral group in a way that would let us predict all of the normal subgroups of $D_{470448}$ without using Sage?
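As a starting point for the Sage exercises above, here is a minimal sketch for Sage Exercise 1 (to be run in a Sage session; as the exercise notes, it assumes GAP's "Table of Marks" library is available so that .subgroups() works):

```python
A5 = AlternatingGroup(5)
subs = A5.subgroups()      # all subgroups; may take a couple of seconds
proper = [H for H in subs if 1 < H.order() < A5.order()]
print(all(not H.is_normal(A5) for H in proper))   # True: no nontrivial normal subgroups
print(A5.is_simple())      # the much faster built-in check
```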
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.906165361404419, "perplexity": 225.15444532757195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662578939.73/warc/CC-MAIN-20220525023952-20220525053952-00276.warc.gz"}
http://www.math.snu.ac.kr/board/index.php?mid=seminars&page=14&l=en&sort_index=date&order_type=asc&document_srl=760826
We consider the spherical spin glass model, which is also known as the spherical Sherrington-Kirkpatrick model. With the aid of recent developments in random matrix theory, we show that the fluctuation of the free energy converges to a Gaussian distribution at high temperature and to the GOE Tracy-Widom distribution at low temperature. This is joint work with Jinho Baik.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9914724230766296, "perplexity": 137.83492791274648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711712.26/warc/CC-MAIN-20221210042021-20221210072021-00726.warc.gz"}
https://www.physicsforums.com/threads/velocity-of-an-apple-before-strikes-the-surface-of-a-white-dwarf.592512/
# Velocity of an apple before it strikes the surface of a white dwarf

1. Apr 1, 2012

### ScienceGeek24

1. The problem statement, all variables and given/known data

An apple is dropped from a height of 12.8*10^6 m above the surface of the white dwarf. With what speed does the apple strike the surface of the white dwarf? M of white dwarf = 1.99*10^30 kg; gravity of white dwarf = 3.29*10^6 m/s^2

2. Relevant equations

Vf^2 = Vi^2 + 2a(deltaX)

3. The attempt at a solution

I tried doing this: v = sqrt(2(3.29*10^6)(12.8*10^6)), and my result was far off the real answer; I got 9.17*10^6 m/s and the answer sheet showed 5.28*10^6 m/s. What did I do wrong?

2. Apr 1, 2012

### tiny-tim

Hi ScienceGeek24 (try using the X² button just above the Reply box)

That's only for constant acceleration. A white dwarf is only about the size of the Earth, so 10^7 m is a long way up. Isn't there a relation between the radius and mass of a white dwarf?

3. Apr 1, 2012

### ScienceGeek24

Yes, a = Gm/r^2; that's how I got the acceleration of the white dwarf. The radius is equal to the Earth's radius, which is 6.37*10^2, and the mass was the same as the Sun's, which was 1.99*10^30. However, I still don't understand your question.

4. Apr 1, 2012

### tiny-tim

You'll need to find the gravitational acceleration as a function of distance, and integrate (or use potential energy).

5. Apr 1, 2012

### SammyS

Staff Emeritus

tim's suggestion referred to superscripts, not the whole post. BTW: Earth's radius is significantly greater than 6.37*10^2 m!

6. Apr 1, 2012

### ScienceGeek24

The thing is that I don't think this problem should be based on integrals; there has to be an easier way. sqrt((3.27*10^6)(12.8*10^6 + 6.37*10^6)) and still I don't get the right answer. I mean, there has to be another factor that I am not taking into account. I know this problem does not need integrals.
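For reference, a short calculation along the lines tiny-tim suggests (energy conservation with the 1/r gravitational potential, using the Earth-radius and solar-mass values discussed in the thread) reproduces the answer sheet's value. This is an illustrative sketch added here, not a post from the original thread:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 1.99e30      # white dwarf mass (one solar mass), kg
R = 6.37e6       # white dwarf radius (Earth's radius), m
h = 12.8e6       # drop height above the surface, m

# Energy conservation: (1/2) v^2 = G M (1/R - 1/(R + h))
v = math.sqrt(2 * G * M * (1 / R - 1 / (R + h)))
print(f"{v:.3e} m/s")   # ~5.28e6 m/s, matching the answer sheet
```

The missing factor in post #6 is exactly this 1/r dependence: the surface gravity 3.29*10^6 m/s^2 applies only at the surface, not over the whole 12.8*10^6 m fall.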
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8549707531929016, "perplexity": 1469.3657514925653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588072.75/warc/CC-MAIN-20171216123525-20171216145525-00622.warc.gz"}
https://www.lessonplanet.com/teachers/connect-the-dots-634682-visual-and-performing-arts-1st-3rd
# Connect the Dots In this connecting the dots worksheet, students connect the dots from 1 to 22 to complete a picture of a billy goat laughing.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8851498365402222, "perplexity": 2740.609365126408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423809.62/warc/CC-MAIN-20170721202430-20170721222430-00255.warc.gz"}
https://mathlake.com/Direction-Cosines
# Direction Cosines

In analytical geometry, the direction cosines of a vector are defined as the cosines of the angles between the vector and the three coordinate axes. In this section, we will first learn about the position vector of a point and direction cosines, and then find the angle between two lines.

## Position Vector

If O is taken as the reference origin and A is an arbitrary point in space, then the vector $$\overrightarrow{OA}$$ is called the position vector of the point. A position vector simply denotes the position or location of a point in the three-dimensional Cartesian system with respect to a reference origin.

## Direction Cosines and Angle Between Two Lines

Let us consider a point P lying in space. If its position vector makes positive angles (anticlockwise direction) of α, β and γ with the positive x, y and z-axis respectively, then these angles are known as direction angles, and taking their cosines gives the direction cosines. Direction cosines make it easy to represent the direction of a vector in terms of angles with respect to the reference axes.

The coordinates of the point P may also be expressed as the product of the magnitude of the given vector and the cosines of direction on the three axes, i.e.

$$x = l| \vec{r} |$$

$$y = m| \vec{r} |$$

$$z = n| \vec{r} |$$

where l, m, n represent the direction cosines of the given vector on the axes x, y, z respectively. Any three numbers proportional to the direction cosines (such as l|r|, m|r|, n|r| above) are called direction ratios, and they are denoted by a, b, c.

Let L1 and L2 represent two lines having the direction ratios a1, b1, c1 and a2, b2, c2 respectively, such that they pass through the origin. Let us choose a random point A on line L1 and B on line L2, and let the angle between the directed lines OA and OB be θ. Using the concept of direction cosines and direction ratios, the angle θ between L1 and L2 is given by:

$$cos\ \theta = \frac{|a_1a_2+b_1b_2+c_1c_2|}{\sqrt{a_1^{2}+b_1^{2}+c_1^{2}}\sqrt{a_2^{2}+b_2^{2}+c_2^{2}}}$$

Using $$sin\ \theta = \sqrt{1 - cos^{2}\theta}$$, the same angle can also be expressed as:

$$sin\ \theta = \frac{\sqrt{(a_1b_2-a_2b_1)^{2}+(b_1c_2-b_2c_1)^{2}+(c_1a_2-c_2a_1)^{2}}}{\sqrt{a_1^{2}+b_1^{2}+c_1^{2}}\sqrt{a_2^{2}+b_2^{2}+c_2^{2}}}$$

### Special Cases

• If L1 and L2, having the direction ratios a1, b1, c1 and a2, b2, c2 respectively, are perpendicular to each other, then θ = 90°. Therefore, a1a2 + b1b2 + c1c2 = 0.
• If L1 and L2, having the direction ratios a1, b1, c1 and a2, b2, c2 respectively, are parallel to each other, then θ = 0°. Therefore, $$\frac{a_1}{a_2}$$ = $$\frac{b_1}{b_2}$$ = $$\frac{c_1}{c_2}$$.

### How to Find the Direction Cosines?

The direction cosines of a vector can be determined by dividing each coordinate of the vector by the vector's length; each coordinate of the resulting unit vector equals a direction cosine. One useful property is that the squares of the direction cosines add up to one. We know that the direction cosines are the cosines of the angles subtended by the line with the three coordinate axes, namely the x-axis, y-axis and z-axis. If the angles subtended by these three axes are α, β, and γ, then the direction cosines are cos α, cos β, cos γ respectively. The direction cosines are also represented by l, m and n.
Thus, the direction cosines of a vector $$\vec{A} = a\hat{i}+b\hat{j}+c\hat{k}$$ are given as: $$cos\alpha = l = \frac{a}{\sqrt{(a)^{2}+(b)^{2}+(c)^{2}}}$$ $$cos\beta = m = \frac{b}{\sqrt{(a)^{2}+(b)^{2}+(c)^{2}}}$$ $$cos\gamma = n = \frac{c}{\sqrt{(a)^{2}+(b)^{2}+(c)^{2}}}$$

### Direction Cosines Examples

Example 1: Determine the direction cosines of the line joining the point (-4, 2, 3) with the origin.

Solution: Given that the line joins the origin (0, 0, 0) and the point (-4, 2, 3), the direction ratios are -4, 2, 3. Also, the magnitude of the line = √[(-4)2+(2)2+(3)2] = √(16+4+9) = √29. Therefore, the direction cosines are ((-4/√29), (2/√29), (3/√29)).

Example 2: Find the direction cosines of the vector joining the points A(1, 2, -3) and B(-1, -2, 1), directed from A to B.

Solution: Given that A(1, 2, -3) and B(-1, -2, 1), $$\overrightarrow{AB} = (-1-1)\hat{i}+(-2-2)\hat{j}+(1-(-3))\hat{k}$$ $$\overrightarrow{AB} = -2\hat{i}-4\hat{j}+4\hat{k}$$ Hence, the direction ratios are -2, -4, 4. Magnitude = √[(-2)2+(-4)2+(4)2] = √(4+16+16) = √36 = 6. Thus, the direction cosines are (-2/6, -4/6, 4/6), which is also equal to (-⅓, -⅔, ⅔).

Example 3: Determine the direction cosines of the vector $$1\hat{i}+2\hat{j}+3\hat{k}$$

Solution: Given vector: $$1\hat{i}+2\hat{j}+3\hat{k}$$ Let $$\vec{a}= 1\hat{i}+2\hat{j}+3\hat{k}$$ Thus, the direction ratios are 1, 2, 3. Magnitude of $$\vec{a}$$ = √[(1)2+(2)2+(3)2] = √(1+4+9) = √14. Therefore, the direction cosines are ((1/√14), (2/√14), (3/√14)).

## Frequently Asked Questions on Direction Cosines

### What is meant by direction cosines?

The direction cosines of a vector are defined as the cosines of the angles between the vector and the three positive coordinate axes.

### Are the direction cosines of two parallel lines always the same?

Yes, the direction cosines of two parallel lines are always the same.

### If l, m and n are the direction cosines of a line, then what is the relationship between them?

If l, m and n are the direction cosines of a line, then l² + m² + n² = 1.

### Are the direction cosines of a line unique?

Yes, the direction cosines of a line are unique. The direction ratios, however, are not: any triple of numbers proportional to the direction cosines serves as direction ratios, so there are infinitely many of them.

### What is meant by position vector?

A position vector represents the position or location of a point in the three-dimensional Cartesian system with respect to a reference origin.
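The formulas above are easy to check numerically. Below is a short plain-Python sketch (the function names are ours, for illustration only):

```python
import math

def direction_cosines(a, b, c):
    """Direction cosines (l, m, n) of the vector a*i + b*j + c*k."""
    r = math.sqrt(a*a + b*b + c*c)
    return (a / r, b / r, c / r)

# Example 1 above: the line joining the origin to (-4, 2, 3)
l, m, n = direction_cosines(-4, 2, 3)
print(l, m, n)              # (-4/sqrt(29), 2/sqrt(29), 3/sqrt(29))
print(l*l + m*m + n*n)      # 1.0, verifying l^2 + m^2 + n^2 = 1

def angle_between(dr1, dr2):
    """Angle (radians) between two lines given by direction ratios dr1, dr2."""
    (a1, b1, c1), (a2, b2, c2) = dr1, dr2
    num = abs(a1*a2 + b1*b2 + c1*c2)
    den = math.sqrt(a1**2 + b1**2 + c1**2) * math.sqrt(a2**2 + b2**2 + c2**2)
    return math.acos(num / den)

print(angle_between((1, 0, 0), (0, 1, 0)))   # pi/2, since the lines are perpendicular
```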
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9150648713111877, "perplexity": 428.8106028608571}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711286.17/warc/CC-MAIN-20221208082315-20221208112315-00823.warc.gz"}
https://www.nature.com/articles/s41467-017-00343-8
# Acoustically actuated ultra-compact NEMS magnetoelectric antennas

Article | Open. Nature Communications 8, Article number: 296 (2017); doi:10.1038/s41467-017-00343-8

## Abstract

State-of-the-art compact antennas rely on electromagnetic wave resonance, which leads to antenna sizes that are comparable to the electromagnetic wavelength. As a result, antennas typically have a size greater than one-tenth of the wavelength, and further miniaturization of antennas has been an open challenge for decades. Here we report on acoustically actuated nanomechanical magnetoelectric (ME) antennas with a suspended ferromagnetic/piezoelectric thin-film heterostructure. These ME antennas receive and transmit electromagnetic waves through the ME effect at their acoustic resonance frequencies. The bulk acoustic waves in ME antennas stimulate magnetization oscillations of the ferromagnetic thin film, which results in the radiation of electromagnetic waves. Vice versa, these antennas sense the magnetic fields of electromagnetic waves, giving a piezoelectric voltage output. The ME antennas (with sizes as small as one-thousandth of a wavelength) demonstrate 1–2 orders of magnitude miniaturization over state-of-the-art compact antennas without performance degradation. These ME antennas have potential implications for portable wireless communication systems.

## Introduction

Antennas, which interconvert between alternating electric currents and electromagnetic (EM) wave radiation, act as an omnipresent critical component in smart phones, tablets, radio frequency identification systems, radars, etc. One of the key challenges for state-of-the-art antennas lies in their size miniaturization1,2,3,4,5,6. Compact antennas rely on an EM wave resonance, and therefore typically have a size of more than λ0/10, that is, one-tenth of the EM wavelength λ0. The limitation on antenna size miniaturization has made it very challenging to achieve compact antennas and antenna arrays, particularly at very-high frequency (VHF, 30–300 MHz) and ultra-high frequency (UHF, 0.3–3 GHz) with large λ0, thus putting severe constraints on wireless communication systems and radars on mobile platforms4. New antenna concepts with novel EM wave radiation and reception mechanisms need to be investigated for the reduction of antenna size.

On the other hand, strong strain-mediated magnetoelectric (ME) coupling in magnetic/piezoelectric heterostructures has recently been demonstrated, which enables efficient energy transfer between magnetism and electricity7,8,9,10,11,12,13,14,15,16,17. The strong ME coupling, if realized dynamically at radio frequencies (RF) in ME heterostructures, could enable voltage-induced RF magnetic currents that radiate EM waves, and thus acoustically actuated nanoscale ME antennas with a new receiving and transmitting mechanism for EM waves. This concept has recently been proposed theoretically18, 19. However, despite the moderate interaction between surface acoustic waves and magnetization20,21,22, a strong ME effect has only been demonstrated at kHz frequencies, or in a static or quasi-static process23, 24. Here one question naturally arises: is it possible to realize efficient energy coupling between bulk acoustic waves and EM waves in ME heterostructures at RF frequencies through ME coupling? Based on our results in this work, we can answer this question affirmatively.
Here we demonstrate nanoelectromechanical system (NEMS) antennas operating at VHF and UHF frequencies based on the strong ME coupling between EM and bulk acoustic waves in resonant ME (ferromagnetic/piezoelectric) heterostructures. These ME antennas realize acoustic transmitting and receiving mechanisms in nanoplate resonators (NPR) and thin-film bulk acoustic wave resonators (FBAR). During the receiving process, the magnetic layer of the ME antenna senses the H-component of EM waves, which induces an oscillating strain and a piezoelectric voltage output at the electromechanical resonance frequency. Conversely, during the transmitting process, the ME antenna produces an oscillating mechanical strain under an alternating voltage input, which mechanically excites the magnetic layer and induces a magnetization oscillation, or a magnetic current, that radiates EM waves. These ME antennas therefore operate at their acoustic resonance instead of an EM resonance. Since the acoustic wavelength is around five orders of magnitude shorter than the EM wavelength at the same frequency, these ME antennas are expected to have sizes comparable to the acoustic wavelength, thus leading to an orders-of-magnitude reduced antenna size compared to state-of-the-art compact antennas.

## Large ME coupling coefficient in the NPR device

The resonant bodies of the NEMS ME resonators were a 500 nm AlN thin film supporting a [Fe7Ga2B1 (45 nm)/Al2O3 (5 nm)] × 10 (hereafter termed FeGaB) thin-film ME heterostructure, fully suspended on a Si substrate, where AlN and FeGaB (see Supplementary Note 1 for magnetic properties characterization) serve as the piezoelectric and magnetostrictive elements of the ME heterostructure, respectively. The use of a NEMS resonator with an ultra-thin (thickness T = 500 nm) AlN film enables efficient on-chip acoustic transduction with ultra-low energy dissipation25, 26. The demonstrated ME antennas span a wide range of frequencies from 60 MHz to 2.5 GHz, realized by geometric design of resonating plates that exhibit different modes of vibration (Supplementary Note 6).

The strong ME coupling at VHF frequencies was demonstrated through a ME NPR with an in-plane contour mode of vibration (by means of the d31 piezoelectric coefficient)27. In particular, a perpendicular electric field on the piezoelectric AlN layer induces actuation in the plane of the device. Figure 1a presents the schematic of the measurements and the structure of the ME NPR, which has a rectangular resonating plate consisting of a single-finger bottom Pt electrode and a thin-film FeGaB/AlN heterostructure. All the NEMS ME resonators in this work were fabricated using CMOS (complementary metal-oxide-semiconductor) compatible microfabrication processes (see Methods and Supplementary Note 2). A scanning electron microscopy (SEM) image of the NPR ME resonator is shown in Fig. 1b. The length (L) and width (W) of the FeGaB/AlN active resonant body are 200 and 50 µm, respectively. The ME nanoplate FeGaB/AlN is fully released from the Si substrate but mechanically supported and electrically contacted by the two AlN/Pt anchors for optimized ME coupling with a minimum substrate clamping effect. To study the electromechanical properties of the ME NPR, the electrical admittance curve was characterized by using a network analyzer, as shown in Fig. 1c.
The admittance spectrum at resonance can be fitted to the Butterworth–van Dyke model27, which yields an electromechanical resonance frequency (fr,NPR) of 60.68 MHz, a high quality factor (Q) of 930 and an electromechanical coupling coefficient (kt2) of 1.35%, indicating a high electromechanical transduction efficiency and low loss (Supplementary Note 3). This fr,NPR corresponds to the contour mode of vibration excited in AlN, which can be analytically expressed as

$$f_{r,\mathrm{NPR}} \propto \frac{1}{2W_0}\sqrt{\frac{E}{\rho}},$$

where W0 is the width of the resonator pitch, and E and ρ are the equivalent Young's modulus and equivalent density of the FeGaB/AlN resonator, respectively28, 29. Finite element analysis (FEA) of the admittance curve of a device with the same geometry is shown in Fig. 1d, in good agreement with Fig. 1c. At the resonance frequency of 60.56 MHz, the in-plane displacement distribution shown in the Fig. 1d inset indicates a contour extensional mode of vibration, in which the bulk of the device structure expands in its plane. It is also notable that the Q-factor of this ME resonator is much higher than that of the conventional low-frequency ME heterostructures in previous reports10, 30,31,32,33.

Under the excitation of an RF magnetic field HRF with an amplitude of about 60 nT (provided by an RF coil along the length direction of the resonator, see Supplementary Note 4), the induced ME voltage output of the NPR device was measured by using an ultra-high-frequency lock-in amplifier (UHFLI), as shown in Fig. 1f. A clear resonance peak appears in the ME voltage spectrum at 60.7 MHz with a peak amplitude (U) of 180 μV. The amplitude of the peak is very sensitive to the excitation frequency, exhibiting a Q-factor similar to that of the admittance curve in Fig. 1c. The experimentally measured ME voltage spectrum (Fig. 1f) agrees well with the FEA result, which gives a peak amplitude of 196 μV as shown in Fig. 1g (Methods). The Fig. 1g inset shows the simulated in-plane displacement of the ME resonator excited by Hrf at its resonance frequency, indicating a contour mode of vibration. The same mode of vibration being excited by both magnetic and electric fields demonstrates that the strain-mediated ME coupling is dominant. A high ME coupling coefficient of αME = ∂U/(∂Hrf∙T) = 6 kV Oe−1 cm−1 is obtained at fr,NPR23, 34. It is notable that this ME coupling coefficient is obtained without any DC bias magnetic field, and the value is comparable to recently reported values obtained with an optimum bias magnetic field at much lower electromechanical resonance frequencies in the kHz range35.

As a comparison, a non-magnetic single-finger NPR was also tested as a control sample to confirm that the strain-mediated ME coupling is responsible for the observed voltage output under HRF excitation. For the non-magnetic resonator, a Cu thin film of 500 nm was deposited on the AlN plate (Fig. 1e inset) to replace the ferromagnetic FeGaB layer as the top electrode. As shown in Fig. 1e, the Cu/AlN based NPR exhibits admittance behavior (both fr and Q) similar to that of the ME NPR (Fig. 1c). Figure 1h shows the HRF-induced voltage spectrum of the resonator with a Cu/AlN heterostructure. With the same HRF excitation as the ME resonator (Hrf = 60 nT), the induced voltage of the Cu/AlN resonator at its electromechanical resonance frequency of 64.7 MHz is very low, about two orders of magnitude smaller than the induced voltage in the FeGaB/AlN ME NPR (Fig. 1c).
Note that the induced voltage spectrum of the Cu/AlN NPR is highly antisymmetric near its resonance frequency, totally different from the symmetric ME voltage spectrum (Fig. 1f) but similar to its admittance spectrum (Fig. 1e). This antisymmetric line shape can be attributed to a weak inductive coupling between the device ground loop and the EM wave, which could also exist in the FeGaB/AlN NPR device. However, the symmetric ME voltage spectrum in the FeGaB/AlN NPR indicates that the inductive coupling has an extremely low efficiency compared to the ME coupling. Thus, the strong resonance peak induced by HRF in the FeGaB/AlN NPR device results from the ME coupling, in which the high-permeability FeGaB films36, 37 couple very effectively to the RF excitation magnetic field. A ME NPR with multi-finger interdigitated electrodes, which we have demonstrated recently17, was found to have a negligibly small ME voltage in the same measurement setup, over three orders of magnitude smaller at the electromechanical resonance compared to the single-plate ME NPR. This phenomenon has been confirmed through COMSOL simulations (Supplementary Note 5). Single-finger ME resonators produce a high ME output voltage, as the uniform RF excitation magnetic field couples strongly to the single nanoplate. The negligible ME voltage output in multi-finger ME resonators is due to the fact that the uniform HRF does not couple efficiently to the multi-finger NPRs, which produce nonuniform RF strain fields and nonuniform magnetization fields.

We further gain insight into the magnetization dependence of the single-finger ME NPR shown in Fig. 1 by examining its ME coupling strength at different bias magnetic fields. The induced ME voltage spectrum was measured with DC bias magnetic fields swept from −5 to 5 mT along the resonator length direction (as shown in the inset of Fig. 2b). Figure 2a shows αME as a function of the DC bias magnetic field HDC and the frequency of HRF. At zero bias magnetic field, μ0HDC = 0, αME is maximized at the fr,NPR of 60.7 MHz, in good agreement with Fig. 1f. At μ0HDC = ±5 mT, fr,NPR is shifted to 60.72 MHz, as shown by the dashed curve in Fig. 2a. This can be attributed to the ΔE effect17: that is, the bias magnetic field modifies the Young's modulus of FeGaB and thus shifts the fr,NPR of the resonator17, 31, 38. Moreover, a hysteretic behavior of αME (at fr,NPR) was observed by sweeping the DC magnetic field back and forth, with a maximum value of 6 kV cm−1 Oe−1 at ±0.5 mT (Fig. 2b). This is consistent with the strain-mediated ME coupling mechanism and the magnetic hysteresis of the FeGaB/AlN nanoplate (Supplementary Note 1). The magnetic field dependence of αME in the ME NPR provides further direct evidence that the observed interaction between the EM wave and the acoustic resonance results from the ME coupling. It is important to note that the strong αME at zero bias magnetic field directly enables robust self-biased ME sensors. This is drastically different from conventional ME heterostructures with electromechanical resonance frequencies in the kilohertz range, which show near-zero ME coupling at zero bias magnetic field32, 39,40,41. This difference can be attributed to the edge curling wall42, 43 under the self-biased condition for the magnetic/non-magnetic multilayers (FeGaB/Al2O3) used as the magnetostrictive layer in the ME antennas.
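As a quick consistency check on the numbers above (an illustrative back-of-the-envelope calculation added here, not part of the original analysis), the peak ME voltage, the excitation field, and the AlN film thickness reproduce the quoted coupling coefficient:

```python
U = 180e-6            # peak induced ME voltage, V
H_rf = 60e-9 * 1e4    # 60 nT excitation field expressed in Oe (1 Oe corresponds to 1e-4 T)
T = 500e-7            # 500 nm AlN thickness expressed in cm

alpha_ME = U / (H_rf * T)   # in V Oe^-1 cm^-1
print(alpha_ME)             # 6000 V Oe^-1 cm^-1, i.e. the quoted 6 kV Oe^-1 cm^-1
```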
The detection limit of the NPR ME antennas for sensing weak HRF under zero bias magnetic field was also characterized, as shown in Fig. 2c, where the induced voltage is plotted as a function of HRF at two different excitation frequencies. At the resonance frequency of 60.7 MHz (red), the linear response extends down to 40 pT, where the induced voltage reaches the detection floor of 0.1 µV, indicating a detection limit of 40 pT for the NPR ME sensor. At the off-resonance frequency of 1 MHz (blue), the induced voltage is randomly distributed around 0.1 µV, showing no sensitivity to a 1 MHz magnetic field excitation with amplitudes of 10−11–10−7 T. It is notable that ME NPR antenna arrays with multiple frequency bands from MHz to GHz can be integrated on one wafer by designing ME NPRs with different lateral dimensions (or W), since fr,NPR is inversely proportional to W27. This allows broadband ME NPR antenna arrays on the same wafer, which compensates for the narrowband operation frequencies of individual ME antennas. The resonance frequencies as well as the Q-factors of various NPR (including FBAR) devices fabricated on one wafer are summarized in Supplementary Note 6 as a function of W.

## FBAR ME antennas

We further designed, fabricated, and tested ME antennas operating at GHz frequencies based on the thickness resonance mode of FeGaB/AlN thin-film FBAR devices. The antenna radiation properties of the ME FBAR based antennas were tested in a far-field configuration in the GHz range in an anechoic chamber. As shown in Fig. 3a, b, the active element of the ME FBAR antenna is a suspended FeGaB/AlN ME circular disk with a diameter of 200 µm. This FBAR ME antenna exhibits a thickness extensional mode of vibration, as shown in the schematic representation of Fig. 3a. A calibrated linearly polarized standard horn antenna and the ME FBAR based antenna were connected to port 1 and port 2 of a network analyzer, respectively, for antenna gain measurements (see Methods). Different from the ME NPR, the electromechanical resonance frequency of the ME FBAR (fr,FBAR) is defined by the thickness of the circular resonating disk and can be expressed as

$$f_{r,\mathrm{FBAR}} \propto \frac{1}{2T}\sqrt{\frac{E}{\rho}}.$$

The fr,FBAR was found to be 2.53 GHz by measuring the reflection coefficient (S22) of the FBAR device, as shown in Fig. 3c, which also exhibits a peak return loss of 10.26 dB and a Q-factor of 632. The Fig. 3c inset shows the simulated out-of-plane displacement of the FBAR, indicating a thickness extensional mode of vibration (see Supplementary Note 7 for the simulated S22). The receiving and transmitting behaviors of the ME antenna correspond to the S21 and S12 parameters, respectively, as shown in Fig. 3d; clearly, the S12 and S21 curves nearly overlap with each other. These S-parameters (S21, S12 and S22) were obtained at zero bias magnetic field for the ME FBAR. The antenna gain of the ME FBAR was measured to be −18 dBi at fr,FBAR through the gain comparison method (Methods). It is not trivial to simulate the ME antenna radiation in the framework of a three-dimensional (3D) device, while a 1D model may not capture the real physics, which involves many boundary conditions and anisotropic material parameters. For example, the magnetic FeGaB layer in the ME antenna shows a highly anisotropic Young's modulus with a ΔE effect of 160 GPa along the in-plane magnetic hard-axis direction, which is very hard to incorporate into any existing model.
A non-magnetic control device with 1000 nm Al/500 nm AlN was also tested with the same experimental setup in order to rule out any artificial EM coupling to the ground loop of the devices. In the non-magnetic control device, 1000 nm of Al was used to replace the 500-nm-thick FeGaB multilayer in order to achieve a device resonance frequency near 2.5 GHz. The loss of ME antennas is dominated by the mechanical resistance Rm, related to the different mechanical damping mechanisms of the magnetic and piezoelectric phases, which is much larger than the radiation resistance Rr. The impedance matching is therefore dominated by Rm, not Rr, and is consequently no longer directly related to the radiation efficiency of ME antennas, which is different from conventional antennas. As shown in Fig. 3e, the Al/AlN control device exhibits electromechanical properties similar to the FeGaB/AlN FBAR, with similar S22 but better impedance matching, and an electromechanical resonance frequency of 2.50 GHz. However, no evident S21 or S12 resonance peak can be observed in the horn antenna measurements in Fig. 3f, except a very weak peak at 2.50 GHz with an amplitude just above the noise level, similar to the Cu/AlN NPR control sample shown in Fig. 1h. This suggests that the ME coupling effect dominates the S21 and S12 response of the ME FBAR antenna.

The radiation behavior of the ME FBAR antenna was also tested by rotating the linearly polarized standard antenna, as shown in Fig. 4. The standard antenna can be rotated along one of the three major axes of the ME antenna: the out-of-plane direction (Fig. 4a, b), the in-plane direction perpendicular to the ME antenna anchor (Fig. 4c, d) and the in-plane direction along the ME antenna anchor (Fig. 4e, f). In all the schematics of Fig. 4, the sinusoidal wave along the 0° (or 180°) direction denotes the propagating H-field component of the incoming EM wave. All three polar gain charts in Fig. 4a, c, e show a similar sideways figure-eight shape due to the magnetic anisotropy of the FeGaB/Al2O3 multilayer in the circular resonating disk of the ME FBAR. As shown in Fig. 4a, the ME FBAR antenna has the highest gain when Hrf is perpendicular to the anchor direction of the antenna, and the lowest gain when Hrf is parallel to the anchor direction. This is because the in-plane magnetic anisotropy of the FeGaB in the circular disk of the FBAR is along the width direction of the ME antenna, and the highest permeability, and therefore the strongest coupling between Hrf and the ME antenna, is achieved along the 0° or 180° direction in Fig. 4a. The other two rotation test configurations in Fig. 4c, e show similar behavior, in which the antenna gain reaches its maximum at 0° (or 180°). This is related to the shape anisotropy of the thin ferromagnetic layer. All the rotational antenna gain measurements at different configurations demonstrate that the high ME antenna gain originates from the strong magnetic coupling between the magnetic field component of the EM wave and the FeGaB of the FeGaB/AlN heterostructure in the ME FBAR antennas.

## Discussion

The mechanism behind the ME antenna operation and miniaturization is the ME effect at the acoustic (electromechanical) resonance. Since the acoustic wavelength is much shorter than the EM wavelength at the same frequency, these ME antennas are much smaller than state-of-the-art compact antennas.
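The size argument can be illustrated with a quick order-of-magnitude estimate (an illustrative sketch added here; the acoustic velocity is a representative value we assume for an AlN-based resonator, not a number from this work):

```python
c = 3.0e8       # speed of light in vacuum, m/s
v_ac = 8.0e3    # representative bulk acoustic velocity, m/s (assumed)
f = 2.53e9      # FBAR resonance frequency, Hz

lambda_em = c / f       # EM wavelength, ~0.119 m
lambda_ac = v_ac / f    # acoustic wavelength, ~3.2 um
print(lambda_em / lambda_ac)   # ~4e4: roughly four to five orders of magnitude shorter
print(lambda_em / 200e-6)      # ~593: the quoted lambda_0/593 for the 200 um disk
```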
Size miniaturization of ME FBAR antennas is not due to a high permeability or high permittivity of the ME antennas, which is different from conventional magnetodielectric antenna approaches. The loss mechanism of ME antennas is also quite different from that of conventional antennas, as the mechanical resistance dominates the loss of ME antennas; and the mechanical resistance is not directly related to the loss tangent of the piezomagnetic or piezoelectric phases. The active area of the ME FBAR antenna, a resonating ME circular disk as discussed above, has a diameter of 200 µm, or λ0/593, which is 1–2 orders of magnitude smaller than state-of-the-art compact antennas with sizes over λ0/10 (ref. 1). As a comparison, a simulated small loop antenna with the same size as the ground loop of the FBAR ME antenna shows a resonance frequency fr,loop of 34 GHz (see Supplementary Note 8) and a gain of −68.4 dBi at 2.53 GHz, due mainly to the poor impedance match, which is 50 dB lower than that of the same-size FBAR ME antenna. Clearly, these miniaturized ME antennas have a drastically enhanced antenna gain at small size owing to the acoustically actuated ME receiving/transmitting mechanism at RF frequencies. We note that the demonstrated ME antennas are purely passive devices; no impedance matching circuit or external power source was used during the measurements. Their maximum achievable bandwidth is within the Chu–Harrington limit (Methods)44.

In conclusion, we have demonstrated ME antennas based on NPR and FBAR structures with an acoustically actuated receiving and transmitting mechanism, which are one to two orders of magnitude smaller than state-of-the-art compact antennas. These ME antennas are designed with different modes of vibration to realize both VHF (60 MHz) and UHF (2.525 GHz) operation frequencies. Moreover, both NPR and FBAR based antennas can be fabricated on the same Si wafer with the same microfabrication process, which allows the integration of broadband ME antenna arrays from tens of MHz (NPRs with large W) to tens of GHz (FBARs with thinner AlN) on one chip through geometric design of the device resonant bodies (Supplementary Note 6). A bank of multi-frequency MEMS resonators can be connected to a CMOS oscillator circuit for the realization of reconfigurable antennas45. These ultra-compact ME antennas are expected to have great impact on future antennas and communication systems for the internet of things, wearable antennas, bio-implantable and bio-injectable antennas, smart phones, wireless communication systems, etc.

## Methods

### Device fabrication

High-resistivity silicon (Si) wafers (>10,000 Ohm cm) were used as substrates for all ME antenna devices. A 50-nm-thick Pt film was sputter-deposited and patterned by lift-off on top of the Si substrate to define the bottom electrodes. Then, the 500 nm AlN film was sputter-deposited, and via holes were formed by H3PO4 etching to access the bottom electrodes. After that, the AlN film was etched by inductively coupled plasma etching in Cl2-based chemistry to define the shape of the resonant nanoplate. Next, a 100-nm-thick gold (Au) film was evaporated and patterned to form the top ground. Finally, the 500-nm-thick FeGaB/Al2O3 multilayer was deposited by magnetron sputtering and patterned by a lift-off process. A 100 Oe in situ magnetic field bias was applied during the magnetron deposition along the width direction of the device to pre-orient the magnetic domains.
Then, the structure was released by XeF2 isotropic etching of the silicon substrate. The details of the fabrication processes and the FBAR antenna layout can be found in Supplementary Note 2.

### Magnetic multilayer deposition

The magnetic multilayer with the structure [FeGaB (45 nm)/Al2O3 (5 nm)] × 10 was sputter-deposited on the AlN thin film with a 5 nm Ta seed layer in a 3 mTorr Ar atmosphere with a background pressure of less than 1 × 10−7 Torr. The Ta seed layer promoted FeGaB thin-film growth exhibiting a narrow resonance linewidth and a close-to-bulk magnetic moment. The FeGaB layer was co-sputtered from FeGa (DC sputtering) and B (RF sputtering) targets. The Al2O3 layer was deposited by RF sputtering using an Al2O3 target. The deposition rates were calibrated with X-ray reflectivity.

### Admittance measurement

The admittance curves of the resonators were characterized by using a network analyzer (Agilent PNA 8350b). A short-open-load calibration was performed prior to the device measurements. The transmission parameter S11 was acquired and converted to admittance amplitude. The available power at the network analyzer port was set to −12 dBm, and the IF bandwidth was 50 Hz. The devices were tested in an RF probe station with a probe in ground-signal-ground configuration.

### ME voltage measurement

The induced ME voltage of the NPR was measured by using an ultra-high-frequency lock-in amplifier (UHFLI). The reference current signal was sent to an RF coil to generate an RF magnetic field Hrf, whose field strength was simulated by the finite element method. The RF coil was placed 14 mm away from the device under test (see Supplementary Note 4 for the spatial distribution of HRF). The induced ME voltage spectrum was obtained by sweeping the reference frequency (the frequency of Hrf). The ME voltage spectra were also measured under various DC magnetic fields.

### Finite element analysis of electromechanical and magnetoelectric properties

To analyze the response of the ME structures, the coupling between the magnetic, elastic and electric fields in the magnetostrictive/piezoelectric heterostructure is taken into account. Simulations with the FEM software COMSOL Multiphysics V5.1 were carried out to investigate the frequency response. The simulation modules include the magnetic fields, solid mechanics and electrostatics modules. The ME composite model comprises magnetostrictive, piezoelectric and air subdomains. The simulations were performed in the frequency domain in a 3D geometry. The details of the analysis can be found in Supplementary Note 9. The linear mechanical, electrical and magnetic parameters of the materials used in this work can be found in Supplementary Note 10. For the NPR, we excited the device with an RF magnetic field and used a magnetostatic approximation to simulate the induced voltage in COMSOL. For the FBAR, we simulated the resonance mode and displacement instead of the magnetization dynamics.

### Antenna gain calibration and calculation

The antenna gain GFBAR can be calculated by the gain-transfer (gain-comparison) method, which can be expressed as

$$G_{\mathrm{FBAR}} = G_R + 10\log_{10}(P_{\mathrm{FBAR}}/P_R),$$

where GR is the gain of the reference horn antenna, and PFBAR and PR are the radiated powers of the FBAR and the reference horn antenna46. Given $$10\log_{10}(P_{\mathrm{FBAR}}/P_R) = S_{21,\mathrm{FBAR}} - S_{21,R}$$ at the resonance frequency fr,FBAR, we obtain GFBAR = −18 dBi.
The ME FBAR antenna is highly anisotropic, due to the strong magnetic film shape anisotropy with high sensitivity to in-plane magnetic fields, and due to the in-plane uniaxial anisotropy with high sensitivity along the magnetic hard axis of the circular resonating magnetic disk. The directivity D of the ME FBAR antenna can therefore be calculated by integrating the magnetic power density as

$$D = \frac{\int_0^P \int_0^\pi \int_0^\pi \rho \sin\theta \sin\phi \,\mathrm{d}\theta \,\mathrm{d}\phi \,\mathrm{d}\rho}{\int_0^P \rho \,\mathrm{d}\rho} = 6\ \mathrm{dB},$$

where P(ρ, ϕ, θ) is the magnetic power density in spherical coordinates. The ME FBAR antenna efficiency can then be calculated as $$\xi_{\mathrm{rad}} = G_{\mathrm{FBAR}}/D = 0.403\%$$ with the gain GFBAR = −18 dBi at the resonance frequency fr,FBAR, or $$\xi_{\mathrm{rad,corrected}} = 0.448\%$$ with reflection corrected. The FBAR ME antenna also has a fractional bandwidth $$FBW_{\mathrm{FBAR}} = \Delta f/f_0 = BW/f_0 = 0.158\%,$$ with a measured 3 dB bandwidth of Δf = 4 MHz. The minimum Q-factor of a small antenna is given by

$$Q = \frac{1}{(k_0 a)^3} + \frac{1}{k_0 a} = 41037,$$

as dictated by the Chu limit44, where $$k_0 = 2\pi/\lambda_0$$ is the wave number in free space and a is the radius of the smallest imaginary sphere enclosing the entire antenna structure. The maximum fractional bandwidth of the ME antenna allowed by Chu's limit is therefore

$$FBW_{\mathrm{Chu}} \approx \frac{VSWR - 1}{\xi_{\mathrm{rad,corrected}}\, Q \sqrt{VSWR}} = 0.628\%,$$

which is still larger than the measured FBWFBAR = 0.158%. Therefore, the Chu–Harrington limit has not been surpassed by the magnetoelectrically coupled FBAR structure.

We also estimated the radiation power of the FBAR antenna by using a simple magnetic dipole model for a conceptual understanding. The magnetic dipole moment m0 can be expressed as $$m_0 = M_s \pi r^2 T,$$ where Ms is the saturation magnetization, and r and T are the radius and thickness of the magnetic disk. Assuming that a typical input power of −20 dBm (or 0.01 mW) is needed to completely switch all magnetic dipole moments for radiation, we obtain a radiated power of

$$P_d = \frac{\mu_0 \omega^4 m_0^2}{12\pi c^3} = 2.8 \times 10^{-8}\ \mathrm{W}$$

(or 0.28% efficiency), where c is the speed of light in vacuum. This estimate indicates that our experimental results are of the correct order of magnitude.
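The quoted gain, directivity, and bandwidth figures can be cross-checked numerically. The sketch below is ours; the VSWR = 3 value is an assumption, chosen because it reproduces the quoted Chu-limit bandwidth (the text does not state the VSWR used):

```python
import math

G_FBAR_dBi = -18.0    # measured antenna gain
D_dB = 6.0            # calculated directivity

# Radiation efficiency from gain and directivity
xi_rad = 10 ** ((G_FBAR_dBi - D_dB) / 10)
print(f"{xi_rad:.3%}")        # ~0.398%, close to the quoted 0.403%

# Measured fractional bandwidth
f0, bw = 2.53e9, 4e6
print(f"{bw / f0:.3%}")       # ~0.158%

# Chu-limit fractional bandwidth with Q = 41037 and corrected efficiency
Q, xi_corr, vswr = 41037, 0.00448, 3.0   # vswr = 3 is our assumption
fbw_chu = (vswr - 1) / (xi_corr * Q * math.sqrt(vswr))
print(f"{fbw_chu:.3%}")       # ~0.628%, matching the quoted Chu-limit value
```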
Data availability

The data that support the findings of this study are available from the corresponding author upon request.

References

1. Kramer, B. A., Chen, C.-C., Lee, M. & Volakis, J. L. Fundamental limits and design guidelines for miniaturizing ultra-wideband antennas. IEEE Antennas Propag. Mag. 51, 57–69 (2009).
2. Mosallaei, H. & Sarabandi, K. Antenna miniaturization and bandwidth enhancement using a reactive impedance substrate. IEEE Trans. Antennas Propag. 52, 2403–2414 (2004).
3. Skrivervik, A. K., Zürcher, J. F., Staub, O. & Mosig, J. R. PCS antenna design: the challenge of miniaturization. IEEE Antennas Propag. Mag. 43, 12–27 (2001).
4. Gianvittorio, J. P. & Rahmat-Samii, Y. Fractal antennas: a novel antenna miniaturization technique, and applications. IEEE Antennas Propag. Mag. 44, 20–36 (2002).
5. Ikonen, P. M. T., Rozanov, K. N., Osipov, A. V., Alitalo, P. & Tretyakov, S. A. Magnetodielectric substrates in antenna miniaturization: potential and limitations. IEEE Trans. Antennas Propag. 54, 3391–3399 (2006).
6. Mosallaei, H. & Sarabandi, K. Magneto-dielectrics in electromagnetics: concept and applications. IEEE Trans. Antennas Propag. 52, 1558–1567 (2004).
7. Bibes, M. & Barthélémy, A. Towards a magnetoelectric memory. Nat. Mater. 7, 425–426 (2008).
8. Brataas, A., Kent, A. D. & Ohno, H. Current-induced torques in magnetic materials. Nat. Mater. 11, 372–381 (2012).
9. Hu, J.-M., Li, Z., Chen, L.-Q. & Nan, C.-W. High-density magnetoresistive random access memory operating at ultralow voltage at room temperature. Nat. Commun. 2, 553 (2011).
10. Dong, S., Zhai, J., Xing, Z., Li, J. F. & Viehland, D. Extremely low frequency response of magnetoelectric multilayer composites. Appl. Phys. Lett. 86, 102901 (2005).
11. Lage, E. et al. Exchange biasing of magnetoelectric composites. Nat. Mater. 11, 523–529 (2012).
12. Hui, Y. et al. High resolution magnetometer based on a high frequency magnetoelectric MEMS-CMOS oscillator. J. Microelectromech. Syst. 24, 134–143 (2014).
13. Srinivasan, G. & Fetisov, Y. K. Ferrite-piezoelectric layered structures: microwave magnetoelectric effects and electric field tunable devices. Ferroelectrics 342, 65–71 (2006).
14. Fetisov, Y. K. & Srinivasan, G. Electric field tuning characteristics of a ferrite-piezoelectric microwave resonator. Appl. Phys. Lett. 88, 143503 (2006).
15. Das, J., Song, Y. Y., Mo, N., Krivosik, P. & Patton, C. E. Electric-field-tunable low loss multiferroic ferrimagnetic-ferroelectric heterostructures. Adv. Mater. 21, 2045–2049 (2009).
16. Sun, N. X. & Srinivasan, G. Voltage control of magnetism in multiferroic heterostructures and devices. Spin 2, 1240004 (2012).
17. Nan, T., Hui, Y., Rinaldi, M. & Sun, N. X. Self-biased 215 MHz magnetoelectric NEMS resonator for ultra-sensitive DC magnetic field detection. Sci. Rep. 3, 1985 (2013).
18. Yao, Z., Wang, Y. E., Keller, S. & Carman, G. P. Bulk acoustic wave-mediated multiferroic antennas: architecture and performance bound. IEEE Trans. Antennas Propag. 63, 3335–3344 (2015).
19. Domann, J. P. & Carman, G. P. Strain powered antennas. J. Appl. Phys. 121, 044905 (2017).
20. Weiler, M. et al. Elastically driven ferromagnetic resonance in nickel thin films. Phys. Rev. Lett. 106, 117601 (2011).
21. Gowtham, P. G., Moriyama, T., Ralph, D. C. & Buhrman, R. A. Traveling surface spin-wave resonance spectroscopy using surface acoustic waves. J. Appl. Phys. 118, 233910 (2015).
22. Labanowski, D., Jung, A. & Salahuddin, S. Power absorption in acoustically driven ferromagnetic resonance. Appl. Phys. Lett. 108, 22905 (2016).
23. Nan, C. W., Bichurin, M. I., Dong, S., Viehland, D. & Srinivasan, G. Multiferroic magnetoelectric composites: historical perspective, status, and future directions. J. Appl. Phys. 103, 031101 (2008).
24. Hu, J.-M., Nan, T., Sun, N. X. & Chen, L.-Q. Multiferroic magnetoelectric nanostructures for novel device applications. MRS Bull. 40, 728–735 (2015).
25. Hui, Y., Gomez-Diaz, J. S., Qian, Z., Alù, A. & Rinaldi, M. Plasmonic piezoelectric nanomechanical resonator for spectrally selective infrared sensing. Nat. Commun. 7, 11249 (2016).
26. Qian, Z., Liu, F., Hui, Y., Kar, S. & Rinaldi, M. Graphene as a massless electrode for ultrahigh-frequency piezoelectric nanoelectromechanical systems. Nano Lett. 15, 4599–4604 (2015).
27. Piazza, G., Stephanou, P. J. & Pisano, A. P. Piezoelectric aluminum nitride vibrating contour-mode MEMS resonators. J. Microelectromech. Syst. 15, 1406–1418 (2006).
28. Rinaldi, M., Zuniga, C., Zuo, C. & Piazza, G. Super-high-frequency two-port AlN contour-mode resonators for RF applications. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57, 38–45 (2010).
29. Zuniga, C., Rinaldi, M., Khamis, S. M., Johnson, A. T. & Piazza, G. Nanoenabled microelectromechanical sensor for volatile organic chemical detection. Appl. Phys. Lett. 94, 223122 (2009).
30. Srinivasan, G. et al. Resonant magnetoelectric coupling in trilayers of ferromagnetic alloys and piezoelectric lead zirconate titanate: the influence of bias magnetic field. Phys. Rev. B 71, 184423 (2005).
31. Greve, H. et al. Low damping resonant magnetoelectric sensors. Appl. Phys. Lett. 97, 152503 (2010).
32. Greve, H., Woltermann, E., Quenzer, H.-J., Wagner, B. & Quandt, E. Giant magnetoelectric coefficients in (Fe90Co10)78Si12B10-AlN thin film composites. Appl. Phys. Lett. 96, 182501 (2010).
33. Jahns, R. et al. Giant magnetoelectric effect in thin-film composites. J. Am. Ceram. Soc. 96, 1673–1681 (2013).
34. Nan, C. W. Magnetoelectric effect in composites of piezoelectric and piezomagnetic phases. Phys. Rev. B 50, 6082 (1994).
35. Marauska, S. et al. MEMS magnetic field sensor based on magnetoelectric composites. J. Micromech. Microeng. 22, 65024 (2012).
36. Lou, J. et al. Soft magnetism, magnetostriction, and microwave properties of FeGaB thin films. Appl. Phys. Lett. 91, 182504 (2007).
37. Lou, J., Liu, M., Reed, D., Ren, Y. & Sun, N. X. Giant electric field tuning of magnetism in novel multiferroic FeGaB/lead zinc niobate-lead titanate (PZN-PT) heterostructures. Adv. Mater. 21, 4711–4715 (2009).
38. Ludwig, A. & Quandt, E. Optimization of the ΔE effect in thin films and multilayers by magnetic field annealing. IEEE Trans. Magn. 38, 2829–2831 (2002).
39. Lou, J., Pellegrini, G. N., Liu, M., Mathur, N. D. & Sun, N. X. Equivalence of direct and converse magnetoelectric coefficients in strain-coupled two-phase systems. Appl. Phys. Lett. 100, 102907 (2012).
40. Dong, S., Zhai, J., Bai, F., Li, J. F. & Viehland, D. Push-pull mode magnetostrictive/piezoelectric laminate composite with an enhanced magnetoelectric voltage coefficient. Appl. Phys. Lett. 87, 062502 (2005).
41. Wang, Y. et al. An extremely low equivalent magnetic noise magnetoelectric sensor. Adv. Mater. 23, 4111–4114 (2011).
42. Clow, H. Very low coercive force in nickel-iron films. Nature 194, 1035–1036 (1962).
43. Slonczewski, J. C., Petek, B. & Argyle, B. E. Micromagnetics of laminated permalloy films. IEEE Trans. Magn. 24, 2045–2054 (1988).
44. Chu, L. J. Physical limitations of omni-directional antennas. J. Appl. Phys. 19, 1163–1175 (1948).
45. Rinaldi, M., Zuo, C., Van Der Spiegel, J. & Piazza, G. Reconfigurable CMOS oscillator based on multifrequency AlN contour-mode MEMS resonators. IEEE Trans. Electron Dev. 58, 1281–1286 (2011).
46. Kummer, W. H. & Gillespie, E. S. Antenna measurements. Proc. IEEE 66, 483–507 (1978).

Acknowledgements

We acknowledge J. Hu, M. Liu and Z. Zhou for discussions. T.N. acknowledges L.C. Sun for assistance in graphic design. This work was supported by DARPA through award D15PC00009, the W.M. Keck Foundation, the NSF TANMS ERC Award 1160504, and in part by the AFRL through contract FA8650-14-C-5706. Microfabrication was performed in the George J. Kostas Nanoscale Technology and Manufacturing Research Center.

Author notes

1. Tianxiang Nan and Hwaider Lin contributed equally to this work.

Affiliations
1. W.M. Keck Laboratory for Integrated Ferroics, and Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA: Tianxiang Nan, Hwaider Lin, Yuan Gao, Alexei Matyushov, Guoliang Yu, Huaihao Chen, Neville Sun, Shengjun Wei, Zhiguang Wang, Menghui Li, Xinjun Wang, Amine Belkessam, Rongdi Guo, Brian Chen, James Zhou, Zhenyun Qian, Yu Hui, Matteo Rinaldi & Nian Xiang Sun
4. Materials and Manufacturing Directorate, Air Force Research Laboratory, Wright-Patterson Air Force Base, Dayton, OH 45433, USA: Michael E. McConney, Brandon M. Howe, Zhongqiang Hu, John G. Jones & Gail J. Brown

Contributions

T.N. and H.L. initiated the original idea and led all the device modeling, design and measurements under the supervision of N.X.S. Z.Q., Y.H., Y.G. and H.C. fabricated the devices under the supervision of M.R. G.Y., S.W., M.E.M., B.M.H., Z.H., J.G.J. and G.J.B. assisted with the simulations. Z.W. and A.M. annealed the samples. A.B. assisted with the SEM measurement. N.S., M.L., X.W., R.G., B.C. and J.Z. helped with the test setup. T.N. and H.L. analyzed the data and prepared the manuscript. All authors discussed the results.

Competing interests

N.S. and Northeastern University (NU) have research-related financial interests in Winchester Technologies, LLC. The remaining authors declare no competing financial interests.

Corresponding author

Correspondence to Nian Xiang Sun.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 15, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8466392159461975, "perplexity": 4460.794875155603}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687642.30/warc/CC-MAIN-20170921044627-20170921064627-00032.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-11-rational-expressions-and-functions-11-5-solving-rational-equations-practice-and-problem-solving-exercises-page-695/9
## Algebra 1: Common Core (15th Edition)

$p=3$

Given: $\frac{5p+2}{p} = \frac{17}{p}$

This becomes $5p+2=17$ (after multiplying both sides of the equation by the least common denominator, $p$).

Thus $5p=17-2=15$, so $p=15\div5=3$.

Putting $p=3$ into the initial equation: Left Hand Side, $LHS=\frac{5(3)+2}{3}=\frac{17}{3}$; Right Hand Side, $RHS=\frac{17}{3}$. Hence, $LHS=RHS$.
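The same check can be done with exact rational arithmetic; a minimal sketch (not part of the textbook solution):

```python
# Verify that p = 3 satisfies (5p + 2)/p = 17/p, using exact fractions
# so no floating-point rounding is involved.
from fractions import Fraction

p = Fraction(3)
lhs = (5 * p + 2) / p
rhs = Fraction(17) / p
print(lhs, rhs, lhs == rhs)  # 17/3 17/3 True
```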
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9917628169059753, "perplexity": 307.2700106781116}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670535.9/warc/CC-MAIN-20191120083921-20191120111921-00152.warc.gz"}
http://science.sciencemag.org/content/341/6153/1489
Report

# In Situ Observations of Interstellar Plasma with Voyager 1

Science, 27 Sep 2013: Vol. 341, Issue 6153, pp. 1489-1492. DOI: 10.1126/science.1241681

## Finally Out

Last summer, it was not clear if the Voyager 1 spacecraft had finally crossed the heliopause—the boundary between the heliosphere and interstellar space. Gurnett et al. (p. 1489, published online 12 September) present results from the Plasma Wave instrument on Voyager 1 that provide evidence that the spacecraft was in the interstellar plasma during two periods, October to November 2012 and April to May 2013, and very likely in the interstellar plasma continuously since the series of boundary crossings that occurred in July to August 2012.

## Abstract

Launched over 35 years ago, Voyagers 1 and 2 are on an epic journey outward from the Sun to reach the boundary between the solar plasma and the much cooler interstellar medium. The boundary, called the heliopause, is expected to be marked by a large increase in plasma density, from about 0.002 per cubic centimeter (cm⁻³) in the outer heliosphere to about 0.1 cm⁻³ in the interstellar medium. On 9 April 2013, the Voyager 1 plasma wave instrument began detecting locally generated electron plasma oscillations at a frequency of about 2.6 kilohertz. This oscillation frequency corresponds to an electron density of about 0.08 cm⁻³, very close to the value expected in the interstellar medium. These and other observations provide strong evidence that Voyager 1 has crossed the heliopause into the nearby interstellar plasma.
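The quoted density and frequency are consistent with the standard electron plasma frequency relation; a quick check (the physical constants are textbook values, not taken from the paper):

```python
# Electron plasma frequency: f_pe = (1/2pi) * sqrt(n_e * e^2 / (eps0 * m_e)),
# which works out to f_pe[Hz] ~ 8980 * sqrt(n_e[cm^-3]).
import math

e = 1.602e-19      # electron charge (C)
m_e = 9.109e-31    # electron mass (kg)
eps0 = 8.854e-12   # vacuum permittivity (F/m)

n_e = 0.08e6       # 0.08 cm^-3 converted to m^-3
f_pe = math.sqrt(n_e * e**2 / (eps0 * m_e)) / (2 * math.pi)
print(f"f_pe ~ {f_pe/1e3:.2f} kHz")  # ~2.5 kHz, close to the observed 2.6 kHz
```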
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.892916202545166, "perplexity": 1628.595226823749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320736.82/warc/CC-MAIN-20170626115614-20170626135614-00134.warc.gz"}
https://www.vttoth.com/CMS/cosmology-calculator
This is a simple cosmology calculator. It can be used to compute the age of the universe for a given set of parameters, as well as the comoving distance and light travel time for a given redshift. The calculator exposes the following parameters (values are entered or computed interactively):

| Parameter | Symbol | Unit |
|---|---|---|
| Hubble constant | H0 | km/s/Mpc |
| Matter density | Ωm | — |
| Dark energy density | ΩΛ | — |
| Spatial curvature | Ωk | — |
| Age of the universe | t0 | Gyr |
| Redshift | z | — |
| Age at redshift | t | Gyr |
| Light travel time | | Gyr |
| Comoving distance | d | Mpc |

The age of the universe at redshift $z$ is calculated using [1]: $$t(z)=\frac{1}{H_0}\int_0^{1/(1+z)}\frac{dx}{x\sqrt{\Omega_\Lambda+\Omega_kx^{-2}+\Omega_mx^{-3}}}.$$ The comoving distance, in turn, is calculated as $$d(z)=\frac{c}{H_0}\int_0^z\frac{dx}{\sqrt{\Omega_\Lambda+\Omega_k(1+x)^2+\Omega_m(1+x)^3}}.$$ In all these calculations, we assume $\Omega_\gamma=0$, i.e., the density parameter for radiation is assumed to be negligible. Results may differ from those calculated using other cosmology calculators because of different rounding (in numerical integration, in particular) and, well, because this one is brand spanking new and may have bugs! [1] Weinberg, S: Cosmology, Oxford U. Press (2008)
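A minimal numerical sketch of these two integrals (assuming scipy is available; the parameter values are illustrative, not the calculator's defaults):

```python
# Age of the universe and comoving distance via numerical integration.
import math
from scipy.integrate import quad

H0 = 70.0                 # km/s/Mpc
Om, OL = 0.3, 0.7
Ok = 1.0 - Om - OL
c = 299792.458            # km/s

Mpc_km = 3.0857e19        # km per Mpc
Gyr_s = 3.156e16          # seconds per Gyr
hubble_time_Gyr = Mpc_km / H0 / Gyr_s   # 1/H0 expressed in Gyr

def age(z):
    # Integrand rewritten as sqrt(x)/sqrt(Om + Ok*x + OL*x^3), which is
    # algebraically identical but well-behaved at the x = 0 endpoint.
    integrand = lambda x: math.sqrt(x) / math.sqrt(Om + Ok * x + OL * x**3)
    val, _ = quad(integrand, 0.0, 1.0 / (1.0 + z))
    return hubble_time_Gyr * val

def comoving_distance(z):
    integrand = lambda x: 1.0 / math.sqrt(OL + Ok * (1 + x)**2 + Om * (1 + x)**3)
    val, _ = quad(integrand, 0.0, z)
    return (c / H0) * val  # Mpc

print(f"t0   = {age(0):.2f} Gyr")               # ~13.5 Gyr
print(f"d(1) = {comoving_distance(1):.0f} Mpc") # ~3.3 Gpc
```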
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9059102535247803, "perplexity": 2436.862813412462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587799.46/warc/CC-MAIN-20211026042101-20211026072101-00687.warc.gz"}
https://math.washington.edu/events/2014-10-31/christopher-d-hacon-university-utah
# Christopher D. Hacon from The University of Utah

Friday, October 31, 2014 - 2:30pm
SIG 225

## Which Powers Of A Holomorphic Function Are Integrable?

Let $f = f(z_1, \dots, z_n)$ be a holomorphic function defined on an open subset $P \in U \subset \mathbb{C}^n$. The log canonical threshold of $f$ at $P$ is the largest $s \in \mathbb{R}$ such that $|f|^{-2s}$ is locally integrable at $P$. This invariant gives a sophisticated measure of the singularities of the set defined by the zero locus of $f$, which is of importance in a variety of contexts (such as the minimal model program and the existence of Kähler-Einstein metrics in the negatively curved case). In this talk we will discuss recent results on the remarkable structure enjoyed by these invariants.
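As a one-variable sanity check, here is the threshold of a monomial, worked out under the common analytic normalization $\operatorname{lct}_P(f) = \sup\{s > 0 : |f|^{-2s}\ \text{is locally integrable near}\ P\}$ (that normalization is an assumption here; the abstract above does not spell it out):

```latex
% lct of f(z) = z^k at the origin of C, in polar coordinates z = r e^{i\vartheta}:
\[
  \int_{|z|<1} \lvert z^k \rvert^{-2s}\, dA
  \;=\; 2\pi \int_0^1 r^{\,1-2ks}\, dr \;<\; \infty
  \quad\Longleftrightarrow\quad s < \tfrac{1}{k},
\]
% so lct_0(z^k) = 1/k: higher-order vanishing means a worse singularity
% and a smaller threshold.
```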
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8176528215408325, "perplexity": 491.9122930444746}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886706.29/warc/CC-MAIN-20200704201650-20200704231650-00464.warc.gz"}
http://math.stackexchange.com/questions/23399/the-ratio-in-terms-of-sets
# The ratio in terms of sets

The recurrence $a_{n+1}=a_n(n-1/2)$ is related to $\Gamma(n+1/2)$ (not difficult to prove), and it can be represented in a form like $\frac{(2n-1)!!}{2^n}$.

Also, I know that $(2n-1)!!$ is the number of permutations of $2n$ whose cycle type consists of $n$ parts equal to 2; these are the involutions without fixed points (A).

Also, for each $n \in \mathbb{N}$, let $f(n)$ be the number of subsets of the set $[n]=\{1,2,\dots,n\}$. Then $f(n)=2^n$ (B).

I wonder about the meaning (sense) of the ratio A/B. What could be the meaning of $\frac{(2n-1)!!}{2^n}$ in terms of sets?

- Can you clarify your question? Your first question asks about A/B, which is then $\frac{(2n-1)!!}{4^n}$, and your second asks about A, which you seem to have answered already. – Mitch Feb 23 '11 at 23:56
- I changed the tags. – JDH Feb 24 '11 at 5:20

I suspect that you're looking for a combinatorial interpretation of the formula $\frac{\left(2n-1\right)!!}{2^n}$. Since $\gcd\left(2^n,\left(2n-1\right)!!\right) = 1$ for all $n \geq 1$, this formula cannot be interpreted as enumerating the points in some specified finite set. Since $2^n < \left(2n-1\right)!!$ for all $n\geq 3$, this formula cannot be interpreted as a probability of some kind. The formula $\frac{\left(2n-1\right)!!}{n!2^n}$ can be interpreted as giving the probability that if two people each flip a fair coin $n$ times, then each person gets heads the same number of times. I worked this out by unraveling $\frac{A}{B}$ using binomial identities until I got something that looked like the probability of some easily described random event.

- Thank you, it's very helpful. Actually, there is one problem: the real recurrence is $a(n+1)=a(n)(n-1/2)+o(1/n)$. Do you know a way to select an interval where your interpretation works for the slightly changed recurrence? – Mikhail G Feb 24 '11 at 19:41
- I don't know of any ways to modify this game with respect to small changes in your recurrence in such a way that the new game has the same relationship to the new recurrence as the old game had to the original recurrence. I'm not even sure what I meant by "same relationship" in the previous sentence. This doesn't mean that it's not possible to do such a thing though. – Albert Steppi Feb 26 '11 at 23:14

@Albert: The recurrence $M(n+1)/M(n)=n-1/2+o(1/n)$ is related to the Kendall-Mann property http://oeis.org/A181609 Could you look at the answer from Moron please. Recurrence representation(s): $a(n+1)=a(n)(n-1/2)+o(1/n)$ and $a(n+1)=a(n)(n-1/2+o(1/n))$. It seems to me that your game is a good interpretation, am I right?

- I'm glad I could be of help before, but I don't think I know enough to add anything more. It seems that $M\left(n\right)$ will tend to $C\frac{\left(2n-1\right)!!}{2^n}$, for some constant $C$, but I don't know how to connect the above game to the Mahonian distribution. It might have something to do with the Mahonian and Binomial distributions both being asymptotically normal, but I don't know much about this stuff, and you probably shouldn't listen to me. – Albert Steppi Feb 28 '11 at 16:32
- Well, I just know that "Mixing of Diffusing Particles" tends to be Mahonian arxiv.org/abs/1010.2563. But I do not know any answer about the meaning for large $n$ in the process. It looks like you jump from one level to another ($M(n+1)/M(n)$) and this produces some results. Also, it has some relation to Markov chains. All in all, everything is unclear now to me. – Mikhail G Feb 28 '11 at 17:41
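The coin-flip interpretation is easy to verify numerically. This sketch (the helper name double_factorial_odd is mine) checks $(2n-1)!!/(n!\,2^n)$ against $\binom{2n}{n}/4^n$, the probability that two people flipping a fair coin $n$ times get the same number of heads:

```python
# (2n-1)!! / (n! * 2^n) should equal C(2n, n) / 4^n.
from math import comb, factorial

def double_factorial_odd(n):
    """(2n-1)!! = 1 * 3 * 5 * ... * (2n-1)."""
    out = 1
    for k in range(1, 2 * n, 2):
        out *= k
    return out

for n in range(1, 8):
    a = double_factorial_odd(n) / (factorial(n) * 2**n)
    b = comb(2 * n, n) / 4**n
    print(n, a, b, abs(a - b) < 1e-12)
```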
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9505547881126404, "perplexity": 295.42081692675237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775083.81/warc/CC-MAIN-20141217075255-00108-ip-10-231-17-201.ec2.internal.warc.gz"}
https://dimtion.fr/wiki/math/
# Math

## Topology

### Distances

#### Hamming distance

The Hamming distance of two vectors $a, b \in \mathrm{F}$ is the number of positions at which $a$ and $b$ differ. In Python it can be calculated:

def hamming_distance(a, b):
    if len(a) != len(b):
        raise ValueError("a must be the same length as b")
    return sum(x != y for x, y in zip(a, b))

## Information theory and statistics

### Probability mass function

Suppose $X: \Omega \to A \subseteq \mathbb{R}$ is a discrete random variable. We call $P$ the mass function of $X$, defined by:

$$P(x) = P[X = x]$$

The mass function is the analog of the density function $X$ would have if it were a continuous random variable.

### Entropy

Entropy in information theory (also called Shannon entropy) is a generalization of thermodynamic entropy (Boltzmann entropy). The entropy $H$ of a discrete random variable $X$ with possible values $\{ x_{1}, x_{2}, \dots, x_{n} \}$ and probability mass function $P$ is defined by:

$$H(X) = E(-\log(P(X)))$$

More explicitly:

$$H(X) = - \sum_{i=1}^{n} P(x_{i}) \log (P(x_{i}))$$
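In the same spirit as the hamming_distance snippet, a direct Python implementation of this formula (the function name and the input validation are additions, not part of the original page):

```python
# H(X) = -sum(p * log(p)), with the convention that p = 0 terms contribute 0.
import math

def entropy(pmf, base=2):
    """Shannon entropy of a probability mass function given as a
    sequence of probabilities."""
    if abs(sum(pmf) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    return -sum(p * math.log(p, base) for p in pmf if p > 0)

print(entropy([0.5, 0.5]))  # 1.0 bit (fair coin)
print(entropy([0.9, 0.1]))  # ~0.469 bits (biased coin)
```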
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9509965777397156, "perplexity": 1900.1610774548863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00105.warc.gz"}
http://tex.stackexchange.com/questions/33046/typeset-a-solidus-operator-free-variable-substitution
# Typeset a solidus operator (free variable substitution)

I would like to typeset an operator like the one used to specify substitutions of variables with values in computer science. This is a sketch; just keep in mind that all of this should span a single line, not two:

    v              v1, v2
     /              /
      x              x1, x2

Obviously a simple v/x does not solve my problem, since v and x are written on the same exact line, while I would like to have them smaller, with the v part aligned to the top of the / and the x part aligned to the bottom. Can you help me solve my problem please?

- Can you point to an actual printed version of this notation? If simple v/x doesn't work, but the expression has to occupy a single line, what would that look like? – Alan Munn Oct 29 '11 at 15:10
- Could you explain what is wrong with $v/x$? – Seamus Oct 29 '11 at 15:11
- @AlanMunn, Seamus: Werner already answered my question, thank you however. I wanted v and x printed at slightly different heights. – Riccardo Oct 29 '11 at 15:29
- I fixed my question in order to clarify this a little bit. – Riccardo Oct 29 '11 at 15:31
- @Riccardo The symbol is called a solidus – Yiannis Lazarides Oct 29 '11 at 15:32

It seems like you might be after so-called "vulgar fractions". One such package that provides this is xfrac, by means of \sfrac{<num>}{<denom>}. A similar functionality is provided by nicefrac, which supplies an analogous \nicefrac{<num>}{<denom>}. With package options one is also able to choose between "ugly" and "nice" (default) fractions. And finally there's faktor, which produces similar-style fractions using \faktor{<num>}{<denom>} (it requires the amssymb package though). Here are some comparisons:

\documentclass{article}
\usepackage{xfrac}% http://ctan.org/pkg/xfrac
\usepackage{nicefrac}% http://ctan.org/pkg/nicefrac
\usepackage{faktor}% http://ctan.org/pkg/faktor
\usepackage{amssymb}% http://ctan.org/pkg/amssymb
\usepackage{lmodern}% http://ctan.org/pkg/lmodern
\begin{document}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{lll}
\verb!\xfrac! & $\sfrac{\mathbf{v}}{x}$ & $\sfrac{\mathbf{v}_1,\mathbf{v}_2}{x_1,x_2}$ \\
\verb!\nicefrac! & $\nicefrac{\mathbf{v}}{x}$ & $\nicefrac{\mathbf{v}_1,\mathbf{v}_2}{x_1,x_2}$ \\
\verb!\faktor! & $\faktor{\mathbf{v}}{x}$ & $\faktor{\mathbf{v}_1,\mathbf{v}_2}{x_1,x_2}$
\end{tabular}
\end{document}

The choice of lmodern was because of minor font substitutions when it comes to typesetting the denominator & numerator. It is also possible to write a macro that would typeset these respective entries differently, if needed. My choice of \mathbf{...} for the numerator was just a style choice.

- Vulgar fractions... it's really difficult to search for the right packages when you don't know the correct keywords. This is exactly what I was looking for, thank you so much :) – Riccardo Oct 29 '11 at 15:28
- @Werner Have a look at Algebra and coalgebra in computer science; I am not sure that the correct symbol is used this way. – Yiannis Lazarides Oct 29 '11 at 16:04
- @YiannisLazarides: Either way, this may be personal preference. I am not familiar with fundamental computer science symbols and representations. – Werner Oct 29 '11 at 16:06
- @Werner I am also not very familiar, especially with the new computer science; in my time it would have been := which we used generally as the assignment operator. – Yiannis Lazarides Oct 29 '11 at 16:35
- @YiannisLazarides: Well, nowadays we still use := sometimes if we want to define things. The syntax I'm referring to has a different meaning.
Suppose you have a formula P in some calculus, where some free variables appear. You write {v/x}P or [v/x]P (you can see both, depending on the conventions chosen by the authors) to mean the formula resulting from substituting every free occurrence of x in P with v. – Riccardo Oct 29 '11 at 22:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9000382423400879, "perplexity": 925.3207226170313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00139-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.electrical-installation.org/enwiki/Definition_of_reactive_power
# Definition of reactive power

For most electrical loads like motors, the current I lags behind the voltage V by an angle φ. If currents and voltages are perfectly sinusoidal signals, a vector diagram can be used for representation. In this vector diagram, the current vector can be split into two components: one in phase with the voltage vector (component Ia), one in quadrature (lagging by 90 degrees) with the voltage vector (component Ir). See Fig. L1.

Ia is called the active component of the current. Ir is called the reactive component of the current.

Fig. L1 – Current vector diagram

The previous diagram drawn up for currents also applies to powers, by multiplying each current by the common voltage V. See Fig. L2. We thus define:

• Apparent power: S = V × I (kVA)
• Active power: P = V × Ia (kW)
• Reactive power: Q = V × Ir (kvar)

Fig. L2 – Power vector diagram

In this diagram, we can see that:

• Power factor: P/S = cos φ. This formula is applicable for sinusoidal voltage and current, which is why the power factor is then designated as the "displacement power factor".
• Q/S = sin φ
• Q/P = tan φ

A simple formula is obtained, linking apparent, active and reactive power:

$$S^2 = P^2 + Q^2$$

A power factor close to unity means that the apparent power S is minimal. This means that the electrical equipment rating is minimal for the transmission of a given active power P to the load. The reactive power is then small compared with the active power. A low value of power factor indicates the opposite condition.

Useful formulae (for balanced and near-balanced loads on 4-wire systems):

• Active power P (in kW)
  • Single phase (1 phase and neutral): P = V·I·cos φ
  • Single phase (phase to phase): P = U·I·cos φ
  • Three phase (3 wires or 3 wires + neutral): P = √3·U·I·cos φ
• Reactive power Q (in kvar)
  • Single phase (1 phase and neutral): Q = V·I·sin φ
  • Single phase (phase to phase): Q = U·I·sin φ
  • Three phase (3 wires or 3 wires + neutral): Q = √3·U·I·sin φ
• Apparent power S (in kVA)
  • Single phase (1 phase and neutral): S = V·I
  • Single phase (phase to phase): S = U·I
  • Three phase (3 wires or 3 wires + neutral): S = √3·U·I

where: V = voltage between phase and neutral; U = voltage between phases; I = line current; φ = phase angle between vectors V and I.

## An example of power calculations (see Fig. L3)

Fig. L3 – Example in the calculation of active and reactive power

| Type of circuit | Apparent power S (kVA) | Active power P (kW) | Reactive power Q (kvar) |
|---|---|---|---|
| Single-phase (phase and neutral) | S = VI | P = VI cos φ | Q = VI sin φ |
| Single-phase (phase to phase) | S = UI | P = UI cos φ | Q = UI sin φ |
| Example: 5 kW load, cos φ = 0.5 | 10 kVA | 5 kW | 8.7 kvar |
| Three-phase, 3 wires or 3 wires + neutral | S = √3 UI | P = √3 UI cos φ | Q = √3 UI sin φ |
| Example: motor, Pn = 51 kW, cos φ = 0.86, ρ = 0.91 (motor efficiency) | 65 kVA | 56 kW | 33 kvar |

The calculations for the three-phase example above are as follows.

Pn = delivered shaft power = 51 kW.

P = active power consumed:

$$P = \frac{P_n}{\rho} = \frac{51}{0.91} = 56\ \mathrm{kW}$$

S = apparent power:

$$S = \frac{P}{\cos\varphi} = \frac{56}{0.86} = 65\ \mathrm{kVA}$$

So that, on referring to Figure L16 or using a pocket calculator, the value of tan φ corresponding to a cos φ of 0.86 is found to be 0.59.

Q = P tan φ = 56 × 0.59 = 33 kvar (see Figure L4). Alternatively:

$$Q = \sqrt{S^2 - P^2} = \sqrt{65^2 - 56^2} = 33\ \mathrm{kvar}$$

Fig. L4 – Calculation power diagram
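The motor example can be reproduced directly from these formulas; a minimal sketch (values taken from the table above):

```python
# Power triangle for the three-phase motor example:
# P = Pn/rho, S = P/cos(phi), Q = P*tan(phi), with S^2 = P^2 + Q^2 as a check.
import math

Pn = 51.0        # delivered shaft power (kW)
rho = 0.91       # motor efficiency
cos_phi = 0.86

P = Pn / rho                          # active power consumed (kW)
S = P / cos_phi                       # apparent power (kVA)
Q = P * math.tan(math.acos(cos_phi))  # reactive power (kvar)

print(f"P = {P:.0f} kW, S = {S:.0f} kVA, Q = {Q:.0f} kvar")
print(f"check: sqrt(S^2 - P^2) = {math.sqrt(S**2 - P**2):.0f} kvar")
```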
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8880814909934998, "perplexity": 4816.723756812096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710503.24/warc/CC-MAIN-20221128102824-20221128132824-00208.warc.gz"}
http://mathoverflow.net/questions/44804/a-diophantine-problem-related-to-egyptian-fractions/44826
# A Diophantine problem related to egyptian fractions

Consider the following system of equations:

$$\sum_{i=1}^{2n}a_i=0$$

$$\sum_{i=1}^{2n}\frac{1}{a_i}=0$$

where each $a_i$ is an odd integer and the $a_i$ are not necessarily distinct. A solution $(a_1,\dots,a_{2n})$ is trivial if (after some permutation of the coefficients) for each $i$ we have $$a_i=-a_{n+i}.$$ I know that if $n>2$ there exist non-trivial solutions. My questions are:

• What is the minimum number of variables for which there exist non-trivial solutions?
• Can you exhibit a minimal solution, or at least a solution you think could be minimal?

-

There is no solution for n=2. Your equations are $w+x+y+z=0$ and $\frac{1}{w}+\frac{1}{x}+\frac{1}{y}+\frac{1}{z}=0$. Take two variables of the same sign, say $x$ and $y$, and regard them as parameters. Multiplying the second equation by $zw$ and substituting $z+w=-(x+y)$ gives $zw\frac{x+y}{xy}-(x+y)=0$, and since $x+y\neq 0$ this means $zw=xy$ and $z+w=-(x+y)$. Hence $z$ and $w$ are the roots of $t^2+(x+y)t+xy=0$, i.e. $\{z,w\}=\{-x,-y\}$, and this has only the trivial solution. So the minimum n is 3.
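The n = 2 argument can also be confirmed by brute force over a small window (the bound 25 is an arbitrary choice for this sketch):

```python
# Exhaustively check 4-tuples of odd integers in [-25, 25] for solutions
# that are not of the trivial (a, -a)-paired form.
from fractions import Fraction
from itertools import combinations_with_replacement
from collections import Counter

odds = list(range(-25, 26, 2))  # odd integers in [-25, 25]
found = []
for tup in combinations_with_replacement(odds, 4):
    if sum(tup) != 0:
        continue
    if sum(Fraction(1, a) for a in tup) != 0:
        continue
    if Counter(tup) == Counter(-a for a in tup):
        continue  # trivial: the multiset is closed under negation
    found.append(tup)

print(found)  # [] -- no non-trivial solutions, as proved above
```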
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9487553238868713, "perplexity": 124.20643956909635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928831.69/warc/CC-MAIN-20150521113208-00281-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/conservation-and-w-boson.947386/
# I Conservation and W boson

1. May 14, 2018

### Kiley

Is this true?: During beta decay a quark's spin is changed and the mass/energy difference is converted to a W boson, which quickly decays into an electron/positron and an antineutrino/neutrino. The mass/energy is conserved through E=mc^2.

2. May 14, 2018

### Orodruin

Staff Emeritus

You really cannot say that the W exists in any meaningful manner. The energies involved in beta decay are far below that necessary to create a real W boson. However, the dominant contribution to beta decay is the exchange of a virtual W boson.

3. May 14, 2018

### Kiley

Thanks for your reply; would the other parts of my explanation be correct?

4. May 14, 2018

### Staff: Mentor

The type of quark is changed, not its spin. The isospin changes (which is a fancy name for saying the type changes).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9233877658843994, "perplexity": 2321.4745762165535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583875448.71/warc/CC-MAIN-20190122223011-20190123005011-00037.warc.gz"}
https://tex.stackexchange.com/questions/191614/adding-a-frame-footnote-in-beamerposter-creates-blank-page-at-beginning
# Adding a frame footnote in beamerposter creates blank page at beginning

If I use a footnote within a column, that footnote ends up in an inconvenient place (at the bottom of that column), so I solved that by using the [frame] option, which puts the note at the bottom of the frame (the poster). Here it is with \footnote{a footnote}:

And here it is with \footnote[frame]{a footnote}, which looks as I wish it to, but with the extra blank page at the beginning:

This code produces the above documents:

\documentclass[10pt]{beamer}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[orientation=portrait, size=custom, width=80, height=40, scale=2]{beamerposter}
\begin{document}
\begin{frame}{}
\begin{block}{\veryHuge A title for my poster}
\end{block}
\begin{columns}[T]
\column{.5\linewidth}
some text in a column
\column{.5\linewidth}
%some more text in another\footnote{a footnote} column
some more text in another\footnote[frame]{a footnote} column
\end{columns}
\end{frame}
\end{document}

Any ideas how to avoid this? I'd rather not manually place them if possible.

- For just getting rid of the first page (without worrying about the cause), issue \usepackage{atbegshi} \AtBeginDocument{\AtBeginShipoutNext{\AtBeginShipoutDiscard}} in your preamble (following the advice in How to remove a blank page "before" the title page). – Werner Jul 15 '14 at 19:17

Your problem is caused by beamerposter. As far as I can see, you use it only to change the page size. If you do this manually, there is no additional page:

\documentclass[10pt]{beamer}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\setlength{\paperwidth}{16cm}
\setlength{\paperheight}{8cm}
\begin{document}
\begin{frame}{}
\begin{block}{\Huge A title for my poster}
\end{block}
\begin{columns}[T]
\column{.5\linewidth}
some text in a column
\column{.5\linewidth}
some more text in another\footnote[frame]{a footnote} column
\end{columns}
\end{frame}
\end{document}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8041089177131653, "perplexity": 2292.3404704504824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572471.35/warc/CC-MAIN-20190916015552-20190916041552-00508.warc.gz"}
http://www.newton.ac.uk/event/gmrw09/seminars
# Seminars (GMRW09)

Videos and presentation materials from other INI events are also available.

GMRW09, 21st November 2005, 10:00 to 11:00. LJ Mason: The twistor theory of the Ernst Equation

GMRW09, 21st November 2005, 11:30 to 12:30. Integrable reductions of Einstein's field equations: monodromy transform and the linear integral equation methods

For each of the integrable reductions of Einstein's field equations known today, for space-times with two commuting isometries, the monodromy transform (similarly to the well-known Inverse Scattering Transform applied successfully to many other completely integrable equations) provides us with a unified and convenient mapping of the complete space of local solutions of the symmetry-reduced field equations in terms of a finite set of unconstrained, coordinate-independent functions of the spectral parameter (analogous to the scattering data). This set of functions arises as the monodromy data for the fundamental solution of the associated linear systems ("spectral problems"), and they can serve as free, independent "coordinates" in the infinite-dimensional space of the local solutions. The direct and inverse problems of such a "coordinate transformation" (monodromy transform), i.e. the problems of calculating the monodromy data for a given solution of the field equations and of calculating the solution corresponding to given monodromy data, possess unique solutions. In principle, the monodromy data functions can also be calculated from some boundary, initial, or characteristic initial data for the fields, and many physical properties of solutions are simply "encoded" in the analytical structures of these functions. However, to find the solutions of the direct and inverse problems mentioned above, we have to solve explicitly systems of ordinary differential and linear singular integral equations respectively, which can be a difficult problem in many cases. In the introduction we give a short survey of various integrable symmetry reductions of Einstein's field equations and mention some interrelationships between the various linear integral equation methods that have been developed. We also describe in a unified manner the common structure of various integrable reductions of Einstein's field equations -- the (generalized) hyperbolic and elliptic Ernst equations for vacuum and electrovacuum space-times, for Einstein-Maxwell-Weyl fields, for stiff matter fluids, as well as their matrix generalizations for some string gravity models with coupled gravity and dilaton, axion and Abelian vector fields. The structure of the direct problem of the monodromy transform and the general construction of the linear singular integral equation solving the inverse problem will be considered, and some applications of this approach to the construction of infinite hierarchies of exact solutions will be presented.

GMRW09, 21st November 2005, 14:30 to 15:30. Quasi-stationary routes to the Kerr black hole

In this talk I shall discuss quasi-stationary transitions from rotating equilibrium configurations of normal matter to rotating black holes via the extreme Kerr metric.
For the idealized model of a rotating disc of dust, rigorous results derived by means of the 'inverse scattering method' are available. They are supplemented by numerical results for rotating fluid rings with various equations of state. References: gr-qc/0205127, gr-qc/0405074, gr-qc/0506130

GMRW09, 22nd November 2005, 11:30 to 12:30. Isomonodromic tau-functions on Hurwitz spaces and their applications

We discuss Jimbo-Miwa tau-functions corresponding to Riemann-Hilbert problems with quasi-permutation monodromy groups; these tau-functions are sections of certain line bundles on Hurwitz spaces. We show how to compute these tau-functions explicitly in terms of theta-functions and discuss their applications in several areas: the large N expansion in Hermitian matrix models, Frobenius manifolds, determinants of Laplacians over Riemann surfaces, and the conformal factor of the Ernst equation.

GMRW09, 22nd November 2005, 14:30 to 15:30. Periodic instantons & monopoles in gauge theory (and gravity)

GMRW09, 22nd November 2005, 16:00 to 17:00. Hydrodynamic reductions of multi-dimensional dispersionless PDEs: the test for integrability

A (d+1)-dimensional dispersionless PDE is said to be integrable if it possesses infinitely many n-component hydrodynamic reductions parametrized by (d-1)n arbitrary functions of one variable. Among the most important examples one should primarily mention the three-dimensional dKP and Boyer-Finley equations, as well as the four-dimensional heavenly equation descriptive of self-dual Ricci-flat metrics. It was observed that integrability in the sense of hydrodynamic reductions is equivalent to the existence of a scalar pseudopotential playing the role of a dispersionless Lax pair. Lax pairs of this type constitute a basis of the dispersionless d-bar and twistor approaches to multi-dimensional equations.

GMRW09, 23rd November 2005, 16:00 to 17:00. Anti-self-dual conformal structures with null Killing vectors
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9278038144111633, "perplexity": 1226.8213083903731}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357929.4/warc/CC-MAIN-20210226145416-20210226175416-00296.warc.gz"}
https://www.middleprofessor.com/files/applied-biostatistics_bookdown/_book/variability-and-uncertainty-standard-deviations-standard-errors-confidence-intervals-uncertainty.html
# Chapter 5 Variability and Uncertainty (Standard Deviations, Standard Errors, Confidence Intervals)

Uncertainty is the stuff of science. A major goal of statistics is measuring uncertainty. What do we mean by uncertainty? Uncertainty is the error in estimating a parameter, such as the mean of a sample, or the difference in means between two experimental treatments, or the predicted response given a certain change in conditions. Uncertainty is measured with a variance or its square root, which is a standard deviation. The standard deviation of a statistic is also (and more commonly) called a standard error.

Uncertainty emerges because of variability. In any introductory statistics class, students are introduced to two measures of variability, the "standard deviation" and the "standard error." These terms are absolutely fundamental to statistics -- they are the start of everything else. Yet, many biology researchers confuse these terms and certainly, introductory students do too. When a research biologist uses the term "standard deviation," they are probably referring to the sample standard deviation, which is a measure of the variability of a sample. When a research biologist uses the term "standard error," they are probably referring to the standard error of a mean, but it could be the standard error of another statistic, such as a difference between means or a regression slope. An important point to remember and understand is that all standard errors are standard deviations. This will make more sense soon.

## 5.1 The sample standard deviation vs. the standard error of the mean

A standard deviation is the square root of the sampling variance.

### 5.1.1 Sample standard deviation

The sample standard deviation is a measure of the variability of a sample. For example, were we to look at a histological section of skeletal muscle we would see that the diameter of the fibers (the muscle cells) is variable. We could use imaging software to measure the diameter of a sample of 100 cells and get a distribution like this. The mean of this sample is 69.4 µm and the standard deviation is 2.8 µm. The standard deviation is the square root of the variance, and so is computed as

$$s_y = \sqrt{\frac{\sum_{i=1}^n{(y_i - \overline{y})^2}}{n-1}} \tag{5.1}$$

Memorize this equation. To understand the logic of this measure of variability, note that $y_i - \overline{y}$ is the deviation of the $i$th value from the sample mean, so the numerator is the sum of squared deviations. The numerator is a sum over $n$ items and the denominator is $n-1$, so the variance is (almost!) an averaged squared deviation. More variable samples will have bigger deviations and, therefore, bigger average squared deviations. Since the standard deviation is the square root of the variance, a standard deviation is the square root of an average squared deviation. This makes it similar in value to the average deviation (or the average of the absolute values of the deviations, since the average deviation is, by definition of a mean, zero).

#### 5.1.1.1 Notes on the variance and standard deviation

1. Variances are additive but standard deviations are not. This means that the variance of the sum of two independent (uncorrelated) random variables is simply the sum of the variances of each of the variables. This is important for many statistical analyses.
2. The units of variance are the square of the original units, which is awkward for interpretation.
   The units of a standard deviation are the same as those of the original variable, and so are much easier to interpret.
3. For variables that are approximately normally distributed, we can map the standard deviation to the quantiles of the distribution. For example, 68% of the values are within one standard deviation of the mean, 95% of the values are within two standard deviations, and 99% of the values are within three standard deviations.

### 5.1.2 Standard error of the mean

A standard error of a statistic is a measure of the precision of the statistic. The standard error of the mean is a measure of the precision of the estimate of the mean. The standard error of a difference in means is a measure of the precision of the estimate of the difference in means. The smaller the standard error, the more precise the estimate. The standard error of the mean (SEM) is computed as

$$SEM = \frac{s_y}{\sqrt{n}} \tag{5.2}$$

The SEM is often denoted $s_{\bar{y}}$ to indicate that it is a standard deviation of the mean ($\bar{y}$).

#### 5.1.2.1 The standard error of the mean can be thought of as a standard deviation of an infinitely long column of re-sampled means

In what sense is a standard error a standard deviation? This is kinda weird. If we sample 100 cells in the slide of muscle tissue and compute the mean diameter, how can the mean have a standard deviation? There is only one value! To understand how the SEM is a standard deviation, imagine that we sample $n$ values from $N(\mu, \sigma^2)$ (a normal distribution with mean $\mu$ and variance $\sigma^2$; the mean of our sample is an estimate of $\mu$ and the standard deviation of the sample is an estimate of $\sigma$) an infinite number of times and each time, we write down the mean of the new sample. The standard deviation of this infinitely long column of means is the standard error of the mean. Our observed SEM is an estimate of this true value because our observed standard deviation is an estimate of $\sigma$.

#### 5.1.2.2 A standard deviation can be computed for any statistic -- these are all standard errors

The SEM is only one kind of standard error. A standard deviation can be computed for any statistic -- these are all standard errors. For some statistics, such as the mean, the standard error can be computed directly using an equation, such as that for the SEM (equation (5.2)). For other statistics, a computer intensive method known as the bootstrap is necessary to compute a standard error. We will return to the bootstrap in Section 5.4.

#### 5.1.2.3 Notes on standard errors

1. The units of a standard error are the units of the measured variable.
2. A standard error is proportional to sample variability (the sample standard deviation, $s_y$) and inversely proportional to the square root of sample size ($\sqrt{n}$). Sample variability is a function of both natural variation (there really is variation in diameter among fibers in the quadriceps muscle) and measurement error (imaging software with higher resolution can measure a diameter with less error). Since the SEM is a measure of the precision of estimating a mean, this means this precision will increase (or the SEM will decrease) if 1) an investigator uses methods that reduce measurement error and 2) an investigator computes the mean from a larger sample.
3. This last point (the SEM decreases with sample size) seems obvious when looking at equation (5.2), since $n$ is in the denominator.
   Of course $n$ is also in the denominator of equation (5.1) for the sample standard deviation, but the standard deviation does not decrease as sample size increases. First, this wouldn't make any sense -- variability is variability. A sample of 10,000 cell diameters should be no more variable than a sample of 100 cell diameters (think about whether you agree with this or not). Second, this should also be obvious from equation (5.1). The standard deviation is the square root of an average, and averages don't increase with the number of things summed, since both the numerator (a sum) and the denominator increase with $n$.

## 5.2 Using Google Sheets to generate fake data to explore the standard error

In statistics we are interested in estimating parameters of a population using measures from a sample. The goal in this section is to use Google Sheets (or Microsoft Excel) to generate fake data, to discover the behavior of sampling and to gain some intuition about uncertainty using standard errors.

### 5.2.1 Steps

1. Open a blank spreadsheet in Google Sheets.
2. In cell A1 type "mu". mu is the greek letter $\mu$ and is very common notation for the population value (the TRUE value!) of the mean of some hypothetical measure. In cell B1, insert some number as the value of $\mu$. Any number! It can be negative or positive.
3. In cell A2 type "sigma". sigma is the greek letter $\sigma$. $\sigma^2$ is very common (universal!) notation for the population (TRUE) variance of some measure or parameter. Notice that the true (population) values of the mean and variance are greek letters. This is pretty standard in statistics. In cell B2, insert some positive number (standard deviations are the positive square roots of the variance).
4. In cell A8 type the number 1.
5. In cell A9 insert the equation "=A8 + 1". What is this equation doing? It is adding the number 1 to the value in the cell above, so the resulting value should be 2.
6. In cell B8, insert the equation "=normsinv(rand())*$B$2 + $B$1". The first part of the equation creates a random normal variable with mean 0 and standard deviation 1. Multiplication and addition transform this to a random normal variable with mean $\mu$ and standard deviation $\sigma$ (the values you set in cells B1 and B2).
7. Copy cell B8 and paste into cell B9. Now highlight cells A9:B9 and copy the equations down to row 107. You now have 100 random variables sampled from an infinite population with mean $\mu$ and standard deviation $\sigma$.
8. In cell A4 write "mean 10". In cell B4 insert the equation "=average(B8:B17)". The resulting value is the sample mean of the first 10 random variables you created. Is the mean close to $\mu$?
9. In cell A5 write "sd 10". In cell B5 insert the equation "=stdev(B8:B17)". The result is the sample standard deviation of the first 10 random variables. Is this close to $\sigma$?
10. In cell A6 write "mean 100". In cell B6 insert the equation "=average(B8:B107)". The resulting value is the sample mean of all 100 random variables you created. Is this mean closer to $\mu$ than mean 10?
11. In cell A7 write "sd 100". In cell B7 insert the equation "=stdev(B8:B107)". The resulting value is the sample standard deviation of all 100 random variables you created. Is this SD closer to $\sigma$ than sd 10?

The sample standard deviation is a measure of the variability of the sample. The more spread out the sample (the further each value is from the mean), the bigger the sample standard deviation.
The sample standard deviation is most often simply known as "the" standard deviation, which is a bit misleading since there are many kinds of standard deviations!

Remember that your computed mean and standard deviations are estimates computed from a sample. They are estimates of the true values $$\mu$$ and $$\sigma$$. Explore the behavior of the sample mean and standard deviation by re-calculating the spreadsheet. In Excel, a spreadsheet is re-calculated by simultaneously pressing the command and equal keys. In Google Sheets, command-R recalculates but is painfully slow. Instead, if using Google Sheets, just type the number 1 into a blank cell and the sheet recalculates quickly. Do it again. And again. Each time you re-calculate, a new set of random numbers is generated and new means and standard deviations are computed. Compare mean 10 and mean 100 each re-calculation. Notice that these estimates are variable. They change with each re-calculation. How variable is mean 10 compared to mean 100?

The variability of the estimate of the mean is a measure of uncertainty in the estimate. Are we more uncertain with mean 10 or with mean 100? This variability is measured by a standard deviation. This standard deviation of the mean is also called the standard error of the mean. Many researchers are loose with terms and use "the" standard error to mean the standard error of the mean, even though there are many kinds of standard errors. In general, "standard error" is abbreviated as "SE." Sometimes "standard error of the mean" is specifically abbreviated to "SEM."

The standard error of the mean is a measure of the precision in estimating the mean. The smaller the value, the more precise the estimate. The standard error of the mean is a standard deviation of the mean. This is kinda weird. If we sample a population one time and compute a mean, how can the mean have a standard deviation? There is only one value! And we compute this value using the sample standard deviation: $$SEM = \frac{SD}{\sqrt{N}}$$. To understand how the SEM is a standard deviation, imagine recalculating the spreadsheet an infinite number of times and, each time, writing down the newly computed mean. The standard error of the mean is the standard deviation of this infinitely long column of means.

## 5.3 Using R to generate fake data to explore the standard error

Note that I use "standard deviation" to refer to the sample standard deviation and "standard error" to refer to the standard error of the mean (again, we can compute standard errors as a standard deviation of any kind of estimate).

### 5.3.1 part I

In the exercise above, you used Google Sheets to generate $$p$$ columns of fake data. Each column had $$n$$ elements, so the matrix of fake data was $$n \times p$$ (it is standard in most fields to specify a matrix as rows by columns). This is much easier to do in R, and the advantage grows as the size of the matrix grows. To start, we just generate an $$n \times p$$ matrix of normal random numbers.

```r
# R script to gain some intuition about standard deviation (sd) and standard error (se)
# you will probably need to install ggplot2 first, e.g. with install.packages("ggplot2")
library(ggplot2)
n <- 6 # sample size
p <- 100 # number of columns of fake data to generate
fake_data <- matrix(rnorm(n*p, mean=0, sd=1), nrow=n, ncol=p) # create a matrix
```

The last line is the cool thing about R. In one line I'm creating a dataset with $$n$$ rows and $$p$$ columns.
Each column is a sample of the standard normal distribution, which by definition has mean zero and standard deviation of 1. But, and this is important, any sample from this distribution will not have exactly mean zero and standard deviation 1; because it is a sample, the mean and standard deviation will have some small error from the truth. The line has two parts to it: first, I'm using the function rnorm() (for random normal) to create a vector of n*p random normal deviates (draws from the random normal distribution), and then I'm organizing these into a matrix (using the function matrix()).

To compute the vector of means, standard deviations, and standard errors for each column of fake_data, use the apply() function.

```r
means <- apply(fake_data, 2, mean) # the apply function is super useful
sds <- apply(fake_data, 2, sd)
sems <- sds/sqrt(n)
```

apply() is a workhorse in many R scripts and is often used in place of a for-loop (see below) because it takes fewer lines of code.

The SEM is the standard deviation of the mean, so let's see if the standard deviation of the means is close to the true standard error. We sampled from a normal distribution with SD = 1, so the true standard error is

```r
1/sqrt(n)
## [1] 0.4082483
```

and the standard deviation of the $$p$$ means is

```r
sd(means)
## [1] 0.3731974
```

Questions

1. How close is sd(means) to the true SE?
2. Change p above to 1000. Now how close is sd(means) to the true SE?
3. Change p above to 10,000. Now how close is sd(means) to the true SE?

### 5.3.2 part II - means

This is a visualization of the spread, or variability, of the sampled means:

```r
qplot(means)
```

Compute the mean of the means:

```r
mean(means)
## [1] -0.039961
```

Questions

1. Remember that the true mean is zero. How close, in general, are the sampled means to the true mean? How variable are the means? How is this quantified?
2. Change n to 100, then replot. Are the means, in general, closer to the true mean? How variable are the means now?
3. Is the mean estimated with $$n=100$$ closer to the truth, in general, than the mean estimated with $$n=6$$?
4. Redo with $$n=10000$$.

### 5.3.3 part III - how do SD and SE change as sample size (n) increases?

```r
mean(sds)
## [1] 1.017144
```

Questions

1. What is the mean of the standard deviations when n=6 (set p=1000)?
2. What is the mean of the standard deviations when n=100 (set p=1000)?
3. When n = 1000 (set p=1000)?
4. When n = 10000 (set p=1000)?
5. How does the mean of the standard deviations change as n increases (does it get smaller, or stay about the same size)?
6. Repeat the above with the SEM.

```r
mean(sems)
## [1] 0.4152472
```

Congratulations, you have just done a Monte Carlo simulation!

### 5.3.4 Part IV – Generating fake data with for-loops

A for-loop is used to iterate a computation. (An equivalent, more compact version using replicate() is sketched at the end of this part.)

```r
n <- 6 # sample size
n_iter <- 10^5 # number of iterations of loop (equivalent to p)
means <- numeric(n_iter)
sds <- numeric(n_iter)
sems <- numeric(n_iter)
for(i in 1:n_iter){
  y <- rnorm(n) # mean=0 and sd=1 are the defaults, so not necessary to specify
  means[i] <- mean(y)
  sds[i] <- sd(y)
  sems[i] <- sd(y)/sqrt(n)
}
sd(means)
## [1] 0.4090702
mean(sems)
## [1] 0.3883867
```

Questions

1. What do sd(means) and mean(sems) converge to as n_iter is increased from 100 to 1000 to 10,000?
2. Do they converge to the same number?
3. Should they?
4. What is the correct number?

Question 4 is asking: what is E(SEM), the "expected standard error of the mean"? There is a very easy formula to compute this. What is it?
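As an aside, the same Monte Carlo simulation can be written without an explicit for-loop. This is a minimal sketch using base R's replicate(); it is behaviorally equivalent to the loop above, just more compact:

```r
# the for-loop above, rewritten with replicate()
n <- 6
n_iter <- 10^5
means <- replicate(n_iter, mean(rnorm(n)))
sd(means) # compare to the true SE, 1/sqrt(n)
```

Whether you prefer the loop or replicate() is a matter of taste; the loop makes the iteration explicit, which is useful when you are first building intuition.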
## 5.4 Bootstrapped standard errors

The bootstrap is certainly one of the most valuable tools invented in modern statistics. But it's not only a useful tool for applied statistics; it's a useful tool for understanding statistics. Playing with a parametric bootstrap will almost certainly induce an "aha, so that's what statisticians mean by ..." moment.

To understand the bootstrap, let's review a standard error. A parametric standard error of a mean is the expected standard deviation of an infinite number of means. A standard error of any statistic is the expected standard deviation of that statistic. I highlight "expected" to emphasize that parametric standard errors assume a certain distribution (not necessarily a normal distribution, although the equation for the SEM in equation (5.2) assumes a normal distribution if the standard deviation is computed as in equation (5.1)). A bootstrapped standard error of a statistic is the empirical standard deviation of the statistic from a finite number of samples.

The basic algorithm for a bootstrap is (here "the statistic" is the mean of the sample):

1. sample $$n$$ values from a probability distribution
2. compute the mean
3. repeat steps 1 and 2 many times
4. for a bootstrapped standard error, compute the standard deviation of the set of means saved from each iteration of steps 1 and 2.

The probability distribution can come from two sources:

1. A parametric bootstrap uses samples from a parametric probability distribution, such as a normal distribution or a Poisson distribution (remember, these are "parametric" because the distribution is completely described by a set of parameters). A good question is, why bother? In general, one would use a parametric bootstrap for a statistic for which there is no formula for the standard error, but where the underlying data come from a parametric probability distribution.
2. A non-parametric bootstrap uses resamples from the data. The data are resampled with replacement. "Resample with replacement" means to sample $$n$$ times from the full set of observed values. If we were to do this manually, we would i) write down each value of the original sample on its own piece of paper and throw all pieces into a hat, ii) pick a paper from the hat, add its value to sample $$i$$, and return the paper to the hat, and iii) repeat step ii $$n$$ times, where $$n$$ is the original sample size. The new sample contains some values multiple times (papers that were picked out of the hat more than once) and is missing some values (papers that were not picked out in any of the $$n$$ picks). A good question is, why bother? A non-parametric bootstrap assumes no specific parametric probability distribution, but it does assume that the distribution of the observed sample is a good approximation of the true population distribution (in which case, the probability of picking a certain value is a good approximation to the true probability).

### 5.4.1 An example of bootstrapped standard errors using vole data

Let's use the vole data to explore the bootstrap and "resampling". The data are archived at the Dryad Repository. Use the script in Section ?? to wrangle the data into a usable format.

2. file: RSBL-2013-0432 vole data.xlsx
3. sheet: COLD VOLES LIFESPAN

The data are the measured lifespans of the short-tailed field vole (Microtus agrestis) under three different experimental treatments: vitamin E supplementation, vitamin C supplementation, and control (no vitamin supplementation).
Vitamins C and E are antioxidants, which are thought to be protective of basic cell function since they bind the cell-damaging reactive oxygen species that result from cell metabolism. Let's compute the standard error of the mean of the control group lifespan using both a parametric and a non-parametric bootstrap. To implement the algorithm above using easy-to-understand code, I'll first extract the set of lifespan values for the control group and assign it to its own variable.

```r
control_voles <- vole[treatment=="control", lifespan]
```

[treatment=="control", ] indexes the rows (that is, returns the row numbers) that satisfy the condition treatment == "control". Or, put another way, it selects the subset of rows that contain the value "control" in the column "treatment". [, lifespan] indexes the column labeled "lifespan". Combined, these two indices extract the values of the column "lifespan" in the subset of rows that contain the value "control" in the column "treatment". The resulting vector of values is assigned to the variable control_voles.

#### 5.4.1.1 parametric bootstrap

```r
# we'll use these as parameters for the parametric bootstrap
n <- length(control_voles)
mu <- mean(control_voles)
sigma <- sd(control_voles)

n_iter <- 1000 # number of bootstrap iterations, or p
means <- numeric(n_iter) # we will save the mean from each iteration to this
for(iter in 1:n_iter){ # this line sets up the number of iterations, p
  fake_sample <- rnorm(n, mean=mu, sd=sigma)
  means[iter] <- mean(fake_sample)
}
(se_para_boot <- sd(means))
## [1] 30.49765
```

#### 5.4.1.2 non-parametric bootstrap

```r
n_iter <- 1000 # number of bootstrap iterations, or p
means <- numeric(n_iter) # we will save the mean from each iteration to this
inc <- 1:n # inc indexes the elements to sample. By setting inc to 1:n prior to
           # the loop, the first mean that is computed is the observed mean
for(iter in 1:n_iter){ # this line sets up the number of iterations, p
  means[iter] <- mean(control_voles[inc]) # inc is the set of rows to include in the mean
  inc <- sample(1:n, replace=TRUE) # re-sample for the next iteration
}
(se_np_boot <- sd(means))
## [1] 32.47356
```

The parametric bootstrapped SEM is 30.5. The non-parametric bootstrapped SEM is 32.47. Run these several times to get a sense of how much variation there is in the bootstrapped estimate of the SEM, given the number of iterations. Compute the parametric standard error using equation (5.2) and compare it to the bootstrapped values.

## 5.5 Confidence Interval

Here I introduce a confidence interval of a sample mean, but the concept is easily generalized to any parameter. The mean of the control voles is 503.4 and the SE of the mean is 31.61. The SE is used to construct the lower and upper boundaries of a "1 - $$\alpha$$" confidence interval using lower <- mean(x) + qt(alpha/2, df = n-1)*se(x) and upper <- mean(x) + qt(1-(alpha/2), df = n-1)*se(x).

```r
(lower <- mean(control_voles) + qt(0.05/2, df=(n-1))*sd(control_voles)/sqrt(n))
## [1] 440.0464
(upper <- mean(control_voles) + qt(1 - 0.05/2, df=(n-1))*sd(control_voles)/sqrt(n))
## [1] 566.7393
```

The function qt maps a probability to a t-value; this is the opposite of a t-test, which maps a t-value to a probability. Sending $$\alpha/2$$ and $$1 - \alpha/2$$ to qt returns the bounds of the confidence interval on a standardized scale. Multiplying these standardized bounds by the standard error of the mean, and adding the result to the mean, pops the bounds onto the scale of the control vole lifespans.
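Since we will want intervals for several groups below, here is a small sketch that wraps the computation above in a function; the name ci_mean is my own and is not used elsewhere in this chapter:

```r
# a minimal helper for a (1 - alpha) CI of a mean (ci_mean is a hypothetical name)
ci_mean <- function(x, alpha = 0.05){
  n <- length(x)
  se <- sd(x)/sqrt(n)
  mean(x) + qt(c(alpha/2, 1 - alpha/2), df = n - 1)*se
}
ci_mean(control_voles) # should reproduce the lower and upper values above
```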
We can check our manual computation with the linear model:

```r
confint(lm(control_voles ~ 1))
##                2.5 %   97.5 %
## (Intercept) 440.0464 566.7393
```

### 5.5.1 Interpretation of a confidence interval

Okay, so what is a confidence interval? A confidence interval of the mean is a measure of the uncertainty in the estimate of the mean. A 95% confidence interval has a 95% probability (in the sense of long-run frequency) of containing the true mean. It is not correct to state that "there is a 95% probability that the true mean lies within the interval". These sound the same, but they are two different probabilities. The first (correct) interpretation is a probability of a procedure: if we re-do this procedure (sample data, compute the mean, and compute a 95% CI), 95% of these CIs will contain the true mean. The second (incorrect) interpretation is a probability that a parameter ($$\mu$$, the true mean) lies within some range. The second (incorrect) interpretation of the CI is correct only if we also assume that any value of the mean is equally probable (Greenland xxx), an assumption that is absurd for almost any data.

Perhaps a more useful interpretation of a confidence interval is this: a confidence interval contains the range of true means that are compatible with the data, in the sense that a $$t$$-test would not reject the null hypothesis of a difference between the estimate and any value within the interval (this interpretation does not imply anything about the true value) (Greenland xxx). The "compatibility" interpretation is very useful because it implies that values outside of the interval are less compatible with the data.

Let's look at the confidence intervals of all three vole groups in light of the "compatibility" interpretation.

```r
vole_ci <- vole[, .(lifespan = mean(lifespan),
                    lo = mean(lifespan) + sd(lifespan)/sqrt(.N)*qt(.025, (.N-1)),
                    up = mean(lifespan) + sd(lifespan)/sqrt(.N)*qt(.975, (.N-1)),
                    N = .N),
                by = .(treatment)]

ggplot(data=vole_ci, aes(x=treatment, y=lifespan)) +
  geom_point() +
  geom_errorbar(aes(x=treatment, ymin=lo, ymax=up), width=0.1) +
  NULL
```
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9649513959884644, "perplexity": 786.155796488466}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154466.61/warc/CC-MAIN-20210803155731-20210803185731-00560.warc.gz"}
Research Article

# Coordinated Concentration Changes of Transcripts and Metabolites in Saccharomyces cerevisiae

Affiliations: Lewis-Sigler Institute for Integrative Genomics; Department of Molecular Biology; Department of Chemistry; Department of Computer Science, Princeton University, Princeton, New Jersey, United States of America

Published: January 30, 2009. DOI: 10.1371/journal.pcbi.1000270

## Abstract

Metabolite concentrations can regulate gene expression, which can in turn regulate metabolic activity. The extent to which functionally related transcripts and metabolites show similar patterns of concentration changes, however, remains unestablished. We measure and analyze the metabolomic and transcriptional responses of Saccharomyces cerevisiae to carbon and nitrogen starvation. Our analysis demonstrates that transcripts and metabolites show coordinated response dynamics. Furthermore, metabolites and gene products whose concentration profiles are alike tend to participate in related biological processes. To identify specific, functionally related genes and metabolites, we develop an approach based on Bayesian integration of the joint metabolomic and transcriptomic data. This algorithm finds interactions by evaluating transcript–metabolite correlations in light of the experimental context in which they occur and the class of metabolite involved. It effectively predicts known enzymatic and regulatory relationships, including a gene–metabolite interaction central to the glycolytic–gluconeogenetic switch. This work provides quantitative evidence that functionally related metabolites and transcripts show coherent patterns of behavior on the genome scale and lays the groundwork for building gene–metabolite interaction networks directly from systems-level data.

## Author Summary

Metabolism is the process of converting nutrients into usable energy and the building blocks of cellular structures. Although the biochemical reactions of metabolism are well characterized, the ways in which metabolism is regulated and regulates other biological processes remain incompletely understood. In particular, the extent to which metabolite concentrations are related to the production of gene products is an open question. To address this question, we have measured the dynamics of both metabolites and gene products in yeast in response to two different environmental stresses. We find a strong coordination of the responses of metabolites and functionally related gene products.
The nature of this correlation (e.g., whether it is direct or inverse) depends on the type of metabolite (e.g., amino acid versus glycolytic compound) and the kind of stress to which the cells were subjected. We have used our observations of these dependencies to design a Bayesian algorithm that predicts functional relationships between metabolites and genes directly from experimental data. This approach lays the groundwork for a systems-level understanding of metabolism and its regulation by (and of) gene product levels. Such an understanding would be valuable for metabolic engineering and for understanding and treating metabolic diseases.

### Introduction

Cellular metabolism, the process by which nutrients are converted into energy, macromolecular building blocks, and other small organic compounds, depends upon the expression of genes encoding enzymes and their regulators. Well-characterized transcriptional regulatory circuits such as the lac and trp operons in E. coli and the galactose utilization system in S. cerevisiae illustrate how the concentration of metabolites such as tryptophan or galactose can modulate gene expression. In addition, changes in gene expression can lead to increases or decreases in the concentrations of enzymes and regulatory proteins, thereby affecting concentrations of intracellular metabolites. While individual cases of mutual regulation by metabolites and gene products have been and continue to be described, identifying the full scope of these interactions is important for improving rational control of metabolism to meet therapeutic and bioengineering objectives. Clinical scientists, for instance, may be interested in developing novel treatments that control blood glucose levels in diabetic patients, or that fight cancer by disrupting metabolism in tumor cells. This line of inquiry is also relevant to bioengineers seeking to increase the production of small molecules (such as biofuels or flavor molecules) by knocking out or overexpressing individual genes.

The simultaneous measurement of metabolite and transcript concentrations is one method that has begun to show promise for identifying gene products and small molecules involved in the same biological processes [1]. A number of studies [2]–[6] have followed the behavior of specific secondary metabolites of interest, such as volatile signaling molecules [4] or compounds with pharmaceutical properties [3], as well as transcripts, in response to genetic or biochemical perturbations. The further refinement of high-throughput experimental technologies such as mass spectrometry has enabled recent studies to measure many functional classes of metabolites together with a large proportion of the transcriptome [7]–[14]. For example, one recent ground-breaking study collected extensive data on metabolite, protein, and transcript levels in E. coli following the disruption of genes in primary carbon metabolism or changes in growth rate, and concluded that metabolite concentrations tended to be stable with respect to these perturbations [15]. Another study [12] compared transcript and metabolite concentrations in S. cerevisiae under two different growth conditions using a novel computational method in which known metabolic pathways were divided into smaller pathways termed "reporter reactions"; the authors observed that the majority of the reporter reactions showed changes in transcript concentrations between the conditions, with fewer revealing significant alterations in metabolite levels.
Such methods, which make inferences based on comprehensive reconstructions of biochemical pathways in an organism, represent valuable tools for analyzing metabolomic and transcriptional data together. However, there is still a need for approaches designed to identify novel interactions between specific gene products and metabolites, encompassing both enzymatic and regulatory relationships.

Of prime importance to the problem of finding gene–metabolite relationships from data is the question of whether functionally related metabolites and transcripts do indeed show coherent patterns of concentration changes that can be used to make valid predictions. Studies aimed at addressing this question have relied on computing correlation coefficients between profiles of transcript and metabolite concentrations, which can then be ranked [7] or used to co-cluster the metabolomic and transcriptomic data [8]. However, it is possible that other types of regulation, such as post-translational protein modifications and feedback inhibition, could be more predominant in the aggregate than transcriptional regulation [11]. Accordingly, a major limitation of these computational techniques is that the extent to which transcripts and metabolites are co-regulated is not known. The proportion of strong gene–metabolite correlations that are due to chance or indirect effects, as opposed to enzymatic or regulatory relationships, has also not been determined by previous investigations.

In part due to these concerns, previous work has come to contradictory conclusions about the extent of coordination between metabolite and transcript concentrations. Some qualitative evidence has been provided for the claim that transcripts and metabolites are substantially co-regulated [8],[9],[16], including the comparison of clustering patterns in each data set [8] and examples of coherent correlations between biosynthetic enzymes and their products [9]. In contrast, other studies contend that transcript and metabolite profiles tend to behave differently [10], and some have argued that correlative approaches are not specific enough to draw conclusions about which genes and metabolites are functionally related (such that the expression of a gene product controls the concentration of a metabolite, or vice versa) [11],[17].

Indeed, observed correlations within metabolic networks often confound straightforward interpretations. Metabolic networks, unlike transcriptional or protein-interaction networks, consist of molecular species which chemically interconvert. As a result, metabolites that are only distantly related in terms of the underlying pathways can show high levels of correlation [18]. This is especially true in the case of global perturbations (e.g., nutrient starvation, diurnal cycles), which affect many different branches of metabolism at once [19]. It is therefore likely that the interpretation of correlations between transcript and metabolite concentrations will depend on contextual factors, such as the branch of metabolism being studied or the experimental perturbation under which the correlations were observed. In order to examine these questions further, we conducted a systems-level investigation of the metabolome and transcriptome of S. cerevisiae, in which we measure the dynamic responses of metabolites and transcripts to two nutrient deprivations.
We examine whether transcripts and metabolites are co-regulated in general, and demonstrate the existence of a strong trend for correlated genes and metabolites to participate in related biological processes. We also demonstrate that the correlations observed for related gene–metabolite pairs are dramatically different depending on the type of metabolite and the perturbation to which the cells are subjected, and we develop a Bayesian algorithm capable of accounting for these dependencies. When applied to our experimental data, this algorithm makes gene–metabolite interaction predictions that are significantly more precise and complete than those made by correlation alone.

### Results

Transcript levels (Dataset S1, GEO accession number GSE11754) were measured via microarray following the induction of carbon starvation (glucose removal) or nitrogen starvation (ammonium removal) at 0, 10, 30, 60, 120, 240, and 480 minutes post-induction. These data complement a previously published study that measured metabolites in Saccharomyces cerevisiae using liquid chromatography–tandem mass spectrometry (LC-MS/MS) under the same experimental conditions [20]. Both metabolite and transcript samples were collected utilizing a filter-culture approach, which allows the rapid modification of the extracellular environment and fast quenching of intracellular metabolism and transcription [20],[21].

#### Singular Value Decomposition and Enrichment Analysis Reveal Substantial Coregulation between Transcription and Metabolism

The extent to which transcripts and metabolites show coordinated behavior in response to environmental perturbations remains an open question. It has been observed that metabolite data and transcript data cluster in similar ways [8], yet other studies have noted marked differences in the temporal dynamics of the metabolic and transcriptional responses [10]. Previous systems-level analyses have not presented quantitative evidence either for or against the similarity of the transcriptional and metabolic responses as a whole. To investigate this question, we used singular value decomposition to mathematically extract the signals in the transcriptional and in the metabolic data, and then tested how well these signals were correlated to each other.

Singular value decomposition (SVD) of the gene expression data and of the metabolite data shows that the dominant metabolite abundance patterns are closely aligned with the corresponding transcript abundance patterns (Figure 1). The first eigenvector for both genes and metabolites corresponds to a roughly monotonic starvation response that is similar across carbon and nitrogen deprivation. The second eigenvector is consistent with a nutrient-specific response, and exhibits opposite directionality between carbon and nitrogen deprivation. The third eigenvector represents a difference in dynamics between carbon and nitrogen starvation. Since neither the eigenvectors found by SVD nor the correlation analysis is sensitive to the absolute scale of each pattern in the transcript and metabolite data, the similarities described above are due to the response dynamics, and do not imply similar magnitudes of the responses. However, the magnitudes of the responses that we observed in the transcriptional and the metabolomic data appear to be comparable: the root-mean-squared fold change from the zero timepoint was 3.3-fold for metabolites and 3.1-fold for transcripts.
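As an illustration only (this is not the authors' code, and gene_matrix and metab_matrix are hypothetical objects holding genes × timepoints and metabolites × timepoints measurements), the comparison described above can be sketched in R:

```r
# sketch: compare the dominant temporal patterns extracted by two SVDs
sv_genes <- svd(gene_matrix)  # assumed: genes x timepoints matrix
sv_metab <- svd(metab_matrix) # assumed: metabolites x timepoints matrix
# columns of $v hold the patterns across timepoints; compare the first three
diag(cor(sv_genes$v[, 1:3], sv_metab$v[, 1:3]))
```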
This analysis supports the conclusion that metabolite and transcript concentrations change in quantitatively similar manners following nutrient starvation. Thus, although metabolism and transcription operate on different time scales, in the present study these two processes can be directly compared without explicitly accounting for such a temporal difference.

This result enabled us to ask whether the transcripts and metabolites that show similar dynamics tend to be biologically related: although instances of relationships between the concentrations of metabolites and related biosynthetic enzymes have been described [9],[16], other systems-level studies have noted that the majority of individual gene–metabolite correlations that they observed had no direct interpretation [11]. In order to investigate whether a trend in fact exists for metabolites involved in a certain biological process to show coordinated response patterns with related genes, we conducted a statistical enrichment analysis covering multiple metabolite classes (Materials and Methods). In this analysis, the metabolites that we measured were divided into four broad classes according to their functional role: (a) glycolysis and pentose-phosphate pathway compounds, (b) TCA cycle compounds, (c) amino acids, and (d) biosynthetic intermediates. For each of these classes, a list of associated genes was assembled, such that if a gene was significantly correlated to a metabolite belonging to a particular class, then that gene was considered to be associated with that metabolic class. Significance of correlation was assessed empirically via permutation test and corrected for multiple hypotheses, setting the false discovery rate at 0.01. To find which functions were statistically over-represented in these lists of associated genes, we then performed Gene Ontology term enrichment analysis, using the hypergeometric distribution to obtain p-values, which were then Bonferroni-corrected (a sketch of this computation is given at the end of this subsection). We selected the Gene Ontology (GO) to perform this enrichment since it has annotations for S. cerevisiae that encompass not only enzymes but also regulatory proteins, and since the ontology extends beyond metabolism to cover a wide range of other biological processes such as protein translation and the cell cycle. The full list of enriched biological processes is shown in Table 1.

Despite the complexity of the interplay between metabolism and transcription, and complicating factors such as post-translational regulation, we found a strikingly logical and biologically relevant relationship between classes of metabolites and the types of gene products to which they were highly correlated. For example, the single significant enrichment result for TCA cycle compounds is the biological process "tricarboxylic acid cycle intermediate metabolism". Additionally, the gene products correlated to the amino acid metabolite category are enriched for "amino acid metabolism" and "tRNA aminoacylation". Transcripts correlating with biosynthetic intermediates are enriched for "biosynthesis", among other processes, and the glycolysis and pentose-phosphate pathway compounds are enriched for "protein amino acid N-linked glycosylation". Not all terms show a direct relationship to the metabolite class for which they are enriched: except for the TCA cycle compounds, the profiles of metabolites in every class appear to be correlated to transcripts involved in lipid, ergosterol, and steroid metabolism, a result whose functional relevance has yet to be determined.
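For concreteness, the enrichment test described above amounts to a hypergeometric tail probability with a Bonferroni correction; a minimal R sketch, with placeholder counts rather than values from the study:

```r
# GO-term enrichment via the hypergeometric distribution (placeholder counts)
N <- 6000 # annotated genes in the genome
K <- 120  # genes annotated to the GO term of interest
m <- 300  # genes significantly correlated to the metabolite class
k <- 25   # overlap: associated genes that carry the annotation
p <- phyper(k - 1, K, N - K, m, lower.tail = FALSE) # P(overlap >= k)
p_bonferroni <- min(1, p * 200) # correcting for, e.g., 200 terms tested
```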
Additionally, the profiles of the glycolysis and pentose-phosphate pathway compounds also tend to be highly correlated to the expression of genes involved in mitosis and the cell cycle. This enrichment may relate to the fact that, while yeast cells deprived of nitrogen continue to proliferate and divide over the course of an eight-hour experiment, presumably by catabolizing intracellular nitrogen sources, yeast cells starved for glucose arrest and enter stationary phase almost immediately [20].

#### Patterns of Correlation between Genes and Metabolites Depend on the Experimental Condition and the Type of Metabolite

While the above approach is adequate to reveal an overall trend for co-regulation of functionally related genes and metabolites, the nature of the co-regulation could vary depending on the experimental condition and the functional role of the metabolite involved. Furthermore, correlations between genes and metabolites can be of varying strengths, ranging from no correlation to a perfectly linear relationship between transcript concentration and metabolite concentration. These different strengths of correlation can be more or less informative about a gene–metabolite relationship, depending on the circumstance under which they are observed. For example, since amino acids and the enzymes involved in their biosynthesis and catabolism are both likely to be strongly affected by a lack of ammonium, it could be the case that instances of co-regulation between genes and amino acids under nitrogen starvation would be more meaningful than correlations of the same strength observed under carbon starvation. In addition, correlation can be either positive (as the concentration of the gene product rises, the concentration of the metabolite also rises) or negative ("inverse": as the concentration of one rises, the other falls). The levels of related genes and metabolites could exhibit a positive correlation under one condition while having an inverse relationship or no relationship under another, due to condition-specific differences in regulation.

For example, 3-phosphoglycerate (3PG) and phosphoenolpyruvate (PEP) are important in both ATP production and biosynthesis (in which they provide carbon skeletons). 3PG and PEP are known to accumulate during carbon starvation via an allosteric regulatory mechanism that prepares the cell for gluconeogenesis and the metabolism of alternate carbon sources; conversely, their abundances fall under nitrogen starvation [20]. However, many of the enzymes that use the metabolites of lower glycolysis as biosynthetic precursors are repressed under both starvation conditions, perhaps to avoid wasting limited resources. These enzymes include ILV2 (acetolactate synthase, which catalyzes the first step in isoleucine and valine biosynthesis from pyruvate) and ARO3 (which catalyzes the first step in aromatic amino acid biosynthesis from PEP and erythrose-4-phosphate). Calculating the correlations of 3PG or PEP with genes like ILV2 or ARO3 over both experimental conditions would, in effect, average two opposite behaviors: anti-correlation in carbon starvation and positive correlation in nitrogen starvation. There would be no overall correlation, although the behavior could well be consistent with a functional gene–metabolite relationship.
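A toy simulation (invented numbers, not data from the study) makes this averaging problem concrete:

```r
# toy illustration: opposite within-condition correlations largely cancel when pooled
set.seed(1)
x1 <- rnorm(7); y1 <- -x1 + rnorm(7, sd = 0.3) # condition 1: inverse relationship
x2 <- rnorm(7); y2 <-  x2 + rnorm(7, sd = 0.3) # condition 2: positive relationship
cor(x1, y1); cor(x2, y2)   # strong within each condition
cor(c(x1, x2), c(y1, y2))  # much weaker when the conditions are pooled
```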
Condition-specific behavior is indeed observed for these gene–metabolite pairs, as well as for the pairs "ALD6 to phosphoenolpyruvate," "GLK1 to hexose phosphate," and "PGM2 to hexose phosphate" (Figure 2A–E, in which the concentrations of metabolites belonging to the "glycolysis and pentose-phosphate pathway" class and the concentrations of functionally related gene products are plotted against each other). Ald6p oxidizes acetaldehyde to acetate and, in addition to its key role in redox metabolism [22],[23], is involved in the production of acetyl-CoA from glycolytic end products [24]–[27]. The enzyme Glk1p phosphorylates glucose to glucose-6-phosphate in the first irreversible step of glycolysis [28], and Pgm2p catalyzes the conversion of glucose-1-phosphate to glucose-6-phosphate during the metabolism of alternative carbon sources such as galactose [29]. The metabolite "hexose phosphate" refers to glucose-6-phosphate as well as its isomers (e.g., fructose-6-phosphate, with which glucose-6-phosphate is interconverted), which were not distinguishable in the present LC-MS/MS analysis.

Overall, these glycolytic and pentose-phosphate pathway metabolites show positive correlations with a number of related genes under nitrogen starvation but negative correlations under carbon starvation (Figure 2A–E; representations of the nitrogen starvation data and best-fit lines using expanded x-axes can be found in Figure S2). Computing correlation across both conditions would lead to the erroneous conclusion that no relationship exists between these genes and metabolites. In contrast, for metabolites belonging to the "amino acids" category (Figure 2F–H), related metabolites and genes tend to show strong positive correlations under both conditions: histidine and HTS1 (the histidine tRNA synthetase), methionine and MET6 (methionine synthase), and threonine and THR4 (threonine synthase) exemplify this behavior. We have therefore developed a Bayesian algorithm capable of automatically learning and exploiting the way in which different signs and strengths of correlation can be suggestive of a functional relationship, depending on the experimental condition and the metabolite class.

#### Bayesian Analysis Captures Context-Dependent Patterns of Correlation between Genes and Metabolites

Bayesian networks [30]–[32] are a general class of graphical probabilistic models. Because they allow the specification of dependencies between quantities of interest, such as relationships observed between genes and metabolites under different conditions, Bayesian networks are well-suited for leveraging such dependencies in order to make specific predictions. In these networks, variables, or "nodes," are connected by arrows, or "edges," indicating which variables depend on which others. Each node is parametrized by a conditional probability distribution (CPD), which describes the probability of observing the variable in a certain state, given the states of the variables on which it is dependent (for example, in Figure 3B, "gene–metabolite correlation observed under carbon starvation" is dependent on both "metabolite class" and whether a "gene–metabolite functional relationship" exists). Our objective in constructing this Bayesian network was to formalize the concept that the strength and direction of correlation observed between a certain gene and metabolite is particular to the experimental perturbation, and depends on the functional class to which the metabolite in question belongs.
We also expect to observe different correlations for metabolites and genes that are truly related than we would observe for random, unrelated gene–metabolite pairs. The Bayesian network that we constructed (Figure 3B) therefore consists of four nodes. Two of these nodes correspond to observed correlations calculated from LC-MS/MS and microarray data ("gene–metabolite correlation observed under nitrogen starvation" and "gene–metabolite correlation observed under carbon starvation"); each of these nodes can take one of five different values, depending on the strength and sign of correlation. The other two nodes ("gene–metabolite functional relationship," which can be yes or no, and "metabolite class," which can be any of the four metabolite classes enumerated above) correspond to intrinsic attributes of the gene–metabolite pair. To represent the dependencies described above, edges have been drawn from the node representing "functional relationship" and from the node representing "metabolite class" to both of the nodes representing gene–metabolite correlations observed under a specific experimental condition.

Given a set of positive and negative examples, the conditional probability distributions that constitute the parameters of our model can be automatically learned. These distributions are given by $P(C_{gm} \mid R_{gm}, M_m)$ and $P(N_{gm} \mid R_{gm}, M_m)$, where $C_{gm}$ refers to the correlation of gene $g$ and metabolite $m$ under carbon starvation, $N_{gm}$ to their correlation under nitrogen starvation, $R_{gm}$ to whether or not a functional relationship exists between gene $g$ and metabolite $m$, and $M_m$ to the class of metabolite $m$. By Bayes' theorem, these class-specific conditional probability distributions (CPDs) are equivalent to the probability that a pair is functionally related given a certain observed level of correlation, normalized by 1) whether that level of correlation is rare or common overall and by 2) whether functional relationships are rare or common overall (i.e., by $P(C_{gm}, N_{gm} \mid M_m)$ and $P(R_{gm})$). To learn these parameters, we calculated how often different correlations were observed for a set of gene–metabolite pairs known to be either functionally related or unrelated. Positive examples were drawn from genes and metabolites belonging to the same pathway in the Kyoto Encyclopedia of Genes and Genomes (KEGG [33]); negative examples were random gene–metabolite pairs that were not in the positive example set (see Materials and Methods for details).

A key advantage of Bayesian networks, compared to other machine-learning techniques, is that since the parameters are probability distributions, they have a direct meaning which can be informative about the system being modeled. With this in mind, the parameters $P(C_{gm} \mid R_{gm}, M_m)$ and $P(N_{gm} \mid R_{gm}, M_m)$ are shown in Figure 3C for two of the metabolite classes ("glycolysis and pentose-phosphate pathway" and "amino acids") and all possible values of $C_{gm}$, $N_{gm}$, and $R_{gm}$. Intuitively, these probabilities capture how likely an observed gene–metabolite correlation would be if the gene–metabolite pair were either related (dark grey) or unrelated (light grey). For example, in the plots on the right-hand side of Figure 3C (nitrogen deprivation data), the distribution for functionally related pairs is shifted substantially to the right: this indicates that functionally related gene–metabolite pairs tend to be positively correlated under nitrogen starvation. Another visualization of these conditional probability distributions is shown in Figure 3D.
Here, the CPDs are collapsed into a single bar chart for each metabolite class and environmental condition by taking the log-ratio of the CPDs represented by the light and dark lines in Figure 3C. These log-odds scores are given mathematically by

$$\log \frac{P(C_{gm} \mid R_{gm} = 1, M_m)}{P(C_{gm} \mid R_{gm} = 0, M_m)} \qquad \text{and} \qquad \log \frac{P(N_{gm} \mid R_{gm} = 1, M_m)}{P(N_{gm} \mid R_{gm} = 0, M_m)}.$$

This visualization is particularly useful because it clarifies whether a particular level of correlation is more likely to be observed for a related gene–metabolite pair (above zero) or for an unrelated pair (below zero). For instance, this figure shows that for amino acids (second row), negative correlations under either condition are more likely to be observed for unrelated gene–metabolite pairs than for pairs where a functional relationship exists. The magnitude of each bar corresponds to how much more probable a particular correlation is for either related or unrelated pairs. For example, in the case of the amino acids, while a positive correlation under either experimental condition suggests a functional gene–metabolite relationship, positive correlation is more informative under nitrogen starvation than it is under carbon starvation.

The values that the network learned for these parameters indicate that the magnitude and direction of correlation between a given gene and metabolite do in fact depend strongly on that metabolite's class, as suggested by Figure 2. For instance, the amino acid methionine and the biosynthetic gene MET6, whose product converts homocysteine to methionine, have a clear functional relationship. Consistent with the parameters learned, methionine and MET6 exhibit a strong positive correlation under both conditions, especially nitrogen starvation (Figure 2G). In contrast, for glycolysis and pentose-phosphate pathway compounds, while related gene–metabolite pairs do exhibit positive correlations under nitrogen starvation, interacting pairs actually tend to be inversely correlated under carbon starvation. This relationship is typified by GLK1 and hexose-phosphate (Figure 2D). Additionally, when hexose-phosphate concentrations are plotted against GLK1 transcript concentrations, it is readily apparent that because hexose-phosphate and GLK1 are positively correlated under nitrogen starvation but inversely correlated under carbon starvation, they exhibit a very weak relationship when Pearson correlation is computed across both conditions.

This pattern of positive correlation under nitrogen starvation and inverse correlation under carbon starvation is also observed for a number of other gene–metabolite pairs in our standard of examples (Figure 2A–E), including phosphoenolpyruvate (PEP) and ALD6 (Figure 2C). In terms of chemical steps, PEP is linked to ALD6 indirectly (being first converted to pyruvate by CDC19 and then to acetaldehyde via pyruvate decarboxylase, the major isozyme of which is PDC1). However, PEP, like ALD6, is predominantly cytoplasmic, whereas the intermediate species pyruvate and acetaldehyde exist in both cytoplasmic and mitochondrial pools, which could be regulated differently. This suggests that the total cellular concentrations of PEP might be more strongly related to ALD6 concentrations than would those of the other intermediate species, and furthermore that gene–metabolite pairs that are not directly linked by a single biochemical reaction may still have important functional relationships. This type of Bayesian integration does not attempt to infer causality between changes in gene and metabolite levels.
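In code, the parameter learning described above amounts to counting frequencies. Here is a rough R sketch under assumed names: a data frame examples with columns class, related (TRUE/FALSE), and binned correlations c_bin and n_bin; none of these names come from the paper:

```r
# sketch: learn P(C | R, M) and P(N | R, M) by counting, then form log-odds
cpd_c <- prop.table(table(examples$class, examples$related, examples$c_bin),
                    margin = c(1, 2)) # each (class, related) slice sums to 1 over bins
cpd_n <- prop.table(table(examples$class, examples$related, examples$n_bin),
                    margin = c(1, 2))
# log-odds per correlation bin for one class, as plotted in Figure 3D
log_odds_c <- log(cpd_c["amino acids", "TRUE", ] / cpd_c["amino acids", "FALSE", ])
```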
In certain cases, however, we do have a prior expectation that can explain some of the learned parameters. For example, lack of ammonium under nitrogen starvation likely leads directly to falling amino acid concentrations. Nitrogen starvation also leads to decreased activity of the transcription factor GCN4 and thus reduced expression of amino acid biosynthetic genes. Although the mechanism is not fully understood, there is evidence that the TOR pathway, which is believed to sense intracellular concentrations of glutamine [34], is responsible for causing reduced translation of GCN4 via the protein Eap1p [35]. Under carbon starvation, many transcripts may be induced or repressed by a combination of extracellular pathways for the sensing of glucose (via Ras/PKA and Snf3p) and intracellular sensing of hexose-phosphate (potentially mediated by HXK2) [36]. While these pathways are elaborate and involve many layers of regulation, it has been observed that during growth without glucose, repression involving HXK2 and MIG1 is relieved [37]. In the absence of glucose, we would expect glucose-6-phosphate, fructose-6-phosphate, and FBP levels to drop: since HXK1 and GLK1 have been shown to be under the control of HXK2-dependent glucose repression [38], this would explain the inverse correlation observed between, for example, GLK1 and hexose-phosphate.

#### Bayesian Integration Finds Specific Gene–Metabolite Interactions outside Our Standard of Examples

Following parameter learning, we performed inference using the Bayesian network, which assigned to each gene–metabolite pair a confidence score. This score is equal to the posterior probability of a functional relationship, given the metabolite class and the correlations observed in the data (i.e., $P(R_{gm} = 1 \mid C_{gm}, N_{gm}, M_m)$). Since this value is continuous between 0 and 1, different cutoffs can be chosen depending on whether a certain application requires more precision (the fraction of pairs above the cutoff that are true positives) or more recall (the fraction of total true positives with a score above the cutoff). One way to assess performance that takes this trade-off into account is to plot precision against recall for every possible cutoff, yielding a precision-recall curve (PRC). The same type of PRC can also be generated using the Pearson correlation between metabolite and gene concentrations instead of the gene–metabolite confidence score. We have employed these PRCs to compare the performance of our method relative to simply computing correlation across both experiments (Figure 4).

Given the differences between the parameters learned for distinct perturbations and metabolic classes, we expected that many physiologically relevant, specific gene–metabolite interactions that can be discovered by this Bayesian analysis would be missed by looking only at overall correlation. In agreement with this expectation, when evaluated against our set of known gene–metabolite interactions (using three-fold cross-validation to avoid overfitting) and compared to Pearson correlation, Bayesian integration performs significantly better (Figure 4). It is more precise than correlation overall, and reaches twice the precision at the most stringent cut-off (the leftmost end of the curve), which corresponds to the most confidently predicted gene–metabolite interactions.
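The precision-recall computation itself is generic; a compact sketch (again, not the authors' code):

```r
# precision-recall curve from prediction scores and 0/1 labels
pr_curve <- function(score, label){
  o <- order(score, decreasing = TRUE)
  tp <- cumsum(label[o])
  data.frame(precision = tp / seq_along(tp),
             recall = tp / sum(label))
}
```

Sweeping the cutoff from the highest score downward traces the curve from the most stringent (leftmost) end described above.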
To investigate the potential of the Bayesian network to find biologically relevant interactions beyond the set of examples, we searched for support in the scientific literature for the most confident predictions of our network (764 predicted gene–metabolite interactions, excluding those belonging to the example set derived from KEGG), as well as for 250 random gene–metabolite pairs. While many true predictions could be novel and thus unsupported in the literature, we still expect that accurate predictions would be enriched for pairs supported by existing published evidence. Each gene–metabolite pair was scored on four specific criteria (see Materials and Methods). The evaluation was performed blind to whether gene–metabolite pairs were predicted or randomly picked. Of the random pairs, only 1.2% received literature support. In contrast, 9.4% of the highly predicted pairs were supported by at least one piece of literature evidence, an enrichment of 7.8-fold (significant by Fisher's exact test; for the contingency table, see Table 2).

Whereas no random pair satisfied all four evidence criteria, three predicted pairs did: methionine-MET3, methionine-MET22, and methionine-MET10. These three pairs were not in our gold standard because they participate in the assimilation of sulfur into homocysteine, and although homocysteine is converted in one step to L-methionine, in the KEGG database "sulfur metabolism" does not contain the molecular species "methionine" and is a separate pathway from "methionine metabolism." Nevertheless, MET3, MET10, and MET22 are essential for methionine biosynthesis and the knockouts are methionine auxotrophs. We also found a variety of other genes and metabolites for which there was substantial evidence: e.g., valine-PDC5 (PDC5 is involved in the catabolism of valine to isobutyl alcohol [39]) and methionine-MIS1 (MIS1 is required for the formylation of the mitochondrial initiator tRNA [40]). The full results can be found in Dataset S3. These results suggest that, despite the limited scale of the present work, our approach is capable of generalizing from our training set to find other biologically relevant gene–metabolite interactions.

A further example of the potential utility of the Bayesian approach is illustrated in Figure 5, in which we describe an interaction identified by Bayesian integration between a metabolite and a protein that regulates enzyme concentrations. This regulatory protein functions as an important part of the system that S. cerevisiae has evolved to face a fundamental metabolic challenge: namely, the diauxic shift, during which the cell changes from fermentative to respirative growth. In the first phase of growth on fermentable sugars, S. cerevisiae cultures initially grow quickly, metabolizing all the available glucose to ethanol (high ethanol concentrations are toxic to many other microbes, giving S. cerevisiae a competitive advantage). This fermentative phase is followed by a second phase of growth in which yeast cells use ethanol as a substrate and perform oxidative respiration. The switch between these two states involves extensive metabolic and transcriptional remodeling [41]. Chief among the changes induced by the diauxic shift is the change from using glucose to generate ATP (glycolysis) to using ethanol and ATP to make glucose and the carbon skeletons necessary for biosynthesis (gluconeogenesis). Many of the steps in both glycolysis and gluconeogenesis are readily reversible and are therefore catalyzed by the same enzymes.
For this reason, it is imperative that the cell be able to commit to one pathway or the other by controlling the enzymes that are unique to each pathway, as otherwise the cell would waste energy through futile cycles. Accordingly, S. cerevisiae has evolved extensive regulatory machinery at the metabolic, transcriptional, and post-transcriptional levels that allows it to successfully negotiate this transition. One of the key steps of glycolysis is the irreversible conversion of fructose-6-phosphate (F6P) to fructose-1,6-bisphosphate (FBP), catalyzed by phosphofructokinase (the genes PFK1 and PFK2); in gluconeogenesis, the opposite reaction is catalyzed by a separate enzyme, fructose-1,6-bisphosphatase (Fbp1p). A schematic of this pathway is given in Figure 5A.

One of the top predictions made by the Bayesian network for the metabolite fructose-1,6-bisphosphate (FBP) was the gene VID24, which was not in our gold standard of examples. However, VID24 is known to play an important regulatory role in governing the gluconeogenetic enzyme Fbp1p: during the switch from gluconeogenesis to glycolysis, Fbp1p is specifically targeted to and degraded in the vacuole in a way that is dependent on VID24 [42]. This example highlights the promise of Bayesian integration to find relationships that correlation alone would miss. Pearson correlation calculated between VID24 and FBP across both conditions yields an $r$ equal to just 0.03. However, as shown in Figure 5B, VID24 and FBP exhibit an inverse correlation under carbon starvation and a strong positive correlation under nitrogen starvation. According to the parameters learned by the Bayesian network for the "glycolysis and pentose-phosphate pathway" metabolite class, this behavior is indicative of a gene–metabolite functional relationship with high likelihood. It is important to note that this interaction was found despite the fact that our study did not explicitly target the diauxic shift, suggesting the capacity of this method to recover diverse functional signals in the data. Moreover, this example shows that interactions can be found not only between genes encoding enzymes and the metabolites they act on, but also between metabolites and proteins that play roles in metabolic regulation.

### Discussion

We have generated paired transcriptional and metabolomic data that capture the dynamic responses to two perturbations over time, and find substantial evidence for the co-regulation of transcripts and metabolites. At a general level, singular value decomposition reveals that the dominant dynamic patterns exhibited by transcript and metabolite concentrations are closely aligned. Functional enrichment demonstrates that metabolites tend to show significant correlations to genes that play roles in related biological processes. Finally, using a Bayesian framework, we are able to find patterns of co-regulation between genes and metabolites that take into account both the experimental context where the correlations are observed and the functional classification of the metabolite in question. By analyzing metabolite and transcript data within this framework, we can identify new interactions and detect both direct and indirect regulatory relationships between a broad range of genes and small molecules. For example, we identified a regulatory link between FBP and VID24, although they exhibit almost no net correlation across both of the environmental perturbations tested (Figure 5).
Additionally, our top predictions include gene–metabolite relationships that connect metabolism to other key biological processes: for instance, methionine is known to play a unique and important role in the initiation of translation, and indeed two of our top predictions link methionine to FUN12 and GCN3, which are both involved in the formation of the 80S initiation complex that includes the initiator methionyl-tRNA [43]–[45]. This type of Bayesian integration has been shown to outperform conventional correlation-based analyses (Figure 4), and the literature study suggests that we are able to find true gene–metabolite interactions outside of the gold standard. Furthermore, our ability to verify via literature search a substantial fraction of the predicted gene–metabolite pairs (Table 2) implies that Figure 4 markedly underestimates the precision of the Bayesian approach: many of the apparent “false positives” reflect real interactions that were not included in the limited set of positive examples selected from KEGG. These include both real interactions that are already known in the literature and novel interactions to be verified in follow-up efforts. Identifying such novel gene–metabolite relationships could be used not only to drive further experimentation, but also to contribute to other modeling approaches that rely on extensive knowledge about cellular metabolic networks and their connectivity. Despite this progress, there is undoubtedly still room for advances in the accuracy of the predicted gene–metabolite interactions. For instance, advances in analytical techniques continue to allow the measurement of larger numbers of known compounds. Although the metabolite classes that we describe in the present work are broadly applicable and cover the majority of primary metabolism, they could also be extended to cover biomolecules that were not measured in the current study, such as lipids or secondary metabolites. Additionally, an increase in measured compounds could allow broader classes such as “biosynthetic intermediates” to be divided into smaller groups like amino acid intermediates or nucleotides, allowing more specific predictions to be made without the risk of overfitting based on a small number of examples. Using a larger number of classes could also help to avoid situations in which a small number of metabolites in a particular class exhibit different behavior from the majority, potentially leading to incorrect predictions for those outlier metabolites. Another area for future development is the gold standard itself, which, although certainly sufficient to make valid predictions, is still incomplete, as shown by the literature study. The gold standard could productively be combined with an extensive curation of the yeast metabolism literature, so that known regulatory as well as enzymatic interactions between genes and metabolites would be included. It should also be noted that the current predictions were made on the basis of only two experimental conditions. As interest in the measurement of multiple biomolecule types grows, more paired gene–metabolite data of the type presented here will continue to be published, and we imagine that these data will prove a valuable resource for integration efforts like the present work. Selected data sets that could prove particularly illuminating include metabolome and transcriptome sampling under other elemental starvations, such as phosphate and sulfur.
Additionally, since prototrophic yeasts are capable of growth on a variety of carbon and nitrogen sources, monitoring gene and metabolite concentrations under these conditions could be illuminating with respect to both general (e.g., preferred vs. non-preferred nutrient sources, such as ammonium vs. proline) and specific gene–metabolite interactions (e.g., repression or activation of the GAL pathway by galactose). As compounds from more branches of metabolism can be measured, and as data sets that track multiple biomolecule types in response to perturbations become available for more experimental conditions, analyses that are sensitive to biochemical context are likely to become increasingly critical. This work represents a proof of concept of the potential of context-sensitive approaches for building networks relating metabolic activity and gene expression directly from experimental data.

### Materials and Methods

#### Limitation Experiments on Filters

Cultures of FY4 (a prototrophic, MATa derivative of S288C [46], Princeton strain DBY11069) were grown overnight in liquid minimal media (YNB, see below). After these overnight cultures were set back, 10 mL of early exponential phase culture (Klett 60, 1.5×10^6 cells/mL) was filtered onto nitrocellulose filters (82 mm in diameter). The cells (1.5×10^7 in total) covered 5% of the filter surface. The filters were then placed on minimal media-agarose plates and allowed to grow for 3 h, or approximately one doubling on the filter. To initiate the starvation time course, the filters were transferred from the minimal plates to plates made with media lacking either ammonium (YNB-N, nitrogen deprivation) or D-glucose (YNB-C, carbon deprivation). The filter-culture approach, which allows for both rapid modification of the extracellular environment and rapid quenching of metabolism, is described in detail in previous work [20],[21]. The transcriptome and metabolome were sampled during exponential growth (before switching) and at 10, 30, 60, 120, 240, and 480 minutes following the switch to nitrogen-free or carbon-free media. Measurements of both metabolites and transcripts were collected in parallel. The metabolite measurements and extraction procedures have been previously published [20]. The observed quantitative metabolite concentration changes were verified by an independent experiment that included isotopically labeled standards of 34 metabolites during the measurement process. This validation demonstrates that the metabolite measurements are robust to potential ion suppression artifacts and experimental noise (see Figure S1 and Brauer et al. [20]). Experimental controls also demonstrate that the presented metabolomic and transcriptomic data are dominated by biological signal and not by noise. Raw LC-MS/MS data (log-transformed ion counts) for two independent replicates of exponentially growing yeast are plotted in Figure S3. The agreement between the two samples was found to be high (y = 1.03x, R² = 0.998). Lin's concordance coefficient [47], a normalized measure of the distance from the 45° line y = x, where 0 is non-reproducible and 1 is perfectly reproducible, was 0.98, indicating very high reproducibility. For the transcriptomic data, two additional negative control replicates were collected by extracting RNA from filter cultures moved to plates containing both a carbon and a nitrogen source (i.e., the same nutrient conditions as before the switch).
The median standard deviation in transcript measurements collected from these replicates was found to be 0.099 (log2 units). In contrast, the median standard deviation for the carbon starvation timecourse was 0.45, and for the nitrogen starvation timecourse 0.46. This demonstrates that the primary source of variability in the presented data is not technical or biological noise.

#### Transcriptome Sampling

To sample the transcriptome at each timepoint, the filter cultures were submerged in liquid nitrogen and stored until extraction. Yeast cells were washed from filters with 10 mL of lysis buffer, and RNA was subsequently extracted using a Qiagen RNEasy kit (QIAGEN, Valencia, CA). Oligo(dT) resin from an Oligotex midi kit (QIAGEN, Valencia, CA) was used to purify the poly(A)+ fraction from the total extracted RNA. cRNA labeled with cyanine (Cy)5-dCTP (experimental) or Cy3-dCTP (reference) was then synthesized from the poly(A)+ RNA. The transcriptional profiles of yeast cultures at the time of harvest were measured by hybridization of the Cy3- and Cy5-labeled cRNA to an Agilent Yeast Oligo Microarray (V2). The reference sample was the zero timepoint (taken during exponential growth prior to media switching).

#### Normalization of Transcriptome and Metabolome Data

Metabolite levels were normalized by cell dry weight to account for cell growth and division during the time course. As metabolites were roughly evenly distributed between increasing and decreasing in response to nutrient starvation, no normalization for total metabolome size or total LC-MS/MS signal was required. For transcript levels, cell growth and division were accounted for by loading an approximately equivalent amount of reference and experimental RNA onto each array. This loading also normalized for decreases in total RNA pool size induced by nutrient starvation. Such normalization is useful to enable the identification of specific transcriptional regulatory events, as opposed to changes in the overall level of transcription. To correct for biases in hybridization efficiency between the Cy3- and Cy5-labeled RNA, microarray chip scans were normalized so that the total intensities across all probes in the red and in the green channels were equal. For the purposes of our analyses, both metabolite and transcript levels were expressed as log base 2 ratios to the zero timepoint. Missing values in the transcriptional data were imputed using KNNimpute [48] with 10 neighbors, discarding any genes having more than 30% missing values; metabolites with missing values were discarded.

#### Singular Value Decomposition

Singular value decomposition (SVD) is a technique for elucidating the predominant patterns in large data matrices; its applications include image compression and noise reduction. SVD transforms a single data matrix into three matrices: these correspond to (i) the characteristic patterns, or “eigenvectors”; (ii) the amount of information each pattern contributes to the original data set as a whole; and (iii) the weight of each pattern for individual variables. Alter et al. [49] contains a more detailed treatment. Singular value decomposition was performed in MATLAB using the svd command. To determine the extent of coordination between metabolism and transcription under the conditions tested, we computed the Pearson correlation between the most informative gene patterns (top eigenvectors) and the corresponding metabolite patterns.
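As a rough sketch of this computation (in Python with NumPy/SciPy rather than the MATLAB used here; the data arrays below are random placeholders, not the published measurements):

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder data: rows are molecular species, columns are the 12 timepoints,
# entries are log2 ratios to time zero (real shapes: 5373 x 12 and 61 x 12).
rng = np.random.default_rng(0)
transcripts = rng.standard_normal((5373, 12))
metabolites = rng.standard_normal((61, 12))

# SVD: rows of Vt are the characteristic temporal patterns ("eigengenes"),
# s holds the weight of each pattern, and U holds the per-species loadings.
_, s_t, Vt_t = np.linalg.svd(transcripts, full_matrices=False)
_, s_m, Vt_m = np.linalg.svd(metabolites, full_matrices=False)

# Compare the first three transcript patterns with the corresponding
# metabolite patterns. The sign of a singular vector is arbitrary, so the
# magnitude of r is what matters.
for k in range(3):
    r, p = pearsonr(Vt_t[k], Vt_m[k])
    print(f"pattern {k + 1}: r = {r:+.2f}, p = {p:.3g}")
```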
We found that each of the first three gene patterns correlated significantly with the corresponding metabolite patterns, suggesting that similar overall trends were exhibited in both types of data. Significance was established via t-test. The root-mean-squared (RMS) fold change for each of the data types was computed according to the following formula:

$$\mathrm{RMS} = 2^{\sqrt{\frac{1}{TN}\sum_{t=1}^{T}\sum_{i=1}^{N} x_{t,i}^{2}}}$$

where $T$ is the number of timepoints (12 for either data type), $N$ is the number of molecular species measured (5373 for transcripts and 61 for metabolites), and $x_{t,i}$ is the abundance level observed at timepoint $t$ for gene or metabolite $i$, expressed as a log2 ratio to time 0. The root-mean-squared fold change was 3.1 for transcripts and 3.3 for metabolites.

#### Gene Ontology Enrichment Analysis

For each metabolite $m$ and gene $g$ measured, we calculated the Pearson correlation between them:

$$r_{g,m} = \frac{\sum_{k=1}^{n}\left(g_k - \bar{g}\right)\left(m_k - \bar{m}\right)}{(n-1)\,s_g\,s_m}$$

where $s_g^2$ and $s_m^2$ correspond to the sample variances of $g$ and $m$, and $n$ is the sample size (i.e., total number of observations) of $g$ (or $m$). We then conducted a permutation test, rearranging the columns (i.e., experimental conditions) of the metabolite data matrix $10^4$ times to get bootstrapped p-values for these correlation values, which were then corrected to control the false discovery rate according to the procedure described by Benjamini and Hochberg [50]. The significantly correlated genes for each metabolite were assembled into lists. We combined all the gene lists for every metabolite in a particular class (TCA cycle; glycolysis and pentose-phosphate pathway; amino acids; or biosynthetic intermediates), yielding four larger gene lists, one for each metabolite class. The Gene Ontology (GO) [51] is a hierarchical categorization scheme for genes in several organisms, including S. cerevisiae. There are three top-level nodes, or “terms,” namely “molecular function,” “cellular component,” and “biological process”; the majority of gene products in yeast are annotated to more specific (i.e., descendant) terms. We calculated the enrichment of these per-metabolite-class gene lists for all possible GO “biological process” terms using the hypergeometric distribution. Let $n$ be the number of class-associated genes, $N$ the number of genes in the genome, $K$ the number of genes in a GO term, and $k$ the number of class-associated genes that are also in the GO term. The p-value is then given by:

$$p = 1 - \sum_{i=0}^{k-1}\frac{\binom{K}{i}\binom{N-K}{n-i}}{\binom{N}{n}}$$

where $i$ iterates from 0 to $k-1$. This equation yields one minus the probability of observing $k-1$ or fewer class-associated genes belonging to a given GO term, or equivalently, the probability of observing $k$ or greater class-associated genes belonging to that GO term. These enrichment p-values were then Bonferroni corrected (i.e., multiplied by $m$, the number of tests). Since only terms containing at least one of the significantly correlated genes were tested for enrichment, the number of hypotheses tested was 138 for the “TCA cycle” class, 611 for the “amino acids” class, 468 for the “glycolysis and pentose-phosphate pathway” class, and 620 for the “biosynthetic intermediates” class. All significant enrichments are given in Table 1.
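A small sketch of this enrichment calculation using SciPy's hypergeometric distribution; the gene counts in the example call are invented for illustration, not taken from the paper's gene lists:

```python
from scipy.stats import hypergeom

def go_enrichment_p(n_class, n_genome, n_term, n_overlap):
    """P(observing n_overlap or more class-associated genes in a GO term)
    under the hypergeometric null, i.e. 1 - P(n_overlap - 1 or fewer)."""
    # hypergeom.sf(k - 1, N, K, n) = P(X >= k) for X ~ Hypergeom(N, K, n)
    return hypergeom.sf(n_overlap - 1, n_genome, n_term, n_class)

def bonferroni(p, m):
    """Bonferroni correction: multiply by the number of tests, cap at 1."""
    return min(1.0, p * m)

# Illustrative numbers only.
p = go_enrichment_p(n_class=40, n_genome=6000, n_term=25, n_overlap=8)
print(bonferroni(p, m=611))  # e.g., 611 tests for the "amino acids" class
```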
#### Gold Standard Construction

We assembled a “gold standard,” or a set of positive and negative examples of gene–metabolite interactions, from the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway database [33]. To find positive examples for the metabolite classes “amino acids” and “biosynthetic intermediates,” for each distinct pathway (e.g., “arginine and proline metabolism”) as defined by KEGG, the set of reactions in the pathway was collected and then matched to the enzymes that catalyze these reactions. To generate gene–metabolite pairs, every measured metabolite that appeared in that pathway was then paired with this set of enzymes. For example, in the pathway “arginine and proline metabolism,” arginine, ornithine, and proline are all paired with all of the enzymes involved in the catabolism and biosynthesis of arginine and proline, including arginase (CAR1), ornithine-oxo-acid transaminase (CAR2), and proline oxidase (PUT1). In the case of the “TCA cycle” and “glycolysis and pentose-phosphate pathway” metabolite classes, a similar procedure was used. However, compounds from both of these classes are used as carbon skeletons for a wide variety of metabolites. Therefore, to improve the specificity of these positive examples, positive examples for the “TCA cycle” class were drawn only from the list of reactions in the “TCA cycle” pathway, and positive examples for the “glycolysis and pentose-phosphate pathway” class were drawn only from “glycolysis and gluconeogenesis” and “pentose-phosphate pathway.” Additionally, to properly capture the structure of glycolysis and the pentose-phosphate pathway, each of these KEGG pathways was divided into two separate subpathways: upper and lower glycolysis (genes and metabolites upstream and downstream of fructose-1,6-bisphosphate, respectively, with FBP itself belonging to upper glycolysis), and the oxidative and non-oxidative branches of the pentose-phosphate pathway. Matching of metabolites within a pathway to reactions and to enzymes was performed in the same way as above (because of the structure of KEGG, this included certain enzymes outside the pathway that directly act on one of the metabolites in these pathways, such as ILV6). Certain “distributor” metabolites (2-oxoglutarate, acetyl-CoA, ADP, AMP, ATP, L-glutamate, L-aspartate, L-glutamine, NAD+, and NADP+) were excluded from the gold standard because they are common reactants or products in a very large number of reactions. For each metabolite class, 50 times as many random gene–metabolite pairs (drawn from outside the positive example set for all metabolite classes) were picked as negative examples, so that the final gold standard was 1.96% positives and 98.04% negatives (Dataset S2).

#### Data Processing

To perform Bayesian integration, we first calculate the Pearson correlation $r$ of every metabolite and gene separately for each experimental perturbation. To ensure that these correlations are comparable between conditions, we enforce normality on the observed correlations by applying a Fisher transform:

$$z = \operatorname{arctanh}(r) = \frac{1}{2}\ln\left(\frac{1+r}{1-r}\right)$$

The resulting distribution is then centered by the mean $\mu$ and divided by the standard deviation $\sigma$ (i.e., $Z = (z - \mu)/\sigma$). This process transforms the correlation distributions observed under nitrogen and under carbon starvation to be approximately equal to a normal distribution centered around zero, with a standard deviation of one.
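A minimal sketch of this normalization, together with the five-bin discretization described next (NumPy-based; `r` stands for an array of per-condition gene–metabolite correlations):

```python
import numpy as np

def standardized_fisher_z(r):
    """Fisher-transform the correlations, then center and scale so that
    distributions from different conditions are comparable."""
    z = np.arctanh(r)                 # z = 0.5 * ln((1 + r) / (1 - r))
    return (z - z.mean()) / z.std()

def discretize(Z, edges=(-1.5, -0.5, 0.5, 1.5)):
    """Five bins: 0 = strong inverse, 1 = weak inverse, 2 = no relationship,
    3 = weak positive, 4 = strong positive."""
    return np.digitize(Z, edges)
```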
The Z-scores are then discretized into five bins; the bin edges were −1.5, −0.5, 0.5, and 1.5, so that the “strong inverse” bin contained Z-scores more than 1.5 standard deviations below the mean, the “weak inverse” bin contained Z-scores from 0.5 to 1.5 standard deviations below the mean, the “no relationship” bin contained Z-scores within 0.5 standard deviations of the mean, and so forth. These discretized data become the input for the Bayesian networks described below.

#### Bayesian Network Training and Evaluation

The algorithm for finding gene–metabolite interactions is based on the Bayesian network shown in Figure 3. This network, whose structure is depicted in Figure 3B, relates the correlations observed between a gene and metabolite under each condition to (1) whether the gene and metabolite are related and (2) the class of the metabolite. More rigorously, this network specifies that, for a given gene $g$ and metabolite $m$, the discretized correlations observed under nitrogen starvation ($C_N$) and under carbon starvation ($C_C$) are dependent on the class ($K$) of the metabolite and on whether or not the gene and metabolite are functionally related ($R$). This network is therefore parametrized by the conditional probability distributions $P(C_N \mid R, K)$ and $P(C_C \mid R, K)$, along with the prior probability of a gene–metabolite relationship, $P(R \mid K)$, which simply reflects the proportion of positive and negative examples in our gold standard for each metabolite class (see above). The conditional probability distributions $P(C_N \mid R, K)$ and $P(C_C \mid R, K)$ were calculated from the data using maximum likelihood [52]. In each of our examples, the value of every node is known, so this calculation reduces to counting the examples falling into each bin of correlation under nitrogen or carbon starvation for each possible value of $R$ and $K$. These counts were then divided by the total number of observations satisfying those values of $R$ and $K$ to yield probability distributions summing to one. After learning the parameters for this Bayesian network (shown in Figure 3C and 3D), we calculated the probability that a gene and metabolite were actually related given the observed correlations and the metabolite class, $P(R = 1 \mid C_N, C_C, K)$. In our network, exact inference can be used to calculate this probability:

$$P(R = 1 \mid C_N, C_C, K) = \frac{P(C_N \mid R = 1, K)\,P(C_C \mid R = 1, K)\,P(R = 1 \mid K)}{P(C_N, C_C \mid K)}$$

The numerator can be calculated directly from the learned parameters, and the denominator can be obtained by marginalization over $R$. We assessed this algorithm by generating a precision-recall curve, employing three-fold cross-validation to ensure unbiased evaluations. The gold standard was divided into random thirds; the network was then trained on two-thirds of the examples and evaluated on the remainder. This training was repeated three times, each time holding out a different third of the gold standard. Histograms of the confidence scores received by the positive and negative examples in the Bayesian integration process reveal that the positive examples from our gold standard indeed have significantly higher scores (p = 1.1×10^−39 by Kolmogorov-Smirnov test) and can be found throughout the top predictions (Figure S1). Our Bayesian network was trained and evaluated using the Bayes Net Toolbox for MATLAB [53].
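The following sketch shows the shape of this inference, assuming the conditional probability tables have already been learned; all array contents here are placeholders rather than the paper's learned parameters (the paper itself used the Bayes Net Toolbox for MATLAB):

```python
import numpy as np

# Placeholder conditional probability tables. P_cn[k, r, b] is the learned
# probability of correlation bin b under nitrogen starvation given metabolite
# class k and relationship indicator r (0 or 1); P_cc is the carbon analogue.
n_classes, n_bins = 4, 5
P_cn = np.full((n_classes, 2, n_bins), 1.0 / n_bins)
P_cc = np.full((n_classes, 2, n_bins), 1.0 / n_bins)

# Prior P(R | K): one positive per 50 negatives in each class's gold standard.
prior = np.tile([50 / 51, 1 / 51], (n_classes, 1))

def posterior_related(k, bin_n, bin_c):
    """Exact inference of P(R = 1 | C_N, C_C, K) by Bayes' rule; the
    denominator is obtained by summing the joint over both values of R."""
    joint = prior[k] * P_cn[k, :, bin_n] * P_cc[k, :, bin_c]
    return joint[1] / joint.sum()

print(posterior_related(k=0, bin_n=4, bin_c=0))  # uniform CPTs -> the prior
```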
#### Literature-Based Evaluation of Top Predictions

The top 788 gene–metabolite pairs were predicted to be related by the Bayesian network with equal confidence. These top predictions were compiled and the pairs in the gold standard were removed; this yielded 764 predicted pairs. We then added 250 random gene–metabolite pairs and analyzed the random and predicted sets together. This analysis was performed blind to whether pairs were predicted by the algorithm or randomly selected. The predictions were evaluated based on four categories:

1. Specific GO function. A pair received a point if a GO term to which the gene was annotated contained the name of the metabolite or metabolite class, and this GO term was more specific than a term in the GO Functional Slim [54] list.

2. Specific TF target. If the gene in question had an upstream binding site (according to Harbison et al. [55] or Tachibana et al. [56]) for a transcription factor known to regulate a specific branch of metabolism (for example, methionine and a gene with a MET4 site, or a sulfur-containing amino acid and a gene with a CBF3 site), then the gene–metabolite pair received a point for each binding site.

3. Specific documented interaction. A pair received a point if a PubMed search with each member of the pair as a search term revealed a confirmed interaction between the two, as in FBP and VID24.

4. Relevant knockout phenotype. A pair received a point if there was a documented knockout (KO) phenotype for the gene in question listed in the Saccharomyces Genome Database (SGD) [57] that related to the metabolite in question, such as failure of the knockout to grow in media not supplemented with that metabolite.

Since relatively few genes and metabolites have been studied for interactions, we expect that the gene–metabolite pairs scored according to this evaluation will contain many false negatives, i.e., gene–metabolite pairs for which there is no evidence simply because the relationship between those particular genes and metabolites has not yet been studied, despite the presence of a functional interaction.

#### Media Composition

“YNB” minimal media consisted of 6.7 g yeast nitrogen base without amino acids and 20 g D-glucose per 1 L. “YNB-C” carbon starvation media consisted of 6.7 g yeast nitrogen base without amino acids per 1 L, with no glucose. “YNB-N” nitrogen starvation media consisted of 6.7 g yeast nitrogen base without amino acids and without ammonium sulfate and 20 g D-glucose per 1 L. 30 g of three-times-washed ultrapure agarose was added per 1 L to make agarose plates.

### Supporting Information

Dataset S1. Transcript data. Transcriptional data, expressed as log2 ratios to time zero, for 10, 30, 60, 120, 240, and 480 minutes post-induction of nitrogen starvation (removal of ammonium) or carbon starvation (removal of glucose).

doi:10.1371/journal.pcbi.1000270.s001 (2.20 MB XLS)

Dataset S2. Gold standard. Set of positive and negative examples of gene–metabolite interactions used to train the Bayesian network, assembled from the KEGG (Kyoto Encyclopedia of Genes and Genomes) Pathway database.

doi:10.1371/journal.pcbi.1000270.s002 (0.21 MB XLS)

Dataset S3. Literature study results. This table is a representation of the described blind literature study, in which the 764 top predictions (that were not in the gold standard) were scored together with 250 random gene–metabolite pairs.

doi:10.1371/journal.pcbi.1000270.s003 (0.25 MB XLS)

Figure S1. Distribution of prediction scores. This figure shows histograms of the confidence scores (x-axis) from the Bayesian integration procedure for negative (dashed light gray) and positive (solid dark gray) examples in the gold standard. The plot reveals that the distribution of positive pairs shows a propensity for higher scores (p = 1.1×10^−39 by Kolmogorov-Smirnov test) and that the distribution of positive pairs is smooth.
doi:10.1371/journal.pcbi.1000270.s004 (0.02 MB PDF)

Figure S2. Enlarged plots of selected metabolite versus gene concentrations under nitrogen starvation. Because concentrations of the glycolytic metabolites hexose-phosphate and phosphoenolpyruvate had a smaller dynamic range under nitrogen starvation than under carbon starvation, the first five examples of metabolite vs. transcript concentration plots in the nitrogen starvation condition from Figure 2 have been plotted with an expanded x-axis.

doi:10.1371/journal.pcbi.1000270.s005 (0.01 MB PDF)

Figure S3. Comparison of zero timepoints from metabolomic data shows robustness to biological and technical variation. Since we have two independent measurements of metabolite counts in unperturbed cells (the zero timepoints in the carbon starvation and in the nitrogen starvation experiments), these measurements can be compared to assess the technical and biological reproducibility. The agreement between the timepoints is very high (y = 1.03x, R² = 0.998). We also calculated Lin's concordance coefficient, a normalized measure of the distance from the 45° line through the origin y = x, where a score of 0 would be totally non-reproducible and a score of 1 would be identical; this value was calculated to be 0.98, indicating very high reproducibility.

doi:10.1371/journal.pcbi.1000270.s006 (0.02 MB PDF)

### Acknowledgments

The authors would like to acknowledge Ned Wingreen, the other members of the Troyanskaya and Rabinowitz labs, members of the Botstein lab, and the reviewers for valuable feedback.

### Author Contributions

Analyzed the data: PHB. Wrote the paper: PHB JDR OGT. Performed the literature study: PHB. Performed the microarray experiments: MJB. Conceived and designed the study: PHB MJB JDR OGT.

### References

1. Sauer U, Heinemann M, Zamboni N (2007) Genetics. Getting closer to the whole picture. Science 316: 550–551.
2. Guterman I, Shalit M, Menda N, Piestun D, Dafny-Yelin M, et al. (2002) Rose scent: genomics approach to discovering novel floral fragrance-related genes. Plant Cell 14: 2325–2338.
3. Askenazi M, Driggers EM, Holtzman DA, Norman TC, Iverson S, et al. (2003) Integrating transcriptional and metabolite profiles to direct the engineering of lovastatin-producing fungal strains. Nat Biotechnol 21: 150–156.
4. Mercke P, Kappers IF, Verstappen FWA, Vorst O, Dicke M, et al. (2004) Combined transcript and metabolite analysis reveals genes involved in spider mite induced volatile formation in cucumber plants. Plant Physiol 135: 2012–2024.
5. Suzuki H, Reddy MSS, Naoumkina M, Aziz N, May GD, et al. (2005) Methyl jasmonate and yeast elicitor induce differential transcriptional and metabolic re-programming in cell suspension cultures of the model legume Medicago truncatula. Planta 220: 696–707.
6. Rischer H, Oresic M, Seppänen-Laakso T, Katajamaa M, Lammertyn F, et al. (2006) Gene-to-metabolite networks for terpenoid indole alkaloid biosynthesis in Catharanthus roseus cells. Proc Natl Acad Sci U S A 103: 5614–5619.
7. Urbanczyk-Wochniak E, Luedemann A, Kopka J, Selbig J, Roessner-Tunali U, et al. (2003) Parallel analysis of transcript and metabolic profiles: a new approach in systems biology. EMBO Rep 4: 989–993.
8. Hirai MY, Yano M, Goodenowe DB, Kanaya S, Kimura T, et al. (2004) Integration of transcriptomics and metabolomics for understanding of global responses to nutritional stresses in Arabidopsis thaliana. Proc Natl Acad Sci U S A 101: 10205–10210.
9. Hirai MY, Klein M, Fujikawa Y, Yano M, Goodenowe DB, et al. (2005) Elucidation of gene-to-gene and metabolite-to-gene networks in Arabidopsis by integration of metabolomics and transcriptomics. J Biol Chem 280: 25590–25595.
10. Gibon Y, Usadel B, Blaesing OE, Kamlage B, Hoehne M, et al. (2006) Integration of metabolite with transcript and enzyme activity profiling during diurnal cycles in Arabidopsis rosettes. Genome Biol 7: R76.
11. Carrari F, Baxter C, Usadel B, Urbanczyk-Wochniak E, Zanor MI, et al. (2006) Integrated analysis of metabolite and transcript levels reveals the metabolic shifts that underlie tomato fruit development and highlight regulatory aspects of metabolic network behavior. Plant Physiol 142: 1380–1396.
12. Çakir T, Patil KR, Önsan ZI, Ülgen KO, Kirdar B, et al. (2006) Integration of metabolome data with metabolic networks reveals reporter reactions. Mol Syst Biol 2: 50.
13. Murray DB, Beckmann M, Kitano H (2007) Regulation of yeast oscillatory dynamics. Proc Natl Acad Sci U S A 104: 2241–2246.
14. Kresnowati MTAP, van Winden WA, Almering MJH, ten Pierick A, Ras C, et al. (2006) When transcriptome meets metabolome: fast cellular responses of yeast to sudden relief of glucose limitation. Mol Syst Biol 2: 49.
15. Ishii N, Nakahigashi K, Baba T, Robert M, Soga T, et al. (2007) Multiple high-throughput analyses monitor the response of E. coli to perturbations. Science 316: 593–597.
16. Nikiforova VJ, Daub CO, Hesse H, Willmitzer L, Hoefgen R (2005) Integrative gene-metabolite network with implemented causality deciphers informational fluxes of sulphur stress response. J Exp Bot 56: 1887–1896.
17. Urbanczyk-Wochniak E, Baxter C, Kolbe A, Kopka J, Sweetlove LJ, et al. (2005) Profiling of diurnal patterns of metabolite and transcript abundance in potato (Solanum tuberosum) leaves. Planta 221: 891–903.
18. Steuer R, Kurths J, Fiehn O, Weckwerth W (2003) Observing and interpreting correlations in metabolomic networks. Bioinformatics 19: 1019–1026.
19. Steuer R (2006) Review: on the analysis and interpretation of correlations in metabolomic data. Brief Bioinform 7: 151–158.
20. Brauer MJ, Yuan J, Bennett BD, Lu W, Kimball E, et al. (2006) Conservation of the metabolomic response to starvation across two divergent microbes. Proc Natl Acad Sci U S A 103: 19302–19307.
21. Yuan J, Fowler WU, Kimball E, Lu W, Rabinowitz JD (2006) Kinetic flux profiling of nitrogen assimilation in Escherichia coli. Nat Chem Biol 2: 529–530.
22. Bro C, Regenberg B, Nielsen J (2004) Genome-wide transcriptional response of Saccharomyces cerevisiae strain with an altered redox metabolism. Biotechnol Bioeng 85: 269–276.
23. Grabowska D, Chelstowska A (2003) The ALD6 gene product is indispensable for providing NADPH in yeast cells lacking glucose-6-phosphate dehydrogenase activity. J Biol Chem 278: 13984–13988.
24. Wang X, Mann CJ, Bai Y, Ni L, Weiner H (1998) Molecular cloning, characterization, and potential roles of cytosolic and mitochondrial aldehyde dehydrogenases in ethanol metabolism in Saccharomyces cerevisiae. J Bacteriol 180: 822–830.
25. Remize F, Andrieu E, Dequin S (2000) Engineering of the pyruvate dehydrogenase bypass in Saccharomyces cerevisiae: role of the cytosolic Mg2+ and mitochondrial K+ acetaldehyde dehydrogenases Ald6p and Ald4p in acetate formation during alcoholic fermentation. Appl Environ Microbiol 66: 3151–3159.
26. Saint-Prix F, Bönquist L, Dequin S (2004) Functional analysis of the ALD gene family of Saccharomyces cerevisiae during anaerobic growth on glucose: the NADP+-dependent Ald6p and Ald5p isoforms play a major role in acetate formation. Microbiology 150: 2209–2220.
27. Shiba Y, Paradise E, Kirby J, Ro D, Keasling J (2006) Engineering of the pyruvate dehydrogenase bypass in Saccharomyces cerevisiae for high-level production of isoprenoids. Metab Eng 9: 160–168.
28. Maitra PK (1970) A glucokinase from Saccharomyces cerevisiae. J Biol Chem 245: 2423–2431.
29. Boles E, Liebetrau W, Hofmann M, Zimmermann FK (1994) A family of hexosephosphate mutases in Saccharomyces cerevisiae. Eur J Biochem 220: 83–96.
30. Jansen R, Yu H, Greenbaum D, Kluger Y, Krogan NJ, et al. (2003) A Bayesian networks approach for predicting protein-protein interactions from genomic data. Science 302: 449–453.
31. Troyanskaya OG, Dolinski K, Owen AB, Altman RB, Botstein D (2003) A Bayesian framework for combining heterogeneous data sources for gene function prediction (in Saccharomyces cerevisiae). Proc Natl Acad Sci U S A 100: 8348–8353.
32. Lee I, Date SV, Adai AT, Marcotte EM (2004) A probabilistic functional network of yeast genes. Science 306: 1555–1558.
33. Wixon J, Kell D (2000) The Kyoto encyclopedia of genes and genomes–KEGG. Yeast 17: 48–55.
34. Crespo JL, Powers T, Fowler B, Hall MN (2002) The TOR-controlled transcription activators GLN3, RTG1, and RTG3 are regulated in response to intracellular levels of glutamine. Proc Natl Acad Sci U S A 99: 6784–6789.
35. Matsuo R, Kubota H, Obata T, Kito K, Ota K, et al. (2005) The yeast eIF4E-associated protein Eap1p attenuates GCN4 translation upon TOR-inactivation. FEBS Lett 579: 2433–2438.
36. Santangelo GM (2006) Glucose signaling in Saccharomyces cerevisiae. Microbiol Mol Biol Rev 70: 253–282.
37. Moreno F, Ahuatzi D, Riera A, Palomino CA, Herrero P (2005) Glucose sensing through the HXK2-dependent signalling pathway. Biochem Soc Trans 33: 265–268.
38. Rodríguez A, Cera TDL, Herrero P, Moreno F (2001) The hexokinase 2 protein regulates the expression of the GLK1, HXK1 and HXK2 genes of Saccharomyces cerevisiae. Biochem J 355: 625–631.
39. Dickinson JR, Harrison SJ, Hewlins MJ (1998) An investigation of the metabolism of valine to isobutyl alcohol in Saccharomyces cerevisiae. J Biol Chem 273: 25751–25756.
40. Li Y, Holmes WB, Appling DR, RajBhandary UL (2000) Initiation of protein synthesis in Saccharomyces cerevisiae mitochondria without formylation of the initiator tRNA. J Bacteriol 182: 2886–2892.
41. DeRisi JL, Iyer VR, Brown PO (1997) Exploring the metabolic and genetic control of gene expression on a genomic scale. Science 278: 680–686.
42. Chiang MC, Chiang HL (1998) Vid24p, a novel protein localized to the fructose-1,6-bisphosphatase-containing vesicles, regulates targeting of fructose-1,6-bisphosphatase from the vesicles to the vacuole for degradation. J Cell Biol 140: 1347–1356.
43. Choi SK, Lee JH, Zoll WL, Merrick WC, Dever TE (1998) Promotion of Met-tRNAiMet binding to ribosomes by yIF2, a bacterial IF2 homolog in yeast. Science 280: 1757–1760.
44. Cigan AM, Bushman JL, Boal TR, Hinnebusch AG (1993) A protein complex of translational regulators of GCN4 mRNA is the guanine nucleotide-exchange factor for translation initiation factor 2 in yeast. Proc Natl Acad Sci U S A 90: 5350–5354.
45. Asano K, Phan L, Valásek L, Schoenfeld LW, Shalev A, et al. (2001) A multifactor complex of eIF1, eIF2, eIF3, eIF5, and tRNAiMet promotes initiation complex assembly and couples GTP hydrolysis to AUG recognition. Cold Spring Harb Symp Quant Biol 66: 403–416.
46. Winston F, Dollard C, Ricupero-Hovasse SL (1995) Construction of a set of convenient Saccharomyces cerevisiae strains that are isogenic to S288C. Yeast 11: 53–55.
47. Lin LI (1989) A concordance correlation coefficient to evaluate reproducibility. Biometrics 45: 255–268.
48. Troyanskaya O, Cantor M, Sherlock G, Brown P, Hastie T, et al. (2001) Missing value estimation methods for DNA microarrays. Bioinformatics 17: 520–525.
49. Alter O, Brown PO, Botstein D (2000) Singular value decomposition for genome-wide expression data processing and modeling. Proc Natl Acad Sci U S A 97: 10101–10106.
50. Benjamini Y, Hochberg Y (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B 57: 289–300.
51. Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, et al. (2000) Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. Nat Genet 25: 25–29.
52. Russell S, Norvig P (2003) Artificial Intelligence: A Modern Approach. Upper Saddle River (New Jersey): Pearson Education. pp. 716–718. Chapter 20.
53. Murphy KP (2001) The Bayes Net Toolbox for MATLAB. Comput Sci Stat 33: 1024–1034.
54. Myers CL, Barrett DR, Hibbs MA, Huttenhower C, Troyanskaya OG (2006) Finding function: evaluation methods for functional genomic data. BMC Genomics 7: 187.
55. Harbison CT, Gordon DB, Lee TI, Rinaldi NJ, Macisaac KD, et al. (2004) Transcriptional regulatory code of a eukaryotic genome. Nature 431: 99–104.
56. Tachibana C, Yoo JY, Tagne JB, Kacherovsky N, Lee TI, et al. (2005) Combined global localization analysis and transcriptome data identify genes that are directly coregulated by Adr1 and Cat8. Mol Cell Biol 25: 2138–2146.
57. Cherry JM, Adler C, Ball C, Chervitz SA, Dwight SS, et al. (1998) SGD: Saccharomyces Genome Database. Nucleic Acids Res 26: 73–79.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8167874217033386, "perplexity": 4765.355000883007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646352.2/warc/CC-MAIN-20141024030046-00180-ip-10-16-133-185.ec2.internal.warc.gz"}
http://maths.jped.uk/category/number-theory/
## Rabbits, Matrices and the Golden Section – nth term of the Fibonacci Sequence by diagonalising a matrix

### Fibonacci Sequence

Leonardo Pisano, or Leonardo Fibonacci, studied rabbit populations in 1202 in the following way. Rabbit couples (male and female) inhabit an island. Each rabbit couple becomes fertile 2 months after being born and then begets a male-female pair every month thereafter. If the population of the island starts with one couple, how many couples, $f_{n}$, are there after $n$ months? To work this out one needs to add the number of rabbit couples alive after $n-1$ months, $f_{n-1}$ (since there are no deaths), to the number of new-born couples. The number of new-born couples is equal to the number of fertile rabbit couples, which is just the number of rabbit couples alive two months previously, $f_{n-2}$. Hence, $f_{n}=f_{n-1}+f_{n-2}$, resulting in the numbers 0, 1, 1, 2, 3, 5, 8, … . Some quote the first term as 1, but let’s say that $f_{0}=0$ and start from there instead.

### Matrices

After teaching matrices as a Further Mathematics topic for many years I had always concentrated on geometric interpretations to illustrate the topic. The topic of diagonalisation was restricted to symmetric matrices, which produce mutually perpendicular eigen-vectors. The diagonalisation process could be visualised as a rotation to a new set of axes, a simple transformation (a stretch etc.) followed by a rotation back to the original basis. Recently, whilst reading a textbook on Number Theory and Cryptography (Baldoni, Ciliberto, Piacentini Cattaneo) I came across the following example, which should be within the reach of Further Mathematics students.

### Fibonacci Sequence by Matrices

The Fibonacci Sequence can be expressed in matrices: $A=\begin{pmatrix} 0 & 1\\ 1 & 1 \end{pmatrix}$ then, $A \begin{pmatrix} f_{n-2} \\f_{n-1} \end{pmatrix}= \begin{pmatrix} f_{n-1} \\f_{n-2} +f_{n-1}\end{pmatrix}=\begin{pmatrix} f_{n-1} \\f_{n} \end{pmatrix}$. This is a recursive definition. A good question is: Is there a formula for $f_{n}$ which does not involve calculating intermediate values? Well, each stage of this calculation involves a matrix multiplication by $A$, thus $\begin{pmatrix} f_{n-1} \\f_{n} \end{pmatrix}=A^{n-1} \begin{pmatrix} f_{0} \\f_{1} \end{pmatrix}$ and all that is needed is to calculate $A^{n-1}$. A little bit of matrix multiplication yields the following matrix power series for $A$: $A^{1}=\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}$ $A^{2}=\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}$ $A^{3}=\begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix}$ $A^{4}=\begin{pmatrix} 2 & 3 \\ 3 & 5 \end{pmatrix}$ and so on, where the Fibonacci numbers appear as entries in successive matrices. Interesting, but finding a formula for the $n^{th}$ Fibonacci number appears no closer. Had the matrix $A$ been a diagonal matrix, things would have been different, because if $B=\begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}$ then $B^{n}=\begin{pmatrix} a^{n} & 0 \\ 0 & b^{n}\end{pmatrix}$. However, it is possible to diagonalise the matrix $A$. An $n \times n$ matrix can be diagonalised if it has $n$ distinct eigen-values. Eigen-values are given by the characteristic equation, $\begin{vmatrix} 0-t & 1 \\ 1 & 1-t \end{vmatrix}=0$ that is, $-t(1-t)-1=0$ or $t^{2}-t-1=0$.
The solutions to this quadratic are the Golden Ratio $\Phi$, $t=\Phi=\dfrac{ 1+ \sqrt{5}}{2}$, and its conjugate $\dfrac{ 1- \sqrt{5}}{2}$. As it has distinct roots, $A$ can be diagonalised using its eigen-vectors, $\begin{pmatrix} \frac{-1+\sqrt{5}}{2} \\ 1\end{pmatrix}$ and $\begin{pmatrix} \frac{-1-\sqrt{5}}{2} \\ 1\end{pmatrix}$, to get matrix $C$, $C=\begin{pmatrix} \frac{-1-\sqrt{5}}{2} & \frac{-1+\sqrt{5}}{2} \\ 1 & 1 \end{pmatrix}$, in which case, $C^{-1}AC=D=\begin{pmatrix} \frac{1-\sqrt{5}}{2} & 0 \\ 0 & \frac{1+\sqrt{5}}{2} \end{pmatrix}$ or $A=CDC^{-1}$. It is now an easy matter to find successive powers of $A$: $A^{n}=(CDC^{-1})^{n}=CD^{n}C^{-1}$. Hence, $\begin{pmatrix} f_{n-1} \\f_{n} \end{pmatrix}=A \begin{pmatrix} f_{n-2} \\f_{n-1} \end{pmatrix}=A^{n-1} \begin{pmatrix} f_{0} \\f_{1} \end{pmatrix}= CD^{n-1}C^{-1}\begin{pmatrix} f_{0} \\f_{1} \end{pmatrix}$ where, $\begin{pmatrix} f_{n-1} \\f_{n} \end{pmatrix}=A^{n-1} \begin{pmatrix} f_{0} \\f_{1} \end{pmatrix}=\begin{pmatrix} \frac{-1-\sqrt{5}}{2} & \frac{-1+\sqrt{5}}{2} \\ 1 & 1 \end{pmatrix} \begin{pmatrix} \left(\frac{1-\sqrt{5}}{2}\right)^{n-1} & 0 \\ 0 & \left(\frac{1+\sqrt{5}}{2}\right)^{n-1} \end{pmatrix}\begin{pmatrix} -\frac{1}{\sqrt{5}} & \frac{5-\sqrt{5}}{10}\\\frac{1}{\sqrt{5}} & \frac{5+\sqrt{5}}{10}\end{pmatrix}\begin{pmatrix} f_{0} \\f_{1} \end{pmatrix}$ thus, $\begin{pmatrix} f_{n-1} \\f_{n} \end{pmatrix}=\begin{pmatrix} \frac{1}{\sqrt{5}} \left[ \left( \frac{1+\sqrt{5}}{2} \right) ^{n-1}- \left( \frac{1-\sqrt{5}}{2} \right) ^{n-1}\right] \\ \frac{1}{\sqrt{5}} \left[ \left( \frac{1+\sqrt{5}}{2} \right) ^{n}- \left( \frac{1-\sqrt{5}}{2} \right) ^{n}\right]\end{pmatrix}$, and hence the formula for the $n^{th}$ Fibonacci number is $f_{n}=\frac{1}{\sqrt{5}} \left[ \left( \frac{1+\sqrt{5}}{2} \right) ^{n}- \left( \frac{1-\sqrt{5}}{2} \right) ^{n}\right]$.

### Matrices and Wolfram Alpha

Tricky calculations are needed to verify the above by hand. Help is at hand from the Wolfram Alpha website. Other computational engines and environments exist, but this is free and readily available. Encoding matrix $A$ as [[0,1],[1,1]] etc., a long string of characters can be prepared separately and then pasted into the command line.
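An equivalent check can be run in Python with NumPy; this is a quick numerical sketch rather than the Wolfram Alpha session described above:

```python
import numpy as np

A = np.array([[0, 1], [1, 1]])

# Eigendecomposition: columns of C are eigenvectors, D holds the eigenvalues
# (1 +/- sqrt(5))/2 on its diagonal.
evals, C = np.linalg.eig(A)
D = np.diag(evals)

# Verify the diagonalisation A = C D C^{-1}.
assert np.allclose(A, C @ D @ np.linalg.inv(C))

# nth Fibonacci number via A^(n-1) (f0, f1)^T, and via the closed formula.
n = 10
f = np.linalg.matrix_power(A, n - 1) @ np.array([0, 1])
phi, psi = (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2
binet = (phi**n - psi**n) / np.sqrt(5)
print(f[1], round(binet))   # both give f_10 = 55
```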
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 46, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8718873262405396, "perplexity": 885.7828636917613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347407289.35/warc/CC-MAIN-20200530040743-20200530070743-00195.warc.gz"}
https://www.physicsforums.com/threads/abstract-algebra-field-and-polynomial-root-problem.858451/
[Abstract Algebra] Field and Polynomial Root problem

1. Feb 20, 2016 RJLiberator

1. The problem statement, all variables and given/known data

Suppose a field F has n elements and $F=\{a_1,a_2,\ldots,a_n\}$. Show that the polynomial $w(x)=(x-a_1)(x-a_2)\cdots(x-a_n)+1_F$ has no roots in F, where $1_F$ denotes the multiplicative identity in F.

2. Relevant equations

3. The attempt at a solution

Strategy: We have this polynomial: $w(x)=(x-a_1)(x-a_2)\cdots(x-a_n)+1_F$. When we set it to 0, we see that the product of the (x-a) terms needs to equal -1 for w(x) to have a root. But here's my problem. I was thinking of trying to use induction on this proof. But if n = 0, then we have x+1 = 0 and x = i^2, which is in a field, the complex field. That has a root. But ok, let's say n = 1. Then we have x-a+1=0, in which case x = 3, a = 4 would be a root. Must I assume that there are more than n = 1 elements?

2. Feb 20, 2016 Staff: Mentor

The point is that you already know all possible values for $x$. What happens if you try them all one by one?

3. Feb 20, 2016 RJLiberator

I'm not quite sure what you mean by this. We know all possible values of x? Sure, but couldn't that be from -infinity to +infinity? How does that help? There must be a deeper understanding to what you are alluding at, but my initial thoughts are that it is confusing. We know all possible values for x?

4. Feb 20, 2016 Staff: Mentor

You are looking for a root of $w(x)$, i.e. an element $a ∈ \mathbf{F}$ such that $w(a) = 0$. Why don't you try all possible $a$ and see what $w(a)$ will get you? There are only finitely many of them, nothing with infinity.

5. Feb 20, 2016 RJLiberator

Okay, I am starting to see where you are going with this. w(x) = (x-a)+1 = 0, w(x) = (x-a1)(x-a2)+1 = 0, $x^2-x*a_2-x*a_1+a_1*a_2+1= 0$. Couldn't we have a1 = -1 and a2 = 1 and x = 0 ?

6. Feb 20, 2016 Staff: Mentor

I think you don't see the forest for all the trees. (Don't know whether this could be said in English, too.) Again: What is $w(a)$? (Forget the $x$ for a second.)

7. Feb 20, 2016 RJLiberator

w(a) = (a-a)+1 = 1. So clearly that cannot be a root. So if the product of the (x-a) terms is equal to 0, the +1 disallows it from being a root. This also works for more terms than just one. As (a-a) = 0, we see (a-a)(a-a1)(a-a2)....(a-an)+1 = 1.

8. Feb 20, 2016 Staff: Mentor

Yes. $w(a) = (a-a_1) ... (a-a_n) +1_F$ but $a \in \mathbf{F} = \{a_1, ... , a_n\}$. Therefore $w(a) = 1_F$ and $1_F \neq 0_F$ because $\mathbf{F}$ is a field.

9. Feb 20, 2016 RJLiberator

Ugh, I can't believe I didn't see this obvious result from the start. I'm so out of it lately. That is completely obvious :p. Indeed.
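To make the thread's main point concrete, here is a quick numerical check of the argument over the finite field Z/7Z (the choice p = 7 is an arbitrary example, not from the thread):

```python
# Evaluate w(x) = prod(x - a_i) + 1 over the field Z/pZ.
p = 7
elements = range(p)

def w(x):
    prod = 1
    for a in elements:
        prod = (prod * (x - a)) % p
    return (prod + 1) % p

# For every a in the field, the factor (a - a) makes the product 0,
# so w(a) = 0 + 1 = 1 and w has no roots.
print([w(a) for a in elements])  # [1, 1, 1, 1, 1, 1, 1]
```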
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8410987854003906, "perplexity": 588.8996674193206}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645830.10/warc/CC-MAIN-20180318165408-20180318185408-00112.warc.gz"}
http://math.stackexchange.com/questions/141990/what-does-an-inverse-matrix-abstracts
What does an inverse matrix abstract? I am trying to understand inverse matrices more in depth. I took the simplest example: 2 points in a 2d space, and put them into a matrix. $$\begin{pmatrix}5&7\\-2&3\end{pmatrix}$$ Calculating the inverse, we would get another matrix and another 2 points. Where could this inverse be used, and for what purposes? Can someone provide me with a trivial example (preferably in 2d)? - A $2\times 2$ matrix corresponds to a map of the plane to itself. Not surprisingly, the inverse matrix of that matrix corresponds to the inverse map. For instance, the matrix $$\begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \phantom-\cos \theta \\ \end{bmatrix}$$ corresponds to a rotation of angle $\theta$ around the origin. The inverse matrix is $$\begin{bmatrix} \phantom-\cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \\ \end{bmatrix}$$ and corresponds to a rotation of angle $-\theta$ around the origin. - The correspondence being that the matrix $A$ corresponds to the map which takes $x$ in the plane to $Ax$. –  Gerry Myerson May 7 '12 at 0:26 If you multiply a number by 3, then multiplying by $3^{-1}$ "undoes" the multiplication by 3. Similarly, if you multiply by some matrix $M$, then multiplying by $M^{-1}$ "undoes" that multiplication. Another way of saying this is that $3 \times 3^{-1} = 1$, and $M \times M^{-1} = I$, the identity matrix. Like the number 1, the matrix $I$ "does nothing". Note that not every matrix has an inverse. It has to be a square matrix, for a start. You can think about it this way: If I multiply $x$ by $0$, then I get $0$, and there is nothing I can multiply that by which will give me $x$ back. Likewise, there are matrices which cannot be "undone". More concretely, you can think of a matrix as a coordinate transformation. So if $M$ is a matrix representing a rotation 60 degrees clockwise, then $M^{-1}$ would be a rotation 60 degrees anti-clockwise. And so on. - The answers already posted are quite nice, and do a good job of answering your general question of what a matrix inverse really represents. Let me look at your specific example in more depth, though, because the column-vector interpretation of matrices is sometimes useful. Let's say you picked two points in the plane, $p = (5,-2)$ and $q = (7,3)$, and stuck them together as columns of a $2\times2$ matrix $A = \begin{bmatrix}5 & 7 \\ -2 & 3\end{bmatrix}$. What this matrix represents is the unique linear transformation that maps the unit vectors $\hat x = (1,0)$ and $\hat y = (0,1)$ to $p$ and $q$ respectively. (Try it out yourself: calculate $A\hat x$ and $A\hat y$ and see what happens.) So what does $A^{-1} \approx \begin{bmatrix}0.103 & -0.241 \\ 0.069 & 0.172\end{bmatrix}$ represent? As others have said, it represents the transformation that undoes the transformation caused by $A$: it maps $p$ and $q$ back to $\hat x$ and $\hat y$. As it turns out, this is also the transformation that is undone by $A$. So another way of looking at it is that its columns represent the points, $r = (0.103, 0.069)$ and $s = (-0.241, 0.172)$, that $A$ maps to $\hat x$ and $\hat y$ respectively. (This is because, for example, $AA^{-1}\hat x = \hat x$, but $A^{-1}\hat x$ = $r$; plug that in and you get $Ar = \hat x$, or in other words, that $A$ maps $r$ to $\hat x$.) -
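A quick numerical check of this correspondence, using NumPy and the matrix from the question:

```python
import numpy as np

# The matrix from the question, with columns p = (5, -2) and q = (7, 3).
A = np.array([[5.0, 7.0],
              [-2.0, 3.0]])
A_inv = np.linalg.inv(A)

# Multiplying by A^{-1} undoes multiplying by A: A @ A_inv is the identity.
assert np.allclose(A @ A_inv, np.eye(2))

# A maps the unit vectors to p and q; A_inv maps them back.
print(A @ np.array([1.0, 0.0]))        # [ 5. -2.]  (= p)
print(A_inv @ np.array([5.0, -2.0]))   # [1. 0.]    (back to x-hat)
```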
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9471138715744019, "perplexity": 140.89816954184752}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645257063.58/warc/CC-MAIN-20150827031417-00088-ip-10-171-96-226.ec2.internal.warc.gz"}
https://hepweb.ucsd.edu/ph110b/110b_notes/node50.html
## "Rotations" in 4 Dimensions

Clearly the symmetry transformation in a space-time plane (a boost) is not identical to that in a space-space plane (a rotation), because there is some difference in the geometry, but they are closely related. Let's try to put in the hyperbolic functions by setting $\gamma=\cosh\eta$ and $\beta\gamma=\sinh\eta$, as the off-diagonal terms in the matrix would indicate. So we see that $\tanh\eta=\beta$, and the matrix becomes something very similar to the rotation, where the rapidity $\eta$ plays a role similar to an angle. Like an angle when two subsequent rotations are made in the same plane, the rapidity just adds if two boosts along the same direction are made. This can be easily demonstrated by multiplying the two matrices and using the identities for hyperbolic sine and cosine. This gives us our simplest calculation of the velocity addition formula, $\beta=\tanh(\eta_1+\eta_2)=\dfrac{\beta_1+\beta_2}{1+\beta_1\beta_2}$. This never becomes bigger than one, and therefore no velocity can exceed the speed of light. The velocity addition formula can also be derived by considering the derivative of the position vector with respect to proper time, which is time in the rest frame. This derivative is a 4-vector, while the derivative with respect to coordinate time is not a 4-vector.
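A quick numerical check of this additivity (a NumPy sketch; the 2×2 matrix below is the standard boost mixing one space direction with time):

```python
import numpy as np

def boost(eta):
    """2x2 Lorentz boost in a space-time plane, parametrized by rapidity eta."""
    return np.array([[np.cosh(eta), -np.sinh(eta)],
                     [-np.sinh(eta), np.cosh(eta)]])

eta1, eta2 = 0.5, 1.2
# Two boosts along the same direction compose by adding rapidities...
assert np.allclose(boost(eta1) @ boost(eta2), boost(eta1 + eta2))

# ...which is the velocity addition formula beta = tanh(eta1 + eta2).
b1, b2 = np.tanh(eta1), np.tanh(eta2)
print(np.tanh(eta1 + eta2), (b1 + b2) / (1 + b1 * b2))  # equal, and < 1
```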
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9095081686973572, "perplexity": 308.6595250220943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655899209.48/warc/CC-MAIN-20200709065456-20200709095456-00219.warc.gz"}
http://mathoverflow.net/questions/94862/inclusions-of-linear-colimits-into-smooth-manifolds
# inclusions of linear colimits into smooth manifolds Let $V$ be the category of finite dimensional vector spaces and $M$ the category of smooth finite dimensional Hausdorff manifolds. Now suppose any finite dimensional vector space is equipped with a smooth structure in such a way that any $n$-dimensional vector space is diffeomorphic to $\mathbb{R}^n$ seen as a smooth manifold with the standard smooth structure. This way there is a faithful inclusion $\imath: V \to M$ by just forgetting the linear structure. Now recall that $V$ is cocomplete while $M$ is not. To see that colimits exist in $V$, let $D : I \to V$ be a diagram with a finite index category $I$. To construct the colimit, let $h_i : D_i \to \bigoplus_{j \in I} D_j$ be the inclusions and $Q$ be the subspace generated by the images of the maps $h_i \circ Dd - h_j$ for each morphism $d : j \to i$, and let $C = \bigoplus_{j\in I} D_j /Q$ be the quotient space. Then $(D_i \overset{q\circ h_i}{\to} C)_{i \in I}$ is a colimit of $D$, where $q$ is the quotient map. Counterexamples to the existence of all colimits in $M$ are given here on MO for example at: Colimits in the category of smooth manifolds Now the question is: Does the inclusion $i: V \to M$ preserve these (finite) colimits? Obviously $(D_i \overset{q\circ h_i}{\to} C)_{i \in I}$ is a cocone in $M$, but is it still universal? - What is the smooth structure on $\oplus_i D_i$ and on $C$? I think that this is the true problem inside your question. –  Buschi Sergio Apr 22 '12 at 19:13 No. Let each $D_i$ be $n_i$-dimensional. Then $\oplus_iD_i$ is $n:= \sum_i n_i$-dimensional and hence diffeomorphic to $\mathbb{R}^n$. Similarly for $C$. It is again just a finite dimensional vector space and hence has the appropriate standard smooth structure. Recall that for $n \neq 4$ there simply is just one smooth structure. –  Mark.Neuhaus Apr 22 '12 at 19:21 I think your question only asks for finite colimits. In fact, $V$ is only finitely cocomplete. –  Martin Brandenburg Apr 22 '12 at 19:31 Any finite dimensional vector space carries a canonical smooth structure in the following manner: If $dim(V)= n$, we take the atlas consisting of all linear isomorphisms $\phi : V \to \mathbb{R}^n$. This collection of maps is an atlas since for any two $\phi$ and $\psi$ the change of coordinates is a linear map $\mathbb{R}^n \to \mathbb{R}^n$ and hence smooth. If $dim(V)=4$ we have in addition to require that we consider the standard smooth structure on $\mathbb{R}^4$ since there is a continuum of others. –  Mark.Neuhaus Apr 22 '12 at 19:33 Sure, but then you should write "Does the inclusion $i: V \to M$ preserve finite colimits?" above. –  Martin Brandenburg Apr 22 '12 at 19:35 The canonical map $i(\mathbb{R}^n) \coprod_M i(\mathbb{R}^m) \to i(\mathbb{R}^n \coprod_V \mathbb{R}^m)$, where the coproduct index indicates the ambient category, corresponds to the smooth map $\mathbb{R}^n \sqcup \mathbb{R}^m \to \mathbb{R}^{n+m}$. It is neither surjective nor injective (the two zero vectors are mapped to the zero vector). So $i$ doesn't preserve coproducts. The problem is already that $i$ maps the initial vector space to the point, which is the terminal manifold, but not the initial manifold ($\emptyset$). - Ok. Good point. –  Mark.Neuhaus Apr 22 '12 at 19:53 Mark.
You write "Obviously $(D_i\overset{q\circ h_i}{\to}C)_{i\in I}$ is a cocone in $M$, then I ask what you think for smooth structure of these objects, of course about finite sum (that is a biproduts then a product) the answere is as you said "the product of smooth structures", but for $C$?. I put this countrexample (primarily I consider no the smooth manifolds, but topological spaces): let $d_1, 0: \mathbb{R}\to \mathbb{R}^2$ the inclusion map $x \mapsto (x, 0)$ (the $X$ axis is the image) and the $0$-costant map. The cokernel in $V$ of these maps is $\mathbb{R}$ (the cartesian axis $Y$), with the projection $\pi_2: \mathbb{R}^2\to \mathbb{R}: (x, y)\mapsto y$. But the cokernel of $i(0), i(\Delta)$ in $Top$ (topological spaces) is like a cone without a line (think the plane as a square without boundary, and by the middle horizontal line as the image of $d_1$, then make with this a (finite) cylinder without a line, then contracting the middle line to a point). I guess that this cokernel dont exist in the smooth manifolds. Anyway the cokern of $i(0), i(d_1)$ cannot be the projection (to the line $Y$) $\pi_2: \mathbb{R}^2\to i(\mathbb{R})$: Let $S\subset \mathbb{R}^3$ the rotations parabolid with equation $z= x^2+y^2$ and $f: \mathbb{R}^2\to S: (x, y) \mapsto (x^2\cdot y^2, y^2, y^2(1+x^2)$ (this is smooth, surjective, with injective restriction to the open cartesian quadrants, and send all $X$ axis on $(0,0)$). For topological dimention topics about smooth maps, cannot exist a (surjective) smooth map $h: \mathbb{R}\to S$ with $f= h\circ \pi_2$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9901440739631653, "perplexity": 295.0571126401027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644062760.2/warc/CC-MAIN-20150827025422-00251-ip-10-171-96-226.ec2.internal.warc.gz"}
http://maps.thefullwiki.org/Mass%E2%80%93energy_equivalence
# Mass–energy equivalence

In physics, mass–energy equivalence is the concept that the mass of a body is a measure of its energy content. The mass of a body as measured on a scale is always equal to the total energy inside, divided by a constant c² that changes the units appropriately:

$E = mc^2$

where E is energy, m is mass, and c is the speed of light in a vacuum, which is 299,792,458 m/s. Mass–energy equivalence was proposed in Albert Einstein's 1905 paper, "Does the inertia of a body depend upon its energy-content?", one of his Annus Mirabilis ("Miraculous Year") papers. Einstein was not the first to propose a mass–energy relationship, and various similar formulas appeared before Einstein's theory with incorrect numerical coefficients and an incomplete interpretation. Einstein was the first to propose the simple formula and the first to interpret it correctly: as a general principle which follows from the relativistic symmetries of space and time.

In the formula, c² is the conversion factor required to convert from units of mass to units of energy. The formula does not depend on a specific system of units. In the International System of Units, joules are used to measure energy, kilograms for mass, and meters per second for speed; note that 1 joule equals 1 kg·m²/s². In unit-specific terms, E (in joules) = m (in kilograms) multiplied by (299,792,458 m/s)². In natural units, the speed of light is set equal to 1, and the formula becomes an identity.

## Conservation of mass and energy

The concept of mass–energy equivalence unites the concepts of conservation of mass and conservation of energy, allowing rest mass to be converted to other forms of energy, like kinetic energy, heat, or light. Kinetic energy or light can also be converted to particles which have mass. The total amount of mass–energy in a closed system remains constant, because energy cannot be created or destroyed and, in all of its forms, trapped energy has mass. According to the theory of relativity, mass and energy as commonly understood are two names for the same thing; one is not changed into the other. Rather, neither one appears without the other. When energy changes type and leaves a system, it takes its mass with it.

### Fast-moving objects and systems of objects

When an object is pushed in the direction of motion, it gains momentum and energy, but when the object is already traveling near the speed of light, it cannot move much faster, no matter how much energy it absorbs. Its momentum and energy continue to increase without bound, whereas its speed approaches a constant value: the speed of light. This implies that in relativity the momentum of an object cannot be a constant times the velocity, nor can the kinetic energy be a constant times the square of the velocity.

The relativistic mass is defined as the ratio of the momentum of an object to its velocity, and it depends on the motion of the object. If the object is moving slowly, the relativistic mass is nearly equal to the rest mass, and both are nearly equal to the usual Newtonian mass. If the object is moving quickly, the relativistic mass is greater than the rest mass by an amount equal to the mass associated with the kinetic energy of the object. As the object approaches the speed of light, the relativistic mass becomes infinite, because the kinetic energy becomes infinite and this energy is associated with mass.
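The way speed saturates while momentum grows without bound can be made concrete with a short Python sketch (a minimal illustration with an arbitrary 1 kg rest mass; the numbers are our choices, not the article's):

```python
import math

c = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor 1/sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

m0 = 1.0  # rest mass in kg (arbitrary example value)
for frac in (0.1, 0.5, 0.9, 0.99, 0.999):
    v = frac * c
    p = gamma(v) * m0 * v    # relativistic momentum
    m_rel = p / v            # relativistic mass = momentum / velocity
    print(f"v = {frac:5.3f} c   gamma = {gamma(v):8.3f}   m_rel = {m_rel:8.3f} kg")
```

The ratio p/v, the relativistic mass, equals the rest mass times the Lorentz factor and diverges as v approaches c.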
The relativistic mass is always equal to the total energy (rest energy plus kinetic energy) divided by c². Because the relativistic mass is exactly proportional to the energy, relativistic mass and relativistic energy are nearly synonyms; the only difference between them is the units. If length and time are measured in natural units, the speed of light is equal to 1, and even this difference disappears. Then mass and energy have the same units and are always equal, so it is redundant to speak about relativistic mass, because it is just another name for the energy. This is why physicists usually reserve the useful short word "mass" to mean rest mass.

For things made up of many parts, like a nucleus, planet, or star, the relativistic mass is the sum of the relativistic masses (or energies) of the parts, because energies are additive in closed systems. This additivity does not hold in open systems, however, if energy is removed. For example, if a system is bound by attractive forces, and the work they do in attraction is extracted from the system, mass will be lost. Such work is a form of energy which itself has mass, and that mass leaves the system when the system becomes bound. For example, the mass of an atomic nucleus is less than the total mass of the protons and neutrons that make it up, but this is only true after the energy (work) of binding has been removed in the form of a gamma ray (which, in this system, carries away the mass of the binding energy). This mass decrease is also equivalent to the energy required to break up the nucleus into individual protons and neutrons (in this case, work and mass would need to be supplied). Similarly, the mass of the solar system is slightly less than the sum of the individual masses of the Sun and planets.

The relativistic mass of a moving object is larger than the relativistic mass of an object at rest, because a moving object has extra kinetic energy. The rest mass of an object is defined as the mass of an object when it is at rest, so that the rest mass is always the same, independent of the motion of the observer: it is the same in all inertial frames.

For a system of particles going off in different directions, the invariant mass of the system is the analog of the rest mass, and is the same for all observers. It is defined as the total energy (divided by c²) in the center of mass frame (where, by definition, the system's total momentum is zero). A simple example of an object with moving parts but zero total momentum is a container of gas. In this case, the mass of the container is given by its total energy (including the kinetic energy of the gas molecules), since the system's total energy and invariant mass are the same in the reference frame where the momentum is zero, and this reference frame is also the only frame in which the object can be weighed.

## Meanings of the mass–energy equivalence formula

Mass–energy equivalence states that any object has a certain energy, even when it is not moving. In Newtonian mechanics, a motionless body has no kinetic energy, and it may or may not have other amounts of internal stored energy, like chemical energy or thermal energy, in addition to any potential energy it may have from its position in a field of force. In Newtonian mechanics, all of these energies are much smaller than the mass of the object times the speed of light squared. In relativity, all of the energy that moves along with an object adds up to the total mass of the body, which measures how much it resists deflection.
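A tiny Python sketch illustrates the container-of-gas point numerically (the molecular mass and speed below are illustrative values of ours, roughly those of a nitrogen molecule at room temperature):

```python
import math

c = 299_792_458.0

def energy_momentum(m0, v):
    """Relativistic energy (J) and momentum (kg*m/s) of a particle on the x-axis."""
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return g * m0 * c**2, g * m0 * v

# Toy 'gas': two identical molecules moving in opposite directions,
# so the total momentum is zero and we are in the center of mass frame.
m0 = 4.65e-26          # rough mass of one N2 molecule, kg
v = 500.0              # typical thermal speed, m/s
E1, p1 = energy_momentum(m0, +v)
E2, p2 = energy_momentum(m0, -v)

E, p = E1 + E2, p1 + p2
M = math.sqrt(E**2 - (p * c) ** 2) / c**2   # invariant mass of the system
print(M - 2 * m0)   # tiny positive excess: the thermal kinetic energy weighs
```

The system's invariant mass exceeds the sum of the rest masses by the kinetic energy divided by c², which is exactly the sense in which a hot gas is heavier than a cold one.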
Each potential and kinetic energy makes a proportional contribution to the mass. Even a single photon traveling in empty space has a relativistic mass, which is its energy divided by c². If a box of ideal mirrors contains light, the mass of the box is increased by the mass equivalent of the light's energy, since the total energy of the box is its mass.

In relativity, removing energy is removing mass, and the formula m = E/c² tells how much mass is lost when energy is removed. In a chemical or nuclear reaction, the mass of the atoms that come out is less than the mass of the atoms that go in, and the difference in mass shows up as heat and light with the same relativistic mass as the difference (and also the same invariant mass in the center of mass frame of the system). In this case, the E in the formula is the energy released and removed, and the mass m is how much the mass decreases. In the same way, when any kind of energy is added, the increase in the mass is equal to the added energy divided by c². For example, when water is heated in a microwave oven, the oven adds about 1.11 × 10⁻¹⁷ kg of mass for every joule of heat added to the water.

An object moves at different speeds in different frames, depending on the motion of the observer, so the kinetic energy in both Newtonian mechanics and relativity is frame dependent. This means that the amount of energy, and therefore the amount of relativistic mass, that an object is measured to have depends on the observer. The rest mass is defined as the mass that an object has when it is not moving. The term also applies to the invariant mass of systems when the system as a whole is not "moving" (has no net momentum). The rest and invariant masses are the smallest possible value of the mass of the object or system. They are conserved quantities, so long as the system is closed.

The rest mass is almost never additive: the rest mass of an object is not the sum of the rest masses of its parts. The rest mass of an object is the total energy of all the parts, including kinetic energy, as measured by an observer who sees the center of mass of the object standing still. The rest mass adds up only if the parts are standing still and do not attract or repel, so that they have no extra kinetic or potential energy. The other possibility is that they have a positive kinetic energy and a negative potential energy that exactly cancel.

The difference between the rest mass of a bound system and of the unbound parts is exactly proportional to the binding energy of the system. A water molecule weighs a little less than two free hydrogen atoms and an oxygen atom; the minuscule mass difference is the energy needed to split the molecule into three individual atoms (divided by c²), and it was given off as heat when the molecule formed (this heat had mass). Likewise, a stick of dynamite weighs a little bit more than its fragments after the explosion, so long as the fragments are cooled and the heat removed; the mass difference is the energy/heat that is released when the dynamite explodes (when this heat escapes, the mass associated with it escapes too, but total mass is conserved). The change in mass only happens when the system is open and the energy escapes. If a stick of dynamite is blown up in a hermetically sealed chamber, the mass of the chamber, fragments, heat, sound, and light would still be equal to the original mass of the chamber and dynamite. This would in theory also happen even with a nuclear bomb, if it could be kept in a chamber which did not rupture.
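The microwave example is a one-line application of m = E/c². Here is a minimal Python sketch (the amount of water and temperature rise are example values of ours):

```python
c = 299_792_458.0
SPECIFIC_HEAT_WATER = 4186.0   # J/(kg*K), approximate

# How much heavier does 1 kg of water get when heated by 50 K?
delta_E = 1.0 * SPECIFIC_HEAT_WATER * 50.0   # joules added as heat
delta_m = delta_E / c**2                     # kg gained, m = E/c^2
print(f"{delta_E:.0f} J of heat adds {delta_m:.3e} kg")  # ~2.3e-12 kg
```

The mass gain of a couple of picograms is far below anything a kitchen scale can resolve, which is why the effect is invisible in everyday life.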
### Massless particles

In relativity, all energy moving along with a body adds up to the total energy, which is exactly proportional to the relativistic mass. Even a single photon, graviton, or neutrino traveling in empty space has a relativistic mass, which is its energy divided by c². But the rest mass of a photon is slightly subtler to define in terms of physical measurements, because a photon is always moving at the speed of light; it is never at rest. If you run away from a photon in the direction it travels, so that it has to chase you, then when it catches up it will be seen as having less energy, and the faster you were traveling when it caught you, the less energy it will have. As you approach the speed of light, the photon looks redder and redder by Doppler shift (for a photon the Doppler shift is relativistic), and the energy of a very long-wavelength photon approaches zero. This is why the photon is massless: the rest mass of a photon is zero. A massless particle in relativity is the limit of a particle with very small mass which is moving so close to the speed of light that its total energy is nevertheless non-negligible.

Two photons moving in different directions cannot both be made to have arbitrarily small total energy by changing frames, i.e. by chasing them. The reason is that in a two-photon system, the energy of one photon is decreased by chasing it, but the energy of the other increases. Two photons not moving in the same direction have an inertial frame where the combined energy is smallest, but not zero. This is called the center of mass frame or the center of momentum frame; the terms are almost synonyms (the center of mass frame is the special case of a center of momentum frame where the center of mass is put at the origin). If you move in the same direction and at the same speed as the center of mass of the two photons, the total momentum of the photons is zero. Their combined energy E in this frame gives them, as a system, a mass equal to the energy divided by c². This mass is called the invariant mass of the pair of photons together. It is the smallest mass and energy the system may be seen to have by any observer.

If the photons are formed by the collision of a particle and an antiparticle, the invariant mass is the same as the total energy of the particle and antiparticle (their rest energy plus the kinetic energy) in the center of mass frame, where they automatically move in equal and opposite directions (since they have equal and opposite momenta in this frame). If the photons are formed by the disintegration of a single particle with a well-defined rest mass, like the neutral pion, the invariant mass of the photons is equal to the rest mass of the pion. In this case, the center of mass frame for the pion is just the frame where the pion is at rest, and the center of mass does not change. After the two photons are formed, their center of mass still moves the same way the pion did, and their total energy in this frame adds up to the mass-energy of the pion. So the invariant mass of the photons is equal to the pion's rest energy, and by calculating the invariant mass of pairs of photons in a particle detector, pairs can be identified that were probably produced by pion disintegration.

#### Are photons massless?

The photon is currently believed to be strictly massless, but this is an experimental question. If the photon is not a strictly massless particle, it would not move at the exact speed of light.
Its speed would be lower and would depend on its frequency. Relativity would be unaffected by this; the "speed of light", c, would then not be the actual speed at which light moves, but a constant of nature which is the maximum speed that any object could theoretically attain. It would still be the speed of gravitons, but it would not be the speed of photons.

A massive photon would have other effects as well. Coulomb's law would be modified, and the electromagnetic field would have an extra physical degree of freedom. These effects yield more sensitive experimental probes of the photon mass than the frequency dependence of the speed of light. If Coulomb's law is not exactly valid, then a hollow conductor subjected to an external electric field would exhibit an electric field inside it. This allows one to test Coulomb's law to very high precision. A null result of such an experiment has set a limit of $m \lesssim 10^{-14}\ \mathrm{eV}$.

Sharper upper limits have been obtained in experiments designed to detect effects caused by the galactic vector potential. Although the galactic vector potential is very large, because the galactic magnetic field exists on very long length scales, only the magnetic field is observable if the photon is massless. In the case of a massive photon, the mass term $\frac{1}{2} m^2 A_{\mu}A^{\mu}$ would affect the galactic plasma. The fact that no such effects are seen implies an upper bound on the photon mass of $m \lesssim 3\times 10^{-27}\ \mathrm{eV}$. The galactic vector potential can also be probed directly by measuring the torque exerted on a magnetized ring. Such methods were used to obtain the sharper upper limit of $10^{-18}\ \mathrm{eV}$ given by the Particle Data Group.

These sharp limits from the non-observation of the effects caused by the galactic vector potential have been shown to be model dependent. If the photon mass is generated via the Higgs mechanism, then the upper limit of $m \lesssim 10^{-14}\ \mathrm{eV}$ from the test of Coulomb's law is valid.

## Consequences for nuclear physics

Max Planck pointed out that the mass–energy equivalence formula implied that bound systems would have a mass less than the sum of their constituents, once the binding energy had been allowed to escape. However, Planck was thinking about chemical reactions, where the binding energy is too small to measure. Einstein suggested that radioactive materials such as radium would provide a test of the theory, but even though a large amount of energy is released per atom, only a small fraction of the atoms decay.

Once the nucleus was discovered, experimenters realized that the very high binding energies of the atomic nuclei should allow calculation of their binding energies from mass differences. But it was not until the discovery of the neutron in 1932, and the measurement of its mass, that this calculation could actually be performed (see nuclear binding energy for an example calculation). A little while later, the first transmutation reactions (such as ⁷Li + p → 2 ⁴He) verified Einstein's formula to an accuracy of ±0.5%.

The mass–energy equivalence formula was used in the development of the atomic bomb. By measuring the mass of different atomic nuclei and subtracting from that number the total mass of the protons and neutrons as they would weigh separately, one gets the exact binding energy available in an atomic nucleus. This is used to calculate the energy released in any nuclear reaction, as the difference in the total mass of the nuclei that enter and exit the reaction.
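As a worked check of this mass bookkeeping, here is a small Python sketch computing the energy released in the ⁷Li + p → 2 ⁴He reaction mentioned above from the mass difference (the atomic masses are approximate standard-table values we supply for illustration, not figures quoted in the article):

```python
# Q-value of 7Li + p -> 2 4He from mass differences via E = mc^2.
U_TO_MEV = 931.494          # energy equivalent of 1 unified mass unit, MeV
m_Li7, m_H1, m_He4 = 7.016004, 1.007825, 4.002602   # atomic masses in u

delta_m = (m_Li7 + m_H1) - 2 * m_He4        # mass lost in the reaction, in u
print(f"Q = {delta_m * U_TO_MEV:.1f} MeV")  # ~17.3 MeV shared by the alphas
```

Roughly 0.2% of the initial rest mass disappears and reappears as kinetic energy of the two alpha particles, which is the ±0.5%-level test referred to above.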
In quantum chromodynamics, the modern theory of the nuclear force, most of the mass of the proton and the neutron is explained by special relativity. The mass of the proton is about eighty times greater than the sum of the rest masses of the quarks that make it up, while the gluons have zero rest mass. The extra energy of the quarks and gluons in a region within a proton, as compared to the energy of the quarks and gluons in the QCD vacuum, accounts for over 98% of the mass. The internal dynamics of the proton are complicated, because they are determined by the quarks exchanging gluons and interacting with various vacuum condensates. Lattice QCD provides a way of calculating the mass of the proton directly from the theory to any accuracy, in principle. The most recent calculations claim that the mass is determined to better than 4% accuracy, arguably accurate to 1% (see Figure S5 in Dürr et al.). These claims are still controversial, because the calculations cannot yet be done with quarks as light as they are in the real world. This means that the predictions are found by a process of extrapolation, which can introduce systematic errors. It is hard to tell whether these errors are controlled properly, because the quantities that are compared to experiment are the masses of the hadrons, which are known in advance. These recent calculations are performed by massive supercomputers, and, as noted by Boffi and Pasquini, "a detailed description of the nucleon structure is still missing because ... long-distance behavior requires a nonperturbative and/or numerical treatment."

More conceptual approaches to the structure of the proton are: the topological soliton approach originally due to Tony Skyrme; the more accurate AdS/QCD approach, which extends it to include a string theory of gluons; various QCD-inspired models like the bag model and the constituent quark model, which were popular in the 1980s; and the SVZ sum rules, which allow for rough approximate mass calculations. These methods do not have the same accuracy as the more brute-force lattice QCD methods, at least not yet. But all of these methods are consistent with special relativity, and so calculate the mass of the proton from its total energy.

## Practical examples

Einstein used the CGS system of units (centimeters, grams, seconds, dynes, and ergs), but the formula is independent of the system of units. In natural units, the speed of light is defined to equal 1, and the formula expresses an identity: E = m. In the SI system (expressing the ratio E/m in joules per kilogram using the value of c in meters per second):

E/m = c² = (299,792,458 m/s)² = 89,875,517,873,681,764 J/kg (≈ 9.0 × 10¹⁶ joules per kilogram)

So one gram of mass is equivalent to the following amounts of energy:

- 89.9 terajoules
- 24.9 million kilowatt-hours (≈ 25 GW·h)
- 21.5 billion kilocalories (≈ 21 Tcal)
- 21.5 kilotons of TNT-equivalent energy (≈ 21 kt)
- 85.2 billion BTUs

(Conversions used: 1956 International (Steam) Table (IT) values, where one calorie ≡ 4.1868 J and one BTU ≡ 1055.05585262 J; the weapons designers' conversion value of one gram of TNT ≡ 1000 calories is also used.)

Any time energy is generated, the process can be evaluated from an E = mc² perspective. For instance, the "Gadget"-style bomb used in the Trinity test and the bombing of Nagasaki had an explosive yield equivalent to 21 kt of TNT.
About 1 kg of the approximately 6.15 kg of plutonium in each of these bombs fissioned into lighter elements totaling almost exactly one gram less, after cooling. (The heat, light, and electromagnetic radiation released in the explosion carried the missing gram of mass.) This occurs because nuclear binding energy is released whenever elements with more than 62 nucleons fission.

Another example is hydroelectric generation. The electrical energy produced by Grand Coulee Dam's turbines every 3.7 hours represents one gram of mass. This mass passes to the electrical devices which are powered by the generators (such as lights in cities), where it appears as a gram of heat and light. Turbine designers look at their equations in terms of pressure, torque, and RPM. However, Einstein's equations show that all energy has mass, and thus the electrical energy produced by a dam's generators, and the heat and light which result from it, all retain their mass, which is equivalent to the energy. The potential energy, and equivalent mass, represented by the waters of the Columbia River as it descends to the Pacific Ocean would be converted to heat, due to viscous friction and the turbulence of white-water rapids and waterfalls, were it not for the dam and its generators. This heat would remain as mass on site at the water, were it not for the equipment that converts some of this potential and kinetic energy into electrical energy, which can be moved from place to place (taking its mass with it).

Whenever energy is added to a system, the system gains mass. A spring's mass increases whenever it is put into compression or tension; its added mass arises from the added potential energy stored within it, which is bound in the stretched chemical (electron) bonds linking the atoms within the spring. Raising the temperature of an object (increasing its heat energy) increases its mass. If the temperature of the platinum/iridium "international prototype" of the kilogram, the world's primary mass standard, is allowed to change by 1 °C, its mass will change by 1.5 picograms (1 pg = 1 × 10⁻¹² g; this assumes a 90/10 Pt/Ir alloy by weight, molar heat capacities of 25.9 J/(mol·K) for Pt and 25.1 J/(mol·K) for Ir, hence a Pt-dominated average of 25.8 J/(mol·K) over 5.134 moles of metal, giving a heat capacity of about 132 J/K for the prototype). A variation of ±1.5 picograms is, of course, much smaller than the actual uncertainty in the mass of the international prototype, which is ±2 micrograms. A spinning ball will weigh more than a ball that is not spinning.

Note that no net mass or energy is really created or lost in any of these scenarios. Mass/energy simply moves from one place to another. These are some examples of the transfer of energy and mass in accordance with the principle of mass–energy conservation. Note further that, in accordance with Einstein's Strong Equivalence Principle (SEP), all forms of mass and energy produce a gravitational field in the same way. (Earth's gravitational self-energy is 4.6 × 10⁻¹⁰ of Earth's total mass, or 2.7 trillion metric tons; see T. W. Murphy, Jr. et al., "The Apache Point Observatory Lunar Laser-Ranging Operation (APOLLO)", University of Washington, Dept. of Physics.) So all radiated and transmitted energy retains its mass. Not only does the matter comprising Earth create gravity, but the gravitational field itself has mass, and that mass contributes to the field too. This effect is accounted for in ultra-precise laser ranging to the Moon as the Earth orbits the Sun when testing Einstein's general theory of relativity.
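The one-gram equivalents above, and the prototype-heating figure, are each a single line of arithmetic; a minimal Python sketch reproducing them (using the heat capacity quoted above):

```python
c = 299_792_458.0
E_per_gram = 1e-3 * c**2                      # joules in one gram of mass
print(f"{E_per_gram:.3e} J")                  # ~8.988e13 J = 89.9 TJ
print(f"{E_per_gram / 3.6e6:.3e} kWh")        # ~2.50e7 kWh
print(f"{E_per_gram / 4.184e12:.1f} kt TNT")  # ~21.5 kt

# Mass gained by the kilogram prototype when warmed by 1 K,
# using the ~132 J/K heat capacity quoted above (1 pg = 1e-15 kg):
print(f"{132.0 / c**2 * 1e15:.2f} pg")        # ~1.47 pg
```

Both results match the text: a gram of mass is about 21.5 kt of TNT, and a 1 °C warming of the prototype adds roughly 1.5 picograms.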
According to E = mc², no closed system (any system treated and observed as a whole) ever loses mass, even when rest mass is converted to energy. This statement is more than an abstraction based on the principle of equivalence: it is a real-world effect.

All types of energy contribute to mass, including potential energies. In relativity, interaction potentials are always due to local fields, not to direct nonlocal interactions, because signals cannot travel faster than light. The field energy is stored in field gradients or, in some cases (for massive fields), where the field has a nonzero value. The mass associated with a potential energy is the mass–energy of the corresponding field energy, and the mass associated with field energy can be detected, in principle, by gravitational experiments, by checking how the field attracts other objects gravitationally.

The energy in the gravitational field itself is different. There are several consistent ways to define the location of the energy in a gravitational field, all of which agree on the total energy when space is mostly flat and empty. But because the gravitational field can be made to vanish locally by choosing a free-falling frame, it is hard to avoid making the location dependent on the observer's frame of reference. The gravitational field energy reduces to the familiar Newtonian gravitational potential energy in the Newtonian limit.

## Efficiency

In nuclear reactions, typically only a small fraction of the total mass–energy is converted into heat, light, radiation, and motion, i.e. into a usable form. When an atom fissions, it loses only about 0.1% of its mass, and in a bomb or reactor not all the atoms can fission. In a fission-based atomic bomb, the efficiency is only about 40%, meaning only 40% of the fissionable atoms actually fission, so only about 0.04% of the total mass appears as energy in the end. In nuclear fusion, more of the mass is released as usable energy, roughly 0.3%. But in a fusion bomb (see nuclear weapon yield), the bomb mass is partly casing and non-reacting components, so that again only about 0.03% of the total mass is released as usable energy.

In theory, it should be possible to convert all the mass in matter into heat and light, but none of the theoretically known methods are practical. One way to convert all rest mass into usable energy is to annihilate matter with antimatter. But antimatter is rare in our universe and must be made first, and making the antimatter requires more energy than would be released.

Since most of the mass of ordinary objects is in protons and neutrons, converting all the mass in ordinary matter to useful energy requires converting the protons and neutrons to lighter particles. In the standard model of particle physics, the number of protons plus neutrons is nearly exactly conserved. Still, Gerardus 't Hooft showed that there is a process which converts protons and neutrons to antielectrons and neutrinos: the weak SU(2) instanton proposed by Belavin, Polyakov, Schwarz, and Tyupkin. This process can in principle convert all the mass of matter into neutrinos and usable energy, but it is normally extraordinarily slow. It later became clear that this process happens at a fast rate at very high temperatures, since then instanton-like configurations are copiously produced from thermal fluctuations. The temperature required is so high that it would only have been reached shortly after the big bang.
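The conversion fractions quoted above translate directly into per-kilogram energy yields; a short Python sketch using the article's rough figures:

```python
c = 299_792_458.0

def usable_energy(mass_kg, fraction):
    """Energy released when `fraction` of the rest mass is converted (E = mc^2)."""
    return mass_kg * fraction * c**2

for label, frac in [("fission (~0.1% of fissioned mass)", 0.001),
                    ("fusion  (~0.3%)",                   0.003),
                    ("annihilation (100%)",               1.0)]:
    print(f"1 kg, {label}: {usable_energy(1.0, frac):.2e} J")
```

Even the 0.1% fission figure corresponds to about 9 × 10¹³ J per kilogram of fissioned material, which is why nuclear yields dwarf chemical ones.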
Many extensions of the standard model contain magnetic monopoles, and in some models of grand unification these monopoles catalyze proton decay, a process known as the Callan–Rubakov effect. This process would be an efficient mass–energy conversion at ordinary temperatures, but it requires making monopoles and anti-monopoles first. The energy required to produce monopoles is believed to be enormous, but magnetic charge is conserved, so that the lightest monopole is stable. All these properties are deduced in theoretical models; magnetic monopoles have never been observed, nor have they been produced in any experiment so far.

The third known method of total mass–energy conversion uses gravity, specifically black holes. Stephen Hawking theorized that black holes radiate thermally regardless of how they are formed, so it is theoretically possible to throw matter into a black hole and use the emitted heat to generate power. According to the theory of Hawking radiation, however, a black hole radiates at a higher rate the smaller it is, producing usable powers only at small black hole masses (where "usable" might mean, for example, something greater than the local background radiation). The radiated power changes with the mass of the black hole, increasing as the mass decreases, with power proportional to the inverse square of the mass. In a "practical" scenario, mass and energy could be dumped into the black hole to offset this evaporation, keeping its size, and thus its power output, near constant.

## Background

The equation E = mc², where m stands for the rest mass (invariant mass) $m_0$, applies most simply to single particles viewed in an inertial frame where they have no momentum. But it also applies to ordinary objects composed of many particles, so long as the particles are moving in different directions such that the "net" or total momentum is zero. The rest mass of the object includes contributions from heat and sound, chemical binding energies, and trapped radiation. Familiar examples are a tank of gas or a hot poker: the kinetic energy of their particles, the heat motion and radiation, contribute to their weight on a scale according to E = mc².

The formula is the special case of the relativistic energy–momentum relationship:

$E^2 - (pc)^2 = (m_0 c^2)^2.$

This equation gives the rest mass of an object which has an arbitrary amount of momentum and energy. The interpretation of this equation is that the rest mass is the relativistic length of the energy–momentum four-vector.

If the equation E = mc² is used with the rest mass or invariant mass of the object, the E given by the equation will be the rest energy of the object, and will change according to the object's internal energy, heat, sound, and chemical binding energies (all of which must be added to or subtracted from the object), but will not change with the object's overall motion (in the case of systems, the motion of its center of mass). However, if a system is closed, its invariant mass is the same for all inertial observers (in all inertial frames), and is also constant in time, i.e. conserved.
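A minimal Python sketch of the energy–momentum relation, recovering the rest mass of a boosted particle from its total energy and momentum (the electron mass and the 0.6c boost are example values of ours):

```python
import math

c = 299_792_458.0

def rest_mass(E, p):
    """Rest mass (kg) from total energy E (J) and momentum p (kg*m/s),
    via E^2 - (pc)^2 = (m0 c^2)^2."""
    return math.sqrt(E**2 - (p * c) ** 2) / c**2

# Check with an electron (m0 ~ 9.109e-31 kg) boosted to 0.6 c:
m0, v = 9.109e-31, 0.6 * c
g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
E, p = g * m0 * c**2, g * m0 * v
print(rest_mass(E, p) / m0)   # -> 1.0: the invariant mass is frame independent
```

Whatever frame supplies E and p, the combination E² − (pc)² always returns the same m₀, which is the "relativistic length" statement above in computational form.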
If the equation E = mc² is used with the relativistic mass of the object, the energy will be the total energy of the object, which is also conserved so long as no energy is added to or subtracted from the object. However, like the kinetic energy, this total energy depends on the velocity of the object, and is different in different inertial frames. Thus, this quantity is not invariant between different inertial observers, even though it is constant over time for any single observer. As in the case of rest energy, these relationships for total energy are also true for systems of objects, so long as the system is closed.

### Mass–velocity relationship

In developing special relativity, Einstein found that the kinetic energy of a moving body is

$K.E. = \frac{m_0 c^2}{\sqrt{1-\frac{v^2}{c^2}}} - m_0 c^2,$

with $v$ the velocity and $m_0$ the rest mass. He included the second term on the right to make sure that for small velocities the energy would be the same as in classical mechanics:

$K.E. = \frac{1}{2}m_0 v^2 + \ldots$

Without this second term, there would be an additional contribution to the energy when the particle is not moving.

Einstein found that the total momentum of a moving particle is

$P = \frac{m_0 v}{\sqrt{1-\frac{v^2}{c^2}}},$

and it is this quantity which is conserved in collisions. The ratio of the momentum to the velocity is the relativistic mass m:

$m = \frac{m_0}{\sqrt{1-\frac{v^2}{c^2}}}.$

The relativistic mass and the relativistic kinetic energy are related by the formula

$K.E. = m c^2 - m_0 c^2.$

Einstein wanted to omit the unnatural second term on the right-hand side, whose only purpose is to make the energy at rest zero, and to declare that the particle has a total energy which obeys

$E = m c^2,$

which is a sum of the rest energy $m_0 c^2$ and the kinetic energy. This total energy is mathematically more elegant, and fits better with the momentum in relativity. But to come to this conclusion, Einstein needed to think carefully about collisions. This expression for the energy implied that matter at rest has a huge amount of energy, and it is not clear whether this energy is physically real or just a mathematical artifact with no physical meaning.

In a collision process where all the rest masses are the same at the beginning as at the end, either expression for the energy is conserved; the two expressions differ only by a constant which is the same at the beginning and at the end of the collision. Still, by analyzing the situation where particles are thrown off a heavy central particle, it is easy to see that the inertia of the central particle is reduced by the total energy emitted. This allowed Einstein to conclude that the inertia of a heavy particle is increased or diminished according to the energy it absorbs or emits.

### Relativistic mass

After Einstein first made his proposal, it became clear that the word mass can have two different meanings. The rest mass is what Einstein called m, but others defined the relativistic mass with an explicit index:

$m_{\mathrm{rel}} = \frac{m_0}{\sqrt{1-\frac{v^2}{c^2}}}.$

This mass is the ratio of momentum to velocity, and it is also the relativistic energy divided by c² (it is not Lorentz-invariant, in contrast to $m_0$). The equation $E = m_{\mathrm{rel}}c^2$ holds for moving objects. When the velocity is small, the relativistic mass and the rest mass are almost exactly the same. E = mc² then either means E = m₀c² for an object at rest, or $E = m_{\mathrm{rel}}c^2$ when the object is moving.
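A short Python sketch compares the relativistic kinetic energy above with its Newtonian first term (the 1 kg rest mass and the speed grid are illustrative choices of ours):

```python
import math

c = 299_792_458.0

def kinetic_energy(m0, v):
    """Relativistic kinetic energy K.E. = (gamma - 1) m0 c^2."""
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (g - 1.0) * m0 * c**2

m0 = 1.0  # kg
for frac in (0.01, 0.1, 0.5, 0.9):
    v = frac * c
    exact = kinetic_energy(m0, v)
    newton = 0.5 * m0 * v**2            # first term of the low-speed expansion
    print(f"v = {frac:4.2f} c   exact/Newtonian = {exact / newton:.4f}")
```

At 1% of c the two agree to better than a part in ten thousand; by 0.9c the Newtonian formula is off by more than a factor of three, which is the content of the low-speed expansion discussed next.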
Also, Einstein (following Hendrik Lorentz and Max Abraham) used velocity- and direction-dependent mass concepts (longitudinal and transverse mass) in his 1905 electrodynamics paper and in another paper in 1906. However, in his first paper on E = mc² (1905) he treated m as what would now be called the rest mass. Some claim that (in later years) he did not like the idea of "relativistic mass." When modern physicists say "mass", they are usually talking about rest mass, since if they meant "relativistic mass", they would just say "energy".

Considerable debate has ensued over the use of the concept "relativistic mass" and the connection of "mass" in relativity to "mass" in Newtonian dynamics. For example, one view is that only rest mass is a viable concept and a property of the particle, while relativistic mass is a conglomeration of particle properties and properties of spacetime. A perspective that avoids this debate, due to Kjell Vøyenli, is that the Newtonian concept of mass as a particle property and the relativistic concept of mass have to be viewed as embedded in their own theories, with no precise connection between them.

### Low-speed expansion

We can rewrite the expression E = γm₀c² as a Taylor series:

$E = m_0 c^2 \left[1 + \frac{1}{2} \left(\frac{v}{c}\right)^2 + \frac{3}{8} \left(\frac{v}{c}\right)^4 + \frac{5}{16} \left(\frac{v}{c}\right)^6 + \ldots \right].$

For speeds much smaller than the speed of light, the higher-order terms in this expression get smaller and smaller because v/c is small. For low speeds we can ignore all but the first two terms:

$E \approx m_0 c^2 + \frac{1}{2} m_0 v^2.$

The total energy is a sum of the rest energy and the Newtonian kinetic energy. The classical energy equation ignores both the m₀c² part and the high-speed corrections. This is appropriate, because all the higher-order corrections are small. Since only changes in energy affect the behavior of objects, whether we include the m₀c² part makes no difference, since it is constant. For the same reason, it is possible to subtract the rest energy from the total energy in relativity. By considering the emission of energy in different frames, Einstein could show that the rest energy has a real physical meaning.

The higher-order terms are extra corrections to Newtonian mechanics which become important at higher speeds. The Newtonian equation is only a low-speed approximation, but an extraordinarily good one. All of the calculations used in putting astronauts on the Moon, for example, could have been done using Newton's equations without any of the higher-order corrections.

## History

While Einstein was the first to have correctly deduced the mass–energy equivalence formula, he was not the first to have related energy with mass. But nearly all previous authors thought that the energy which contributes to mass comes only from electromagnetic fields.

### Newton: Matter and light

In 1717 Isaac Newton speculated that light particles and matter particles were interconvertible in "Query 30" of the Opticks, where he asks whether gross bodies and light may be converted into one another. Since Newton did not understand light as the motion of a field, he was not speculating about the conversion of motion into matter. Since he did not know about energy, he could not have understood that converting light to matter is turning work into mass.

### Electromagnetic rest mass

There were many attempts in the 19th and the beginning of the 20th century, like those of J. J.
Thomson (1881), Oliver Heaviside (1888), and George Frederick Charles Searle (1897), to understand how the mass of a charged object depends on the electrostatic field. Because the electromagnetic field carries part of the momentum of a moving charge, it was also suspected that the mass of an electron would vary with velocity near the speed of light. Searle calculated that it is impossible for a charged object to exceed the velocity of light, because this would require an infinite amount of energy.

Following Thomson and Searle (1896), Wilhelm Wien (1900), Max Abraham (1902), and Hendrik Lorentz (1904) argued that this relation applies to the complete mass of bodies, because all inertial mass is electromagnetic in origin. The formula for the mass–energy relation given by them was $m = \tfrac{4}{3}E/c^2$. Wien went on to state that, if gravitation is assumed to be an electromagnetic effect too, then there has to be a strict proportionality between (electromagnetic) inertial mass and (electromagnetic) gravitational mass. This interpretation belongs to the now discredited electromagnetic worldview, and the formulas that these authors discovered always included a factor of 4/3 in the proportionality. For example, the formulas given by Lorentz in 1904 for the pre-relativistic longitudinal and transverse masses were, in modern notation,

$m_L = \frac{4}{3}\frac{E_{em}}{c^2}\,\frac{1}{\left(1-\frac{v^2}{c^2}\right)^{3/2}}, \qquad m_T = \frac{4}{3}\frac{E_{em}}{c^2}\,\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}.$

In July 1905 (published 1906), nearly at the same time as Einstein found the simple relation from relativity, Poincaré was able to explain why the electromagnetic mass calculations always had a factor of 4/3. In order for a particle consisting of positive or negative charge to be stable, there must be some sort of attractive force of non-electrical nature which keeps it together. If the mass–energy of this force field is included in a way which is consistent with relativity theory, the attractive contribution adds an amount $-\tfrac{1}{3}E/c^2$ to the energy of the bodies, and this explains the discrepancy between the pure electromagnetic theory and relativity.

### Inertia of energy and radiation

James Clerk Maxwell (1874) and Adolfo Bartoli (1876) found that the existence of tensions in the ether, such as radiation pressure, follows from electromagnetic theory. However, Lorentz (1895) recognized that this led to a conflict between the action/reaction principle and his ether theory.

#### Poincaré

In 1900 Henri Poincaré studied this conflict and tried to determine whether the center of gravity still moves with a uniform velocity when electromagnetic fields are included. He noticed that the action/reaction principle does not hold for matter alone, but that the electromagnetic field has its own momentum. The electromagnetic field energy behaves like a fictitious fluid ("fluide fictif") with a mass density of $E/c^2$ (in other words, m = E/c²). If the center of mass frame is defined by both the mass of matter and the mass of the fictitious fluid, and if the fictitious fluid is indestructible (neither created nor destroyed), then the motion of the center of mass frame remains uniform.

But electromagnetic energy can be converted into other forms of energy. So Poincaré assumed that there exists a non-electric energy fluid at each point of space, into which electromagnetic energy can be transformed and which also carries a mass proportional to the energy. In this way, the motion of the center of mass remains uniform. Poincaré said that one should not be too surprised by these assumptions, since they are only mathematical fictions.
But Poincaré's resolution led to a paradox when changing frames: if a Hertzian oscillator radiates in a certain direction, it will suffer a recoil from the inertia of the fictitious fluid. In the framework of Lorentz ether theory, Poincaré performed a Lorentz boost to the frame of the moving source. He noted that energy conservation holds in both frames, but that the law of conservation of momentum is violated. This would allow a perpetuum mobile, a notion which he abhorred. The laws of nature would have to be different in different frames of reference, and the relativity principle would not hold. Poincaré's paradox was resolved by Einstein's insight that a body losing energy as radiation or heat loses a mass of the amount m = E/c². The Hertzian oscillator loses mass in the emission process, and momentum is conserved in any frame. Einstein noted in 1906 that Poincaré's solution to the center of mass problem and his own were mathematically equivalent (see below).

Poincaré came back to this topic in "Science and Hypothesis" (1902) and "The Value of Science" (1905). This time he rejected the possibility that energy carries mass: "... [the recoil] is contrary to the principle of Newton since our projectile here has no mass; it is not matter, it is energy". He also discussed two other unexplained effects: (1) non-conservation of mass implied by Lorentz's variable mass $\gamma m$, Abraham's theory of variable mass, and Kaufmann's experiments on the mass of fast-moving electrons, and (2) the non-conservation of energy in the radium experiments of Madame Curie.

#### Abraham and Hasenöhrl

Following Poincaré, Max Abraham in 1902 introduced the term "electromagnetic momentum" to maintain the action/reaction principle. Poincaré's result was verified by him, whereby the field carries a momentum density of $E/c^2$ per cm³ and a momentum flow of $E/c$ per cm².

In 1904, Friedrich Hasenöhrl specifically associated inertia with radiation in a paper which was, according to his own words, very similar to some papers of Abraham. Hasenöhrl suggested that part of the mass of a body (which he called apparent mass) can be thought of as radiation bouncing around a cavity. The apparent mass of radiation depends on the temperature (because every heated body emits radiation) and is proportional to its energy; he first concluded that $m = \tfrac{8}{3}E/c^2$. However, in 1905 Hasenöhrl published a summary of a letter written to him by Abraham. Abraham concluded that Hasenöhrl's formula for the apparent mass of radiation was not correct, and on the basis of his definition of electromagnetic momentum and longitudinal electromagnetic mass he changed it to $m = \tfrac{4}{3}E/c^2$, the same value as for the electromagnetic mass of a body at rest. Hasenöhrl re-calculated his own derivation and verified Abraham's result. He also noticed the similarity between the apparent mass and the electromagnetic mass. However, Hasenöhrl stated that this energy–apparent-mass relation holds only as long as a body radiates, i.e. only if the temperature of the body is greater than 0 K.

However, Hasenöhrl did not include the pressure of the radiation on the cavity shell. If he had included the shell pressure and inertia as they would be included in the theory of relativity, the factor would have been equal to 1, i.e. m = E/c². This calculation assumes that the shell properties are consistent with relativity; otherwise the mechanical properties of the shell, including the mass and tension, would not have the same transformation laws as those for the radiation.
Nobel Prize-winner and Hitler advisor Philipp Lenard claimed that the mass–energy equivalence formula needed to be credited to Hasenöhrl, to make it an Aryan creation.

### Einstein: Mass–energy equivalence

Albert Einstein did not write the exact formula E = mc² in his 1905 Annus Mirabilis paper "Does the Inertia of a Body Depend Upon Its Energy Content?"; rather, the paper states that if a body gives off the energy L in the form of radiation, its mass diminishes by L/c². (Here, "radiation" means electromagnetic radiation, or light, and mass means the ordinary Newtonian mass of a slow-moving object.) This formulation relates only a change Δm in mass to a change L in energy, without requiring the absolute relationship. Objects with zero mass presumably have zero energy, so the extension that all mass is proportional to energy is obvious from this result. In 1905, even the hypothesis that changes in energy are accompanied by changes in mass was untested. Not until the discovery of the first type of antimatter (the positron, in 1932) was it found that all of the mass of pairs of resting particles could be converted to radiation.

#### First correct derivation (1905)

Einstein considered a body at rest with mass M. If the body is examined in a frame moving with nonrelativistic velocity v, it is no longer at rest, and in the moving frame it has momentum P = Mv. Einstein supposed the body emits two pulses of light, to the left and to the right, each carrying an equal amount of energy E/2. In its rest frame, the object remains at rest after the emission, since the two beams are equal in strength and carry opposite momentum.

But if the same process is considered in a frame moving with velocity v to the left, the pulse moving to the left will be redshifted, while the pulse moving to the right will be blueshifted. The blue light carries more momentum than the red light, so that the momentum of the light in the moving frame is not balanced: the light carries some net momentum to the right. The object does not change its velocity during the emission. Yet in this frame it has lost some right-momentum to the light. The only way it could have lost momentum is by losing mass. This also solves Poincaré's radiation paradox, discussed above.

The velocity is small, so the right-moving light is blueshifted by an amount equal to the nonrelativistic Doppler shift factor 1 + v/c. The momentum of the light is its energy divided by c, and it is increased by a factor of v/c. So the right-moving light carries an extra momentum $\Delta P$ given by

$\Delta P = \frac{v}{c}\frac{E}{2c}.$

The left-moving light carries a little less momentum, less by the same amount $\Delta P$. So the total right-momentum in the light is twice $\Delta P$. This is the right-momentum that the object lost:

$2\Delta P = \frac{vE}{c^2}.$

The momentum of the object in the moving frame after the emission is reduced by this amount:

$P' = Mv - 2\Delta P = \left(M - \frac{E}{c^2}\right)v.$

So the change in the object's mass is equal to the total energy lost divided by c². Since any emission of energy can be carried out by a two-step process, where first the energy is emitted as light and then the light is converted to some other form of energy, any emission of energy is accompanied by a loss of mass. Similarly, by considering absorption, a gain in energy is accompanied by a gain in mass. Einstein concludes that all the mass of a body is a measure of its energy content.
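The two-pulse bookkeeping can be checked numerically. The Python sketch below uses the exact relativistic Doppler factor rather than the first-order one in the derivation above (the rest mass, emitted energy, and frame speed are example values of ours):

```python
import math

c = 299_792_458.0
M, E, v = 1.0, 1.0e10, 1.0e6   # rest mass (kg), emitted energy (J), frame speed (m/s)
b = v / c
g = 1.0 / math.sqrt(1.0 - b * b)

# Energies of the two pulses in the moving frame (exact relativistic Doppler):
E_right = (E / 2) * g * (1 + b)   # blueshifted pulse
E_left  = (E / 2) * g * (1 - b)   # redshifted pulse
p_light = (E_right - E_left) / c  # net momentum carried away by the light

p_before = g * M * v
p_after  = p_before - p_light
M_after  = p_after / (g * v)      # mass inferred from momentum at unchanged v
print(M - M_after, E / c**2)      # both ~1.11e-7 kg: mass lost = E/c^2
```

The object's velocity is unchanged, yet its momentum dropped, so its mass must have decreased by exactly E/c², and the exact calculation reproduces the first-order result with no approximation.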
#### 1906: Relativistic center-of-mass theorem

Like Poincaré, Einstein concluded in 1906 that the inertia of electromagnetic energy is a necessary condition for the center-of-mass theorem to hold. On this occasion, Einstein referred explicitly to Poincaré's 1900 paper. In Einstein's more physical, as opposed to formal or mathematical, point of view, there was no need for fictitious masses. He could avoid the perpetuum mobile problem because, on the basis of the mass–energy equivalence, he could show that the transport of inertia which accompanies the emission and absorption of radiation solves the problem. Poincaré's rejection of the principle of action–reaction can be avoided through Einstein's E = mc², because mass conservation appears as a special case of the energy conservation law.

### Others

During the nineteenth century there were several speculative attempts to show that mass and energy were proportional, in various discredited ether theories. In particular, the writings of Samuel Tolver Preston and a 1903 paper by Olinto De Pretto presented a mass–energy relation (see Bjerknes, "S. Tolver Preston's Explosive Idea E = mc²"). De Pretto's paper received recent press coverage when Umberto Bartocci discovered that there were only three degrees of separation linking De Pretto to Einstein, leading Bartocci to conclude that Einstein was probably aware of De Pretto's work.

Preston and De Pretto, following Le Sage, imagined that the universe was filled with an ether of tiny particles which always move at speed c. Each of these particles has a kinetic energy of mc², up to a small numerical factor. The nonrelativistic kinetic energy formula did not always include the traditional factor of 1/2, since Leibniz introduced kinetic energy without it, and the 1/2 is largely conventional in prerelativistic physics. By assuming that every particle has a mass which is the sum of the masses of the ether particles, the authors concluded that all matter contains an amount of kinetic energy given either by E = mc² or by 2E = mc², depending on the convention. A particle ether was usually considered unacceptably speculative science at the time, and since these authors did not formulate relativity, their reasoning is completely different from that of Einstein, who used relativity to change frames.

Independently, Gustave Le Bon in 1905 speculated that atoms could release large amounts of latent energy, reasoning from an all-encompassing qualitative philosophy of physics (see Bizouard, "Poincaré E = mc²: l'équation de Poincaré, Einstein et Planck").

It was quickly noted after the discovery of radioactivity in 1897 that the total energy due to radioactive processes is about one million times greater than that involved in any known molecular change. However, this raised the question of where the energy came from. After eliminating the idea of absorption and emission of some sort of Lesagian ether particles, the existence of a huge amount of latent energy stored within matter was proposed by Ernest Rutherford and Frederick Soddy in 1903. Rutherford also suggested that this internal energy is stored within normal matter as well, and he went on to speculate along these lines in 1904.

Einstein mentions in his 1905 paper that mass–energy equivalence might perhaps be tested with radioactive decay, which releases enough energy (the quantitative amount known roughly even by 1905) to possibly be "weighed" when missing.
The idea that great amounts of usable energy could be liberated from matter, however, proved initially difficult to substantiate in a practical fashion. Because it had been used as the basis of much speculation, Rutherford himself, rejecting his ideas of 1904, was once reported in the 1930s to have said: "Anyone who expects a source of power from the transformation of the atom is talking moonshine."

This changed dramatically after the demonstration of the energy released by nuclear fission in the atomic bombings of Hiroshima and Nagasaki in 1945. The equation E = mc² became directly linked in the public eye with the power and peril of nuclear weapons. The equation was featured as early as page 2 of the Smyth Report, the official 1945 release by the US government on the development of the atomic bomb, and by 1946 the equation was linked closely enough with Einstein's work that the cover of Time magazine prominently featured a picture of Einstein next to an image of a mushroom cloud emblazoned with the equation. Einstein himself had only a minor role in the Manhattan Project: he had cosigned a letter to the U.S. President in 1939 urging funding for research into atomic energy, warning that an atomic bomb was theoretically possible. The letter persuaded Roosevelt to devote a significant portion of the wartime budget to atomic research. Without a security clearance, Einstein's only scientific contribution was an analysis of an isotope-separation method based on the rate of molecular diffusion through pores, a now-obsolete process that was then competitive and contributed a fraction of the enriched uranium used in the project.

While E = mc² is useful for understanding the amount of energy released in a fission reaction, it was not strictly necessary to develop the weapon. As the physicist and Manhattan Project participant Robert Serber put it: "Somehow the popular notion took hold long ago that Einstein's theory of relativity, in particular his famous equation E = mc², plays some essential role in the theory of fission. Albert Einstein had a part in alerting the United States government to the possibility of building an atomic bomb, but his theory of relativity is not required in discussing fission. The theory of fission is what physicists call a non-relativistic theory, meaning that relativistic effects are too small to affect the dynamics of the fission process significantly." However, the association between E = mc² and nuclear energy has since stuck, and because of this association, and its simple expression of the ideas of Albert Einstein himself, it has become "the world's most famous equation" (David Bodanis, E = mc²: A Biography of the World's Most Famous Equation, New York: Walker, 2000).

While Serber's view of the strict lack of need to use mass–energy equivalence in designing the atomic bomb is correct, it does not take into account the pivotal role which this relationship played in making the fundamental leap to the initial hypothesis that large atoms could split into approximately equal halves.
In late 1938, while on the winter walk on which they solved the meaning of Hahn's experimental results and introduced the idea that would be called atomic fission, Lise Meitner and Otto Robert Frisch made direct use of Einstein's equation to help them understand the quantitative energetics of the reaction: the energy that overcame the "surface tension-like" forces holding the nucleus together and allowed the fission fragments to separate to a configuration from which their charges could force them into an energetic "fission." To do this, they made use of "packing fraction," or nuclear binding energy values for elements, which Meitner had memorized. These, together with E = mc2, allowed them to realize on the spot that the basic fission process was energetically possible:

...We walked up and down in the snow, I on skis and she on foot. ...and gradually the idea took shape... explained by Bohr's idea that the nucleus is like a liquid drop; such a drop might elongate and divide itself... We knew there were strong forces that would resist, ...just as surface tension. But nuclei differed from ordinary drops. At this point we both sat down on a tree trunk and started to calculate on scraps of paper. ...the Uranium nucleus might indeed be an unstable drop, ready to divide itself... But, ...when the two drops separated they would be driven apart by electrical repulsion, about 200 MeV in all. Fortunately Lise Meitner remembered how to compute the masses of nuclei... and worked out that the two nuclei formed... would be lighter by about one-fifth the mass of a proton. Now whenever mass disappears energy is created, according to Einstein's formula E = mc2, and... the mass was just equivalent to 200 MeV; it all fitted!
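Frisch's closing arithmetic is easy to verify. As a quick editorial check (not part of the original account), a mass defect of about one-fifth of a proton mass corresponds, via E = mc2, to

$$\Delta E = \Delta m\, c^2 \approx \frac{m_p c^2}{5} \approx \frac{938.3\ \text{MeV}}{5} \approx 188\ \text{MeV},$$

in good agreement with the roughly 200 MeV of electrical repulsion energy that Frisch quotes.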
## References

2. S. J. Plimpton and W. E. Lawton, A Very Accurate Test of Coulomb's Law of Force Between Charges, Phys. Rev. 50, 1066–1071 (1936) article
3. E. R. Williams, J. E. Faller, and H. A. Hill, New Experimental Test of Coulomb's Law: A Laboratory Upper Limit on the Photon Rest Mass, Phys. Rev. Lett. 26, 721–724 (1971) article
4. G. V. Chibisov, Sov. Phys. Uspekhi, 19, 624 (1976)
5. R. Lakes, Experimental Limits on the Photon Mass and Cosmic Magnetic Vector Potential, Phys. Rev. Lett. 80, 1826–1829 (1998) article
6. C. Amsler et al. (Particle Data Group), Review of Particle Physics, Phys. Lett. B667, 1 (2008) article Summary Table
7. E. Adelberger, G. Dvali, and A. Gruzinov, Photon-Mass Bound Destroyed by Vortices, Phys. Rev. Lett. 98, 010402 (2007) article preprint
8. See this news report and links
9. The 6.2 kg core comprised 0.8% gallium by weight. Also, about 20% of the Gadget’s yield was due to fast fissioning in its natural uranium tamper. This resulted in 4.1 moles of Pu fissioning with 180 MeV per atom actually contributing prompt kinetic energy to the explosion. Note too that the term "Gadget"-style is used here instead of "Fat Man" because this general design of bomb was very rapidly upgraded to a more efficient one requiring only 5 kg of the Pu/gallium alloy.
10. Assuming the dam is generating at its peak capacity of 6,809 MW.
11. There is usually more than one possible way to define a field energy, because any field can be made to couple to gravity in many different ways. By general scaling arguments, the correct answer at everyday distances, which are long compared to the quantum gravity scale, should be minimal coupling, which means that no powers of the curvature tensor appear. Any non-minimal couplings, along with other higher order terms, are presumably only determined by a theory of quantum gravity, and within string theory, they only start to contribute to experiments at the string scale.
12. G. 't Hooft, "Computation of the Effects Due to a Four Dimensional Pseudoparticle", Physical Review D14:3432–3450.
13. A. Belavin, A. M. Polyakov, A. Schwarz, Yu. Tyupkin, "Pseudoparticle Solutions to Yang Mills Equations", Physics Letters 59B:85 (1975).
14. F. Klinkhamer, N. Manton, "A Saddle Point Solution in the Weinberg Salam Theory", Physical Review D 30:2212.
15. V. A. Rubakov, "Monopole Catalysis of Proton Decay", Reports on Progress in Physics 51:189–241 (1988).
16. S. W. Hawking, "Black Hole Explosions?", Nature 248:30 (1974).
19. See e.g. Lev B. Okun, The Concept of Mass, Physics Today 42 (6), June 1989, p. 31–36, http://www.physicstoday.org/vol-42/iss-6/vol42no6p31_36.pdf
29. MathPages: Who Invented Relativity?
30. Christian Schlatter: Philipp Lenard et la physique aryenne.
32. Helge Kragh, "Fin-de-Siècle Physics: A World Picture in Flux", in Quantum Generations: A History of Physics in the Twentieth Century (Princeton, NJ: Princeton University Press, 1999).
33. Preston, S. T., Physics of the Ether, E. & F. N. Spon, London (1875).
34. De Pretto, O., Reale Instituto Veneto Di Scienze, Lettere Ed Arti, LXIII, II, 439–500, reprinted in Bartocci.
35. Umberto Bartocci, Albert Einstein e Olinto De Pretto—La vera storia della formula più famosa del mondo, editore Andromeda, Bologna, 1999.
36. mathsyear2000.
38. John Worrall, review of the book Conceptions of Ether: Studies in the History of Ether Theories by Cantor and Hodges, The British Journal of the Philosophy of Science vol 36, no 1, Mar 1985, p. 84. The article contrasts a particle ether with a wave-carrying ether, the latter being acceptable.
39. Le Bon: The Evolution of Forces.
40. Cover. Time magazine, July 1, 1946.
41. Isaacson, Einstein: His Life and Universe.
42. Robert Serber, The Los Alamos Primer: The First Lectures on How to Build an Atomic Bomb (University of California Press, 1992), page 7. Note that the quotation is taken from Serber's 1992 version, and is not in the original 1943 Los Alamos Primer of the same name.
43. http://homepage.mac.com/dtrapp/people/Meitnerium.html A quote from Frisch about the discovery day. Accessed April 4, 2009.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9115516543388367, "perplexity": 472.7320924648111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210058.26/warc/CC-MAIN-20180815102653-20180815122653-00454.warc.gz"}
http://physics.stackexchange.com/questions/39419/proof-of-equality-of-the-integral-and-differential-form-of-maxwells-equation?answertab=active
# Proof of equality of the integral and differential form of Maxwell's equation

Just curious, can anyone show how the integral and differential forms of Maxwell's equations are equivalent? (While it is conceptually obvious, I am thinking a rigorous mathematical proof may be useful on some occasions..)

- Are you familiar with Green's and Stokes' theorems? –  Jerry Schirmer Oct 9 '12 at 16:48
@JerrySchirmer yes. (and I just want to see a more rigorous proof; conceptually, this is somehow obvious, I guess.) –  Paul Reubens Oct 9 '12 at 17:05
Just integrate the divergences and the curls using Green's and Stokes' theorems and you will get the integral form. –  Prathyush Oct 9 '12 at 19:16
Comment to the question (v2): Could you narrow down which part of the standard textbook treatment you don't find rigorous? –  Qmechanic Oct 9 '12 at 22:36
Actually, if you treat the differential operators in the classical sense, the integral form and the differential form are not equivalent. The integral form is more general since it is also valid for discontinuous material behavior. –  Tobias Apr 12 at 16:29

Well, as the people said in the comments, the theorems of Green, Stokes and Gauss will do the job, and are about as mathematically rigorous as you could hope for here! The two different sets of formulae follow directly.

I don't want to write all four of them out, you should be able to do them yourself, but for example, let's consider Gauss's law. Starting with the integral form, we have (ignoring physical constants) $$\int_{\partial \Omega} \vec{E} \cdot d\vec{S} = \int_{\Omega} \rho\, dV$$ Then by Gauss's theorem, we have $$\int_{\Omega} \operatorname{div} \vec{F} \, dV = \int_{\partial\Omega} \vec{F} \cdot d\vec{S}$$ Hence, we can replace $$\int_{\partial \Omega} \vec{E} \cdot d\vec{S} \rightarrow \int_{\Omega} \operatorname{div} \vec{E} \, dV$$ to give $$\int_{\Omega} \operatorname{div} \vec{E} \, dV = \int_{\Omega} \rho\, dV$$ Since this holds for every region $\Omega$, the integrands must agree, so dropping the integrals gives $$\operatorname{div} \vec{E} = \rho$$ which is the differential form. You should try to derive the other three. This may be helpful in showing you where to start, and where you want to get to.

As for proofs of Green's, Stokes' and Gauss's theorems, I recall learning them for some maths exams some years ago, but I wouldn't know where to begin now! Look at any differential geometry course or book and they should be somewhere early on. I can assure you though that the mathematicians have rigorous proofs for them, so we do not need to be shy in using the results of the theorems!

- Let me know if you get the other three out. Another good exercise, when you've finished that, is to derive each of the integral forms of the laws from physical argument. –  Flint72 Apr 12 at 14:45
Doesn't Green's theorem and Gauss's theorem follow as a consequence of Stokes' theorem, which in turn follows from Poincaré duality? –  JamalS May 14 at 5:31
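As a worked example of one of the remaining three (an editorial sketch in the same spirit, not part of the original thread), the Maxwell–Faraday equation follows the same pattern with Stokes' theorem in place of the divergence theorem. For a fixed surface $S$ with boundary $\partial S$:

$$\oint_{\partial S} \vec{E}\cdot d\vec{\ell} = -\frac{d}{dt}\int_{S} \vec{B}\cdot d\vec{S}, \qquad \oint_{\partial S} \vec{E}\cdot d\vec{\ell} = \int_{S} (\nabla\times\vec{E})\cdot d\vec{S} \ \ \text{(Stokes)},$$

so $\int_{S}\left(\nabla\times\vec{E} + \frac{\partial \vec{B}}{\partial t}\right)\cdot d\vec{S} = 0$ for every such surface, and hence $\nabla\times\vec{E} = -\frac{\partial \vec{B}}{\partial t}$.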
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9585400223731995, "perplexity": 383.5726809046718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888283.14/warc/CC-MAIN-20140722025808-00047-ip-10-33-131-23.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/73292-integrals.html
Math Help - Integrals

1. Integrals

does the integral $\int_0^1 \frac{e^x}{\sqrt{1-\cos x}}\,dx$ converge or diverge??? and if you can show how do I see it!

2. Apply the limit comparison test with $\int_0^1\frac{dx}{x},$ which diverges. Since $1-\cos x=2\sin^2(x/2)\sim x^2/2$ as $x\to 0,$ we get $\underset{x\to 0^+}{\mathop{\lim }}\,\frac{{{e}^{x}}}{\sqrt{1-\cos x}}\div \frac{1}{x}=\underset{x\to 0^+}{\mathop{\lim }}\,\frac{x{{e}^{x}}}{\sqrt{1-\cos x}}=\underset{x\to 0^+}{\mathop{\lim }}\,\sqrt{\frac{x^{2}}{1-\cos x}}\cdot {{e}^{x}}=\sqrt{2},$ a finite nonzero limit, so the two integrals behave the same way near $0.$ Finally, the integral diverges.
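A quick numerical sanity check (an editorial addition, not from the original thread) supports this: the truncated integrals grow without bound as the lower limit shrinks, tracking $\sqrt2\,\ln(1/\varepsilon)$.

```python
# Editorial check: numerically integrate e^x / sqrt(1 - cos x) on [eps, 1]
# for shrinking eps. If the integral converged, these values would stabilize;
# instead they grow roughly like sqrt(2)*ln(1/eps), confirming divergence.
import numpy as np
from scipy.integrate import quad

def f(x):
    return np.exp(x) / np.sqrt(1.0 - np.cos(x))

for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    val, _ = quad(f, eps, 1.0, limit=200)
    ref = np.sqrt(2) * np.log(1 / eps)
    print(f"eps = {eps:.0e}:  integral ~ {val:.3f},  sqrt(2)*ln(1/eps) = {ref:.3f}")
```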
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997537732124329, "perplexity": 699.8117326056255}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737927030.74/warc/CC-MAIN-20151001221847-00244-ip-10-137-6-227.ec2.internal.warc.gz"}
https://fr.maplesoft.com/support/help/maple/view.aspx?path=StudyGuides%2FMultivariateCalculus%2FChapter9%2FExamples%2FSection9-5%2FExample9-5-2
Example 9-5-2 - Maple Help

Chapter 9: Vector Calculus

Section 9.5: Line Integrals

Example 9.5.2

Obtain the line integral of the scalar function , taken along the line segment from $\left(1,2,3\right)$ to $\left(5,3,2\right)$.
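The integrand itself did not survive extraction, so as an illustration only: a minimal numerical sketch of a scalar line integral $\int_C f\,ds$ along the stated segment, with a placeholder integrand f that is my own assumption, not the function from the Maple example.

```python
# Sketch of a scalar line integral along the segment (1,2,3) -> (5,3,2).
# The function f below is a made-up placeholder; the original example's
# integrand was lost in extraction.
import numpy as np
from scipy.integrate import quad

A = np.array([1.0, 2.0, 3.0])
B = np.array([5.0, 3.0, 2.0])
d = B - A                      # direction vector of the segment
speed = np.linalg.norm(d)      # |r'(t)| is constant for a line: sqrt(18)

def f(x, y, z):                # placeholder scalar field (assumption)
    return x * y + z

# Parametrize r(t) = A + t*d for t in [0,1]; then ds = |d| dt.
integrand = lambda t: f(*(A + t * d)) * speed
value, _ = quad(integrand, 0.0, 1.0)
print(value)
```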
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 45, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9942195415496826, "perplexity": 2355.1492405957656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711394.73/warc/CC-MAIN-20221209080025-20221209110025-00074.warc.gz"}
https://www.jiskha.com/questions/547697/1-let-7-4-be-a-point-on-the-terminal-side-of-theta-find-the-exact-values-of
# Math

1. Let (-7, 4) be a point on the terminal side of (theta). Find the exact values of sin(theta), csc(theta), and cot(theta).

2. Let (theta) be an angle in quadrant IV such that sin(theta)=-2/5. Find the exact values of sec(theta) and tan(theta).

3. Let (theta) be an angle in quadrant II such that csc(theta)=7/4. Find the exact values of tan(theta) and cos(theta).

4. Use a cofunction to write an expression equal to csc(3pi/8).

Thank You <3

1. 1. (-7,4) is in quadrant II, so x = -7, y = 4
by Pythagoras, r = √(49 + 16) = √65
sinØ = y/r = 4/√65
cscØ = r/y = √65/4
cotØ = x/y = -7/4

2. since sinØ = -2/5 and Ø is in IV, y = -2, r = 5, then
x^2 + 4 = 25
x = √21
etc.
can you finish the rest?

## Similar Questions

1. ### Trigonometry

Let (7,-3) be a point on the terminal side of theta. Find the exact values of cos of theta, sec of theta and cot of theta?

2. ### Math

Let (7,-3) be a point on the terminal side of theta. Find the exact values of sin of theta, csc of theta and cot of theta?

3. ### math

Let theta be an angle in quadrant IV such that sin(theta)=-(2)/(5). Find the exact values of sec(theta) and tan(theta).

4. ### trig

If θ is an angle in standard position and its terminal side passes through the point (20,-21), find the exact value of sec(θ).

1. ### precalc

The point (3, -4) is on the terminal side of an angle (theta). What is cos(theta)

2. ### Math

Suppose theta is an angle in standard position with cos theta > 0. In which quadrants could the terminal side of theta lie? Select two answers. I II III IV I don't know what this means:( If cos theta = 0.8 and 270

4. ### trig

the terminal side of an angle theta in standard position coincides with the line y=5x and lies in quadrant 3. find the six trigonometric functions of theta.

1. ### trig

find the exact value of cos(theta) if the terminal side of angle(theta) contains the point (-3,5)
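As a quick numerical check of part 1 of the answer above (an editorial addition, not from the original page), the point (-7, 4) indeed gives r = √65 and the stated ratios:

```python
# Verify the trig ratios for a terminal side through (-7, 4).
import math

x, y = -7.0, 4.0
r = math.hypot(x, y)           # sqrt(49 + 16) = sqrt(65)

sin_t = y / r                  #  4/sqrt(65)
csc_t = r / y                  #  sqrt(65)/4
cot_t = x / y                  # -7/4

print(r * r, sin_t, csc_t, cot_t)   # ~65, 0.4961..., 2.0156..., -1.75
```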
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8113883137702942, "perplexity": 2863.0222559767026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739347.81/warc/CC-MAIN-20200814160701-20200814190701-00412.warc.gz"}
http://www.computer50.org/kgill/mark1/RobertTau/node8.html
# The multiplier and the double length accumulator

We shall now begin to describe the various respects in which the full machine differs from the reduced machine. Most of these differences are essentially independent. One may satisfactorily learn the effect of each difference as if added to the reduced machine in the absence of the others, and having learnt these effects will be in a position to manage the whole machine. We begin by considering the effect of adding a multiplier to the machine.

If we regard a standard number as consisting of forty binary digits then the product of two such numbers will occupy eighty digits. For this reason it is necessary to have an eighty digit accumulator. This decision in itself requires us to adopt a more sophisticated attitude to our numbers and our rows of digits. In the reduced machine it was almost possible to regard the long lines as representing integers, but it was necessary to admit that they were really to be reckoned modulo $2^{40}$. Long lines with a '1' in the most significant place could be regarded as representing negative numbers, provided that it is accepted that positive numbers greater than $2^{39}-1$ cannot be represented. With the double length accumulator similar considerations apply with greater complexity. Numbers held in the store must be reckoned modulo $2^{40}$ as before, but numbers in the accumulator must be reckoned modulo $2^{80}$.

In order to be able to express these matters clearly it is necessary to have notations which draw the essential distinctions, though these distinctions may appear pedantic, and though in a majority of applications the notation is not needed in all its detail. We distinguish therefore between 'rows of digits' and 'numbers'. I do not think that either of these expressions needs much elaboration. By numbers I shall mean real numbers. The content of any part of the machine will be a row of digits or an assembly of such rows, and not a number (with the exception of the multiplicand). Additions and multiplications are however performed on numbers and not rows of digits. However, in order that the processes of the machine may be described in terms of these operations it is necessary to be able to relate rows of digits to numbers, and vice-versa. It would be sufficient in theory to be able to connect one number with each row, in such a way that all rows got different numbers. In practice four possible conventions present themselves particularly forcibly. We assume that our row of digits is of length $n$ and the $i$th digit is $\varepsilon_i$.

• The plus-convention: The associated number is $\sum_{i=0}^{n-1} \varepsilon_i 2^i$
• The plus-or-minus convention: The associated number is $\sum_{i=0}^{n-2} \varepsilon_i 2^i - \varepsilon_{n-1} 2^{n-1}$
• The fractional plus-convention: The associated number is $2^{-n}\sum_{i=0}^{n-1} \varepsilon_i 2^i$
• The fractional plus-or-minus convention: The associated number is $2^{-n}\left(\sum_{i=0}^{n-2} \varepsilon_i 2^i - \varepsilon_{n-1} 2^{n-1}\right)$

We shall use all of these conventions, both in connection with the store lines, and with the accumulator. To convert a row into a number according to one of these conventions one writes the corresponding suffix after the content of the row. As regards the converse process, that of defining rows of digits in terms of numbers, it will suffice to be able to take any sequence of consecutive digits from the binary expansion of a number. Accordingly we say that for any real number $x$, and integers $p$, $q$ for which $p \le q$, the associated row is the row of digits forming the coefficients of the $p$th to the $q$th powers of two in the binary expansion of $x$. If possible this expansion is to be terminating.
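As a small illustration of the four conventions (an editorial sketch, not part of the manual; the function names are mine, and the code assumes the reconstructed formulas above):

```python
# Convert a row of binary digits eps[0..n-1] (eps[i] is the coefficient of 2^i)
# to a number under each of the four conventions described above.
def plus(eps):
    return sum(e << i for i, e in enumerate(eps))

def plus_or_minus(eps):            # two's-complement style: top digit counts negative
    n = len(eps)
    return plus(eps[:-1]) - eps[-1] * 2 ** (n - 1)

def frac_plus(eps):
    return plus(eps) / 2 ** len(eps)

def frac_plus_or_minus(eps):
    return plus_or_minus(eps) / 2 ** len(eps)

row = [1, 0, 1, 1]                 # 1 + 4 + 8 = 13 under the plus-convention
print(plus(row))                   # 13
print(plus_or_minus(row))          # 13 - 16 = -3
print(frac_plus(row))              # 13/16
print(frac_plus_or_minus(row))     # -3/16
```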
A number of statements are made below in illustration of these conventions.

1. provided

A further convention which may be used in this connection is the use of the symbol . This is somewhat analogous to the use of , in analysis. One uses to mean some function such that there exists a positive satisfying for all sufficiently large. There is also an understanding that the functions and constants may be different at every appearance of . The use of is similar: it means simply some quantity satisfying, and again may be different on every appearance. Thus for instance from one cannot conclude, for the two appearances of might have the values and . Using this convention, 2) above could be written provided

In terms of these conventions we may explain the properties of the multiplier as follows. To describe the state of the Mark II machine we give the states of the stores and control as in the reduced machine, but the accumulator is an eighty digit one. We also have to describe the state of the 'multiplicand'. This is considered to be an integer which may take any value from $0$ to $2^{40}-1$. In this respect the multiplicand is exceptional. The contents of all other parts of the machine are considered as rows of digits. Having explained this, a number of further function symbols should become intelligible, viz. those marked m in the list below.

[The appendices to the manual do contain a list of instructions (in fact, two; one as part of a quick-reference sheet), but oddly, neither is marked as described. From those lists, the function symbols directly relevant to the multiplier are /C, /K, /¼, /D, /N, and /F, where D denotes the value of the multiplier; their defining equations were given in a table that has not survived here.

There are several additional function symbols whose properties can only be explained (or must be redefined) with reference to the 80-bit accumulator of the real machine, as opposed to the 40-bit accumulator of Turing's 'reduced machine'. If we define and , then these are /E, /A, /S, /I, /U, T/, TA, T:, TI, T¼, TN, TF, TC, and TK; their defining equations likewise have not survived.

Further supplementary tables of function codes will be added after subsequent sections, except where the relevant instructions are specifically described in the original text.]

We may allow certain abbreviations as admissible, viz.

(a) Where it is clear that a row of digits is meant, and the number of these digits is known, one may write for , where is an expression representing a real number, e.g., the equation for /F may be abbreviated from to .

(b) When it is evident that a real number is meant, and it is irrelevant whether a suffix or is used (or irrelevant whether or is used), one may omit the suffix (or use only), e.g. the equation for /F may be abbreviated further to .

[Turing makes little use of the notation introduced here in the rest of the manual, except briefly towards the end when discussing systematic errors in programs.]

Robert S. Thau 2000-02-13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8582441806793213, "perplexity": 687.4636321602471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381630/warc/CC-MAIN-20130516092621-00052-ip-10-60-113-184.ec2.internal.warc.gz"}
https://artofproblemsolving.com/wiki/index.php?title=2019_AMC_8_Problems/Problem_21&diff=138129&oldid=136935
# 2019 AMC 8 Problems/Problem 21

## Problem 21

What is the area of the triangle formed by the lines , , and ?

## Solution 1

First we need to find the coordinates where the graphs intersect. We want the x and y values to be the same. Thus, we set and get . Plugging this into the equation, , and intersect at , which we call point x. Doing the same thing, we get . Thus also. , and intersect at , which we call point y. It's apparent the only solution to is . Thus, and intersect at , which we call point z. Using the Shoelace Theorem we get: So our answer is .

We might also see that the lines and are mirror images of each other. This is because, when rewritten, their slopes can be multiplied by to get the other. As the base is horizontal, this is an isosceles triangle with base 8, as the intersection points have distance 8. The height is so .

Warning: Do not use the distance formula for the base and then use Heron's formula. It will take you half of the time you have left!

## Solution 2

Graphing the lines, using the intersection points we found in Solution 1, we can see that the height of the triangle is 4, and the base is 8. Using the formula for the area of a triangle, we get , which is equal to .
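The specific equations were lost in extraction, so as an illustration only: a minimal Shoelace Theorem sketch, using vertices (-4, 5), (4, 5), and (0, 1), an assumed reconstruction consistent with the stated base 8 and height 4 rather than values taken from the page.

```python
# Shoelace formula: area = |sum over edges of (x1*y2 - x2*y1)| / 2.
def shoelace(pts):
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical vertices matching the solution's base-8, height-4 triangle.
print(shoelace([(-4, 5), (4, 5), (0, 1)]))   # 16.0, i.e. (1/2)*8*4
```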
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9244233965873718, "perplexity": 285.54337923773704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703495901.0/warc/CC-MAIN-20210115134101-20210115164101-00368.warc.gz"}
https://works.bepress.com/panos_kevrekidis/136/
Article

SYMMETRY-BREAKING BIFURCATION IN NONLINEAR SCHRODINGER/GROSS-PITAEVSKII EQUATIONS

SIAM JOURNAL ON MATHEMATICAL ANALYSIS • University of Massachusetts - Amherst

Publication Date 2008

Abstract

We consider a class of nonlinear Schrödinger/Gross–Pitaevskii (NLS-GP) equations, i.e., NLS with a linear potential. NLS-GP plays an important role in the mathematical modeling of nonlinear optical as well as macroscopic quantum phenomena (BEC). We obtain conditions for a symmetry-breaking bifurcation in a symmetric family of states as ${\cal N}$, the squared $L^2$ norm (particle number, optical power), is increased. The bifurcating asymmetric state is a “mixed mode” which, near the bifurcation point, is approximately a superposition of symmetric and antisymmetric modes. In the special case where the linear potential is a double well with well-separation $L$, we estimate ${\cal N}_{cr}(L)$, the symmetry breaking threshold. Along the “lowest energy” symmetric branch, there is an exchange of stability from the symmetric to the asymmetric branch as ${\cal N}$ is increased beyond ${\cal N}_{cr}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9253198504447937, "perplexity": 1081.620712628951}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812665.41/warc/CC-MAIN-20180219131951-20180219151951-00115.warc.gz"}
http://math.stackexchange.com/questions/874/how-to-get-an-equation-that-output-the-end-point-of-an-angle-line-in-rectangle
# How to get an equation that outputs the end point of an angle line in a rectangle?

When drawing an angle line (45 degrees) in a rectangle from a general point $p = (x,y)$ located on the right or the top line of the rectangle, how can I find the intersection point $p2$ of this line with the rectangle? In other words, I want to write the target point, $p2$, with my current information: $x, y, w, h$. (These variables are described in the picture below.) The point $(0,0)$ is in the top-right corner.

- Could you rephrase your question? I can't tell what is being asked. Does the line passing through ??? and (x,y) have slope 1? – Larry Wang Jul 28 '10 at 0:55
The line is going from p (a point on the right or top line) to p2, a point on the left or bottom line. p's coordinates are x and y. How can I represent p2 by x,y,w,h? – stacker Jul 28 '10 at 1:02
I would just construct the line, given the fact that you have its point ((x,y)) by assumption, and you have its slope $m=1$. Then find the point where this line intersects the rectangle. – JacksonFitzsimmons Dec 24 '15 at 5:19
In your example, if $(x,y)=(-5,0)$: given we have the slope $m=1$, we can use the point-slope equation of a line to find an explicit equation for the line. In this example the line is given by $y=x+5$. The line that describes the left hand side of your picture is given by $x=-10$. These curves intersect at $(-10,-5)$. – JacksonFitzsimmons Dec 24 '15 at 5:22
In general you'll have to look at each rectangle as a separate case, but you could find a single formula (or maybe two formulae) that completely gives you $P_2$ as a function of $P_1$ if you consider only one rectangle. – JacksonFitzsimmons Dec 24 '15 at 5:23

Alright, I'm not 100% sure I'm understanding this correctly. You say that p can be located on the right or top line and that p2 can be located on the bottom or left line. Do you mean the rectangle can be rotated? If that's the case, the question should say that p can be on the right or bottom line of the rectangle. Also, are you looking for two separate answers or one that works both when p2 is on the bottom and on the left?

If you do mean that the rectangle can be rotated, and want two different answers, it's pretty simple. First I'll deal with when p2 is on the bottom and p is on the right. Since p2 is on the bottom line we know the y-coordinate is h, according to the diagram. We also know that p is (0,y). Because of the 45 degree angle, we know that the distance between p's y-coordinate and the lower right corner is the same as the distance between the lower right corner and p2's x-coordinate, which in this case is p2's x-coordinate. Therefore, the coordinates of p2 are (h-y, h).

If p2 is on the left and p is on the bottom, it's very similar. Since p2 is on the left, its x-coordinate is h. Because p is on the x-axis, it's (x,0). Because of the 45 degree angle, the distance between the lower left corner and p is the same as the distance between the lower left corner and p2, which this time gives us p2's y-coordinate. Therefore the coordinates of p2 are (h,h-x).

Hopefully I understood your intentions correctly. If not, I hope you can use my misunderstandings to further improve your question.

- The rectangle cannot be rotated. The p is a general point whose x and y coordinates are on the right or the top line. – stacker Jul 28 '10 at 17:20

If it is just $45$ degrees, then the answer is not very difficult. Center a coordinate system at the bottom left hand corner of the rectangle. Hence the coordinates of the (???)
point are $(q,0)$ for some $q<w$. Note that because theta is $45$ degrees, $y=w-q$ (isosceles right triangle). Hence $q=w-y$, and our point is simply $(w-y, 0)\dots$

- I edited the question to define (0,0). I don't know why you assume that p2 is always on the bottom line. It can also be on the left line. – stacker Jul 28 '10 at 2:53
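A minimal sketch (editorial, not from the thread) transcribing the two cases from the first answer into code. It keeps that answer's coordinate convention and its formulas verbatim; note those formulas implicitly treat the rectangle as a square of side h, so that reading and the helper names are assumptions here.

```python
# Follow the 45-degree line from p until it leaves the rectangle, per the
# isosceles-right-triangle argument in the answer above.
# Case 1: p = (0, y) on the right edge -> p2 = (h - y, h) on the bottom edge.
# Case 2: p = (x, 0) on the top edge   -> p2 = (h, h - x) on the left edge.
def p2_on_bottom(y, h):
    return (h - y, h)

def p2_on_left(x, h):
    return (h, h - x)

print(p2_on_bottom(3, 10))   # (7, 10)
print(p2_on_left(4, 10))     # (10, 6)
```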
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8936436772346497, "perplexity": 254.70995497142022}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276543.81/warc/CC-MAIN-20160524002116-00072-ip-10-185-217-139.ec2.internal.warc.gz"}
http://math-mprf.org/journal/articles/id1151/
Large Deviations for Random Matrix Ensembles in Mesoscopic Physics

P. Eichelsbacher, M. Stolz

2008, v.14, Issue 2, 207-232

ABSTRACT

In his seminal 1962 paper on the "threefold way", Freeman Dyson classified the spaces of matrices that support the random matrix ensembles deemed relevant from the point of view of classical quantum mechanics. Recently, Heinzner, Huckleberry and Zirnbauer have obtained a similar classification based on less restrictive assumptions, thus taking care of the needs of modern mesoscopic physics. Their list is in one-to-one correspondence with the infinite families of Riemannian symmetric spaces as classified by Cartan. The present paper develops the corresponding random matrix theories, with a special emphasis on large deviation principles.

Keywords: random matrices, symmetric spaces, large deviations
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8357431888580322, "perplexity": 1052.964667921614}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363149.85/warc/CC-MAIN-20211205065810-20211205095810-00044.warc.gz"}
https://www.physicsforums.com/threads/a-rotating-pulsar.96857/
# Homework Help: A Rotating Pulsar

1. Oct 26, 2005

### Skomatth

A pulsar is a rapidly rotating neutron star that emits a radio beam the way a lighthouse emits a light beam. We receive a radio pulse for each rotation of the star. The period T of rotation is found by measuring the time between pulses. The pulsar in the Crab nebula has a period of rotation of T=.033s that is increasing at the rate of 1.26 x 10^-5 s/y.

a) What is the pulsar's angular acceleration?

I know that T=2pi/w when omega is constant. Does it make sense to say that T(t)=2pi/w(t)? If this is correct then I can get the answer, but even if it is correct I'm not sure why it works.

2. Oct 26, 2005

### Danger

I can't do the math. Just remember that the beams are emitted from the magnetic poles of the star, so you can expect that once in a while the orientation will be such that we receive two pulses per revolution. I don't know if any currently known ones are like that, though.

3. Oct 27, 2005

### Skomatth

Judging by the context of the problem this is irrelevant.

4. Oct 27, 2005

### Staff: Mentor

Makes sense to me. Note that the rate of change of the period is so slow that for all practical purposes the angular speed hardly changes during one revolution.
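A quick numerical sketch (an editorial addition, not from the thread) of the approach the mentor endorses: take ω(t) = 2π/T(t) and differentiate, so α = dω/dt = -2π(dT/dt)/T², with dT/dt converted from seconds per year to a dimensionless rate.

```python
# Angular acceleration of the Crab pulsar from T = 0.033 s and
# dT/dt = 1.26e-5 s per year, using alpha = -2*pi*(dT/dt)/T^2.
import math

T = 0.033                          # rotation period, s
dT_dt = 1.26e-5 / 3.156e7          # s per s (one year ~ 3.156e7 s)

alpha = -2 * math.pi * dT_dt / T**2
print(alpha)                       # ~ -2.3e-9 rad/s^2 (the star is spinning down)
```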
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9099709987640381, "perplexity": 680.002270623026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155702.33/warc/CC-MAIN-20180918205149-20180918225149-00154.warc.gz"}
https://www.isid.ac.in/~statmath/?module=ViewSeminarDetails&Id=214
# Seminar at SMU Delhi September 1, 2015 (Tuesday) , 3:30 PM at Webinar Speaker: T. N. Shorey, IIT Bombay Title: Product of factorials being a factorial Abstract of Talk A conjecture of Hickerson states that 16! is the largest factorial that can be written as a product of factorials. We shall confirm it under Baker's explicit abc conjecture. We shall also give some unconditional results. This is a joint work with Saranya Nair.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.92961585521698, "perplexity": 3931.315769896135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594808.94/warc/CC-MAIN-20210423131042-20210423161042-00252.warc.gz"}
http://devmaster.net/posts/6662/converting-normalmap-to-heightmap
Jan 02, 2005 at 05:06

Is there a way to convert a normal map to a height map? I found a paper on this, but it talks about an iterative method. Has anyone found a direct method?

#### 3 Replies

Jan 02, 2005 at 10:11

If you’re using directx you can use the D3DXComputeNormalMap function, otherwise, here

Jan 02, 2005 at 18:45

Oops, stupid typo. Thanks for your answer. I meant NormalMap to HeightMap, but I wrote it the other way around. So, now that I fixed the typo, is there a way to convert?

Jan 03, 2005 at 04:40

It seems to me you could just start at the upper-left corner and add offsets calculated from the normal map values as you move right and down. Just the opposite of the way normal maps are calculated from height maps (by taking differences between adjacent elements). You would want to do it in floating point, keeping track of the maximum and minimum height reached, and then scale to [0, 255] range when done. I don’t know how accurate this would be at reproducing the original height map. Might take some tweaking.
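A rough sketch of the cumulative-sum idea from the last reply (an editorial addition; the array layout and the normal encoding are assumptions, and, as the reply warns, the result may need tweaking):

```python
# Integrate a normal map into a height map by accumulating per-pixel slopes,
# as the last reply suggests. Assumes normals is an (H, W, 3) float array of
# unit normals (nx, ny, nz) with nz > 0; slope_x = -nx/nz, slope_y = -ny/nz.
import numpy as np

def normals_to_height(normals):
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    dx = -nx / nz                          # slope moving right (+x)
    dy = -ny / nz                          # slope moving down (+y)
    h = np.cumsum(dy, axis=0)              # integrate down each column
    h += np.cumsum(dx[0:1, :], axis=1)     # offset columns via the top row
    # Rescale to [0, 255] as the reply suggests.
    span = max(h.max() - h.min(), 1e-12)
    return (h - h.min()) / span * 255.0

# Tiny smoke test: a constant tilted plane should come back as a linear ramp.
n = np.zeros((4, 5, 3)); n[..., 0] = -0.3; n[..., 2] = 1.0
n /= np.linalg.norm(n, axis=-1, keepdims=True)
print(normals_to_height(n).round(1))
```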
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8271530866622925, "perplexity": 714.3221371591923}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
https://joelmoreira.wordpress.com/2013/10/06/banach-density-with-respect-to-a-single-folner-sequence/
## Banach density with respect to a single Folner sequence

In this short post I show that in any countable amenable group ${G}$ the (left) upper Banach density of a set ${E\subset G}$ can be obtained by looking only at translations of a given Følner sequence. Definitions and the precise statement are given below. This result is well known among experts, but it doesn't seem to be explicitly stated in the literature. The proof usually uses some functional analytic machinery, but the proof in this post is purely combinatorial in nature, which may give some additional information (as functional analytic tools are usually based on the axiom of choice, which prevents in principle the deduction of quantitative bounds).

Definition 1 Let ${G}$ be a countable group. A sequence ${(F_N)}$ of finite sets is a (left) Følner sequence if for all ${g\in G}$ we have

$\displaystyle \lim_{N\rightarrow\infty}\frac{|F_N\cap gF_N|}{|F_N|}=1$

A countable group with a Følner sequence is called amenable and we will only deal with such groups. The canonical example is ${G={\mathbb Z}}$ with the Følner sequence formed by the sets ${F_N=\{1,2,\dots,N\}}$. Every solvable group (and in particular every abelian group) is amenable. The following proposition follows directly from the definition of Følner sequence:

Proposition 2 Let ${G}$ be a countable amenable group, let ${(F_N)}$ be a Følner sequence and let ${(x_n)}$ be any sequence taking values in ${G}$. Then the sequence ${(F_Nx_N)}$ is a Følner sequence.

The Følner sequence ${(F_Nx_N)}$ will be called a shift of ${(F_N)}$. In the example when ${G={\mathbb Z}}$ we obtain the Følner sequences ${\big(\{x_N+1,x_N+2,\dots,x_N+N\}\big)_{N\in{\mathbb N}}}$ by shifts of ${F_N=\{1,2,\dots,N\}}$.

Definition 3 Let ${G}$ be an amenable group and ${(F_N)}$ a Følner sequence on ${G}$. Let ${E\subset G}$.

• The upper density of ${E}$ with respect to ${(F_N)}$ is:

$\displaystyle \bar d_{(F_N)}(E)=\limsup_{N\rightarrow\infty}\frac{|E\cap F_N|}{|F_N|}$

• The upper Banach density of ${E}$ is:

$\displaystyle d^*(E)=\sup\left\{\bar d_{(F_N)}(E): (F_N)\text{ is a Følner sequence in }G\right\}$

• The upper Banach density of ${E}$ with respect to ${(F_N)}$ is:

$\displaystyle d_{(F_N)}^*(E)=\sup\left\{\bar d_{(G_N)}(E): (G_N)\text{ is a shift of }(F_N)\right\}$

The last definition is not standard, and the following theorem, whose proof is the main purpose of this post, explains why:

Theorem 4 The upper Banach density with respect to a Følner sequence is the same as the upper Banach density. More precisely, let ${G}$ be an amenable group, let ${E\subset G}$ and let ${(F_N)}$ be any Følner sequence on ${G}$. Then ${d_{(F_N)}^*(E)=d^*(E)}$.

It follows directly from the definitions that, for any Følner sequence ${(F_N)_{N\in{\mathbb N}}}$ in ${G}$, we have ${d_{(F_N)}^*(E)\leq d^*(E)}$. Thus it suffices to prove that, given any other Følner sequence ${(G_N)_{N\in{\mathbb N}}}$ in ${G}$, we have ${\bar d_{(G_N)}(E)\leq d_{(F_N)}^*(E)}$. The idea of the proof is to tile each set ${G_N}$ (or, more precisely, an approximation of ${G_N}$) when ${N}$ is large, with shifts of sets from ${(F_N)}$.

From now on we fix the Følner sequences ${(F_N)_{N\in{\mathbb N}}}$ and ${(G_N)_{N\in{\mathbb N}}}$ in ${G}$ and a set ${E\subset G}$. Also we define

$\displaystyle \delta:=d_{(F_N)}^*(E)$

Lemma 5 For each ${\epsilon>0}$ there exists ${m\in{\mathbb N}}$ such that for all ${x\in G}$ we have

$\displaystyle |E\cap F_mx|\leq(\delta+\epsilon)|F_m|$

The main point of this lemma is that ${m}$ does not depend on ${x}$.
Proof: The proof goes by contradiction. Assume for each ${m\in{\mathbb N}}$ there is some ${b_m\in G}$ such that ${|E\cap F_mb_m|>(\delta+\epsilon)|F_m|}$. Then the upper density of ${E}$ with respect to the shift ${(F_mb_m)}$ of the Følner sequence ${(F_N)}$ would be larger than ${\delta}$, which is a contradiction. $\Box$

The following lemma gives us an asymptotically perfect tiling of ${(G_N)}$ by shifts of a finite set ${F}$.

Lemma 6 Let ${(G_N)}$ be a Følner sequence and let ${F\subset G}$ be a finite set. For each ${N\in{\mathbb N}}$, define

$\displaystyle A_N(F):=\{x\in G_N:Fx\subset G_N\}$

Define, for each ${n\in G_N}$, the number

$\displaystyle h_n(F)=\left|F^{-1}n\cap A_N(F)\right|=\left|\left\{x\in A_N(F):n\in Fx\right\}\right|$

(note that ${h_n(F)}$ also depends on ${N}$; we do not make this explicit to avoid even more cumbersome notation). Finally let ${B_N(F):=\{n\in G_N:h_n(F)=|F|\}}$. Then ${|B_N(F)|/|G_N|\rightarrow1}$ as ${N\rightarrow\infty}$.

The set ${B_N\subset G_N}$ is constructed so that it can be tiled by shifts of ${F}$. This lemma shows that ${B_N}$ occupies essentially all of ${G_N}$.

Proof: Note that for ${n\in G_N}$ we have

$\displaystyle n\in B_N(F)\iff F^{-1}n\subset A_N(F)\iff FF^{-1}n\subset G_N$

Let ${\tilde F=FF^{-1}}$ and note that ${\tilde F}$ is a finite set. Rephrasing, we have that ${n\notin B_N(F)\iff (\exists g\in \tilde F)gn\notin G_N}$. Thus we have

$\displaystyle G_N\setminus B_N(F)=\bigcup_{g\in\tilde F}\{n\in G_N:gn\notin G_N\}$

Since ${(G_N)}$ is a Følner sequence we get that

$\displaystyle \lim_{N\rightarrow\infty}\frac{|\{n\in G_N:gn\notin G_N\}|}{|G_N|}=0$

for all ${g\in G}$. Thus, taking the union over the finite set ${\tilde F}$ we obtain that

$\displaystyle \lim_{N\rightarrow\infty}\frac{|G_N\setminus B_N|}{|G_N|}=\lim_{N\rightarrow\infty}\frac1{|G_N|}\left|\bigcup_{g\in\tilde F}\{n\in G_N:gn\notin G_N\}\right|=0$

From this we conclude that ${|B_N(F)|/|G_N|\rightarrow1}$ as desired. $\Box$

As a corollary we deduce that the density of ${E}$ with respect to ${(G_N)}$ can be calculated by looking at the intersections of ${E}$ with ${B_N(F)}$.

Corollary 7 For any finite set ${F\subset G}$ we have

$\displaystyle \bar d_{(G_N)}(E)=\limsup_{N\rightarrow\infty}\frac{|E\cap B_N(F)|}{|G_N|}$

We now prove Theorem 4. Let ${\epsilon>0}$ and let ${m}$ be given by Lemma 5. Let ${N}$ be very large and let ${A_N(F_m)}$, ${h_n(F_m)}$ and ${B_N(F_m)}$ all be as in Lemma 6. We now have:

$\displaystyle \frac1{|G_N|}\sum_{x\in A_N(F_m)}\frac{|E\cap F_mx|}{|F_m|}\leq\frac{|A_N(F_m)|}{|G_N|}(\delta+\epsilon)\leq\delta+\epsilon$

On the other hand:

$\displaystyle \begin{array}{rcl} \displaystyle\sum_{x\in A_N(F_m)}\frac{|E\cap F_mx|}{|F_m|}&=&\displaystyle\sum_{x\in A_N(F_m)}\sum_{n\in F_mx}\frac{1_E(n)}{|F_m|}=\sum_{n\in G_N}1_E(n)\frac{h_n(F_m)}{|F_m|}\\&\geq&\displaystyle\sum_{n\in B_N(F_m)}1_E(n)=|E\cap B_N(F_m)| \end{array}$

Putting both together we get

$\displaystyle \frac{|E\cap B_N(F_m)|}{|G_N|}\leq\delta+\epsilon$

By Corollary 7 we conclude that ${\bar d_{(G_N)}(E)\leq\delta+\epsilon}$. Since ${\epsilon}$ was arbitrarily chosen, we conclude the proof of Theorem 4.
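To see Theorem 4 at work in the canonical case ${G={\mathbb Z}}$, here is a small numerical illustration (an editorial sketch, not part of the original post; the set ${E}$ and all parameters are my own choices): a set containing arbitrarily long blocks has upper Banach density ${1}$, which the unshifted windows ${F_N=\{1,\dots,N\}}$ miss entirely but suitably shifted windows recover.

```python
# G = Z, F_N = {1, ..., N}. Take E = union of blocks [k!, k! + k): E contains
# arbitrarily long runs, so its upper Banach density is 1, yet its density
# along the unshifted F_N tends to 0. Shifting F_N onto a block (Theorem 4)
# recovers the Banach density.
from math import factorial

blocks = [(factorial(k), factorial(k) + k) for k in range(2, 12)]
def in_E(n):
    return any(a <= n < b for a, b in blocks)

N = 10**5
unshifted = sum(in_E(n) for n in range(1, N + 1)) / N

k = 9                            # window length k, placed exactly on a block
x = factorial(k) - 1             # shift: F_k + x = {x+1, ..., x+k}
shifted = sum(in_E(x + j) for j in range(1, k + 1)) / k

print(unshifted)   # small (0.00035 here; tends to 0 as N grows)
print(shifted)     # 1.0 along the suitably shifted window
```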
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 111, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9972785711288452, "perplexity": 85.65195036598014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805977.0/warc/CC-MAIN-20171120090419-20171120110419-00593.warc.gz"}
https://papers.nips.cc/paper/2014/hash/6d9cb7de5e8ac30bd5e8734bc96a35c1-Abstract.html
#### Authors

Kishan Wimalawarne, Masashi Sugiyama, Ryota Tomioka

#### Abstract

We study a multitask learning problem in which each task is parametrized by a weight vector and indexed by a pair of indices, which can be e.g., (consumer, time). The weight vectors can be collected into a tensor and the (multilinear-)rank of the tensor controls the amount of sharing of information among tasks. Two types of convex relaxations have recently been proposed for the tensor multilinear rank. However, we argue that both of them are not optimal in the context of multitask learning, in which the dimensions or multilinear rank are typically heterogeneous. We propose a new norm, which we call the scaled latent trace norm, and analyze the excess risk of all three norms. The results apply to various settings including matrix and tensor completion, multitask learning, and multilinear multitask learning. Both the theory and experiments support the advantage of the new norm when the tensor is not equal-sized and we do not know a priori which mode is low rank.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9335131049156189, "perplexity": 560.7735932485307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369523.73/warc/CC-MAIN-20210304205238-20210304235238-00190.warc.gz"}
http://www.theincidentalcorridor.com/2016/06/29/the-incidental-corridor-no-422/
# The Incidental Corridor No.422

By being conscious of something we imprison ourselves and are, thusly, drawn into empirical consciousness. This advent from the "I" thrown-into-the-world finds it is no longer what it was but now belongs to a world consciousness. Consciousness is the cause of itself as it crawls out of the past and, through its intentions, receives into itself what it immediately intuits. Its intention is self-intending and acts out of a spontaneous notion of itself. We are our own "Big Bang."

## 2 thoughts on "The Incidental Corridor No.422"

1. The Mind is a stream of consciousness . . . it reflects what is alien and what is known . . . flowing from observation to observation . . . infinite iterations of knowing . . . infinite revisions . . . infinite reconciliations . . . infinite escapes . . . infinite conclusions . . . we arrive where we began . . . we begin again . . . we end again. And how do we begin? And how do we end?

1. As I believe in the spontaneity of consciousness, there could never be an end that could be observed as an end. Nor the same for a beginning. We could "mark" a beginning by reflecting. But, once we reflect we leave the world as an outward consciousness and turn to a world that includes only the "I". For, there is no I in the un-reflected consciousness. So, true, we arrive where we began, which is the beginning of the end that never ends. A hint of eternal recurrence in everything.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8209657669067383, "perplexity": 1853.256586479099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948592972.60/warc/CC-MAIN-20171217035328-20171217061328-00249.warc.gz"}
https://www.physicsforums.com/threads/2-pulleys-2-mass-confuse.38121/
2 pulleys 2 mass = confuse

1. Aug 4, 2004

dibilo

2 separate ropes A and B and 2 pulleys 1 and 2 are assembled together with 2 masses as shown. Pulley 1 is supported by rope B and each rope is tied separately to the heavier mass. Assuming ideal ropes and pulleys, what is the acceleration of each mass? (M moves 3 times as fast as 2M)

the answers are g/11 and 3g/11 respectively, but i am at a loss as to how to go about solving these sorts of problems. can someone please give me some ideas, hints or guidelines to start me off. thx in advance. and sry for the really bad illustration

Last edited: Aug 4, 2004

2. Aug 4, 2004

Staff: Mentor

Welcome to PF! Here are a few hints for you. First identify all the forces acting on the two masses and the two pulleys. Then apply Newton's 2nd law to each mass and to pulley #1. That will give you 3 easy equations with three unknowns: solve for the acceleration of the masses.

The real "trick" in these kinds of problems is to identify the acceleration constraints imposed by the connecting ropes. For example, in this problem if the acceleration of the 2M mass is "a" upwards, then the acceleration of the M mass is "3a" downwards. You seem to know that already, since you state "M moves 3 times as fast as 2M". (Did you figure that out or was it given?)

Why don't you take a crack at it with these hints and see how you do. Post your work and you'll get more help if you need it. Again, start by identifying the forces and drawing a separate diagram for each mass with the forces shown.

3. Aug 4, 2004

dibilo

this is all i figured out so far. i know that on the left side of pulley 1, Mg-T=M(3a).. and pulley 1 will exert a force of 2T on the left side of pulley 2. 2M is being pulled by 2 upward forces, one of them is T (force from M), and a downward force, which is T-2Mg=2Ma, ... but the way the pulley was constructed is confusing me. how do i know how much force is exerted on pulley 1 (RHS) by 2M? pls give more guidance on this.. off to school now.. thx :)

4. Aug 4, 2004

maverick280857

Hi dibilo

Welcome to PF! Yes, the real trick, as our good ole Doc has suggested, is the application of constraints. To figure out the constraints you need a bit of visualization. But once you've done that, you're on your way to solving the force equations. The constraints yield the acceleration of one body in terms of the other. Of course they're no magic tool to solve the problem and, like everything, they must be used with discretion. So I have a piece of advice for you: don't figure out which body is moving n times faster (or slower) than which other body without thought...think about the problem first, write down the constraint equations and see if they make sense to you. Then confirm if the body in question is indeed moving the way you would expect it to (n times faster, slower, whatever). Then of course you can proceed to solve all the equations together to get what you want.

Enjoy physics

Cheers
Vivek

5. Aug 4, 2004

maverick280857

Constraints: The key idea is that the overall length of the rope remains constant. Now try writing the equation of constraint. At this crucial step you should choose a consistent sign convention for positive and negative displacements and make sure you don't forget about it later, for the sign of the accelerations will reveal which direction the body is moving in according to YOUR convention. Now that you have the equation(s) of constraint, you can easily get the acceleration relationship, can't you?
Think over this and try applying all this to your problem. (I have deliberately not told you how to do it exactly, but I think this post, the previous one, and Doc's post all have enough hints to see you through the problem.) Cheers, Vivek

6. Aug 4, 2004 TenaliRaman
When we had mechanics in our course, I probably had the longest method for solving these kinds of questions, but it sure used to work. I used to separate the system into different parts and analyse each part; then I drew a diagram (a sort of dependency diagram) to see how one sub-system affects the others (the links carried info on the factors affected). Finally I finished off the problem. (Of course, with more practice I started doing it much faster.)
1> Separate the system into smaller subsystems. (In this case you can have 4 subsystems: 2 related to the masses and 2 related to the pulleys.)
2> Analyse each subsystem. (If possible have a separate sheet for each and do them simultaneously; if you don't know some values, assign variables to them.)
3> Try to see the dependencies between the subsystems. In this case it is easy: RHS MASS <--> RHS PULLEY <--> LHS PULLEY <--> LHS MASS. (Try to see what the dependency factors are.) At this stage you should be able to determine why "M moves 3 times as fast as 2M". If you have come this far, the 4th step is:
4> Finish the problem, because once you have got that and done all the analysis I described, you probably have everything you need to solve it.
Whatever I said, or Doc Al said, or Vivek said, is the same thing, but I just gave you a procedure that I followed to solve many mechanics problems, and it *might* help you. Of course I posted this since you wanted some guidelines on solving *such* problems; beyond that it bears not much importance. :)
-- AI

7. Aug 5, 2004 Staff: Mentor
You need to approach the problem systematically. No shortcuts. First identify all the forces, giving them labels. Let $T_1$ be the tension in the cord around pulley 1 and $T_2$ be the tension in the cord around pulley 2. So what forces act on the 2M mass? There are two upward forces, $T_1$ and $T_2$, and one downward force, the weight 2Mg. (Note: "M" does not pull directly on "2M"! The only things touching 2M are the two ropes: they do the pulling, not M.) Now it's time to apply Newton's 2nd law. But before we do, we need to adopt a consistent sign convention. Which way will the 2M mass accelerate? Let's say we have no idea (after all, that's part of what we are going to find out). So we guess: assume it has an acceleration "a" upwards, and pick a sign convention in which up is positive. (If we guess wrong, "a" will turn out negative.) So Newton's 2nd law tells us: $T_1 + T_2 - 2Mg = 2Ma$. Make sense? So you tell me: what are the forces acting on the M mass? What are the magnitude and direction of the acceleration of the M mass? (Remember we are assuming that the 2M mass accelerates up with acceleration "a": be consistent.) Now apply Newton's 2nd law.

8. Aug 5, 2004 dibilo
Ah, finally back from school. First of all, thanks for all the replies. Yes, after drawing some free body diagrams, I got the same equation you gave. If I assume 2M has an upward acceleration, then M will have a negative acceleration (a'), which gives me $-Ma' = T_3 - Mg$, where $T_3$ is the upward force on M from the rope over pulley 1. On pulley 1 I have an upward force, which I named Ta, and two downward ones:
the left one, connected to M, is T3, and the RHS one is T1, which gives Ta = T3 + T1. On pulley 2 there is one upward force (Tb), which is the resultant of Ta on the LHS and T2 on the RHS, giving Tb = Ta + T2. Now I have a question: does the RHS of pulley 1 equal the LHS, as in T3 = T1, which in turn means T3 = T1 = T2? If so, then I can solve it, but I've tried and failed to get a' = 3a, which means I am wrong in that assumption, right? If that's the case, could you please give me some guidance on how to get T1? Thanks in advance.

9. Aug 5, 2004 Staff: Mentor
If 2M has an acceleration of "a" upward, then M will have an acceleration of "3a" downward. The tension in the rope attached to M is $T_1$. So: $T_1 - Mg = M(-3a) = -3Ma$. There are only two tensions in this problem: $T_1$ and $T_2$, as defined in my earlier post. (You do realize that the tension is equal on both ends of a rope going around an ideal pulley, right?) On pulley 1, the upward force is $T_2$ and the downward force is $2T_1$. So: $T_2 = 2T_1$. Forget pulley 2. I find this puzzling: I thought you had already figured out that a' = 3a? Or did you just guess? I would figure out that relationship before writing equations. Assuming you know the acceleration constraints, you then have three equations and three unknowns: $T_1$, $T_2$, and "a". Solve for a. This will tell you the acceleration of both masses: a and 3a. On the other hand, if you don't know how to find the acceleration constraints, work on that first.

10. Aug 5, 2004 dibilo
Yeah, solved. Thanks to all. I was kind of stuck because I didn't know that the tension in an ideal pulley is equal on both sides. :tongue2: Still a good lesson learned. By the way, a' = 3a was given, but if it's not, how do I figure it out myself? Through visualization? Or from the fact that there is 3 times the force acting on 2M (on M it's T1; on 2M it's T1 + T2 = 3T1)?

11. Aug 5, 2004 Staff: Mentor
How to find the acceleration constraint: no need for mystic revelation or visualization. Here's one way of working out the acceleration constraints. Find the relative acceleration between the masses and pulleys, one piece at a time. (If you don't know something, just give it a label and move on.) Again, I will start by assuming that mass B has an acceleration "a". Just for fun, I will assume mass B accelerates downward. (It doesn't matter.) When I analyze the problem, here's what I find:
(1) acceleration of mass B with respect to pulley #2 = a (down)
(2) acceleration of pulley #1 with respect to pulley #2 = a (up)
(3) acceleration of mass A with respect to pulley #1 = x (up)
(4) acceleration of mass B with respect to pulley #1 = x (down)
Don't go any further until this makes sense. Now combine equations 2 & 3 to find the acceleration of mass A with respect to pulley #2:
(5) a' = x + a
Now combine equations 2 & 4 with 1, taking up as positive:
(6) -a = a - x, so x = 2a; substituting into (5) gives a' = 2a + a = 3a. (QED)
Make sense? Think this stuff over.

12. Aug 5, 2004 maverick280857
Hmm... all this stuff shouldn't look complicated. So I suggest (as I did earlier) that you write the constraint using the fact that the length of each segment of string is constant. Then differentiate both sides with respect to time to get the acceleration relationships. Plug them into Newton's laws and you're through. (Use the same positive direction for accelerations and net forces.) But please don't do it clerically; you are solving a physics problem, and visualization is a must. How you do it now is up to you, as you have been given quite a few approaches.
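To make the rope-length method concrete, here is a minimal sympy sketch. The downward coordinates b, p, and m (measured from the fixed pulley 2) and the names are assumptions for illustration, not part of the thread; differentiating the constant rope lengths twice yields the a' = 3a constraint directly.

```python
# A sketch of the rope-length constraint method (post 12); the coordinate
# choice is an assumption: positions measured downward from fixed pulley 2.
import sympy as sp

t = sp.symbols('t')
b = sp.Function('b')(t)      # position of 2M
p = sp.Function('p')(t)      # position of the movable pulley 1
m = sp.Function('m')(t)      # position of M

# Rope lengths (additive constants drop out once we differentiate):
rope_B = b + p               # rope B: from 2M over fixed pulley 2 to pulley 1
rope_A = (m - p) + (b - p)   # rope A over pulley 1, ends tied to M and 2M

# Each length is constant, so its second time-derivative is zero.
acc = sp.solve([rope_B.diff(t, 2), rope_A.diff(t, 2)],
               [p.diff(t, 2), m.diff(t, 2)])
print(acc[m.diff(t, 2)])     # -3*Derivative(b(t), (t, 2)): M accelerates
                             # 3 times as fast as 2M, in the opposite direction
```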
13. Aug 6, 2004 dibilo
But is this method workable? Like, if I see that there is 2 or 3 times the force acting on a body 'b' with respect to 'a', can I assume that 'a' has 2 or 3 times the acceleration with respect to 'b'?

14. Aug 6, 2004 Staff: Mentor
It may look complicated when it's all written out, but it's not really. That's exactly where these acceleration constraints come from! I suspect you'll end up doing exactly what I did. Good point. I was half-joking when I said "visualization" was not needed. Of course it's needed; it's essential. But don't try to look at the system of pulleys as one big mess and hope the answer jumps out at you. Break it down into manageable pieces.

15. Aug 6, 2004 Staff: Mentor
No, I don't think that method is workable. For one thing, how do you know how much force is on each mass until you apply Newton's law and figure it out? And another thing: the net force on A is 3/2 the net force on B. Is that obvious to you? It's not to me. (Of course, everything becomes obvious once you've figured it out.)

16. Aug 7, 2004 maverick280857
Doc's right. Just keep things simple... breakups help in physics ;-)
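Putting the thread's pieces together, the equations can be checked symbolically. The sympy sketch below (variable names are ours, for illustration) combines the three equations from posts 7 and 9 with the a' = 3a constraint and recovers the stated answers, a = g/11 for 2M and 3g/11 for M.

```python
# A minimal symbolic check of the thread's equations (names assumed).
import sympy as sp

T1, T2, a, M, g = sp.symbols('T1 T2 a M g', positive=True)

eqs = [
    sp.Eq(T1 + T2 - 2*M*g, 2*M*a),  # Newton's 2nd law on 2M (up positive)
    sp.Eq(T1 - M*g, -3*M*a),        # Newton's 2nd law on M, using a' = 3a down
    sp.Eq(T2, 2*T1),                # massless pulley 1: upward T2 balances 2*T1
]

sol = sp.solve(eqs, [T1, T2, a], dict=True)[0]
print(sol[a])      # g/11   -> acceleration of 2M, upward
print(3*sol[a])    # 3*g/11 -> acceleration of M, downward
```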
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8665114045143127, "perplexity": 874.7521389517234}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186530.52/warc/CC-MAIN-20170322212946-00147-ip-10-233-31-227.ec2.internal.warc.gz"}
http://www.computer.org/csdl/proceedings/focs/1984/0591/00/0715933-abs.html
Singer Island, FL, Oct. 24, 1984 to Oct. 26, 1984
ISBN: 0-8186-0591-X
pp. 332-337
S. Moran, Technion

ABSTRACT
Combinatorial techniques for extending lower bound results for decision trees to general types of queries are presented. We consider problems, which we call order invariant, that are defined by simple inequalities between inputs. A decision tree is called k-bounded if each query depends on at most k variables. We make no further assumptions on the type of queries. We prove that we can replace the queries of any k-bounded decision tree that solves an order invariant problem over a large enough input domain with k-bounded queries whose outcome depends only on the relative order of the inputs. As a consequence, all existing lower bounds for comparison-based algorithms are valid for general k-bounded decision trees, where k is a constant. We also prove an $\Omega(n \log n)$ lower bound for the element uniqueness problem and several other problems for any k-bounded decision tree with $k = O(n^c)$ and $c < 1/2$. This lower bound is tight, since there exist $n^{1/2}$-bounded decision trees of complexity $O(n)$ that solve the element uniqueness problem. All the lower bounds mentioned above are shown to hold for nondeterministic and probabilistic decision trees as well.

CITATION
S. Moran, M. Snir, U. Manber, "Applications of Ramsey's Theorem to Decision Trees Complexity", Proc. 25th Annual Symposium on Foundations of Computer Science (FOCS 1984), pp. 332-337, doi:10.1109/SFCS.1984.715933
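For context, the element uniqueness problem asks whether all n inputs are distinct. The standard comparison-based algorithm matching the $\Omega(n \log n)$ lower bound is sort-then-scan; the Python sketch below illustrates that classic approach and is not the paper's k-bounded decision-tree construction.

```python
# Element uniqueness by sorting: O(n log n) comparisons, matching the
# comparison-based lower bound discussed in the abstract.
def all_distinct(xs):
    s = sorted(xs)                         # O(n log n) comparisons
    return all(s[i] != s[i + 1]            # equal elements end up adjacent
               for i in range(len(s) - 1))

print(all_distinct([3, 1, 4, 1, 5]))  # False (1 repeats)
print(all_distinct([3, 1, 4, 2, 5]))  # True
```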
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9528223872184753, "perplexity": 935.7380656937571}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701153998.27/warc/CC-MAIN-20160205193913-00204-ip-10-236-182-209.ec2.internal.warc.gz"}
https://papers.nips.cc/paper/2020/file/ad71c82b22f4f65b9398f76d8be4c615-MetaReview.html
NeurIPS 2020

### Meta Review

The paper shows a model-free algorithm with an improved regret bound for finite-state, finite-horizon MDP problems. The new bound closes the gap with the best model-based result. This is a nice theoretical contribution.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9784377813339233, "perplexity": 2310.6859360167505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488538041.86/warc/CC-MAIN-20210623103524-20210623133524-00445.warc.gz"}